Virtualizing Oracle Databases – The Time Has Come!
Virtualization of IT applications and databases is now pervasive. Estimates from industry analysts show that some applications and databases have virtualization penetration rates of 80 to 90%, and overall datacenter virtualization is estimated at 60 to 70%. One curious exception is Oracle Database: some estimates put its virtualization rate below 20%. The big question is why it is so low for Oracle Database.
While I have never seen formal research documenting the reasons, ad-hoc discussions with many DBAs, architects, and other Oracle users indicate that the major reasons for their reluctance to virtualize include:
Fear of performance degradation
Concern over availability and stability
And an “If it ain’t broke, don’t fix it” view
And for mission-critical Oracle Databases those are valid concerns. Any outage or performance degradation is costly, and the status quo is the safest approach. But what I am hearing from customers and the Oracle community at large is that the time has come for virtualization. The improvements in configuration flexibility, reduced deployment times, dramatically improved disaster recovery, and cost savings are great motivators by themselves. One of the early adopters of virtualized Oracle infrastructure was EMC. Let's hear what EMC's Chief Database Architect, Darryl Smith, has to say about the benefits of EMC's virtualization efforts with its Oracle infrastructure.
So EMC found great performance, improved availability and a reduction in database licenses all because of their move to virtualize their Oracle infrastructure. Here is more of Darryl talking about Oracle virtualization and the cloud.
EMC took the next logical step from initial virtualization and moved their Oracle infrastructure to a full cloud implementation with even more benefits thanks to the improved Oracle workload mobility.
EMC is a great example of why there appears to be a growing tide of Oracle users who are ready to ride the wave of virtualization. To learn more about EMC's virtualization efforts and results, these two whitepapers on Cisco.com provide a more complete overview of their journey:
This blog post promises to avoid telling you about all the fantabulous (I know that's not a word) growth expected in the number of hosted virtual desktops to be deployed by 2016. What I do want to share is how Cisco is ramping up our investments in accelerating your path to virtual desktop success, and how we're tapping into the fundamentals of our Unified Computing System (UCS) to deliver new VDI efficiencies; the same efficiencies that have made Cisco the second most preferred x86 blade server vendor* worldwide, in just four years! So why are so many organizations moving away from their legacy compute solutions and choosing UCS for VDI workloads and more?
Differentiated capabilities that address VDI pain points: TCO and Manageability
It’s no secret to anyone that VDI is not simple to deploy. You essentially have to bring together multiple seemingly disparate solution elements (server, storage, virtualization, broker, network, security, etc.), make them work in a cohesive manner, and then be certain that your implementation will scale from a small pilot of 50 users to hundreds, thousands, or more! Clearly, with such complexity, the last thing you need is a complex compute infrastructure underneath it all. There are three key things at the heart of this that speak to why UCS is better for VDI:
1.) Server-resident flash. Our “On-Board” Architecture for VDI intercepts the rapidly proliferating use of flash-based storage solutions that offer expansive IOPS capacity and huge performance. UCS takes it a step further by offering an integrated solution leveraging our partner Fusion-io. We’ve additionally delivered reference architectures that extend the use cases and attractiveness of flash-based solutions with appliance approaches (that direct-connect the storage array to our fabric interconnect) as well as more traditional multi-tiered architectures. More on that in a moment…
2.) We’ve made it easier to provision and manage the hosts for your virtual desktop deployments. UCS Service Profile Templates enable rapid deployment from bare metal, creating a zero-touch, mistake-proof, stateless operations model. Now, when you add the On-Board, server-resident flash to the configuration, you extend the reach of this management model to include high-performance, economical storage, completely provisioned and managed as part of the blade configuration/profile! No SAN or associated expertise required! Perfect for floating, non-persistent desktops.
3.) Granular visibility across the virtualized infrastructure. With user desktops now running amidst other mission-critical workloads in the data center, there’s more reason than ever to ensure that you can impart QoS, security and manageability across the multitude of virtual machine traffic flows traversing the data center. Cisco Virtual Machine Fabric Extender (VM-FEX) and Cisco Nexus 1000v provide the visibility and controls that make this possible, extending physical world policy and administration to virtual.
A long time ago, it was comforting to hear the words “One Size Fits All”, as though our interests were surely represented within that catch-all, assuring us that we weren’t going to get left out in the rain. You could safely make that impulse-driven purchase, bring it home (or have it delivered), and know with certainty that you wouldn’t be disappointed. It’s almost laughable to think that we subscribed to this way of thinking for about 50 years. But thankfully, we live, work and play in a world where it’s not about one-size-fits-all, and the only things we’ll accept as such are wristwatches and bicycle helmets! (unless you have a gargantuan-sized cranium)
And so it is with your IT environment: “One-Size-Fits-All” feels too much like handcuffs (which, coincidentally, are also one-size-fits-all). We’ve done away with the notion that a solution optimized for a Fortune 500 company is going to be at all suitable for a medium-sized business with almost 1,000 employees. While both organizations might have a strategic imperative around workspace mobility, and have set out to virtualize the desktops of, say, 5% of their workforce, they’ll approach this problem in two completely different ways.
One of these organizations will have an extensive, multi-tiered networking and security infrastructure optimized for virtual machine traffic. The other may not.
One of these organizations will have a mature SAN infrastructure in place, with embedded resources and expertise, and lots of existing mission-critical data already housed there. The other may not.
One of these organizations will have a high percentage of virtualized workloads and a highly automated/orchestrated environment for rapidly spinning up new infrastructure. The other may not.
Surely these two environments are not going to take the same solution approach to deploying virtual desktops? They will, however, share many of the same key objectives/demands: future-proof scalability, resiliency, streamlined provisioning and operations, and a consistent user experience for the 1st user as well as the 1000th. And they’ll want all of this with the lowest possible TCO.
Last month, Cisco introduced our expanded suite of solution architectures for desktop virtualization. This portfolio was built with the objective of ensuring our customers would never have to settle for a One-Size-Fits-All approach to deploying VDI, recognizing that they’re deploying this solution from a multitude of possible starting points in their IT maturity. With four new solution architectures, each built on the Cisco Unified Computing System (UCS), and each backed by design guides and reference configurations co-developed with industry-leading partners in storage and storage-optimization technologies, we’ve taken the risk and guesswork out of choosing the deployment methodology that’s right-sized for your organization. Check out my friend Ashok’s more detailed post on the new reference architecture portfolio.
Sure, there are many events and conferences going on this week, but stick a reminder on your calendar to watch this week’s episode of Engineers Unplugged. Ed Saipetch (@edsai) of Speaking in Tech fame and Andre Leibovici (@andreleibovici) of VMware talk about the evolution of BYOD (Bring Your Own Device), VDI, EUC, and the changes brought about by new devices.
Bringing the 1970s office to you, unicorn style.
Welcome to Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:
Episodes will publish weekly (or as close to it as we can manage)
Submit ideas for episodes or volunteer to appear by Tweeting to @CommsNinja
Practice drawing unicorns
What have been your challenges (IT or client side) as we move into the world of mobile employment and endlessly proliferating devices and apps? Post a comment here, or join the discussion on Twitter, #EngineersUnplugged.
It’s great to see Cisco and many companies across the industry make a major change in the use of Open Source via the newly formed project hosted by the Linux Foundation called OpenDaylight. This consortium is an industry-wide, open and transparent effort to catalyze innovation and accelerate an application ecosystem for software-defined networking (SDN). With all the partners involved, we are working not only to further the development and adoption of SDN but also to foster a new developer community. A consortium like this has been long overdue, and it’s great to finally see it come to fruition.
We are incredibly pleased to partner with Arista, Big Switch Networks, Brocade, Citrix, Dell, Ericsson, Fujitsu, HP, IBM, Intel, Juniper Networks, Microsoft, NEC, Nuage Networks, PLUMgrid, Red Hat and VMware on the project. This is the largest effort to date to drive software-defined networking across the industry and into new markets. While the initial goal is to build a common, industry-backed SDN platform, the broader objective is to give rise to an entire ecosystem of developers who can freely use the code, contribute to the project and commercialize the offerings. I further expect the ecosystem to expand into areas like tools and services.
Cisco has donated our core “Cisco ONE” controller code to the project and has officially open-sourced the code under the Eclipse Public License. The community has come together around this code to form the architecture (see below) for the Open SDN Framework. Beyond donations of code, project members are supporting the project both through financial investment and through developers committed to work full-time on the project. Donations from other members of the project can be seen here, and we expect this list to only grow.
As Open Source increasingly becomes a standard for customers and developers, we look at this as a new way to meet our customers’ needs and to help developers innovate without the barriers of vendor lock-in. As our customers and developers evolve, we evolve. Cisco has to date supported Open Source through efforts such as OpenStack and now OpenDaylight, and we regard Open Source as a critical pillar in our software strategy moving forward. By allowing developers to freely use these solutions, we hope to enable a new developer ecosystem for software-defined networking and more. We are fully committed to enabling developers, both current and new, to deliver innovative applications and services that will help customers across the board realize the value of SDN faster than before.
The OpenDaylight architecture and code offering to date includes a modular southbound plugin architecture for multi-vendor environments. In addition, OpenDaylight offers an extensible northbound framework with both Java and REST APIs to ensure that developers with multiple skill sets can build applications on the platform. We are also planning to build a onePK plugin for OpenDaylight to enable multiple users to drive network intelligence into their SDN applications. As you can see below, we will also be supporting key standards with this effort, including OpenFlow.
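To make the northbound REST idea concrete, here is a minimal sketch of how an application might query a controller over HTTP. The controller address and the endpoint path are illustrative assumptions for this sketch, not documented OpenDaylight URLs; a real deployment would use whatever resource paths and authentication its controller actually exposes.

```python
# Minimal sketch of a client for a controller's northbound REST API.
# The host, port, and endpoint path below are illustrative assumptions,
# not documented OpenDaylight URLs.
import json
from urllib.request import Request, urlopen

CONTROLLER = "http://controller.example.com:8080"  # hypothetical controller address


def topology_url(container="default"):
    """Build the (assumed) URL for the controller's topology resource."""
    return f"{CONTROLLER}/northbound/topology/{container}"


def fetch_topology(container="default"):
    """GET the topology as parsed JSON; requires a reachable controller."""
    req = Request(topology_url(container),
                  headers={"Accept": "application/json"})
    with urlopen(req) as resp:  # network call: only works against a live controller
        return json.load(resp)
```

An SDN application would call `fetch_topology()` and then apply its own logic (path computation, policy checks, automation) to the returned JSON; the Java API exposes the same platform resources to in-process applications.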
It’s important to note that you don’t launch a community; you build one. By investing in OpenDaylight we hope that our customers, partners and developers across multiple industries will now have the ability to build applications that frankly make the network easier to use and more automated. As an industry we are moving in a new direction and further up the stack and OpenDaylight offers new opportunities for application creation and monetization beyond the networking layer.
It’s a true rarity when you see both partners and competitors come together for the good of the community, and contribute code for the universal good of the customer. All OpenDaylight participants have committed to open source guidelines that include open communication, ethical and honest behavior, code and roadmap transparency and more. An Open Source project is only as successful as the community of developers and the level of code quality, and OpenDaylight’s Board of Directors (which includes multiple parties cross-industry) will be ensuring that partners, code contributors and project committers all abide by the same guidelines for the success of the project over the success of their own company’s offerings.
For more information, please see www.opendaylight.org. Code will be available for download soon, and we are looking for interested individuals for commitments across the board – from technical offerings to application development, and we welcome contributions from both individuals and other organizations. All ideas are welcome, and we look forward to multiple new innovative solutions coming from this.
Congratulations to all our partners and individuals who helped to make this happen, including the hard work done by the Linux Foundation. It’s truly an amazing accomplishment and we expect to see much more in the near future.