Continuing on our theme of virtual network overlays and programmable networks, today we’ll look at how to extend workload mobility across more data center and cloud resources. If server virtualization increases resource utilization and reduces costs, and data center consolidation is a good thing, then it follows that the larger the resource pool your virtual workloads can migrate over, the more cost effective your IT operation can be. And if your mobility diameter spans multiple sites, you can obviously improve your fault tolerance as well. We call this increasing your mobility diameter, and we’ll complement what we’ve already learned about VXLAN and virtual overlays with some new technologies to seamlessly scale that diameter up. (Sounds like some sort of bizarre reverse Weight Watchers program, doesn’t it?)
As we noted in our VXLAN overview, VXLANs enable private virtual overlays across layer 3 boundaries via their MAC-in-UDP encapsulation and the cool way they filter MAC address broadcasts to only the right segments. However, when you are doing full-on application migration over a layer 3 boundary, VXLAN alone isn’t going to do it. In order to extend virtual workload mobility beyond layer 2 boundaries, Cisco came up with Overlay Transport Virtualization (OTV), which works in conjunction with VXLAN to extend application mobility to any point the VXLAN virtual overlay can reach. And not surprisingly, the media wizards over at TechWise TV have a great video that takes all the complexity of OTV and makes it cartoonishly simple.
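To make the MAC-in-UDP idea concrete, here’s a minimal sketch (in Python, purely illustrative) of the 8-byte VXLAN header defined in RFC 7348 that gets prepended to the original Ethernet frame before the whole thing rides inside a UDP datagram:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header that wraps the inner Ethernet frame.

    Layout per RFC 7348:
      byte 0:    flags, 0x08 = "VNI present"
      bytes 1-3: reserved
      bytes 4-6: 24-bit VXLAN Network Identifier (VNI)
      byte 7:    reserved
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # "!B3xI" = flags byte, 3 pad bytes, then a 32-bit field whose
    # top 3 bytes carry the VNI and whose low byte stays reserved (0).
    return struct.pack("!B3xI", 0x08, vni << 8)

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    # On the wire this sits inside outer Ethernet/IP/UDP headers;
    # we show only the VXLAN-specific part of the encapsulation here.
    return vxlan_header(vni) + inner_frame

hdr = vxlan_header(5000)
assert len(hdr) == 8 and hdr[0] == 0x08
```

Because the inner frame travels as ordinary UDP payload, any IP network between the two tunnel endpoints can carry it — which is exactly why the overlay can cross layer 3 boundaries.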
At Cisco Live London 2012, we announced that the Nexus 1000V distributed virtual switch (DVS) architecture will scale to support 10K+ ports across hundreds of servers. This is a multi-fold increase over our current support of 2K ports and 64 servers. What is driving the need to scale? Two reasons: more VMs and broader VM mobility.
The number of VMs is growing by leaps and bounds in data centers and cloud computing environments, which in turn is driving the need to scale virtual switch ports. Depending on who you ask, we have already reached or are about to reach the tipping point where 50% of enterprise workloads have been virtualized. In most IT environments today, you get a VM by default for computing needs; running an app on a bare metal physical server requires special approval. And needless to say, Moore’s Law continues to drive dense multi-core CPUs with extended memory architectures, enabling many more virtual machines to be instantiated on a single physical server. We have seen UCS customers deploy 10 – 30 VMs per server for production workloads, and 50+ (in some cases 100+) VMs per server for non-production workloads and virtual desktops. Increased adoption of public cloud computing resources, as well as growing deployments of private clouds in enterprises, is also rapidly increasing the VM count. Also, customers often assign multiple vNICs per VM, e.g., one for data traffic, another for management, a third for backup and so on. These factors are contributing to increased demand for virtual Ethernet (vEth) ports on the Nexus 1000V DVS.
As we start off this New Year, how about including a resolution to improve application delivery? In Best Practices for Application Delivery in Virtualized Networks – Part I, we covered key application delivery challenges that have arisen from the complexity of managing the many types of applications enterprises use today, complicated further by data center consolidation and virtualization. We then covered some best practices, courtesy of Dr. Jim Metzler’s 2011 Application Service Delivery Handbook, which recommended taking a lifecycle approach to planning and managing application performance.
A key step in the lifecycle approach is to implement network and application optimization tools, such as WAN Optimization solutions and Application Delivery Controllers, including server load balancers. Of course, these solutions are not new to the market and already address many of the needs of delivering enterprise applications in virtualized data centers -- namely, ensuring network reliability, availability and security for users accessing these applications. In this post, we will discuss a recent study by IDC, where IT decision makers across Europe and the US spoke out about their strategies for using server load balancers to deal with emerging challenges.
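For readers newer to the topic, the core job of a server load balancer is simply deciding which back-end server gets the next request. Here’s a minimal, purely illustrative round-robin sketch; real Application Delivery Controllers layer on health checks, session persistence, SSL offload and much more:

```python
import itertools

class RoundRobinBalancer:
    """Illustrative round-robin server load balancer.

    This only shows the core scheduling decision -- production ADCs
    also monitor server health, persist sessions, terminate SSL, etc.
    """

    def __init__(self, servers):
        # itertools.cycle loops over the server pool indefinitely
        self._cycle = itertools.cycle(list(servers))

    def pick(self):
        # Return the next server in rotation for the incoming request
        return next(self._cycle)

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
picks = [lb.pick() for _ in range(4)]
# Rotation wraps back to the first server after the pool is exhausted
assert picks == ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.1"]
```

The scheduling policy (round robin, least connections, weighted, and so on) is exactly one of the attributes IT decision makers weigh when choosing a load balancer.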
What important attributes do you look for in your server load balancers?
Part of Cisco’s Data Center strategy is Data Center Interconnect (DCI), a solutions-based approach to virtualizing two or more of an organization’s data centers. That is, multiple data centers can be architected to seamlessly share resources while also delivering new services that address today’s business challenges and opportunities.
I’m a Product Manager in our Systems Architecture and Strategy Unit (SASU), where we develop DCI-enabled architectures *and* put them through our solutions test bed. Our output includes White Papers, Industry Presentations, and Design and Implementation Guides with the Cisco Validated Design (CVD) designation.
My ultimate goal here is to share what’s happening and help point you in the right direction as you make your DCI decision or just want to learn about the solution in general. To get you started, please check out our DesignZone as well as more specific DCI content here.
Cisco Live London was an incredible trip and gosh, it was only 30 days ago – our first little project out of that voyage is TechWiseTV 85, our latest episode on Data Center technologies. Data Center Optimization: The Next Stage is now available for your viewing pleasure in our ‘still has that new website smell’ environment we affectionately call the CVC (Cisco Virtual Connection).
This show was another exercise in self-restraint, as the DC team had brought out an amazing selection – if we were hoping that a global show would mean a smaller show…we were out of luck.