A couple of colleagues of mine wrote a document on live Workload Mobility and Disaster Recovery for Tier-1 applications. I think you should check it out, and here are a few key points I want to highlight:
- A single physical Cisco, EMC, VMware infrastructure
- Both vMotion and SRM validated on same infrastructure
- Tier-1 Enterprise Applications tested
Read More »
Tags: Business Continuance, Cisco, DCI, disaster recovery, EMC, LISP, Microsoft Sharepoint, mobility, Oracle 11g, OTV, RecoverPoint, Replication, SRM, Tier 1 Applications, vMotion, VMware, VPLEX, VPLEX Metro, Workload Mobility
I previously discussed using LISP to optimize your client-to-server traffic, so today I’ll discuss the reverse direction: Egress Path Optimization from the server back to the client. Let’s go over the need for path optimization in the server-to-client direction with some pictures and explanations.
The Virtual Machine (VM) server is configured with a default gateway IP address, 192.168.1.1, which is the next-hop IP address the VM forwards packets toward as traffic returns to the client outside the data center. In this data center environment, we’ve deployed the default gateway using a First Hop Redundancy Protocol (FHRP). In reality, FHRP is an umbrella term that covers Hot Standby Router Protocol (HSRP) and Virtual Router Redundancy Protocol (VRRP), the two main technologies that provide transparent failover and redundancy at the first-hop IP router. Please see info on FHRP here.
Also notice that the VM default gateway is the same as the HSRP Virtual IP Address (VIP). The HSRP VIP binds itself to one of the physical HSRP routers via an HSRP election process that uses Layer 2 control packets between the two routers. This means that the VM default gateway, since it points to a VIP, may move between physical HSRP routers, which is, of course, the intent of the design when using any type of FHRP.
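As a minimal sketch of the setup described above, the two physical HSRP routers might be configured along these lines (the interface, group number, and real IP addresses are hypothetical; the VIP matches the VM’s default gateway of 192.168.1.1):

```
! Router A - higher priority, wins the HSRP election and becomes Active
interface Vlan100
 ip address 192.168.1.2 255.255.255.0
 standby 1 ip 192.168.1.1      ! HSRP VIP = VM default gateway
 standby 1 priority 110
 standby 1 preempt

! Router B - default priority, becomes Standby
interface Vlan100
 ip address 192.168.1.3 255.255.255.0
 standby 1 ip 192.168.1.1
 standby 1 priority 100
```

Because the VM points only at the VIP, it never needs to know which physical router currently owns 192.168.1.1; the election (and any failover) is transparent to the server.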
In the above picture, the Path is Optimized from Server to Client, so now let’s take a look at what happens when we migrate the VM to the new data center.
Read More »
Tags: cloud, data center, Data Center Interconnect, DCI, FHRP, HSRP, LISP, mobility, N7K, Nexus 7000, OTV, vMotion, Workload Mobility
Today I want to bring up a DCI use case that I’ve been thinking about: capacity expansion. As you know, the purpose of DCI is to connect two or more Data Centers together so that they can share resources and deliver services. The capacity expansion use case applies when you have temporary traffic bursts (cloud bursts), either planned or unplanned: maintenance windows, migrations, or really any temporary service event that requires additional capacity.
To start addressing the challenge of meeting these planned and unplanned cloud burst and capacity expansion requirements, check out the recently announced ACE + OTV feature called Dynamic Workload Scaling.
Read More »
Tags: ACE, Burst, Capacity Expansion, Cisco, cloud, Cloud Burst, data center, Data Center Interconnect, DC, DCI, DWS, Dynamic Workload Scaling, locality, Nexus 7000, OTV, SASU, Systems Architecture and Strategy Unit, virtual machine, VM, VM Locality
As I continue to ramp up my understanding of Cisco’s innovative data center technologies and joint solutions with our open ecosystem partners, I had the opportunity to sit down with Jake Howering, Product Manager for Cisco’s Data Center Interconnect (DCI) solution.
DCI technologies are key to connecting data centers and to simplifying the mobility and scalability of physical and virtualized application workloads to address a variety of real-world scenarios.
Jake is one of the sharpest Product Managers I’ve met. The good news is that Jake has joined the blogosphere and will be actively involved in discussions around the Cisco DCI solution. Welcome, Jake!
In our 30-minute discussion, Jake and I touched on the basic concepts of DCI and the innovative solutions we have brought to market jointly with partners like EMC, NetApp, and VMware. Here is a summary of our discussion around DCI and what it means to customers:
Read More »
Tags: Cisco, Data Center Interconnect, DCI, EMC, FlexCache, LISP, netapp, OTV, VMware, VPLEX
Part of Cisco’s Data Center strategy includes Data Center Interconnect (DCI). DCI is a solutions-based approach to virtualizing two or more of an organization’s Data Centers. That is, multiple Data Centers can be architected to seamlessly share resources while also delivering new services that address today’s business challenges and opportunities.
I’m a Product Manager in our Systems Architecture and Strategy Unit (SASU), where we develop DCI-enabled architectures *and* put them through our solutions test bed. Our output includes White Papers and Industry Presentations, as well as Design and Implementation Guides carrying the Cisco Validated Design (CVD) designation.
My ultimate goal here is to share what’s happening and help point you in the right direction as you make your DCI decision or just want to learn about the solution in general. To get you started, please check out our DesignZone as well as more specific DCI content here.
Tags: Data Center Interconnect, DCI, EoMPLS, Intr, OTV, Overlay Transport Virtualization, VPLS