As a Cloud Architect, I’ve had the privilege to work with CTOs and CIOs across the globe to uncover the key factors driving Business Continuity and Workload Mobility across their cloud infrastructures. We’ve worked with enterprises, large and small, and service providers to answer their top five concerns in our new Business Continuity and Workload Mobility solution for the Private Cloud.
1) Can you provide business continuity, workload mobility, and disaster recovery for my unique mix of applications, with lower infrastructure costs and less complexity for my operations teams? Yes.
2) Can you provide a multi-site design that reduces business outages and costly downtime, allowing my critical applications to be more secure and available? Yes.
3) Can my operations teams perform live migrations of applications across sites while maintaining user connections, security, and stateful services? Yes.
4) Does your multi-site solution allow me to utilize idle standby capacity during “normal” operations, and reclaim that capacity as needed during an outage event? Yes.
5) Can your Cisco Validated Design greatly reduce my deployment risks and simplify my design process, saving my business significant time, money, and resources? Yes.
A Proven Multi-site Design, Built on the Most Widely Deployed Cloud Infrastructure
We addressed each of these pain points as we designed, built, and validated our new multi-site business continuity and workload mobility solution. Our multi-site solution is built upon Cisco’s cloud foundation, the Virtual Multi-service Data Center (VMDC) that’s been deployed at hundreds of the world’s top enterprises and service providers. In our latest VMDC release, we’ve extended our cloud design to support multi-site topologies and critical use cases for private cloud customers. This validated design simply connects regional and long-distance data centers within your private cloud to address some critical IT functions, including:
application business continuity across data center sites;
stateful workload mobility across data center sites, while maintaining user connections and security;
application disaster recovery and avoidance across data center sites; and
application geo-clustering and load balancing across data center sites.
Choose the Cloud Infrastructure that Fits Your Unique Business Needs
The VMDC Business Continuity and Workload Mobility solution (CVD Design Guide) is grounded in the reality of today’s cloud environment, providing different design choices that match your applications’ needs. We realize there is no “one size fits all” cloud design; that’s why we support both physical and virtual resources, multiple hypervisors and storage choices, and security-compliant designs with industry certifications like FISMA, PCI, and HIPAA.
What is Next-Gen Workload Mobility for the Private Cloud?
Enterprises across the globe have been asking for simpler ways to provide multi-site Business Continuity and Workload Mobility for applications hosted in their Private Cloud. The Cloud promises a more agile operational environment and that promise has been fulfilled to a large extent within their data centers. But many Enterprises are challenged to unlock this same agility across multi-site Cloud topologies. For example, Enterprise CTOs and CIOs have asked us directly to provide simplified Workload Mobility of critical apps between sites to give their operations teams more flexibility.
Many competitive solutions offer basic VM mobility between sites and storage replication, but they do not address the rest of the application environment, including security, stateful services, network containers, tenancy, and, most importantly, both physical and virtual resources.
What good does it do to move a VM to a new site if the rest of the application environment is left behind, creating a potential security hole?
How to move a LIVE 3-tier app like Microsoft SharePoint to a new site (without impacting users)
As we all know, business-critical applications require a robust service environment to operate securely across the cloud. In our example below, the application environment provides firewall and load-balancing services for each tier of the SharePoint application: web, app, and database. These services are stitched together using a secure Network Container that carves out a slice of resources across the data center for SharePoint. Most Enterprises and SPs use a mix of physical and virtual resources, including firewalls, load balancers, VPN termination, IDS, and network switching. Many of these services create stateful connections to users, so….
If you perform a live migration of SharePoint to a new site, stateful connections to firewalls and load balancers need to be preserved to maintain security and TCP connections to active users.
Broken user connections = Service disruption (that's not good)
You must also provide identical security and services for new SharePoint users even though the application has moved to a new site.
Broken Network Services = Potential Security hole (that's even worse)
How does Next-Gen Workload Mobility actually work?
Let’s share some test results from our new Business Continuity and Workload Mobility Solution to illustrate how we performed live SharePoint migrations to a new site (75 km away) while maintaining security, stateful services, and user connections. Oh yes, and automatically, without manual intervention.
Baseline topology for Microsoft SharePoint deployed in our Private Cloud
We first deployed the SharePoint Web, App, and Database tiers in a secure network container in Data Center 1 using service orchestration. Simple and easy. Refer to the figure below for a topology picture.
SharePoint Web Tier is in a Public Zone, and uses a virtual firewall (VSG) and Citrix load balancer
SharePoint App Tier and Database Tier (SQL) are in a Protected Zone and use an ASA Firewall and Citrix load balancer
Our validated design provides LAN extensions, extended clusters, secure network containers, virtual switching, and storage replication between Metro sites
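To make the baseline topology concrete, here is a minimal sketch of the container as a data model. The class and field names are hypothetical, for illustration only; they are not the VMDC orchestration API.

```python
from dataclasses import dataclass, field

@dataclass
class Tier:
    """One application tier and the services stitched in front of it."""
    name: str
    zone: str           # "public" or "protected"
    firewall: str       # virtual (VSG) or physical (ASA)
    load_balancer: str

@dataclass
class NetworkContainer:
    """A tenant's secure slice of data center resources for one application."""
    tenant: str
    site: str
    tiers: list = field(default_factory=list)

    def add_tier(self, tier: Tier) -> None:
        self.tiers.append(tier)

# The SharePoint container as described above: Web tier in the Public Zone
# behind the VSG, App and Database tiers in the Protected Zone behind the ASA,
# with a Citrix load balancer in front of each tier.
sharepoint = NetworkContainer(tenant="SharePoint", site="DC1")
sharepoint.add_tier(Tier("web", "public", "VSG", "Citrix"))
sharepoint.add_tier(Tier("app", "protected", "ASA", "Citrix"))
sharepoint.add_tier(Tier("db", "protected", "ASA", "Citrix"))
```

In a real deployment the orchestration system builds this container across sites; the point here is simply that the container, not the VM, is the unit that must move.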
SharePoint is up and running in Data Center 1, supporting hundreds of users with secure connections. Now let’s move SharePoint to a new site without the users knowing it.
Step 1: Perform Live SharePoint Migration to Data Center 2….while maintaining secure user connections!
We performed a Live vMotion of SharePoint (Web, App, Database) to new hosts in Data Center 2, described in the figure below. Data Center 2 is 75 km away. Our SharePoint migration had minimal disruption (2 seconds or less) and maintained security, stateful services, and all user connections across our multi-site Cloud. Pretty sweet! A few highlights from our validated design are provided below.
Our virtual switch (Nexus 1000v), virtual firewall (VSG), and UCS automatically updated Port and Security Profiles at the new site, so our virtual switching and application firewalls were preserved without lifting a finger.
Layer 2 Extensions permit tromboning back to Data Center 1 to maintain connections to physical appliances (stateful firewalls and load balancers), also without manual intervention.
Our Network Container was automatically extended between Metro sites, maintaining security, tenancy, QoS, IP addressing, and user connections. SharePoint was discovered on the new host in Data Center 2 within seconds, using this extended Network Container.
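The Step 1 sequence above can be sketched as ordered orchestration events. This is a hypothetical illustration of the workflow, not the actual VMDC tooling: per-VM vMotion, port/security profiles following each VM, and traffic tromboning over the Layer 2 extension to keep stateful physical services anchored in Data Center 1.

```python
# Sketch of Step 1 (illustrative names, not a real orchestration API).
events = []

def vmotion(vm: str, dest: str) -> None:
    # Live-migrate one tier's VM to a host at the destination site.
    events.append(f"vmotion {vm} -> {dest}")

def sync_profiles(vm: str, dest: str) -> None:
    # Nexus 1000v Port Profiles and VSG Security Profiles follow the VM,
    # so virtual switching and application firewalls are preserved.
    events.append(f"profiles {vm} @ {dest}")

def migrate_live(tiers: list, dest: str = "DC2") -> None:
    for vm in tiers:
        vmotion(vm, dest)
        sync_profiles(vm, dest)
    # Stateful ASA firewall and Citrix load-balancer sessions stay in DC1;
    # the Layer 2 extension trombones traffic back to them automatically.
    events.append("trombone via L2 extension to DC1 services")

migrate_live(["web", "app", "db"])
```

The key design point: the stateful physical services are deliberately left in place during this step, so no user session is dropped while the VMs move.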
Now let’s move the rest of the network container to Data Center 2 in less than one second!
Step 2: Redirect users to a new Network Container in Data Center 2….in less than 1 second!
With the aid of service orchestration, we simply created a new network container in Data Center 2. This new container included the same configuration, connections, and services (firewalls, load balancers) as the original container in Data Center 1. Once created, we redirected external users to the SharePoint application running in Data Center 2, as described below. The redirection happened in less than one second; pretty amazing. A simple routing update delivered through service orchestration performed the redirection. In this step, user connections were broken and new connections were re-established to the already-running SharePoint application in less than one second! A few highlights from our validated design are provided below.
Layer 2 Extensions allowed the preservation of IP Addressing for Apps and Services during migration. There is no need to “re-IP” your applications just because they’ve moved to a different city.
The complete Network Container including physical and virtual resources was moved with minimal disruption (sub-second) to users
Our Multi-site Cloud solution supports a typical application environment, including both physical and virtual resources, with scaling for large and small private clouds
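Step 2 boils down to two operations: clone the container definition into Data Center 2, then repoint users with a single routing update. A minimal sketch, with illustrative names and an example prefix rather than any real VMDC API or addressing:

```python
# Sketch of Step 2 (hypothetical names; prefix is an example placeholder).
containers = {"DC1": {"app": "SharePoint", "services": ["ASA", "Citrix", "VSG"]}}
routes = {"10.1.0.0/16": "DC1"}  # external users currently reach SharePoint via DC1

def clone_container(src: str, dst: str) -> None:
    # Same services, same configuration, same IP addressing. Thanks to the
    # Layer 2 extension there is no need to "re-IP" the application.
    containers[dst] = dict(containers[src])

def redirect_users(prefix: str, new_site: str) -> None:
    # One routing update: existing sessions break and re-establish in under
    # a second against the already-running application at the new site.
    routes[prefix] = new_site

clone_container("DC1", "DC2")
redirect_users("10.1.0.0/16", "DC2")
```

Because the application was already live in Data Center 2 after Step 1, the routing update is the only user-visible event, which is why the disruption stays sub-second.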
We also support cold moves for less critical workloads that don’t have these stringent stateful requirements.
For More Info:
We encourage you to follow my blog series and check out our new business continuity and workload mobility solution (VMDC DCI), which describes key business drivers, Cisco DCI innovations, and validated designs that our customers are deploying in their private clouds.
Deploy with confidence! (and sleep better knowing your Cloud is more reliable and secure)
CVD Design Guide - Cisco Business Continuity and Workload Mobility solution (VMDC DCI)
Solution Overview - Cisco Business Continuity and Workload Mobility solution (VMDC DCI)
BrightTalk Session - VMDC DCI for Business Continuity and Workload Mobility in the Private Cloud (webcast)
MPLS-based Layer 2 VPN has been around for over 10 years, since the inception of the IETF Pseudowire Edge to Edge (PWE3) Working Group. Since then, many drafts and standards have been added to address different applications and to improve scale and convergence in different topologies. L2VPN as a whole is widely deployed by both service providers and enterprises, from Ethernet services, to fixed and mobile convergence, to enterprise campus Layer 2 transport.
Recently, one emerging driver that has been picking up a lot of momentum is using L2VPN for Data Center Interconnect (DCI). Data centers are often situated in different locations to be geo-redundant for the purposes of workload mobility and business continuity. At the same time, the physical location of the data center has to be transparent to users and to applications; hence the need for Layer 2 connectivity between sites. While Ethernet over MPLS (EoMPLS) and Virtual Private LAN Service (VPLS) have been used for this purpose, DCI presents new requirements and challenges not fully addressed today. To keep the data center always on, and to utilize all resources and links as efficiently as possible, data centers need all-active redundancy and load balancing. And the technology should be as simple as possible to provision and manage.
A team of us at Cisco has been working, together with industry colleagues, on defining and standardizing a new Layer 2 VPN solution known as Ethernet Virtual Private Network or E-VPN. In this post, I will discuss the key requirements that helped shape this solution, and attempt to shed some light on the drivers for the technology and how it enables the evolution of Service Provider L2VPN offerings.
The ASR 9000 product family has recently come out with a new feature called nV Edge (nV = Network Virtualization). This feature unifies the data center edge’s control, data, and management planes. So, I’ll note a couple of things here on this feature and then tell you why I think it has the potential to be truly awesome.
My good friend Rabiul Hasan wrote a proof-of-concept document, just posted to Design Zone, that provides the configuration and setup details. I encourage you to go check it out here.