Is your network cloud ready? We at NetCraftsmen, a Cisco Gold Partner, are hearing this question more often. Let’s discuss how to tell if your network is cloud ready, and how to get there if you’re not. Even if your organization already has a public cloud presence, I hope you’ll find some ideas in the following material.
Why Cloud?

Good question! Some businesses view public cloud services as a hassle or a security incident waiting to happen. Still, there are a number of good business reasons for using the cloud:
- Get top-tier datacenter facilities, security, and practices faster and more cheaply than building them in-house
- Avoid owning and operating a datacenter, which is likely not a core skill for your company
- Gain an inexpensive, rapidly provisioned software / service development environment
- Develop highly scalable cloud-based applications
- Rapidly deploy additional servers or platforms
- Rapidly scale up compute or storage capacity to handle massive amounts of data or customer interactions
- Enable competing in the Internet of Things / Internet of Everything
- Gain a Disaster Recovery site or “availability zones” by cloning cloud resources, rather than setting up a second datacenter
It’s also true that there are legitimate concerns about public cloud.
Yes, a transition to public cloud requires some new skills, some network re-design, and some new legal and security considerations. Why not start mastering the skills and reducing the barriers now?
What do you need to do to your current infrastructure to be ready to implement a cloud solution?
One big design factor is WAN transport. Many organizations use MPLS, perhaps dual-provider MPLS or MPLS with DMVPN backup, as the WAN connection between sites. Data and servers are now mostly located in a few datacenters, although organizations such as banks, with many sites on slow, cost-constrained WAN links, may still keep directory, print, and file services in the branches. And there are other exceptions.
That works fine with private datacenters and private cloud.
With public cloud, the WAN transport may change. If you love MPLS, you may be able to find an MPLS provider who can connect directly to the cloud provider. That would, for a fee, provide private access to your cloud server instances. The common alternative is to use the Internet, typically with an IPsec VPN, for access.
This relates to how your users currently access the Internet. If your users use the MPLS WAN to reach a designated datacenter, then pass through a robust security edge to the Internet, you are doing centralized Internet. That is efficient in terms of edge security controls and devices to acquire and manage. It is inefficient in that you are backhauling remote site traffic to the main site, meaning you require a bigger WAN pipe and a bigger Internet pipe. For public cloud, that extra hop adds latency.
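Rough numbers make the tradeoff concrete. The sketch below compares the two models; the branch count, bandwidths, and RTTs are entirely made-up assumptions, so plug in your own measurements:

```python
# Back-of-the-envelope comparison of centralized vs. decentralized Internet
# access. All numbers below are illustrative assumptions, not measurements.

branches = 40                  # remote sites
internet_per_branch_mbps = 10  # peak Internet demand per branch (assumed)
branch_to_hub_rtt_ms = 25      # MPLS backhaul round trip to the main site (assumed)
hub_to_cloud_rtt_ms = 15       # main site to cloud provider (assumed)

# Centralized: every branch's Internet traffic rides the WAN to the hub,
# so both the hub WAN pipe and the Internet pipe must absorb the sum.
hub_extra_wan_mbps = branches * internet_per_branch_mbps
cloud_rtt_centralized_ms = branch_to_hub_rtt_ms + hub_to_cloud_rtt_ms

# Decentralized: branches go direct, so no backhaul and no extra hop.
cloud_rtt_decentralized_ms = 20  # branch ISP to cloud provider (assumed)

print(f"Extra hub bandwidth for backhaul: {hub_extra_wan_mbps} Mbps")
print(f"Cloud RTT via hub: {cloud_rtt_centralized_ms} ms "
      f"vs direct: {cloud_rtt_decentralized_ms} ms")
```

The point is not the exact figures but the shape of the result: backhaul inflates both the hub pipes and every cloud-bound round trip.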
Decentralized Internet is where each remote site has local Internet access. With decentralized Internet, your users either use distributed security / anti-malware devices, or cloud-based anti-malware (DLP, web site control, etc.) services.
There’s more than one right answer here. Local Internet access is becoming increasingly popular, and Cisco has recognized this with the whole IWAN set of features.
Local Internet can be leveraged for DMVPN as backup to an MPLS WAN. And with the Cisco CSR virtual router, you can put one or more virtual ASR1K (CSR) routers into the cloud to act as DMVPN hubs. Amazon even makes that easy for you to try: Amazon CSR (Nearly) Free Trial.
Using the CSR in the cloud also keeps WAN access to the public cloud under your control, rather than requiring change request interaction with the cloud provider. Cisco IWAN also can be an enabler for using dual ISPs at all remote sites above a certain size.
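For reference, a cloud-hosted CSR acting as a DMVPN hub is configured much like any physical hub. The fragment below is only an illustrative sketch: the interface names, addresses, network-id, and IPsec profile name are placeholders, and the cloud side (elastic IP mapping, security groups) adds its own details:

```
! Illustrative DMVPN hub sketch for a cloud-hosted CSR.
! All names and addresses are placeholders.
interface Tunnel0
 ip address 10.255.0.1 255.255.255.0
 ip nhrp map multicast dynamic        ! learn spoke NBMA addresses as they register
 ip nhrp network-id 100
 tunnel source GigabitEthernet1
 tunnel mode gre multipoint
 tunnel protection ipsec profile DMVPN-PROFILE
```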
Another thing you can do for your infrastructure is to create documented standard site designs (e.g. small, medium, large, datacenter). That will cut your support costs, reduce the mean time to repair when something goes awry, prepare you for automation (that’s my excuse to mention SDN here), and simplify pilot trials and planning to use the cloud. One-off per-site customized designs are out; standardized re-usable designs are in.
While you’re at it, consider following Cisco Validated Designs (CVDs) and other identified best practices; doing so is always a good idea. That way, you’re more likely to stay aligned with new designs as technology evolves.
A final consideration is to not only have solid virtualization in place, with matching staff skills, but to work with technologies that facilitate moving Virtual Machines between data centers. Data Center Interconnect can play a role in this. Cisco InterCloud might also be of interest.
A Word about Private Cloud
Private cloud can be a good first step toward public cloud, without the same level of security concerns.
Consider, however, that if all you are doing is standing up some racks of servers, network, and storage, you really are not going very far down the cloud path. Public cloud providers operate a web front end for ordering and deploying services, automated management and other services to keep things running and highly available, and years of learning curve distilled into established, documented processes and procedures that minimize downtime. Do you want to re-invent all that, or would it be better to leverage that external expertise for your business? If you do want to automate your private cloud, you might take a look at Cisco Intelligent Automation for Cloud.
Cloud providers with huge datacenters have huge economies of scale as well: power, cooling, purchasing, staff headcount per server, R&D resulting in automation, etc. They can and must shave pennies of cost per server to stay competitive. Most corporations and governmental entities cannot do that in-house. For some, private cloud will have to be the destination; but to leverage cloud provider cost structures, hybrid or public cloud needs to be where you end up.
What Goes Into the Cloud?
There are at least two big things to think about when deciding what to put into the cloud: latency and security.
Latency needs to be considered. If you are used to operating with users near the datacenter, then depending on where your cloud provider’s instantiation of your servers actually is, you might end up with considerably more latency.
For example, if your company operates around Washington, D.C., with a datacenter in Reston, suddenly shifting your datacenter or key apps to Texas or California is not a no-brainer. Such a move could make an application slow or unusable, because higher latency exposes sloppy coding, application constraints, file system issues, and, in general, chatty ping-pong network behaviors.
Preparing for this might involve testing applications using a latency and error injecting tool in the development lab, or testing with Virtual Machine instances running components of the application in a remote cloud location.
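Before you even fire up a latency injector, the arithmetic is worth doing: each application-level request/response turnaround pays one full round trip, so chatty transactions degrade linearly with RTT. A minimal sketch, where all the numbers are illustrative assumptions:

```python
# Estimate transaction time as round trips x RTT plus server processing.
# All inputs are illustrative assumptions, not measurements.

def transaction_time_ms(round_trips: int, rtt_ms: float, processing_ms: float) -> float:
    """Wall-clock time for a chatty transaction: every application-level
    request/response turnaround pays one full network round trip."""
    return round_trips * rtt_ms + processing_ms

# A "chatty" app doing 200 small queries per user action:
local = transaction_time_ms(200, rtt_ms=2, processing_ms=100)   # datacenter nearby
cloud = transaction_time_ms(200, rtt_ms=40, processing_ms=100)  # cloud region far away

print(f"local: {local:.0f} ms, cloud: {cloud:.0f} ms")  # 500 ms vs 8100 ms
```

A sub-second transaction becoming an eight-second one is exactly the sort of surprise lab testing with injected latency is meant to catch early.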
A related approach is to “put the pod into the cloud.” By that, I mean that delivering an application or service these days generally requires a bunch of cooperating servers. If you put some but not all of them into the cloud (think “far away”), ugly things may come out of the woodwork. Separating apps from their database back end is generally going to lead to S.L.O.W. (Serious Latency, Outcome = Waiting). Some application architectural thinking is required there: which services need to be replicated in the cloud? Or will the application need re-architecting?
If you’re doing data replication, particularly synchronous replication, or vMotion, you also need to be aware of latency. For example, someone recently suggested a design with Layer 2 Datacenter Interconnect between North America and England for long distance vMotion. With vSphere 6, that might now work (150 msec max end-end round-trip latency). Before that was announced, it would not work.
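You can sanity-check designs like that against physics. Light in fiber propagates at roughly two-thirds the speed of light in vacuum, and real routes add detours, device delay, and queuing on top of that best case. A rough sketch, where the distances are approximate assumptions:

```python
# Best-case round-trip time over fiber, ignoring device and queuing delay.
# Light in glass propagates at roughly 2/3 the speed of light in vacuum.

C_KM_PER_MS = 300.0   # speed of light in vacuum, km per millisecond
FIBER_FACTOR = 0.67   # typical velocity factor for optical fiber

def fiber_rtt_ms(path_km: float) -> float:
    """Lower-bound RTT for a given one-way fiber path length."""
    one_way_ms = path_km / (C_KM_PER_MS * FIBER_FACTOR)
    return 2 * one_way_ms

# Approximate great-circle distances (assumed; real fiber routes are longer):
print(f"DC to London : {fiber_rtt_ms(5900):.0f} ms")   # well under 150 ms
print(f"DC to Sydney : {fiber_rtt_ms(15700):.0f} ms")  # over 150 ms even best-case
```

So transatlantic long-distance vMotion has physics headroom under a 150 msec budget; transpacific, even the speed of light says no.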
Security is also a consideration, obviously. That’s the next section.
Security

My favorite scary cloud story is the one where a hacker gained full admin privileges (perhaps by social engineering) and then deleted a virtual company’s server instances and backups from Amazon. Gone in a flash, out of business, game over!
My conclusion: yes, you have to think about what you’re doing. You’ll need to properly secure privileged credentials, raise the bar by using certificates for authentication, and so on. Cloud server instances and data stores are comparable to corporate servers exposed to the Internet, except that the cloud provider’s admins may also have access and other privileges.
One added factor is that the administrative accounts for cloud servers, storage, and backup ought to be different, with carefully guarded credentials, perhaps involving different personnel. (Think disgruntled ex-employee who is about to become a felon: how much could that one person damage?)
I’m not about to attempt to tell you how to do cloud security in 300 words or less. Obviously it’s something you need to research and think carefully about. Google “cloud security book” to get started. Meanwhile, here are some security considerations for a public cloud transition:
- How do you protect your cloud vendor admin accounts? Who can create/delete stuff?
- How do you protect your server instances?
- How do you protect your data in transit, or at rest?
- Do you trust and/or verify vendor-based storage encryption?
- Have you checked security ratings or other indicators of strong cloud vendor processes, security controls, and segmentation?
- Does the cloud provider meet audit and compliance standards that your business requires?
I should say “FedRAMP” here. FedRAMP is a standardized federal security assessment process. Even if you’re not a U.S. government shop, that level of certification can be reassuring. As far as liability goes, only your legal staff knows for sure.
Lesson learned from the above scary story: separate your backup from your server instances and active data stores. Don’t have a single point of hardware, provider, admin access, or other failure.
Having said that, be aware that the “Hotel California” effect may apply: it can be costly to export your data from the cloud, so plan for that up front.
Cloud Readiness Steps and Skills
Getting started means thinking about design issues and building cloud skills. Some common steps:
- Build a private cloud
- Consider shifting email to Google or Microsoft, or shifting desktop software hassles to Office 365 or Google Apps, and support mobile users better at the same time!
- Leverage other cloud-based services, e.g. Lands’ End Business Outfitters for company store logo clothing, etc.
- If your firm is large, explore automated service provisioning, taking major cloud providers as examples of what can be done.
- Put the private cloud in colocation space instead of your main datacenter
- Pilot decentralized Internet and cloud-based user security services. Cisco, for example, has Cisco Cloud Web Security (formerly ScanSafe). Other vendors also have a growing presence in this product space.
- Start doing some work in the cloud (dev or low risk)
- Hire cloud-savvy developers
- Learn the traffic patterns of your apps (yes, developers can think about networking and latency, or hire consultants who can do that). Look for low-hanging fruit (apps that are Cloud Ready)
- Reconsider your WAN, start working with IWAN, DMVPN, etc.
- Standardize sites
- Align your network with CVDs and Best Practices.
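On the traffic-patterns step above: a simple starting point is counting client/server turnarounds in a timestamped capture export and multiplying by the candidate cloud RTT. The event format below is a made-up stand-in for whatever your capture tool actually exports:

```python
# Estimate added latency for an app transaction moved to the cloud by
# counting request/response turnarounds in a capture. The direction-event
# list is a made-up stand-in for real capture-tool output.

def count_round_trips(events):
    """Count client->server direction flips; each one pays a full RTT."""
    trips = 0
    prev = None
    for direction in events:  # each event is 'c2s' or 's2c'
        if direction == 'c2s' and prev == 's2c':
            trips += 1
        prev = direction
    return trips + (1 if events else 0)  # the opening c2s burst also pays one RTT

# A transaction with three request/response exchanges:
trace = ['c2s', 's2c', 'c2s', 's2c', 'c2s', 's2c']
rtts = count_round_trips(trace)
extra_ms = rtts * 38  # assumed extra RTT (ms) to a distant cloud region
print(rtts, extra_ms)
```

Apps with few round trips per transaction are your low-hanging, cloud-ready fruit; apps with hundreds are re-architecture candidates.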
Consider your organizational structure. It helps if developers, server admins, network staff, security staff, and storage staff are all communicating. Some now favor the DevOps approach. Some practitioners of agile and DevOps reshuffle staff so that teams consist of people with cross-training. In such a team, one person might be the most network-savvy, another strong on the hypervisor side of things, and so on. Doing that may dilute specialized expertise but may lead to a better team effort. Think of a basketball team with one star versus a team of very good players with no stand-out. (Maryland women’s basketball team, Final Four 2015.)