Cisco Blogs



Is Your Network Cloud Ready?

Is your network cloud ready? We at NetCraftsmen, a Cisco Gold Partner, are hearing this question more often. Let’s discuss how to tell if your network is cloud ready, and how to get there if you’re not. Even if your organization already has a public cloud presence, I hope you’ll find some ideas in the following material.


Why Cloud?

Good question! Some businesses view public cloud services as a hassle or security issue waiting to happen. Still, there are a number of good business reasons for using the cloud:

  • Get top tier datacenter facilities, security, and practices faster and cheaper than doing it in-house
  • Stop owning and operating a datacenter if that is not a core skill for your company
  • Inexpensive and rapid software / service development environment
  • Develop cloud-based highly scalable applications
  • Rapid deployment of additional servers or platforms
  • Rapid scaling up of compute or storage capacity to handle massive amounts of data or customer interactions
  • Enable competing in the Internet of Things / Internet of Everything
  • Have a Disaster Recovery site or “availability zones” without setting up a second datacenter, or by cloning cloud resources

It’s also true that there are legitimate concerns about public cloud.

Yes, a transition to public cloud requires some new skills, some network re-design, and some new legal and security considerations. Why not start mastering the skills and reducing the barriers now?

Infrastructure

What do you need to do to your current infrastructure to be ready to implement a cloud solution?

One big design factor is WAN transport. Many organizations use MPLS, perhaps dual-provider MPLS or MPLS with DMVPN backup, as their WAN connection between sites. Data and servers are now mostly located in a few datacenters, although organizations with many sites on slow, cost-constrained WAN links (banks, for example) may still leave directory, print, and file services in branches. And there are other exceptions.

That works fine with private datacenters and private cloud.

With public cloud, the WAN transport may change. If you love MPLS, you may be able to find an MPLS provider who can connect directly to the cloud provider. That would, for a fee, provide private access to your cloud server instances. The common alternative is to instead use the Internet or IPsec VPN for access.

This relates to how your users currently access the Internet. If your users use the MPLS WAN to reach a designated datacenter, then pass through a robust security edge to the Internet, you are doing centralized Internet. That is efficient in terms of edge security controls and devices to acquire and manage. It is inefficient in that you are backhauling remote site traffic to the main site, meaning you require a bigger WAN pipe and a bigger Internet pipe. For public cloud, that extra hop adds latency.

Decentralized Internet is where each remote site has local Internet access. With decentralized Internet, your users either use distributed security / anti-malware devices, or cloud-based anti-malware (DLP, web site control, etc.) services.

There’s more than one right answer here. Having local Internet is becoming increasingly popular, and Cisco has recognized this with the whole IWAN set of features.

Local Internet can be leveraged for DMVPN as backup to an MPLS WAN. And with the Cisco CSR virtual router, you can put one or more virtual ASR1K (CSR) routers into the cloud to act as DMVPN hubs. Amazon even makes that easy for you to try: Amazon CSR (Nearly) Free Trial.

Using the CSR in the cloud also keeps WAN access to the public cloud under your control, rather than requiring change request interaction with the cloud provider. Cisco IWAN also can be an enabler for using dual ISPs at all remote sites above a certain size.

Another thing you can do for your infrastructure is to create documented standard site designs (e.g. small, medium, large, datacenter). That will cut your support costs, reduce the mean time to repair when something goes awry, prepare you for automation (that’s my excuse to mention SDN here), and simplify pilot trials and planning to use the cloud. One-off per-site customized designs are out; standardized re-usable designs are in.

While you’re at it, consider following Cisco Validated Designs (CVDs) and other identified Best Practices; that’s always a good idea, and it makes you more likely to stay aligned with new designs as technology evolves.

A final consideration is to not only have solid virtualization in place, with matching staff skills, but to work with technologies that facilitate moving Virtual Machines between data centers. Data Center Interconnect can play a role in this. Cisco InterCloud might also be of interest.

A Word about Private Cloud

Private cloud can be a start on public cloud, without the same level of security concerns – a good first step.

Consider, however, that if all you are doing is standing up some racks of servers, network, and storage, you really are not going very far down the cloud path. Public cloud providers offer a web front end for ordering and deploying services, automated management to keep things running and highly available, and years of learning curve distilled into established, documented processes and procedures that minimize downtime. Do you want to re-invent all that, or would it be better to leverage that external expertise for your business? If you do want to automate your private cloud, you might take a look at Cisco Intelligent Automation for Cloud.

Cloud providers with huge datacenters have huge economies of scale as well: power, cooling, purchasing, staff headcount per server, R&D resulting in automation, etc. They can and must shave pennies of cost per server to stay competitive. Most corporations and governmental entities cannot do that in-house. For some, private cloud will have to be the destination; but to leverage cloud provider cost structures, hybrid or public cloud needs to be where you end up.

What Goes Into the Cloud?

There are at least two big things to think about as to what to put into the cloud:

  • Latency
  • Security

Latency needs to be considered. If you are used to operating with users near the datacenter, then depending on where your cloud provider’s instantiation of your servers actually is, you might end up with considerably more latency.

For example, if your company operates around Washington, D.C., with a datacenter in Reston, suddenly shifting your datacenter or key apps to Texas or California is not a no-brainer. Such a move could result in an application becoming slow or unusable, because higher latency exposes sloppy coding, application constraints, file system issues, and, in general, chatty ping-pong network behavior in applications.

Preparing for this might involve testing applications using a latency and error injecting tool in the development lab, or testing with Virtual Machine instances running components of the application in a remote cloud location.
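One low-cost way to approximate latency injection in a development lab is to wrap application calls with an artificial delay. This is only a sketch: `fetch_record` is a hypothetical stand-in for a real application call, and the RTT figure is a placeholder you would replace with a measured value.

```python
import functools
import time

def with_injected_latency(rtt_seconds):
    """Decorator that adds a simulated WAN round trip to each call."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            time.sleep(rtt_seconds)  # one full round trip per request
            return func(*args, **kwargs)
        return wrapper
    return decorator

@with_injected_latency(rtt_seconds=0.070)  # ~70 ms, e.g. East Coast to Texas
def fetch_record(record_id):
    # Stand-in for a real application call (database query, API request, ...)
    return {"id": record_id}

start = time.monotonic()
for i in range(10):
    fetch_record(i)
elapsed = time.monotonic() - start
print(f"10 sequential calls took {elapsed:.2f}s")  # ~0.70s of pure latency
```

A dedicated latency/error injection appliance or a WAN emulator gives more realism (jitter, loss), but even a crude wrapper like this surfaces chatty call patterns quickly.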

A related approach is to “put the pod into the cloud.” By that, I mean that delivering an application or service these days generally requires a bunch of cooperating servers. If you put some but not all of them into the cloud (think “far away”), ugly things may come out of the woodwork. Separating an application from its database is generally going to lead to S.L.O.W. (Serious Latency, Outcome = Waiting). Some application architectural thinking is required here: which services need to be replicated in the cloud? Or will the application need re-architecting?
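The arithmetic behind S.L.O.W. is unforgiving. As an illustration (the call counts and RTTs below are made up, but typical), a page that quietly issues hundreds of sequential database round trips goes from imperceptible to painful once a WAN sits in the middle:

```python
def page_latency_seconds(round_trips, rtt_ms):
    """Time a user waits on network round trips alone (ignores server work)."""
    return round_trips * rtt_ms / 1000.0

round_trips = 300  # a chatty app page: one query per row, per widget, etc.
lan = page_latency_seconds(round_trips, rtt_ms=0.5)   # app next to its database
wan = page_latency_seconds(round_trips, rtt_ms=40.0)  # app split from its database

print(f"LAN: {lan:.2f}s  WAN: {wan:.2f}s")  # LAN: 0.15s  WAN: 12.00s
```

The fix is architectural, not a bigger pipe: batch the queries, move the whole pod, or re-architect so the chatty conversation stays local.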

If you’re doing data replication, particularly synchronous replication, or vMotion, you also need to be aware of latency. For example, someone recently suggested a design with Layer 2 Datacenter Interconnect between North America and England for long distance vMotion. With vSphere 6, that might now work (the supported maximum is 150 msec end-to-end round-trip latency). Before vSphere 6 was announced, it would not have worked.
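A quick sanity check along these lines might look as follows. The 150 ms figure is the vSphere 6 long-distance vMotion limit mentioned above; the headroom value and sample RTTs are my own illustrative assumptions.

```python
VSPHERE6_VMOTION_MAX_RTT_MS = 150  # vSphere 6 long-distance vMotion limit

def vmotion_feasible(measured_rtt_ms, headroom_ms=20):
    """Leave headroom below the hard limit to absorb jitter and congestion."""
    return measured_rtt_ms + headroom_ms <= VSPHERE6_VMOTION_MAX_RTT_MS

print(vmotion_feasible(80))   # e.g. US East Coast to England: True
print(vmotion_feasible(160))  # e.g. a longer trans-oceanic path: False
```

Measure the RTT on the actual DCI path at busy hour, not just a quiet-time ping, before committing to such a design.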

Security is also a consideration, obviously. That’s the next section.

Security

My favorite scary cloud story is the one where a hacker gained full admin privileges (perhaps by social engineering) and then deleted a virtual company’s server instances and backups from Amazon. Gone in a flash, out of business, game over!

My conclusion: Yes, you have to think about what you’re doing. You’ll need properly secured privileged credentials, and you should raise the bar by using certificates for authentication, etc. Cloud server instances and data stores are comparable to corporate servers exposed to the Internet, except that cloud admins may also have access and other privileges.

One added factor is that the administrative accounts for cloud servers, storage, and backup ought to be different, with carefully guarded credentials, perhaps involving different personnel. (Think disgruntled ex-employee who is about to become a felon: how much could that one person damage?)

I’m not about to attempt to tell you how to do cloud security in 300 words or less. Obviously it’s something you need to research and think carefully about. Google “cloud security book” to get started. Meanwhile, here are some security considerations for a public cloud transition:

  • How do you protect your cloud vendor admin accounts? Who can create/delete stuff?
  • How do you protect your server instances?
  • How do you protect your data in transit, or at rest?
  • Do you trust and/or verify vendor-based storage encryption?
  • Have you checked security ratings or other indicators of strong cloud vendor process and other security controls and segmentation?
  • Does the cloud provider meet audit and compliance standards that your business requires?

I should say “FedRAMP” here. FedRAMP is a standardized federal security assessment process. Even if you’re not a U.S. government shop, that level of certification might be reassuring. As far as liability goes, only your legal staff knows for sure.

Backup

Lesson learned from the above scary story: separate your backup from your server instances and active data stores. Don’t have a single point of hardware, provider, admin access, or other failure.

Having said that, be aware that the “Hotel California” effect may apply: it can be costly exporting your data from the cloud. So you need to think about that.

Cloud Readiness Steps and Skills

Getting started means thinking about design issues and building cloud skills. Some common steps:

  • Build a private cloud
  • Consider shifting email to Google or Microsoft, or shifting desktop software hassles to Office365 or Google Apps, and support mobile users better at the same time!
  • Leverage other cloud-based services, e.g. Lands End Business Outfitters for company store logo clothing, etc.
  • If your firm is large, explore automated service provisioning, taking major cloud providers as examples of what can be done.
  • Put the private cloud in colocation space instead of your main datacenter
  • Pilot decentralized Internet and cloud-based user security services. Cisco, for example, has Cisco Cloud Web Security (formerly ScanSafe). Other vendors also have a growing presence in this product space.
  • Start doing some work in the cloud (dev or low risk)
  • Hire cloud savvy developers
  • Learn the traffic patterns of your apps (yes, developers can think about networking and latency, or hire consultants who can do that). Look for low-hanging fruit (apps that are Cloud Ready)
  • Reconsider your WAN, start working with IWAN, DMVPN, etc.
  • Standardize sites
  • Align your network with CVDs and Best Practices.

Consider your organizational structure. It helps if developers, server admins, network staff, security staff, and storage staff are all communicating. Some now favor the DevOps approach. Some practitioners of agile and DevOps reshuffle staff so that teams consist of people with cross-training. In such a team, someone might be the most network savvy person on the team, another strong at the hypervisor side of things, etc. Doing that may reduce specialized expertise but may lead to a better team effort. Think basketball team with one star versus a team with very good but no stand-out player. (Maryland women’s basketball team, Final Four 2015.)


#CiscoChampion Radio S2|Ep 13. 4G

#CiscoChampion Radio is a podcast series by Cisco Champions as technologists. Today we’ll be talking about 4G with Cisco Systems Engineer, Cellular Wireless Technology, David Mindel.

Listen to the Podcast.

Learn about the Cisco Champions Program HERE.
See a list of all #CiscoChampion Radio podcasts HERE.
Learn when the next round of Cisco Champions candidate nominations will be accepted. Email us HERE.

Cisco SMEs
David Mindel, Systems Engineer, Cellular Wireless Technology

Cisco Champion Guest Hosts
Chris Hildebrandt, @childebrandt42, Network & Collaboration Engineer
Michael O’Nan, @Michael_Onan, Network & Collaboration Engineer

Moderator
Lauren Friedman (@Lauren)

Highlights
What is 4G LTE?
How it relates to IWAN
Use cases for 4G
Hardware used with 4G
Managing incoming connections on 4G
Wow features
Using 4G for last mile network access
4G and geo-fencing


High performance backup storage: Cisco UCS C3160

I don’t know about you, but the thought of using a “server” as a “backup storage” resource may sound a bit odd at first. After this post, you may change your tune. Let’s dig into this a bit.

I’m sure you’ve heard of the Cisco UCS Unified Computing line of servers and their associated Fabric Interconnect technologies. Additionally, you may know that there are M-Series, B-Series and C-Series form factors for the various configuration options that are in high demand for the modern data center. Which reminds me, you should check out this PDF poster of all of the current UCS components; it is my go-to resource to see how the different UCS offerings can be arranged and interconnected.

So let’s zoom in on the Cisco UCS C3160. It has a few key specifications that caught the interest of a number of keen architects in my extended professional network, which led to the notion of deploying the C3160 as a high performance, high capacity backup storage system. The most interesting specification is that the C3160 can hold up to 60 large form factor drives, with two additional small form factor SSD drives for the boot volume. That means all 60 drives can be used as a backup storage repository. RAID levels are available on this configuration as well; in particular, the Cisco 12G SAS Modular RAID controller supports RAID levels 0, 1, 5 and 6. I’d recommend RAID level 6 for a storage resource this large, given the drive capacity (up to 4 TB each), the sheer number of drives, and the resulting rebuild times, and keep some hot spares in place. That said, there is easily over 200 TB available for backup storage in one C3160 server. Let’s take the following figure:


The C3160 provides large amounts of backup storage with excellent connectivity
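The “over 200 TB” figure is easy to sanity-check. With RAID 6 costing two drives’ worth of capacity for parity, plus a couple of hot spares (the spare count here is my assumption, not a C3160 requirement):

```python
def raid6_usable_tb(total_drives, drive_tb, hot_spares=2):
    """Usable capacity of a RAID 6 configuration after parity and spares."""
    data_drives = total_drives - hot_spares - 2  # RAID 6: two parity drives
    return data_drives * drive_tb

print(raid6_usable_tb(60, 4))  # 224 TB usable from 60 x 4 TB drives
```

Even with spares set aside, that comfortably clears the 200 TB mark in a single server.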



#CiscoChampion Radio S2|Ep 12. 802.11ac Wave 2

#CiscoChampion Radio is a podcast series by Cisco Champions as technologists. Today we’ll be talking about 802.11ac Wave 2 with Cisco Product Marketing Engineer Mark Denny and Cisco Product Marketing Manager Allen Huotari.

Listen to the Podcast.

Learn about the Cisco Champions Program HERE.
See a list of all #CiscoChampion Radio podcasts HERE.

Cisco SMEs
Mark Denny, Cisco Product Marketing Engineer
Allen Huotari, Cisco Product Marketing Manager

Cisco Champion Guest Hosts
Stewart Goumans, @WirelessStew, Mobility Consultant
Sam Clements, @samuel_clements, Mobility Practice Manager

Moderator
Lauren Friedman (@Lauren)

Highlights
What is 802.11ac Wave 2, and why is it important?
Multi-user MIMO
Wave 2 and backwards compatibility
Troubleshooting Wireless Networks and Wave 2
Infrastructure and Power Requirements with Wave 2
AC toolbox recommendations
Wave 2 and disabling NCS rates


4+1 Practices for Effective Lifelong IT Learning (Part 1)

The debate of what we should be learning seems to be a more frequent topic today. For instance, there’s been a long-standing question for each new networker: after learning a little about routing and switching, does a relative newbie dive deep into route/switch? Move on to learn voice? Or security? Data Center? Or for emerging technologies like SDN, should we learn SDN as defined by the Open Networking Foundation, or ACI, or both? Should we build programming skills to become network programmers, or programming for network automation, or stick with traditional config/verify/troubleshooting skills?

So we can talk to coworkers and discuss/argue about what technologies we should learn… but then we all seem to agree that learning throughout our careers is hugely important. (In fact, the day I was wrapping up this blog post, the Cisco Champion podcast included several people making that very same point, in agreement.) And then we stop talking, because we all agree that learning is important, and we never get around to discussing how to learn effectively.

Our long-term career prospects depend in part on learning about existing and emerging technology. But how good are our learning skills? Are we happy with the results? How can we get better at learning?

Today’s post begins a 2-part post that offers a top 4+1 list of answers to that last question: how do we get better at learning? Rather than just agreeing that learning is important and moving on, let’s treat the process of learning as an important process, and learn how to do it better.
