
Cloud applications want to move

The term “cloud computing” covers many topics.  One hot area is infrastructure as a service (IaaS), where servers and other data center resources are provided on demand over the web and through a programming interface.  The classic example is Amazon’s Elastic Compute Cloud (EC2).  IaaS first became popular as a way to deliver new applications, often from new companies.  Examples included software-as-a-service companies like Animoto or Mashery, and bursty compute-intensive applications like those at Gigavox and UnivaUD.  More recently, enterprises have been studying IaaS as part of their IT infrastructure, and some companies have begun pilots.  How do enterprises want to use IaaS?  Does it differ from the way new companies first used it?  In this post, I’ll examine a few classic enterprise use cases and draw a few lessons.

In conversations with enterprises and in the trade press, three groups of applications are often cited as near-term uses of today’s IaaS.  I call them Disasters, Development, and Deadlines:

 

  • Disasters:  Critical applications must always be able to run from a second location in case a data center goes offline.  This is routine for some companies; one Miami organization I know plans for 5-10 days per year in ‘disaster’ mode, due to the hurricane and tropical storm warnings that come every summer.  When they adopted virtualization a few years ago, it helped a lot.  In the backup data center, they no longer had to buy an exact copy of the equipment in the main data center — any model of server and storage equipment would run the critical virtual machines.  IaaS is even better — they pay for servers only on the days when they need them.
  • Development:  Enterprises employ teams of programmers and software testers to build applications for use by employees or customers.  In the process, many computers are used temporarily to prepare and test software under development.  A programmer may use a single computer to test a program in isolation, then more machines to test integration with other systems, and finally even more machines to simulate many users.  Using a cloud, the programmer can order up exactly as many machines as required, automatically and without having to coordinate with anyone else’s schedule, as the sketch after this list shows.  That cuts costs and delivers results faster.
  • Deadlines:  Any website may need a boost when a deadline is involved.  For example, there are government websites that publicize grant opportunities and accept grant applications.  Most of the time, these sites get little traffic, with a small number of visitors at any hour.  But when grant applications are due, there can be a crunch of thousands of would-be grantees, each uploading grant documents and filling in online forms before the deadline arrives.  The spike in use is even worse for college scheduling systems, which are lightly used except for a few registration days each semester.  Deadlines show up in enterprise applications as well, from monthly financial reporting to employee reimbursement submissions.  The elasticity of cloud computing — ordering machines when needed and releasing them when finished — improves performance and cuts costs when deadlines are involved.
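
To make that elasticity concrete, here is a minimal sketch of the order-up-and-release pattern using the boto3 Python SDK for Amazon EC2.  The SDK, the AMI ID, the instance type, the count, and the run_load_test() helper are all illustrative assumptions on my part, not details from any provider's documentation.

# Sketch: rent a batch of test machines, use them, then release them.
# Assumes AWS credentials are already configured; IDs and sizes are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# Order exactly as many machines as this test run needs.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image holding the test harness
    InstanceType="m5.large",
    MinCount=8,
    MaxCount=8,
)
for instance in instances:
    instance.wait_until_running()

run_load_test(instances)  # hypothetical helper that drives the actual test

# Release the machines as soon as the run finishes, so billing stops.
for instance in instances:
    instance.terminate()

The same pattern fits the deadline case: add web servers a few days before the crunch and terminate them once the rush is over.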

 

One thing stands out from these stories:  These are current IT applications that benefit from the elasticity of IaaS.  That means a workload that now runs in a corporate data center must also be able to run in a cloud.  “Workload portability” across data centers, and between corporate data centers and public clouds, is a new requirement.  (“Workload mobility,” where an application moves between data centers without stopping, is even more demanding.)

The first public clouds were clumsy for moving workloads; they imposed architectural features that differed from enterprise data centers.  Amazon EC2 offered a layer-3 network without Ethernet features such as broadcast, and without static IP addresses.  EC2 also offered disks that did not persist once an instance was shut down.  IBM’s Blue Cloud initially supported only a limited set of applications.

With time, Amazon EC2 came to offer more general IaaS features, such as static Elastic IP addresses and persistent Elastic Block Store volumes.  Other clouds, such as Flexiscale and Rackspace, also offered IaaS, starting from a more conventional data center model.  Most recently, VMware has offered service providers a cloud package called vCloud Express that makes the cloud environment more closely resemble a VMware-based enterprise data center.  VMware claims to have enrolled 1,000 service providers in its vCloud program, with companies like Savvis, OpSource, and Terremark currently offering services.
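
As an illustration of how two of those early gaps (no static addresses, no persistent disks) came to be filled, here is a sketch that attaches a persistent block-storage volume and a stable Elastic IP address to a running instance.  It again uses the boto3 SDK, and the instance ID, volume size, and availability zone are placeholders I chose for the example.

# Sketch: give a running EC2 instance a persistent disk and a stable public IP.
# The instance ID, size, and availability zone are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Persistent volume: its contents survive instance reboots and replacement.
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100)  # 100 GiB
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(VolumeId=volume["VolumeId"],
                  InstanceId="i-0123456789abcdef0",
                  Device="/dev/sdf")

# Stable address: an Elastic IP belongs to the account, not to one instance.
address = ec2.allocate_address(Domain="vpc")
ec2.associate_address(InstanceId="i-0123456789abcdef0",
                      AllocationId=address["AllocationId"])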

Today most IaaS providers are pursuing workload portability.  They also advertise “simplicity.”  When it comes to disasters, development, and deadlines, I think “simple” will mean “most similar to conventional data centers,” so that today’s data center managers can use these clouds with little new training and with minimal changes to their applications.

The devil will be in the details.  As I mentioned in an earlier post, several network services that are conventional in data centers, such as load balancers and firewalls, are also needed for cloud computing, but they vary from one cloud provider to another.  In addition, subtle details about storage (“Can I make a snapshot of a running system?”) or compute (“Can I add RAM between reboots?”) may also be important to IT customers of cloud services.
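
As one example of how such a storage detail can be probed, here is a sketch that snapshots the disk of a system while it keeps running, again using the boto3 SDK with a placeholder volume ID.  Whether that snapshot is application-consistent without first quiescing the filesystem is exactly the kind of provider-specific fine print a data center manager would need to check.

# Sketch: snapshot the volume of a running system and wait for it to finish.
# The volume ID is a placeholder; consistency guarantees differ by provider.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Live snapshot taken while the instance keeps running",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])
print("Snapshot ready:", snapshot["SnapshotId"])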

Cisco takes a keen interest in workload portability and mobility as a basic element of its long-term cloud vision.  Cisco equipment is designed for both conventional and cloud data centers; as enterprises shift applications to take advantage of clouds and IaaS, they can expect to see compatible network services wherever they move.
