Where Virtualization Does and Doesn’t Make Sense – An Optimization Primer

July 29, 2014 - 2 Comments

Written by Wayne Cullen, Senior Manager, Service Provider Architectures

Along with cloud computing, M2M, collaboration, and hoodie sweatshirts, virtualization is a trend du jour. Like all trends, it’s based on an old idea (dating back to the mainframe era) that has now been reimagined for new purposes. One of the newest roles for virtualization is handling network functions—such as those in switches, routers, and network appliances, including firewalls and load balancers—thanks to Network Functions Virtualization (NFV). And this is just the beginning of what is going to be virtualized in your network.

Being a Selective Virtualizer

Virtualization can provide big cost savings and reduce network complexity. But virtualization is like chocolate: eat too much and bad things happen. The early days of virtualization, when servers were virtualized, provide a cautionary tale. Server virtualization lowered CapEx but sent operational costs skyrocketing, because it demanded much more complex processes—and hence more highly skilled staff.

The lesson: Be selective in virtualizing your resources and functions. And focus on optimizing your network to lower TCO, building a flexible, adaptable infrastructure as part of your virtualization efforts.

How and Where to Optimize Your Network for Virtualization

  • Reduce complexity through automation and orchestration. That will go a long way towards speeding your operations, enhancing your service agility, and lowering your OpEx. Judge the merits of any virtualization solution you deploy based on its ability to reduce overall complexity.
  • Avoid virtualizing in the wrong places. Good places to virtualize: Functions with high computing needs and low to modest networking performance requirements (e.g., IMS and DNS). Where virtualization makes less sense: Network functions with high-performance networking requirements (e.g., high bandwidth load, low latency, high predictability such as those in core data center switching and WAN backbone routing).
  • Put virtualized functions where they best belong. A latency-sensitive HD video feed might be best located close to the customer for the highest quality and performance, while a widely distributed, popular HD video feed might be best placed centrally where storage is cheaper. Balance cost efficiency with scalability, customer experience, and infrastructure agility when deciding where to locate a virtualized function.
  • Evaluate physical deployment options. Should virtualized functions run on x86 rack or blade servers in data centers, central offices, points of presence, or at customer sites? A careful end-to-end analysis of the performance benefits and costs is crucial when deciding where to place virtualized functions on physical hardware. The optimal infrastructure will support all of these choices.
  • Leverage virtualization for capacity planning. Be prepared for unpredictable capacity demands by designing a more flexible, agile infrastructure. Virtualization applied correctly can scale upward and downward to better use server and energy resources.
  • Tie multiple functions together with orchestration. The right orchestration solution will automatically respond to application and service requirements, applying functions in the correct order, with the correct CPU, storage, and network capacity, and in the optimal locations.
  • Lower CapEx with better resource utilization. Properly applied on general-purpose servers, with automation and greater intelligence, virtualization raises resource utilization and reuse, which in turn lowers capital budgets.
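The placement trade-off described above—customer proximity versus central cost efficiency—can be sketched as a simple weighted score. This is a minimal illustrative model, not anything from the whitepaper: all site names, costs, and weights below are invented assumptions.

```python
# Hypothetical placement model: score candidate sites for a virtualized
# function by blending customer latency against infrastructure cost.
# All names, numbers, and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    latency_ms: float    # round-trip latency to the customer
    storage_cost: float  # relative cost per unit of storage
    compute_cost: float  # relative cost per unit of compute

def placement_score(site: Site, latency_weight: float, cost_weight: float) -> float:
    """Lower is better: weighted sum of latency and infrastructure cost."""
    return (latency_weight * site.latency_ms
            + cost_weight * (site.storage_cost + site.compute_cost))

def best_site(sites, latency_weight: float, cost_weight: float) -> Site:
    return min(sites, key=lambda s: placement_score(s, latency_weight, cost_weight))

sites = [
    Site("customer-edge", latency_ms=5.0,  storage_cost=8.0, compute_cost=6.0),
    Site("central-dc",    latency_ms=40.0, storage_cost=2.0, compute_cost=2.0),
]

# A latency-sensitive live feed weights latency heavily and lands at the edge;
# a cost-sensitive archive weights cost heavily and lands in the central DC.
print(best_site(sites, latency_weight=2.0, cost_weight=0.1).name)  # customer-edge
print(best_site(sites, latency_weight=0.1, cost_weight=2.0).name)  # central-dc
```

In practice the score would also fold in bandwidth, energy, and scaling headroom, but even this toy version shows why there is no single right location: the answer follows from the weights each service places on experience versus cost.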

Happy virtualizing!

For more on the topic, read and share this new brief whitepaper.



  1. You said “The early days of virtualization (when servers were virtualized) provide a cautionary tale. Server virtualization lowered CapEx but led to skyrocketing operational costs because much more complex processes – hence highly-skilled staff – were required.”

    Apparently, the other reason why server optimization via virtualization can experience diminishing returns over time is that some of the significant initial savings (from eliminating excess hardware, etc.) can’t be sustained through ongoing use of the software. Plus, rising software license renewal fees for traditional virtualization further reduce the potential for incremental savings — unless you switch to an open source hypervisor with a subscription model.

    Therefore, I’m wondering, are the savings from network virtualization likely to be subject to the same sort of diminishing returns over time?

    • Thank you for your comment, David, and for highlighting another very important consideration in the overall costs faced by service providers. That is probably a topic worthy of a paper in itself.

      Briefly though:
      There are two main aspects: virtualization software licensing and network function software licensing.
      The former has already caused some concern because it becomes a significant cost burden at scale.

      I think the licensing of software elements will shake out over time as producers and consumers of them converge on their true value.

      In some use cases but not all, the virtualization software (hypervisor) will get eliminated because operators will need to use bare metal for performance and predictability reasons.

      Open source has a cost for operators too – someone has to integrate it, maintain it, and likely continue to develop it to suit the operator’s evolving needs.

      Once this all matures though, the net economic benefit just due to increased flexibility, potential for higher resource utilization, and potential for power savings should be very positive.

      The message for operators is to look closely at the whole economic picture, including projections at large scale and over the expected lifetime of the technology and services.