Cisco Blogs

Cisco’s CloudVerse, VCE and the EANTC Cloud Mega Test

May 22, 2012 - 0 Comments


Tom Chatham is a Principal vArchitect with VCE Corporate Engineering, responsible for delivering VCE solutions, customer solution testing, technical marketing events and evangelizing private cloud. He has 16 years of experience in the industry, most of it focused on storage, virtualization and unified computing, including extensive work in network infrastructure, systems architecture and business continuity.

Tom is at EMC World in Las Vegas this week and on Twitter @tchatham – check out booths 410 or 515.

I asked Tom to share his experience and point of view on the EANTC Cloud Mega Test – here is what he sent me:


“Over the past four to five months, there has been significant buzz about VCE’s role in the EANTC Cloud Mega Test.  I was lucky enough to be a part of the test team, and I wanted to share some of my experiences in working on this fantastic project with EANTC and Cisco.

It started with a bang, of course.  Back in late January, Light Reading published their first report on the testing EANTC had done of Cisco’s CloudVerse architecture. I was at Cisco Live London where details of the test were first shared and members of the CloudVerse team were in attendance to share the results. Over the next couple of months, EANTC followed that up with other reports in the series.  All in all, they covered the Cisco Unified Data Center that is the foundation for cloud services, Cloud Intelligent Networks, Cloud Applications & Services, and Long-haul Optical Transport used in delivering cloud-based services.  Of course, I wasn’t involved in all of that.

As with all of the Mega Test programs (the Mobile Mega Test and Medianet Mega Test being the ones that Light Reading conducted previously), these programs are a big deal.  Cisco spends millions of dollars – literally – on lab infrastructure, engineers and communications for each one of these tests.  Light Reading has EANTC come in to provide independent, objective oversight and testing.  And when the report comes out, there is a lot of buzz in the industry on exactly what went on.  It’s not every day we get to play in a multi-million dollar sandbox!  I was one of several dozen people from Cisco, VMware, VCE, EMC and Ixia working on this project.

As the buzz about the test bounced around the industry, a sidebar conversation emerged about VCE’s involvement in the test. As you may know from social media, I’m a Principal vArchitect with VCE Corporate Engineering.  Essentially, my job is to make sure that customers get the most out of VCE’s technology – Vblock™ Systems.  The Vblock system is pre-engineered, pre-tested converged infrastructure that combines Cisco’s computing and networking equipment, EMC’s storage equipment, and virtualization from VMware.  VCE itself operates as a joint venture between Cisco and EMC, with investments from VMware and Intel.

One of the things that was missed in the excitement over the test results themselves was the fact that the Vblock system played a big part in the Cloud Mega Test.

Of course, the test wasn’t intended to examine just the Vblock system.  The Mega Tests have always been about putting together all of the moving parts necessary for Service Providers to deliver a specific service to their subscriber base.  This Mega Test focused on the cloud, so the data center, the network, the applications, and the transport were all part of the test.  Still, Cisco needed *something* to act as the data center infrastructure… and what easier way to quickly implement a data center than rolling in a big Vblock Series 700?

I was fortunate enough to be one of the people who got to help Cisco with this.  With the timeline for the tests being as tight as they were, everyone at Cisco wanted to bring in large chunks of technology that they knew would “just work.”  They didn’t want to spend time piecing together all of the components and worrying if a firmware rev. on some element somewhere would unexpectedly wreak havoc at the worst time.  This is exactly the problem that the Vblock system solves.

The system we used for the Cloud Mega Test was a Vblock Series 700 MX, which was an ideal choice because in any test setup there is a massive amount of work to do in a short period of time. Cisco, VCE and EMC lent experts in each of the technologies being tested. Being able to roll in an operational computing platform, already running ESXi, with storage presented and the Nexus 1000V configured, greatly reduced initial setup time. In fact, a lot of the Light Reading content focuses on the solution rather than on the Vblock system, since it has become a foundational element.  Choosing a Vblock Series 700 MX (VMAX-based) instead of a 300 (VNX-based) came down to the fact that VMAX provides a scalability factor that VNX can’t match. In this case, the VMAX was the better choice because it offers up to eight engines, with more cache, more Fibre Channel connectivity and simply more I/O paths than the VNX. VCE has service providers that use both VMAX and VNX, depending on the business model they are trying to support. VMAX typically fits most of their requirements, especially with the recently announced VMAX SP (VMAX Service Provider).  VMAX SP will provide APIs and a pre-defined architecture that fit into an SP’s business model.

VCE has spent a lot of time developing a multi-tenant solution architecture for Vblock systems; I was close to this effort last year and continue to support it from time to time. Cisco’s Cloud Mega Test and VCE’s secure multi-tenant solution provide guidelines for tenant isolation, compliance, governance, auditing, security, logging and QoS, along with a framework that lets the customer choose which pieces they want to implement. One of VCE’s core values is allowing partners to integrate with our platform, giving our customers choice over which products they want to use.

The best part about using Vblock systems for this effort is that we build them in the factory. Our professional services organization is a well-oiled machine, able to take the logical configuration document we created during our planning discussions and apply it to the hardware components. With vCenter and ESXi installed on all the blades, and storage and the Nexus 1000V fully configured, all we do is roll the cabinets into the data center and connect power and network. Integration into the Cloud Mega Test was a breeze, which allowed our team to focus on building the test virtual machines, the vCloud Director environment and the vCloud Connector implementation.

When marketing asked me to comment on the Vblock system setup for the Cloud Mega Test, I realized that I hadn’t had to do much with the hardware, since everything I did was with the VMware stack. This is clearly why VCE is succeeding in the market: we have architected the platform to serve up virtualized workloads, and it excels at it. I know this sounds like fluff, but many of us have spent years as partners delivering solutions where the vendor is trying to solve business problems with a bill of materials. Being the person who has to take a truckload of goods and turn it into a functional solution is time-consuming and nerve-racking. It has been nice to be involved with projects such as the Cloud Mega Test, where we get to focus further up the stack. I have to admit that I didn’t modify the VMAX configuration during the setup; I did, however, fix a problem with another vendor’s storage system.

The VMAX storage array performed exactly as expected – a good thing, but not that exciting a story for readers. I think the takeaway here is that EMC VMAX offers the most horsepower when it comes to storage: up to 2,400 drives, 128 Fibre Channel ports and 1 TB of cache across eight engines. Most storage arrays sold have two storage processors, less cache and a handful of Fibre Channel ports. There is no better way to provide storage to a mixed workload in a multi-tenant environment. With respect to servers, we’ve moved from rack-mount and old-school blade chassis to fabric interconnects, converged 10 Gb Ethernet, and converged network adapters (Ethernet/Fibre Channel) configurable as 2 to 128 virtual interfaces. VCE employs a structured architecture and a validated support matrix, backed by a dedicated support organization. Today, up to eight UCS chassis are supported in a Vblock system, which means up to 64 blades can be managed through one interface.
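For a rough sense of scale, the headline figures above can be broken down per engine and per chassis. This is only a back-of-the-envelope sketch: it assumes an even split across a fully populated eight-engine VMAX and eight blades per UCS chassis (the eight-blade figure follows from the eight-chassis, 64-blade maximum mentioned above).

```python
# Back-of-the-envelope scale figures for the Vblock Series 700 platform
# described above. Assumes an even split across a fully populated
# eight-engine VMAX and eight-blade UCS chassis (illustrative only).

VMAX_ENGINES = 8
VMAX_MAX_DRIVES = 2400      # up to 2,400 drives
VMAX_FC_PORTS = 128         # Fibre Channel ports
VMAX_CACHE_GB = 1024        # 1 TB of cache

UCS_CHASSIS = 8             # chassis per Vblock system
BLADES_PER_CHASSIS = 8      # 8 chassis x 8 blades = 64 blades total

drives_per_engine = VMAX_MAX_DRIVES // VMAX_ENGINES   # 300 drives
ports_per_engine = VMAX_FC_PORTS // VMAX_ENGINES      # 16 FC ports
cache_per_engine_gb = VMAX_CACHE_GB // VMAX_ENGINES   # 128 GB cache

total_blades = UCS_CHASSIS * BLADES_PER_CHASSIS       # 64 managed blades

print(f"Per VMAX engine: {drives_per_engine} drives, "
      f"{ports_per_engine} FC ports, {cache_per_engine_gb} GB cache")
print(f"UCS domain: {total_blades} blades behind one management interface")
```

Even split per engine, each VMAX engine still carries more drives, ports and cache than many dual-controller midrange arrays offer in total, which is the point of the comparison above.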

Many of the Cisco engineers on the Mega Test project were involved with the Vblock system before VCE was formed and Vblock became a product. I have been with VCE for a year and a half; about six months before that, Cisco built one of the very first Vblock systems in RTP. A lot of fantastic stuff comes from these marketing efforts. Watch closely as much of the CloudVerse Mega Test turns into products from Cisco, VCE or your favorite Service Provider.

Being part of the EANTC Cloud Mega Test was a great experience.  Not only did I get to work with a lot of high-end technology, but I worked with a bunch of amazing people too.  The folks at EANTC are incredibly bright, and they didn’t cut Cisco any slack on the test plan.  The Cisco folks were super, too.  It’s amazing just how broad a solution portfolio they have at Cisco.  They have literally everything that they need to build a public cloud infrastructure.

The best part was that this whole experience confirmed for me just how valuable the Vblock system can be for our customers.  Without the pre-testing and validation that we do here at VCE, there is no way we would have been able to pull off this test plan.  I would probably still be in the Cisco labs, tweaking some setting somewhere to work out just one more kink.  But the Vblock system came through with flying colors in a really rigorous environment.

I’m really hoping that Cisco does more of these Mega Test programs, because I’d love to do it again.  If you happen to be at EMC World this week, swing by booth 410 or 515. I’d be happy to chat about the test further. Leave comments below and suggest what you might want to see tested next.  Perhaps they’ll take our suggestions… who knows!

BTW, if you want to read the official reports, you can find them here:

Best, Tom”

