I could not have asked for a better start to the New Year. 2014 was quick off the starting blocks, with January already setting the tone for the rest of the year as momentum continues to pick up in data center and cloud networking. Here are some highlights of the state of the business, new product introductions, and additions to the ACI partner ecosystem:
ACI customer traction continues to get stronger; Ecosystem continues to add new members
Cisco Live! in Milan provided first-hand evidence of the strong interest from customers in learning more about Cisco’s Application Centric Infrastructure (ACI). Customer briefing sessions were packed, and demos in the World of Solutions drew strong interest. Hundreds of customers engaged in proofs of concept in just the first 30 days. Select customers have been provided the Application Policy Infrastructure Controller (APIC) simulator to give them early exposure and help harden the APIC ahead of its general availability, targeted for Q2 this year.
The open architecture of the APIC means that it is easier for new technology partners to come on board and integrate. New members joining the ACI ecosystem include A10 Networks, Cloudera, MapR, and Catbird. Expect solution data sheets to be made available closer to the APIC availability.
Watch Soni Jiandani provide details of the momentum building around ACI:
The vision of ACI was also extended to Campus and WAN environments with the announcement of the APIC Enterprise module. Stay tuned for more on this space.
Nexus 9000 continues to break new records – Miercom and Lippis test reports available
The Nexus 9000 has been shipping since Q4 of 2013 and is already breaking records. Miercom released a report detailing how the Nexus 9500 delivers the highest performance and lowest latency in 40GbE competitive studies. This supplements test reports released here earlier by Lippis and Ixia that focus on performance, availability, power efficiency, and programmability.
I had the opportunity to chat with David Yen a few days ago on a number of topics--one of the things he touched on was how he sees the data center evolving. Now, seeing as David is the Senior Vice President and General Manager of our Data Center Group, these are more than just idle musings. Here is a snippet of our conversation:
Omar Sultan: David, you talk about the evolution to an application-defined fabric--from a practical perspective, what does that mean to our customers?
David Yen: We are seeing a shift from a static, IT-controlled environment to a highly dynamic, user-driven environment. The net effect is to bring IT and the business closer together, which is good, but there are some practicalities that need to be addressed in the process. Among the things we are focused on is making IT easier to consume for app owners and making this dynamic new environment easier to manage for IT.
OS: So, what are we doing to help customers make this transition?
DY: Well, we have been giving them the tools to prepare for this on-demand world for over five years now--our entire Unified DC portfolio--Unified Fabric, Unified Computing and Unified Management--is built around making data center resources flexible and more responsive to quickly changing user demands.
Unified Fabric allows customers to quickly and easily provision network and storage access wherever and whenever they need it. Similarly, UCS Service Profiles allow a UCS server to quickly and automatically adapt to the specific needs of a new workload. We have an entire portfolio of complementary VM-networking technologies that then ensure there is consistency between the physical and virtual environments. Finally, Unified Management orchestrates, automates, and puts the infrastructure at your fingertips. Today, you can completely configure infrastructure for your apps with a few mouse-clicks. And with Cisco ONE, we are now adding the programmatic interfaces so apps and other systems will be able to directly configure their infrastructure for themselves.
While we have been doing this for a while now, it seems some companies are just catching up. Recently, we saw a competitor claim leadership in the data center, but if you closely examine their claims, they announced things we have been shipping for a while: cloud-optimized architecture: check, on-demand resources: check, orchestration and management tools: check, L2 Multi-Path: check. It’s actually more interesting to note what’s missing--things like network and compute integration, hybrid cloud capabilities, service chaining and multi-hypervisor support. Speeds and feeds are always important, but if that’s all you can talk about, then you are not going to be relevant to today’s conversation.
OS: Where are we going next with the data center fabric?
DY: Looking ahead, there are a couple of areas we will look to address. First of all, while we know that customers are aggressively moving to VM and cloud-based workloads, there is going to be a significant transition period, and most enterprise data centers will remain a mix of physical, virtual, and cloud workloads, so we want to give customers a more comprehensive approach to dealing with this. At the end of the day, the data center should be able to deal with all types of workloads as equal citizens. We don’t have that today in the industry--we have to resort to gateways and other mechanisms to span physical, virtual, and cloud domains--while that’s OK in the interim, it’s problematic in the long term.
The other area we will address is increasing operational simplicity. In this dynamic environment, it is neither feasible nor desirable for network operations to be involved in every config change. Ultimately we need to be able to do things at machine speed. You have seen some initial steps in that direction with the Nexus 1000V and its hypervisor integration or new technologies like Power-On Auto Provisioning. Our work with Cisco Open Network Environment has given us the tools and mechanisms to open networks up to facilitate these machine-to-machine or application-to-machine conversations through APIs like onePK and REST and through support of SDN controllers and agents like OpenFlow.
OS: David, why should customers remain confident about Cisco’s vision?
DY: Betting on Cisco is not an act of faith--time and again, we have led market transitions and delivered the technologies customers need to take advantage of those transitions. We are still, by far, the preferred networking choice, even in the most demanding environments like Massively Scalable DCs, where we are in production for 9 or 10 of the largest providers. We have more than 40,000 NX-OS customers and over 11 million 10GbE ports out there. This gives us unmatched insight into what customers are actually doing and where they are going with their networks. Similarly, we will be delivering VM network solutions across all four major hypervisors, which gives us unmatched breadth of experience in that space. Central to this longevity is avoiding technical blinders. UCS was a great example of our willingness to start with customer needs in mind. Everything was on the table, and that led us to breakthroughs like a brand-new operations model based on service profiles. This willingness to risk and lead has translated into remarkable growth in a very demanding market against a number of capable and entrenched competitors.
As I look at the competition, I see two hurdles they must clear. The first is simply one of experience. It’s one thing to have a theoretical understanding of a technology, and it’s quite another to have actually built and supported it. We have been shipping our Nexus 1000V virtual switch for four years now--we are into third-generation functionality like hybrid cloud transport, cloud-based routing services, service chaining and multi-hypervisor support. Compare this to companies that are just getting around to shipping their first virtual switch and will still be working through first-generation features and problems.
The second hurdle is a matter of getting caught up in a technical agenda instead of focusing on the customer’s agenda. Software in networking is all the rage right now, for some very good reasons, but you see companies that want to shift all network functionality into software because that suits the narrative they want to tell. Now, you and I both know there are some things that absolutely are better handled in software, but, by the same token, there are things that are better handled in hardware. We have control over both, and that gives us the freedom to put functions where they are best handled. We think that will always give us an advantage over companies that are locked into a particular narrative and must make compromises to support that story.
To hear more from David, and trust me, he has some interesting and entertaining things to say, check out his Solution Keynote on Monday, June 24 at Cisco Live in Orlando.
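As an aside, the application-to-machine conversation David describes, where an app asks the network for its own connectivity over a REST-style API, can be pictured with a small sketch. Everything here, the endpoint URL, the field names, and the port-profile schema, is a hypothetical illustration, not an actual Cisco interface:

```python
import json


def build_port_profile_request(vm_name, vlan_id, qos_policy):
    """Build a hypothetical REST request asking the network to
    provision connectivity for a new workload.

    Field names and the endpoint are illustrative only, not an
    actual controller schema.
    """
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be between 1 and 4094")
    return {
        "method": "POST",
        # Hypothetical controller endpoint
        "url": "https://controller.example.com/api/v1/port-profiles",
        "body": json.dumps({
            "workload": vm_name,
            "vlan": vlan_id,
            "qos": qos_policy,
        }),
    }


# An app requesting network access for a new VM, no operator involved:
req = build_port_profile_request("web-frontend-01", 120, "gold")
print(req["url"])
```

The point is not the payload itself but the shift in operational model: a workload describes what it needs and the fabric configures itself, at machine speed.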
Consider these impressive stats shared in a keynote from Cisco’s CTO and CSO Padmasree Warrior last week at Cisco Live, London:
50 billion “things,” including trees, vehicles, traffic signals, devices, and whatnot, will be connected together by 2020 (vs. 1,000 devices connected in 1984)
2012 created more information than the past 5000 years combined!
Two-thirds of the world’s mobile data will be video by 2015.
These statistics may seem a bit surprising, but the fact is, they cannot be ignored by CIOs and others chartered with the responsibility of managing IT infrastructure.
Impact on Enterprise and SP Infrastructure strategies
Further, these trends are not siloed and are certainly not happening in a vacuum. For example, Bring-Your-Own-Device (BYOD) and the exponential growth of video endpoints may be happening in the “access,” but they are causing a ripple effect upstream in data center and cloud environments, and, coupled with new application requirements, are triggering CIOs across larger enterprises and service providers to rapidly evolve their IT infrastructure strategies.
It is much the same with cloud infrastructure strategies. Even as enterprises have aggressively embarked on the journey to private cloud, their preference for hybrid clouds, where they can enjoy the “best of both worlds” of public and private, has grown as well. However, the move to hybrid clouds has been somewhat hampered by challenges, as outlined in my previous blog: Lowering barriers to hybrid cloud adoption – challenges and opportunities.
The Fabric approach
To address many of these issues, Cisco has long advocated the concept of a holistic data center fabric, the heart of its Unified Data Center philosophy. The fundamental premise of breaking down the disparate technology silos across network, compute, and storage is what makes this so compelling, with the Cisco Unified Fabric serving as the glue.
As we continue to evolve this fabric, we’re making three industry-leading announcements today that help make the fabric more scalable, extensible and open.
Let’s talk about SCALING the fabric first:
Industry’s highest-density L2/L3 10G/40G switch: Building upon our previous announcement of redefining fabric scale, this time we introduce the new Nexus 6000 family in two form factors – the 6004 and the 6001. We expect these switches to be positioned to meet increasing bandwidth demands, for spine/leaf architectures, and for 40G aggregation in fixed switching deployments. We expect the Nexus 6000 to be complementary to Nexus 5500 and Nexus 7000 series deployments; it is not to be confused with the Catalyst 6500 or the Nexus fabric interconnects.
The Nexus 6000 is built with Cisco’s custom silicon and delivers 1-microsecond port-to-port latency. It carries forward some of the architectural successes of the Nexus 3548, the industry’s lowest-latency switch, which we introduced last year. Clearly, as in the past, Cisco’s ASICs have differentiated themselves against the lowest-common-denominator approach of merchant silicon by delivering both better performance and greater value due to tight integration with the software stack.
The Nexus 5500, incidentally, gets 40G expansion modules and is accompanied by a brand-new Fabric Extender, the 2248PQ, which comes with 40G uplinks as well. All of these, along with the 10G server interfaces, help pair 10G server access with 40G aggregation.
Also, as a first step in making the physical Nexus switches services-ready in the data center, a new Network Analysis Module (NAM) on the Nexus 7000 brings in performance analytics, application visibility, and network intelligence. This is the first services module, with others to follow, and brings parity with the new vNAM functionality as well.
Industry’s simplest hybrid cloud solution: Over the last few years, we have introduced several technologies that help build fabric extensibility -- the Fabric Extender (FEX) solution is very popular for extending the fabric to the server/VM, as are some of the Data Center Interconnect technologies like Overlay Transport Virtualization (OTV) and Locator/ID Separation Protocol (LISP), among others. Obviously, each has its benefits.
The Nexus 1000V Intercloud takes these to the next level by allowing the data center fabric to be extended to provider cloud environments in a secure, transparent manner, while preserving L4-7 services and policies. This is meant to help lower the barriers for hybrid cloud deployments and is designed to be a multi-hypervisor, multi-cloud solution. It is expected to ship in the summer timeframe, by 1H CY13.
This video does a good job of explaining the concepts of the Intercloud solution:
Cloud computing has evolved from the hype cycle of the last few years, to being an integral part of the Enterprise IT strategy as well as a fundamental service provider offering. The types of cloud constructs have evolved as well – public, private, hybrid and community clouds are all the basic variants, with more sophisticated application-specific cloud offerings continuing to evolve.
While the journey to the private cloud continues and is relatively maturing, at least in the more developed countries, and public cloud service offerings are becoming relatively ubiquitous, adoption and deployment of hybrid cloud offerings has seen relatively modest uptake.
The reason for this is not that the allure of hybrid clouds is unappealing, or that there are few use cases. Quite the opposite: there are several use cases, all of which are applicable to real-world IT deployments today:
Workload migration: Seamless migration of workloads from the data center or private cloud to the public cloud for better capacity utilization.
Dev/QA operations: Testing of new applications can induce requirement for additional temporary capacity and having an extensible hybrid cloud is quite appealing, instead of investing in on-premise infrastructure.
Cloud-bursting: To handle the needs of bursty applications, temporary capacity allocation in public cloud environments can be extremely cost-effective, providing the convenience of “infrastructure-on-demand.”
Disaster recovery: Providing data resiliency in case of failure of on-premise resources.
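The cloud-bursting case in particular boils down to a simple capacity check: keep workloads on-premise until projected utilization crosses some threshold, then place the overflow in public cloud capacity. This toy sketch is my own illustration of that decision; the 80% threshold and the function name are assumptions, not any particular product’s behavior:

```python
def place_workload(onprem_used, onprem_capacity, demand, burst_threshold=0.8):
    """Decide where a new workload of size `demand` should run.

    Returns "on-premise" while projected utilization stays at or under
    the burst threshold, otherwise "public-cloud". Purely illustrative.
    """
    projected = (onprem_used + demand) / onprem_capacity
    return "on-premise" if projected <= burst_threshold else "public-cloud"


# Steady-state demand stays in the private cloud...
print(place_workload(onprem_used=50, onprem_capacity=100, demand=20))  # on-premise
# ...while a traffic spike bursts to temporary public-cloud capacity.
print(place_workload(onprem_used=70, onprem_capacity=100, demand=20))  # public-cloud
```

In practice, of course, the hard part is not the placement decision but everything around it, which is exactly where the challenges below come in.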
If the use cases are real and the benefits are so apparent, why have enterprises not gone all out to deploy more robust hybrid clouds? Why have only a few enterprises and select applications followed this model?
I can think of a few reasons. To make it real, let’s consider the use case of migrating a virtual machine (VM) from the private cloud to a provider cloud as an example to illustrate some of the challenges:
This is my first year as an attendee at the Gartner DC conference. I’ve been here once before working demos on the tradeshow floor, but this year it’s purely about information gathering. Tradeshow floors are great. You get to wander around and chat with a captive audience of your industry peers, partners, and “frenemies” while collecting pens and light-up bouncy balls. Based on where the swag really ends up, I think the pen purchasers really need to start thinking about logo-branded crayon packs. But there is so much to learn at the conference, even in the most unexpected sessions.
My primary takeaways from the initial keynotes were that Hadoop is a strong early-adoption application candidate for cloud in a non-virtual context (Hadoop in the data center was recently covered in Jason Rapp’s blog), that commodity compute is the leader in cloud computing (I cried a little on the inside with this one), and that personnel development and team building is one of the biggest factors in an IT success story.
For day one, the celebrity keynote was from Captain Chesley Sullenberger, which seemed out of place before listening to him. His talk about teamwork, process, and respect leading to his success in pulling off that harrowing landing on the Hudson tied in well with the people aspect of organizations and was a very enjoyable listen.
These takeaways seem to me even more critical as IT organizations have to quickly evolve their data centers to meet demanding business requirements without expecting additional resources.
Gartner does a very nice job of interactive polling within their conference. For the opening keynote, the audience poll (~2,000 attendees?) revealed that budgets are edging up, but for the greatest number of attendees they are mainly flat.
It seems that 34% of the audience has to deal with a flat budget, 20% of the attendees benefit from a marginal increase (<5%), and 14% experience a small decrease (<5%).
Talking about data center evolution, as a Cisco guy I absolutely had to attend (by choice) David Yen’s presentation. David is our Sr. VP & GM in charge of our DC Technology Group, so he owns the big picture for anything Cisco in the data center. He holds a PhD and has very broad experience across compute, applications, and networking, acquired through executive roles at Sun Microsystems, Juniper, and Cisco. David’s talk was about the evolution of the data center and the relevance of Cisco. You may want to check the blog from Giuliano Di Vitantonio, VP Marketing Data Center and Cloud, with slides and videos: “The Evolving Data Center: Perspectives from the Gartner DC Conferences.” In his presentation, David Yen covered some of the background for the evolution of the data center model, and the gains to be expected from the fabric model we see through FabricPath in optimizing the new East/West traffic patterns.
This all has a strong relationship to our Unified Computing System solution. While, as a server platform “loaded with features,” it might be perceived to be at some disadvantage in comparison to commodity compute, we’re happy to see that in reality our customers have placed us at #3 in data center compute worldwide, and #2 in the US, for an implementation that is only three years into the market, thanks to strong management capabilities, system agility, and dynamic integrated network functionality, as well as great TCO. As proof points, you may want to check Bill Shields’ blogs on this topic, but also the Cisco Build & Price website with promotions of the month.
This conference also gave me the opportunity to discuss other, more technical topics such as security for cloud and virtual services.
So stay tuned, as I will be back in January for additional conversations.