The last several months have been on a roll, with several customer, channel partner, and technology partner engagements. With the ACI starter kits and lab bundles shipping, customers can bring this solution into their labs and subsequently into their production pods with the Application Policy Infrastructure Controller (APIC) and the Nexus switching platforms. We see healthy interest in these kits as customers explore the solution's SDN capabilities. Several ecosystem partners, such as F5 and Citrix, have started to ship device packages. We just came off a company-wide sales conference in Las Vegas a couple of weeks ago that was hugely energizing. Policy as a means to drive automation, security, and scale, as originally outlined by Cisco, is now the major focus area for SDN, and more industry vendors now endorse this vision, as evidenced by initiatives like OpenStack Congress. Investment protection continues to be a major theme. Overall, the new fiscal year promises to be an exciting one.
Soni Jiandani on SDN Central -- Click for Q&A
Following up on the Unleashing IT magazine (ACI special edition) released last month, I wanted to share the momentum we’re experiencing with customers and partners as the acceleration continues. As John Chambers outlined during the last earnings call, adoption has been off to a tremendous start with some of the customers and partners featured in the video above.
We also continue to take the opportunity to answer questions as the vision around ACI crystallizes and rapidly evolves from concept to hard reality. This week we had a Q&A session with SDN Central, led by Soni Jiandani, SVP of the Insieme Networks Business Unit at Cisco. The featured interview can be accessed here. Soni crisply articulates the ACI value proposition while addressing some of the top-of-mind questions from the media.
In this episode of Engineers Unplugged, Rawlinson Rivera (@punchingclouds) and Maish Saidel-Keesing (@maishsk) discuss the role of software defined storage in the enterprise data center, and the impact and evolution of the job roles in the data center. Hint: always be learning.
Speaking of learning, this may be an example of how not to draw a unicorn.
Rawlinson Rivera and Maish Saidel-Keesing are unafraid to break new ground in unicorn artistry.
This is Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:
Episodes will publish weekly (or as close to it as we can manage)
Data traffic has grown dramatically in recent years, leading to increased deployment of network service appliances and servers in enterprise, data center, and cloud environments. To address the corresponding business needs, network switch and router architecture has evolved to support multi-terabit capacity. However, service appliance and server capacity has remained limited to a few gigabits, far below switch capacity.
Cisco Intelligent Traffic Director (ITD) is an innovative solution to bridge the performance gap between a multi-terabit switch and gigabit servers and appliances. It is a hardware-based, multi-terabit Layer 4 load-balancing, traffic-steering, and clustering solution on the Nexus 7000 and 7700 series switches.
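Conceptually, this kind of Layer 4 steering divides the client address space into buckets and pins each bucket to a service node, so flows from a given client consistently land on the same server. The sketch below is our own simplified Python model of that idea, not Cisco's implementation; the function and server names are illustrative placeholders, and a simple modulo hash stands in for the switch's hardware rules.

```python
import ipaddress

def pick_node(src_ip: str, nodes: list[str], buckets: int = 256) -> str:
    """Map a client source IP to one of the service nodes.

    The client address space is divided into `buckets`, and each
    bucket is assigned to a node; a modulo hash stands in here for
    the hardware-programmed steering rules.
    """
    bucket = int(ipaddress.ip_address(src_ip)) % buckets
    return nodes[bucket % len(nodes)]

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Flows from the same client always land on the same node (flow affinity).
assert pick_node("192.168.1.10", servers) == pick_node("192.168.1.10", servers)
```

Because the mapping is deterministic, no per-flow state is needed in the switch, which is what lets this style of load balancing run at multi-terabit line rate.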
There’s good pain and there’s bad pain. This pain was the muscle ache after a hard game of flag football. We had wasted some energy but were winning. Our customers were our coaches; they were precise about which parts of our game they loved and which parts they didn’t care for. They loved the management policy engine within UCS Manager but *did not* need the levels of redundancy and resilience in the hardware. And they really … really wished that we added some aspects to our game, specifically improvements to power and space efficiencies. Our customers were either trying to eke out more from their existing data centers or trying to reduce their co-location costs.
So our cloud scale customers:
i) loved our management policy engine
ii) didn’t rely on hardware redundancy/resilience
iii) needed better power and space efficiencies
I’ll note here that during this time we gained incredible respect for our cloud scale customers. These customers are either disrupting traditional industries or are innovators reinventing themselves to take advantage of the “internet everywhere” age. That’s a tough business, and the competition is fierce: being second best on the internet often means you are a distant loser.
I find Linux containers among the most fascinating technology trends of the recent past. Containers couple lightweight, high-performance isolation and security with the ability to easily package services and deploy them in a flexible and scalable way. Many companies find these value propositions compelling enough to build, manage, and deploy enterprise applications on containers. Adding further momentum to container adoption is Docker, a popular open source platform addressing key requirements of Linux container deployment, performance, and management. If you are into historical parallels, I can equate Docker's evolution and growth to the Java programming language, which brought in its wake the promise of “write once, run anywhere”. Docker containers bring the powerful capability of “build once and run everywhere”. It is therefore not surprising to see a vibrant ecosystem being built up around Docker.
The purpose of this blog is to discuss the close alignment between Cisco ACI and containers. Much like containers, Cisco ACI provides accelerated application deployment with scale and security. In doing so, Cisco ACI seamlessly brings together applications across virtual machines (VMs), bare-metal servers, and containers.
Let us take a closer look at how containers address issues associated with hypervisor-based virtualization. Hypervisor-based virtualization has been a dominant technology for the past two decades, with compelling ROI via server consolidation. However, it is well known that hypervisors introduce workload-dependent overhead as they replicate native hardware behavior. Application portability is a further consideration when dealing with hypervisors.
Linux containers, on the other hand, provide self-contained execution environments and isolate applications using primitives such as namespaces and control groups (cgroups). These primitives make it possible to run multiple environments on a Linux host with strong isolation between them, while bringing efficiency and flexibility. An architectural illustration of hypervisor-based and container-based virtualization is worth a quick glance. As the comparison below shows, Docker-based containers bring portability across hosts, versioning, and reuse. No discussion of Docker containers is complete without mention of DevOps benefits. The Docker framework, together with tools such as Vagrant, aligns tightly with DevOps practices. With Docker, developers can focus on their code without worrying about the side effects of running it in production, and operations teams can treat the entire container as a single entity while managing deployments.
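The "build once and run everywhere" property follows from how little a container image assumes about its host: the application and its runtime dependencies travel together in the image. As a minimal, purely illustrative sketch (the base image, file name, and tag below are our own placeholders, not from any specific deployment):

```dockerfile
# Package the application together with its runtime;
# the same image then runs unchanged on any Docker host.
FROM python:3-slim
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```

Built once (for example with `docker build -t myapp .`), the resulting image is the single deployable unit that both developers and operations teams can reason about.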
ACI and Containers
Cisco Application Centric Infrastructure (ACI) offers a common policy model for managing IT applications across the entire data center infrastructure. ACI is agnostic to the form factors on which applications are deployed: it supports bare-metal servers, virtual machines, and containers, and this form-factor independence makes it a natural fit for containers. In addition, ACI’s unified policy language offers customers a consistent security model regardless of how the application is deployed. With ACI, workloads running in existing bare-metal and VM environments can seamlessly integrate with, or migrate to, a container environment.
The consistency of ACI’s policy model is striking. In ACI, policies are applied across endpoint groups (EPGs), which are abstractions of network endpoints. The endpoints can be bare-metal servers, VMs, or containers. As a result of this flexibility, ACI can apply policies across a diverse infrastructure that includes Linux containers. I want to draw attention to the ACI flexible policy model applied to an application workload spanning bare-metal servers, VMs, and Docker containers, as illustrated below.
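To make the abstraction concrete, the APIC REST API expresses tenants, application profiles, EPGs, and contracts as a tree of managed objects. The fragment below is only a sketch of that shape: the names `demo`, `web-app`, `web`, and `web-contract` are our own placeholders, and exact classes and attributes vary by APIC release, so treat the specifics as an assumption rather than a reference.

```json
{
  "fvTenant": {
    "attributes": { "name": "demo" },
    "children": [
      { "fvAp": {
          "attributes": { "name": "web-app" },
          "children": [
            { "fvAEPg": {
                "attributes": { "name": "web" },
                "children": [
                  { "fvRsProv": { "attributes": { "tnVzBrCPName": "web-contract" } } }
                ]
            } }
          ]
      } }
    ]
  }
}
```

Nothing in the EPG definition says whether its member endpoints are bare-metal, VMs, or containers; that is precisely what lets one policy span all three.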
You may recall that Cisco announced broad endorsement of the OpFlex protocol at Interop Las Vegas 2014. We are currently working on integrating OpFlex and Open vSwitch (OVS) with ACI to enforce policies across VMs and containers, targeting the early part of the next calendar year.
As container adoption matures, managing large numbers of containers at scale becomes critical. Many open source initiatives are actively working on container scalability, scheduling, and resource management. OpenStack, Mesos, and Kubernetes are among the open source communities in which Cisco is actively engaged to advance ACI integration with open source tools and solutions.
With containers, we have seen only the tip of the iceberg. Docker containers are beginning to gain traction in private clouds and traditional data centers. Cisco ACI plays a pivotal role by applying its unified policy model across a diverse infrastructure comprising bare metal, VMs, and containers.