In this episode of Engineers Unplugged, Rawlinson Rivera (@punchingclouds) and Maish Saidel-Keesing (@maishsk) discuss the role of software defined storage in the enterprise data center, and the impact and evolution of the job roles in the data center. Hint: always be learning.
Speaking of learning, this may be an example of how not to draw a unicorn.
Rawlinson Rivera and Maish Saidel-Keesing are unafraid to break new ground in unicorn artistry.
This is Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:
Episodes will publish weekly (or as close to it as we can manage)
Data traffic has grown dramatically in recent years, leading to increased deployment of network service appliances and servers in enterprise, data center, and cloud environments. To address the corresponding business needs, network switch and router architecture has evolved to support multi-terabit capacity. However, service appliance and server capacity has remained limited to a few gigabits, far below switch capacity.
Cisco Intelligent Traffic Director (ITD) is an innovative solution that bridges the performance gap between multi-terabit switches and gigabit servers and appliances. It is a hardware-based, multi-terabit Layer 4 load-balancing, traffic-steering, and clustering solution on the Nexus 7000 and 7700 series switches.
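As a rough sketch, an ITD service on NX-OS ties a device group of server nodes to an ingress interface and distributes traffic across them in hardware. The names and addresses below are hypothetical, and exact syntax varies by NX-OS release:

```
feature itd

itd device-group WEB-SERVERS         ! pool of gigabit server nodes
  node ip 10.10.10.11
  node ip 10.10.10.12
  probe icmp                         ! health-check each node

itd WEB-SERVICE
  device-group WEB-SERVERS
  ingress interface ethernet 1/1     ! switch port facing client traffic
  load-balance method src ip         ! hash client source IPs across nodes
  no shutdown
```

Because the load balancing is done in the switch ASICs rather than in an appliance, the aggregate capacity scales with the switch fabric rather than with any single server.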
There’s good pain and there’s bad pain. This pain was the muscle ache after a hard game of flag football. We had wasted some energy but were winning. Our customers were our coaches; they were precise about which parts of our game they loved and which parts they didn’t care for. They loved the management policy engine within UCS Manager but *did not* need the levels of redundancy and resilience in the hardware. And they really … really wished that we added some aspects to our game, specifically improvements to power and space efficiencies. Our customers were either trying to eke out more from their existing data centers or trying to reduce their co-location costs.
So our cloud scale customers,
i) loved our management policy engine
ii) didn’t rely on hardware redundancy/resilience
iii) needed better power and space efficiencies
I’ll note here that during this time we gained incredible respect for our cloud-scale customers. These customers are either disrupting traditional industries or are innovators who are reinventing themselves to take advantage of the “internet everywhere” age. That’s a tough business, and the competition is fierce: being second best on the internet often means you are a distant loser.
I find Linux containers among the most fascinating technology trends of the recent past. Containers couple lightweight, high-performance isolation and security with the ability to easily package services and deploy them in a flexible and scalable way. Many companies find these value propositions compelling enough to build, manage, and deploy enterprise applications with them. Adding further momentum to container adoption is Docker, a popular open source platform that addresses key requirements of Linux container deployment, performance, and management. If you are into historical parallels, I can equate Docker's evolution and growth to the Java programming language, which brought in its wake the promise of “write once, run everywhere”. Docker containers bring the powerful capability of “build once, run everywhere”. It is therefore not surprising to see a vibrant ecosystem being built up around Docker.
The purpose of this blog is to discuss the close alignment between Cisco ACI and containers. Much like containers, Cisco ACI provides accelerated application deployment with scale and security. In doing so, Cisco ACI seamlessly brings together applications across virtual machines (VMs), bare-metal servers, and containers.
Let us take a closer look at how containers address issues associated with hypervisor-based virtualization. Hypervisor-based virtualization has been a dominant technology for the past two decades, delivering compelling ROI via server consolidation. However, it is well known that hypervisors introduce workload-dependent overheads while replicating native hardware behavior. Furthermore, one needs to weigh application portability considerations when dealing with hypervisors.
Linux containers, on the other hand, provide self-contained execution environments and isolate applications using primitives such as namespaces and control groups (cgroups). These primitives provide the ability to run multiple environments on a Linux host with strong isolation between them, while bringing efficiency and flexibility. An architectural illustration of hypervisor-based and container-based virtualization is worth a quick glance. As the comparison below shows, Docker-based containers bring portability across hosts, versioning, and reuse. No discussion of Docker containers is complete without mention of DevOps benefits. The Docker framework -- together with tools such as Vagrant -- aligns tightly with DevOps practices. With Docker, developers can focus on their code without worrying about the side effects of running it in production. Operations teams can treat the entire container as a separate entity while managing deployments.
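The “build once, run everywhere” workflow described above can be sketched with a few Docker commands. The image name, service file, and resource limits here are hypothetical, and the flags shown reflect the Docker CLI of the era:

```shell
# Package a small Python service once...
cat > Dockerfile <<'EOF'
FROM python:2.7
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
EOF
docker build -t myservice:1.0 .

# ...then run it on any Docker host.
docker run -d --name web myservice:1.0

# The cgroup primitives mentioned above surface as resource limits:
docker run -d -m 256m --cpu-shares=512 myservice:1.0
```

The same image runs unchanged on a developer laptop, a test server, or a production host, which is precisely the portability-plus-isolation combination that makes containers attractive for DevOps.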
ACI and Containers
Cisco Application Centric Infrastructure (ACI) offers a common policy model for managing IT applications across the entire data center infrastructure. ACI is agnostic to the form factors on which applications are deployed: it supports bare-metal servers, virtual machines, and containers, and this form-factor independence makes it a natural fit for containers. In addition, ACI’s unified policy language offers customers a consistent security model regardless of how the application is deployed. With ACI, workloads running in existing bare-metal and VM environments can seamlessly integrate with, or migrate to, a container environment.
The consistency of ACI’s policy model is striking. In ACI, policies are applied across Endpoint Groups (EPGs), which are abstractions of network endpoints. The endpoints can be bare-metal servers, VMs, or containers. As a result of this flexibility, ACI can apply policies across a diverse infrastructure that includes Linux containers. I want to draw attention to the ACI flexible policy model applied to an application workload spanning bare-metal servers, VMs, and Docker containers, as illustrated below.
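To make the EPG abstraction concrete, here is a rough sketch of what an EPG definition looks like in the APIC JSON object model. The names (“web-epg”, “web-contract”) are hypothetical, and a real payload would sit inside a tenant and application profile:

```json
{
  "fvAEPg": {
    "attributes": { "name": "web-epg" },
    "children": [
      { "fvRsProv": { "attributes": { "tnVzBrCPName": "web-contract" } } }
    ]
  }
}
```

The key point is that nothing in the EPG says whether its members are bare-metal NICs, VM vNICs, or container interfaces; the same policy object governs all three.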
You may recall that Cisco announced broad endorsement of the OpFlex protocol at Interop Las Vegas 2014. We are currently working on integrating OpFlex and Open vSwitch (OVS) with ACI to enforce policies across VMs and containers in the early part of next calendar year.
As container adoption matures, managing large numbers of containers at scale becomes critical. Many open source initiatives are actively working on container scalability, scheduling, and resource management. OpenStack, Mesos, and Kubernetes are among the open source communities in which Cisco is actively engaged to advance ACI integration with open source tools and solutions.
With containers, we have seen only the tip of the iceberg. Docker containers are beginning to gain traction in private clouds and traditional data centers. Cisco ACI plays a pivotal role by extending its unified policy model across a diverse infrastructure comprising bare-metal servers, VMs, and containers.
As new technologies emerge and replace traditional ones, IT teams are discovering that building an infrastructure around new functionality is advantageous in a slew of ways.
One such disruptive technology gaining ground is software defined networking, or SDN.
The premise of SDN is to allow the user to determine how the network behaves by decoupling the control plane from the data plane. The control plane is essentially the “data director,” computing paths and instructing the data plane on where to transfer packets of data. The data plane then carries the data along those paths to its destination. By separating these two functions, the user can program the network to act in accordance with business requirements—using a central management interface in a vendor-neutral manner.
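The control-plane/data-plane split described above can be sketched in a few lines of Python. This is an illustrative toy, not any specific SDN product or protocol; the class names, topology, and port labels are all hypothetical:

```python
class ControlPlane:
    """Central "data director": computes forwarding rules for each switch."""

    def __init__(self, topology):
        # topology maps switch name -> {destination prefix: output port}
        self.topology = topology

    def compute_rules(self, switch):
        # A real controller would run a path-computation algorithm here;
        # this sketch simply hands each switch its precomputed table.
        return self.topology[switch]


class DataPlane:
    """A switch's forwarding element: acts only on rules pushed to it."""

    def __init__(self):
        self.table = {}

    def install(self, rules):
        # Rules arrive from the controller, not from local configuration.
        self.table.update(rules)

    def forward(self, dest):
        # Look up the output port; return None (drop) if no rule matches.
        return self.table.get(dest)


# The controller programs the switch centrally; the switch just forwards.
controller = ControlPlane({"sw1": {"10.0.0.0/24": "port2"}})
sw1 = DataPlane()
sw1.install(controller.compute_rules("sw1"))
print(sw1.forward("10.0.0.0/24"))  # port2
```

The point of the separation is visible even in this toy: changing network behavior means changing one central table in the controller, not reconfiguring each device by hand.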
Cisco ACI combines hardware, policy-based control systems, and software to deliver management automation, programmatic policy, and dynamic workloads. It’s built around the application, not the network.
What’s the advantage? Doing so enables greater support for scalability, a more dynamic network, and centrally defined portable policies—all of which contribute to faster application provisioning and a more efficient environment.
While many SDN solutions are focused solely on software and virtualization, the reality is that hardware still exists and is an integral part of the network. Cisco ACI leverages existing hardware—because no matter how de-emphasized it may become, the physical infrastructure remains important.
As Cisco senior vice president of marketing Soni Jiandani tells Unleashing IT, “ACI is SDN plus a whole lot more. Other SDN models stop at the network. ACI extends the promise of SDN—namely agility and automation—to the applications themselves. Through a policy-driven model, the network can cater to the needs of each application, with security, network segmentation, and automation at scale. And it can do so across physical and virtual environments, with a single pane of management.”
And Shashi Kiran, senior director of market management at Cisco, shares his views on Cisco ACI in this blog.