Cisco ACI is gaining momentum and mindshare in the industry, as evidenced by the 160-plus licensees for the Application Policy Infrastructure Controller (APIC) and the 900-plus customers for the Nexus 9000 platform, all in less than three months since going live in August 2014. Riding that wave of success, we are pleased to announce the Cisco ACI Simulator, a physical appliance that provides a simulated Cisco ACI environment. The appliance packages full-featured Cisco APIC controller software together with a simulated fabric of leaf and spine switches in a single physical server.
If you are wondering how it can help you, think of it as a self-contained environment running Cisco APIC instances with real production software. You can use it to quickly learn ACI features, exercise the APIs, and begin integration with third-party orchestration systems and applications. The ACI simulator also lets you use the native CLI and GUI, as well as the APIs available to third parties. If you are a developer or Cisco partner, this is an ideal way to develop and test your solution. If you are a customer, you can use it in your test lab to create profiles for your enterprise applications with your actual application delivery controllers and security devices. It belongs in any well-architected DevOps environment.
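As a first exercise against the simulator's northbound REST API, a login call is a natural starting point. The sketch below is a minimal Python example using the `requests` library; the management URL and credentials are placeholders for illustration, not real defaults:

```python
# Hypothetical APIC simulator endpoint -- replace with your appliance's
# management IP and your own credentials.
APIC_URL = "https://apic-sim.example.com"

def build_login_payload(user, password):
    """Build the JSON body for the APIC aaaLogin REST call."""
    return {"aaaUser": {"attributes": {"name": user, "pwd": password}}}

def login(session, apic_url, user, password):
    """POST to /api/aaaLogin.json; APIC returns a token cookie on success."""
    resp = session.post(apic_url + "/api/aaaLogin.json",
                        json=build_login_payload(user, password),
                        verify=False)  # the simulator ships a self-signed cert
    resp.raise_for_status()
    return resp

if __name__ == "__main__":
    import requests
    with requests.Session() as s:
        login(s, APIC_URL, "admin", "password")
        # Query the fabric nodes the simulator has discovered
        print(s.get(APIC_URL + "/api/class/topSystem.json", verify=False).text)
```

The same session can then drive any of the class- and object-level queries the GUI itself uses, which makes the simulator a convenient sandbox for API exploration.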
Topology of the simulator
The Cisco ACI Simulator enables you to simulate the Cisco ACI fabric, including the Cisco Nexus 9000 Series Switches supported in a leaf-and-spine topology, to take full advantage of an automated, policy-based, systems management approach. Specifically, the ACI simulator environment comprises two spine switches, two leaf switches, and three APIC controllers.
The Cisco ACI Simulator includes simulated switches, so you cannot validate the data path. However, some of the simulated switch ports are mapped to the front-panel server ports, which allows you to connect external management entities such as VMware ESX servers, VMware vCenter, VMware vShield, and bare-metal servers; Layer 4 through 7 services; authentication, authorization, and accounting (AAA) systems; and other physical and virtual service appliances. In addition, the Cisco ACI Simulator allows simulation of faults and alerts to facilitate testing and demonstrate features.
The ACI simulator provides a variety of features and benefits; the key ones are summarized below.
- Fabric management: topology view, fabric discovery
- Creation of network constructs: build a tenant, a private Layer 3 network, and a bridge domain
- Cisco ACI policy constructs: create filters, contracts, application network profiles, and endpoint groups
- Virtual machine manager (VMM) integration: VMware ESXi, vCenter, vShield
- L4-L7 services integration: Cisco ASA/ASAv, Citrix NetScaler, and F5 BIG-IP
- Monitoring and troubleshooting: view faults, events, managed objects, and more through the GUI
- Programmability with northbound API clients: Python, REST APIs with JSON and XML bindings, PowerShell, and more
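To see the programmability capabilities in action, here is a hedged Python sketch that builds the JSON body for creating a tenant (an fvTenant managed object) and, if run against a live simulator, POSTs it to the APIC. The URL and tenant name are placeholders:

```python
def tenant_payload(name):
    """JSON body that creates a tenant (fvTenant object) under the policy root."""
    return {"fvTenant": {"attributes": {"name": name, "status": "created"}}}

def create_tenant(session, apic_url, name):
    """POST the tenant to /api/mo/uni.json ("uni" is the APIC policy universe).

    `session` must already hold a login token from /api/aaaLogin.json.
    """
    resp = session.post(apic_url + "/api/mo/uni.json",
                        json=tenant_payload(name), verify=False)
    resp.raise_for_status()
    return resp

if __name__ == "__main__":
    import requests
    with requests.Session() as s:
        # ...log in first via /api/aaaLogin.json as shown in the APIC docs...
        create_tenant(s, "https://apic-sim.example.com", "DemoTenant")
```

The same pattern (build a JSON or XML representation of a managed object, POST it to its distinguished name) applies to bridge domains, contracts, and application network profiles as well.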
Additionally, please refer to the Cisco ACI compatibility matrix for a full list of supported capabilities and to the datasheet for detailed specifications. In closing, I want to bring to your attention the general availability of APIC release 1.0(2i) and Cisco NX-OS release 11.0(2i) for Cisco Nexus 9000 Series ACI-Mode Switches. This release delivers new hardware and software capabilities that will further the customer momentum we are seeing with ACI.
For more information, visit
Tags: CISCO ACI Simulator, Cisco APIC, L4-L7 services integration, Nexus 9000 Platform, programmability, spine-leaf architecture
[Note: This is the third in a four-part series on the OpFlex protocol in Cisco ACI, how it enables an application-centric policy model, and why other SDN protocols do not. Part 1 | Part 2 | Part 4]
The Cisco ACI fabric is designed as an application-centric intelligent network. The Cisco APIC policy model is defined from the top down as a policy enforcement engine focused on the application itself, abstracting away the networking functions underneath. The policy model unites the advanced hardware capabilities of the Cisco ACI fabric with a business-application-focused control system.
The Cisco APIC policy object-oriented model is built on the distributed policy enforcement concepts for intelligent devices enabled by OpFlex and characterized by modern development and operations (DevOps) applications such as Puppet and Chef.
At the top level, the Cisco APIC policy model is built on a series of one or more tenants, which allow the network infrastructure administration and data flows to be segregated. Tenants can be customers, business units, or groups, depending on organizational needs. Below tenants, the model provides a series of objects that define the application itself. These objects are endpoints and endpoint groups (EPGs) and the policies that define their relationships (see figure below). The relationship between two endpoints, which might be two virtual machines connected in a three-tier web application, can be implemented by routing traffic between the endpoints to firewalls and application delivery controllers (ADCs) that enforce the appropriate security and quality of service (QoS) policies for the application and those endpoints.
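To make these constructs concrete, the following is a sketch (not a verbatim APIC export) of how a tenant, private network, bridge domain, application profile, EPGs, and a contract fit together in the APIC object model; all names here are hypothetical:

```xml
<fvTenant name="ExampleTenant">
  <fvCtx name="ctx1"/>                       <!-- private Layer 3 network -->
  <fvBD name="bd1">
    <fvRsCtx tnFvCtxName="ctx1"/>            <!-- bridge domain in that network -->
  </fvBD>
  <vzBrCP name="web-to-app">                 <!-- contract between EPGs -->
    <vzSubj name="http"/>
  </vzBrCP>
  <fvAp name="three-tier-app">               <!-- application network profile -->
    <fvAEPg name="web">
      <fvRsCons tnVzBrCPName="web-to-app"/>  <!-- web tier consumes the contract -->
    </fvAEPg>
    <fvAEPg name="app">
      <fvRsProv tnVzBrCPName="web-to-app"/>  <!-- app tier provides it -->
    </fvAEPg>
  </fvAp>
</fvTenant>
```

The key point is that the workloads (endpoints) join EPGs, and all connectivity and policy between tiers flows through the contract, independent of which switch or hypervisor hosts each endpoint.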
Endpoints and Application Workloads Along with Tenants and Application Network Profiles Are the Foundation of the Cisco ACI Policy Model
For a more thorough description of the Cisco ACI application policy model, please refer to this whitepaper, or to this one specifically on endpoint groups.
For this discussion, the important feature to notice is the way that Cisco ACI policies are applied to application endpoints (physical and virtual workloads) and to EPGs. Configuration of individual network devices is ancillary to the requirements of the application and workloads. Individual devices do not require programmatic control as in prior SDN models; instead, they are orchestrated according to centrally defined and managed application policies.
This model is catching hold in the industry and in the open source community. The OpenStack organization has begun work on including group-based policies to extend the OpenStack Neutron API for network orchestration with a declarative policy-based model based closely on EPG policies from Cisco ACI. (Note: “Declarative” refers to the orchestration model in which control is distributed to intelligent devices based on centralized policies, in contrast to retaining per-flow management control within the controller itself.)
Tags: Chef, Cisco ACI, Cisco APIC, devops, Group Policy, Open Daylight, OpenStack, Puppet, SDN
I find Linux containers among the most fascinating technology trends of the recent past. Containers couple lightweight, high-performance isolation and security with the ability to easily package services and deploy them in a flexible and scalable way. Many companies find these value propositions compelling enough to build, manage, and deploy enterprise applications on containers. Adding further momentum to container adoption is Docker, a popular open source platform that addresses key requirements of Linux container deployment, performance, and management. If you are into historical parallels, I would equate Docker's evolution and growth to the Java programming language, which brought in its wake the promise of "write once, run everywhere". Docker containers bring the powerful capability of "build once, run everywhere". It is therefore not surprising to see a vibrant ecosystem building up around Docker.
The purpose of this blog is to discuss the close alignment between Cisco ACI and containers. Much like containers, Cisco ACI provides accelerated application deployment with scale and security. In doing so, Cisco ACI seamlessly brings together applications across virtual machines (VMs), bare-metal servers, and containers.
Let us take a closer look at how containers address issues associated with hypervisor-based virtualization. Hypervisor-based virtualization has been a dominant technology for the past two decades, with compelling ROI via server consolidation. However, it is well known that hypervisors incur workload-dependent overheads while replicating native hardware behaviors. Furthermore, one needs to consider application portability when dealing with hypervisors.
Linux containers, on the other hand, provide self-contained execution environments and isolate applications using kernel primitives such as namespaces and control groups (cgroups). These primitives make it possible to run multiple environments on a Linux host with strong isolation between them, while bringing efficiency and flexibility. An architectural comparison of hypervisor-based and container-based virtualization is worth a quick glance; as it makes apparent, Docker-based containers bring portability across hosts, versioning, and reuse. No discussion of Docker containers is complete without mention of DevOps benefits. The Docker framework (together with tools such as Vagrant, for instance) aligns tightly with DevOps practices. With Docker, developers can focus on their code without worrying about the side effects of running it in production, and operations teams can treat the entire container as a separate entity while managing deployments.
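The "build once, run everywhere" packaging that makes this workflow possible is captured in a Dockerfile. A minimal illustrative sketch follows; the base image, package, and application file names are placeholders, not a recommended stack:

```dockerfile
# Hypothetical service image: the application and its dependencies are
# captured once here, then the resulting image runs unchanged on any
# Docker host -- a developer laptop, a test rig, or production.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python
COPY app.py /opt/app/app.py
CMD ["python", "/opt/app/app.py"]
```

Because the image carries its own userspace, the operations team deploys and scales the container as a unit without needing to reconstruct the developer's environment.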
ACI and Containers
Cisco Application Centric Infrastructure (ACI) offers a common policy model for managing IT applications across the entire data center infrastructure. ACI is agnostic to the form factors on which applications are deployed: it supports bare-metal servers, virtual machines, and containers, and this portability makes it a natural fit with containers. Moreover, ACI's unified policy language offers customers a consistent security model regardless of how the application is deployed. With ACI, workloads running in existing bare-metal and VM environments can seamlessly integrate with, or migrate to, a container environment.
The consistency of ACI's policy model is striking. In ACI, policies are applied to endpoint groups (EPGs), which are abstractions of network endpoints. The endpoints can be bare-metal servers, VMs, or containers. As a result of this flexibility, ACI can apply policies across a diverse infrastructure that includes Linux containers. I want to draw attention to ACI's flexible policy model applied to an application workload spanning bare-metal servers, VMs, and Docker containers, as illustrated below.
You may recall that Cisco announced broad endorsement of the OpFlex protocol at Interop Las Vegas 2014. We are currently working on integrating OpFlex and Open vSwitch (OVS) with ACI to enforce policies across VMs and containers, targeting the early part of next calendar year.
As container adoption matures, managing large numbers of containers at scale becomes critical. Many open source initiatives are actively working on scalability, scheduling, and resource management of containers. OpenStack, Mesos, and Kubernetes are among the open source communities in which Cisco is actively engaged to advance ACI integration with open source tools and solutions.
With containers, we have seen only the tip of the iceberg. Docker containers are beginning to gain traction in private clouds and traditional data centers, and Cisco ACI plays a pivotal role in extending its unified policy model across a diverse infrastructure comprising bare metal, VMs, and containers.
For more information, refer to:
Tags: ACI Policy Model, bare metal, Cisco ACI, Cisco APIC, docker, Linux Containers, opflex protocol, virtual machines
The Cisco-Citrix partnership has expanded significantly in recent years, from UCS-XenDesktop-based desktop virtualization solutions to span mobility, Desktop as a Service (DaaS), and, most recently, ACI-NetScaler joint solutions. I have been fortunate to be part of this momentum, and it has been fun. In this blog, I want to announce another significant milestone on the Cisco ACI-Citrix ecosystem front: the Citrix NetScaler Device Package for Cisco ACI is now at first customer shipment (FCS). You may recall that earlier in August we started shipping Cisco APIC worldwide. Read Blog
Citrix NetScaler needs no introduction: it powers some of the world's largest clouds, providing capabilities that smartly and affordably scale application and service delivery infrastructures without additional complexity. Cisco ACI delivers a centralized fabric control and automation framework capable of managing application policies, allowing resources to be dynamically provisioned and configured based on application requirements. Citrix NetScaler provides core network services such as load balancing, SSL, SSL VPN, and firewalling that applications can consume in an automated, programmatic, and simple fashion.
Now let us segue to the Citrix NetScaler Device package integration with Cisco APIC. Citrix NetScaler integrates with Cisco Application Policy Infrastructure Controller (APIC) through open APIs and provides per-app, per-tenant L4-L7 policy configuration and dynamic service chaining and insertion. In addition, the integrated solution also allows exchange of intelligent telemetry information between NetScaler and APIC for application and tenant visibility.
The diagram below illustrates the integration architecture.
The Citrix NetScaler Device Package for Cisco ACI comprises a device model and a device script. The device model defines the functions provided by NetScaler SDX, VPX, and MPX, such as load balancing and content switching. The device script provides the adapter functions required for NetScaler to communicate with APIC.
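Once a device package is imported, it appears in the APIC object model (the vnsMDev class represents imported device packages). As a hedged Python sketch, an authenticated session can list them with a class-level REST query; the APIC URL below is a placeholder:

```python
def class_query_url(apic_url, mo_class):
    """Build a class-level APIC REST query URL, e.g. for vnsMDev objects."""
    return "%s/api/class/%s.json" % (apic_url, mo_class)

if __name__ == "__main__":
    import requests
    with requests.Session() as s:
        # ...log in first via /api/aaaLogin.json...
        url = class_query_url("https://apic.example.com", "vnsMDev")
        print(s.get(url, verify=False).json())  # imported device packages
```

The same query pattern, with a different class name, works for the service graph and logical device objects that the device package makes available.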
The Citrix NetScaler device package is now available for download.
The advantages of deploying the Cisco ACI + Citrix NetScaler solution are manifold. First and foremost, it accelerates application deployment with reliability, security, and multi-tenancy on existing NetScaler physical and virtual appliances, all without disrupting operational best practices for services. Second, NetScaler's built-in AutoScale feature proactively signals Cisco APIC when to add or drop application capacity, allowing customers to utilize their resources efficiently and seamlessly without added downtime.
The delivery of the NetScaler device package is just the beginning of the Cisco ACI and Citrix NetScaler journey. Together, Cisco and Citrix are also focusing on driving standard protocols and open initiatives. Our engineering teams are in the process of defining, within the IETF standards body, the Network Service Header (NSH) protocol, which specifies service insertion for application- and service-aware infrastructures. We are also co-authoring OpFlex, an extensible policy protocol that abstracts service policies from device-specific configurations, and contributing to OpenDaylight.
Tags: ACI eco-system, Cisco ACI, Cisco APIC, Citrix NetScaler, L4-L7 services, NetScaler Device Package for Cisco ACI, OpFlex
The past few weeks should have been exciting if you are an ACI customer. First, we announced the shipment of ACI to data centers worldwide. Then, F5 announced that its device package for Cisco APIC is at first customer shipment (FCS). We also had a very successful F5 Agility in New York early in August, showcasing the Cisco ACI-F5 BIG-IP joint solution in breakout sessions, the World of Solutions expo, and keynote panels. Cisco also recently published a jointly written technical whitepaper, a solution brief, and a design guide with F5.
In this blog, I want to take you on a quick tour of the Cisco ACI-F5 integrated joint solution.
Traditional approaches to inserting L4-L7 services into a network entail highly manual operations that take days or even weeks to deploy. Likewise, when an application is retired, removing a service device configuration, such as firewall rules, can be difficult. Cisco APIC can automate service insertion while acting as a central point of policy control. APIC can also automatically configure the service according to the application's requirements, which allows organizations to automate service insertion and eliminate the complex techniques of traditional service insertion.
Diagram-1: ACI – F5 Big IP Integration architecture
As depicted in diagram-1 above, F5 BIG-IP integrates with Cisco APIC through well-established, open APIs (Simple Object Access Protocol [SOAP] or Representational State Transfer [REST]). The result of the integration is a device package, currently available on F5's software download website. With the F5 device package loaded on Cisco APIC, customers can achieve automated network and service provisioning across the F5 services fabric, with end-to-end telemetry and visibility of applications and tenants. Cisco APIC acts as a central point of configuration management and automation for Layer 4 through 7 services and tightly coordinates service delivery, serving as the controller for network automation.
With the Cisco ACI-F5 BIG-IP joint solution, customers can preserve the richness of the F5 Synthesis offering through policy abstraction, gaining investment protection, application deployment agility, scale, secure multi-tenancy, and significant operational cost benefits. Existing F5 physical hardware or virtual editions can be deployed with Cisco ACI. Moreover, in this model, application-policy-based provisioning of workflows allows for efficient and faster rollout of applications across multiple tenants while maintaining operational best practices across L2-L7 teams within an IT organization.
With Cisco ACI and F5, you can overcome your biggest IT agility and cost management challenges, ensuring responsiveness to customers and employees and a more competitive posture. As a result, rather than being a perceived barrier to success, your IT organization can drive innovation and agility to meet business objectives.
To learn more, please register for the technical webinar Cisco and F5 are hosting on Aug 26
Tags: Cisco ACI, Cisco APIC, F5 Big IP LTM, F5 device package for APIC, L4-L7 services automation