Cisco Blogs



Why Network Design in the Modern Era Has Kept Me from Retirement

In youth-oriented Silicon Valley, it’s risky to mention this, but I’ve been around for a long time. In fact, in theory I could retire! I already moved to a small town in the Pacific Northwest where the cost of living is low, and I could spend my days hiking in the mountains.

But actually I can’t retire. Why? The networking field is too interesting! In addition, modern networking, with its emphasis on design, applications, policies, and users, focuses on the same concepts that have interested me from the beginning. Not only that, but I firmly believe that with today’s network design tools, we are positioned to build networks that are faster, larger, and even more user-friendly than ever. How could I retire when that’s the case?

In the Beginning

I started my career as a software developer. This was long before agile software development became popular, but nonetheless there was an emphasis on agility and flexibility. The goal was to develop software that could be used in multiple ways to support a broad range of users. The focus was on user behavior, application modeling, systems analysis, and structured design.


How you integrate Network Services matters

Guest post from Lori Mac Vittie (@lmacvittie) of F5 Networks

How you provision all the network things matters

Polymorphism is a concept central to object-oriented programming. It is used to extend the capabilities of a basic object, like a mammal, to specific implementations, like cats or dogs or honey badgers (even though honey badgers don’t care about such technical distinctions). Cats and dogs, for example, are both of the type “mammal” but “speak” in different voices.
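To make that concrete, here is a minimal Python sketch of the idea; the class and method names are illustrative rather than taken from any particular library:

```python
class Mammal:
    """Base type: the generic interface every mammal shares."""
    def speak(self) -> str:
        raise NotImplementedError

class Cat(Mammal):
    def speak(self) -> str:
        return "meow"

class Dog(Mammal):
    def speak(self) -> str:
        return "woof"

# The caller only knows about the Mammal interface; each subclass supplies
# its own behavior. That substitution is polymorphism.
for mammal in (Cat(), Dog()):
    print(mammal.speak())
```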




Take One Small Step with the ACI Simulator, Make a Giant Leap in ACI Fabric Deployments

Cisco ACI is gaining momentum and mindshare in the industry, as evidenced by the 160-plus licensees for the Application Policy Infrastructure Controller (APIC) and the 900-plus customers for the Nexus 9000 platform, all in less than three months since going live in August 2014. Riding that wave of success, we are pleased to announce the Cisco ACI Simulator, a physical appliance that provides a simulated Cisco ACI environment. The appliance runs full-featured Cisco APIC controller software along with a simulated fabric infrastructure of leaf and spine switches on a single physical server.

If you are wondering how it will help you, think of it as a self-contained environment running Cisco APIC instances with real production software. You can use it to quickly understand ACI features, exercise the APIs, and begin integration with third-party orchestration systems and applications. The ACI Simulator also lets you use the native CLI and GUI, along with the APIs available to third parties. If you are a developer or Cisco partner, this is an ideal way to develop and test your solution. If you are a customer, you can use it in your test lab to create profiles for your enterprise applications with your actual application delivery controllers and security devices. It belongs in any well-architected DevOps environment.

Topology of the simulator

The Cisco ACI Simulator enables you to simulate the Cisco ACI fabric, including the Cisco Nexus 9000 Series Switches in a leaf-and-spine topology, and to take full advantage of an automated, policy-based systems-management approach. Specifically, the simulator environment comprises two ACI spine switches, two ACI leaf switches, and three APIC controllers.

[Figure: Cisco ACI Simulator topology]

The Cisco ACI Simulator includes simulated switches, so you cannot validate the data path. However, some of the simulated switch ports are mapped to the front-panel server ports, which allows you to connect external management entities such as VMware ESX servers, VMware vCenter, VMware vShield, and bare-metal servers; Layer 4 through 7 services; authentication, authorization, and accounting (AAA) systems; and other physical and virtual service appliances. In addition, the Cisco ACI Simulator allows you to simulate faults and alerts to facilitate testing and feature demonstrations.

Benefits/features

The ACI Simulator provides a variety of features and benefits; the key ones are summarized in the table below.

| Feature | Details |
| --- | --- |
| Fabric management | Topology view, fabric discovery |
| Creation of network constructs | Build a tenant, a private Layer 3 network, a bridge domain |
| Cisco ACI policy constructs | Create filters and contracts |
| Application deployment | Create application network profiles and endpoint groups |
| Virtualization integration | VMware ESXi, vCenter, vShield |
| L4-L7 services integration | Cisco ASA/ASAv, Citrix NetScaler, F5 BIG-IP |
| Monitoring and troubleshooting | View faults, events, managed objects, and more through the GUI |
| Programmability with northbound API clients | Python, REST APIs with JSON and XML bindings, PowerShell, and more |
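As a quick illustration of the northbound programmability row above, here is a minimal Python sketch that authenticates to an APIC (or the simulator) over the REST API and lists the configured tenants. The address and credentials are placeholders for your own environment, and certificate verification is disabled only because the simulator typically presents a self-signed certificate:

```python
import requests

APIC = "https://apic-sim.example.com"  # placeholder: your APIC or ACI Simulator address
session = requests.Session()

# Authenticate; the APIC returns a session token that the cookie jar carries forward
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False).raise_for_status()

# Query every object of class fvTenant (the tenant objects in the ACI object model)
tenants = session.get(f"{APIC}/api/class/fvTenant.json", verify=False).json()
for item in tenants["imdata"]:
    print(item["fvTenant"]["attributes"]["name"])
```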

 

Additionally, please refer to the Cisco ACI compatibility matrix for a full list of supported capabilities and the datasheet for detailed specifications. In closing, I want to bring to your attention the general availability of APIC release 1.0(2i) and Cisco NX-OS release 11.0(2i) for Cisco Nexus 9000 Series ACI-Mode Switches. This release delivers new hardware and software capabilities that will further the customer momentum we are seeing with ACI.

For more information, visit

www.cisco.com/go/aci

ACI simulator

https://blogs.cisco.com/datacenter


The Benefits of an Application Policy Language in Cisco ACI: Part 3 – Group Policies

[Note: This is the third in a four-part series on the OpFlex protocol in Cisco ACI, how it enables an application-centric policy model, and why other SDN protocols do not. Part 1 | Part 2 | Part 4]

The Cisco ACI fabric is designed as an application-centric intelligent network. The Cisco APIC policy model is defined from the top down as a policy enforcement engine focused on the application itself, abstracting away the networking functions underneath. The policy model works hand in hand with the advanced hardware capabilities of the Cisco ACI fabric that underlie this business-application-focused control system.

The Cisco APIC object-oriented policy model is built on the distributed policy enforcement concepts for intelligent devices enabled by OpFlex and exemplified by modern development and operations (DevOps) tools such as Puppet and Chef.

At the top level, the Cisco APIC policy model is built on a series of one or more tenants, which allows network infrastructure administration and data flows to be segregated. Tenants can be customers, business units, or groups, depending on organizational needs. Below tenants, the model provides a series of objects that define the application itself. These objects are endpoints and endpoint groups (EPGs) and the policies that define their relationships (see figure below). The relationship between two endpoints, which might be two virtual machines connected in a three-tier web application, can be implemented by routing traffic between the endpoints to firewalls and application delivery controllers (ADCs) that enforce the appropriate security and quality of service (QoS) policies for the application and those endpoints.

Endpoint Group Policy

Endpoints and Application Workloads, Along with Tenants and Application Network Profiles, Are the Foundation of the Cisco ACI Policy Model

For a more thorough description of the Cisco ACI application policy model, please refer to this whitepaper, or this one more specifically on Endpoint Groups.

For this discussion, the important feature to notice is the way that Cisco ACI policies are applied to application endpoints (physical and virtual workloads) and to EPGs. Configuration of individual network devices is ancillary to the requirements of the application and workloads. Individual devices do not require programmatic control as in prior SDN models; instead, they are orchestrated according to the centrally defined and managed application policies.
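To make the object hierarchy concrete, here is a purely illustrative Python sketch that creates a tenant containing an application network profile with two EPGs through the APIC REST API. The tenant, profile, and EPG names are hypothetical, and the payload is intentionally minimal; a real deployment would also define bridge domains, contracts, and filters:

```python
import requests

APIC = "https://apic.example.com"  # placeholder APIC address
session = requests.Session()
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}},
             verify=False).raise_for_status()

# A tenant (fvTenant) containing an application profile (fvAp)
# with two endpoint groups (fvAEPg): one for the web tier, one for the database tier
payload = {
    "fvTenant": {
        "attributes": {"name": "ExampleTenant"},
        "children": [
            {"fvAp": {
                "attributes": {"name": "ThreeTierApp"},
                "children": [
                    {"fvAEPg": {"attributes": {"name": "web"}}},
                    {"fvAEPg": {"attributes": {"name": "db"}}},
                ],
            }}
        ],
    }
}

# Posting to the policy universe (uni) creates the whole subtree in a single call
session.post(f"{APIC}/api/mo/uni.json", json=payload, verify=False).raise_for_status()
```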

This model is catching on in the industry and in the open source community. The OpenStack organization has begun work on group-based policies that extend the OpenStack Neutron API for network orchestration with a declarative, policy-based model based closely on the EPG policies in Cisco ACI. (Note: “Declarative” refers to an orchestration model in which control is distributed to intelligent devices based on centralized policies, in contrast to retaining per-flow management control within the controller itself.)



Cisco ACI Policy Model Spans Physical, Virtual, and Container-Based Environments

I find Linux containers among the most fascinating technology trends of the recent past. Containers couple lightweight, high-performance isolation and security with the ability to easily package services and deploy them in a flexible and scalable way. Many companies find this value proposition compelling enough to use containers to build, manage, and deploy enterprise applications. Adding further momentum to container adoption is Docker, a popular open source platform that addresses key requirements of Linux container deployment, performance, and management. If you are into historical parallels, I would equate Docker’s evolution and growth to that of the Java programming language, which brought in its wake the promise of “write once, run anywhere.” Docker containers bring the powerful capability of “build once and run everywhere.” It is therefore not surprising to see a vibrant ecosystem building up around Docker.

The purpose of this blog is to discuss the close alignment between Cisco ACI and containers. Much like containers, Cisco ACI provides accelerated application deployment with scale and security. In doing so, Cisco ACI seamlessly brings together applications across virtual machines (VMs), bare-metal servers, and containers.

Let us take a closer look at how containers address issues associated with hypervisor-based virtualization. Hypervisor-based virtualization has been a dominant technology over the past two decades, delivering compelling ROI through server consolidation. However, it is well known that hypervisors introduce workload-dependent overhead while replicating native hardware behavior. Furthermore, application portability needs to be considered when dealing with hypervisors.

Linux containers, on the other hand, provide self-contained execution environments and isolate applications using primitives such as namespaces and control groups (cgroups). These primitives make it possible to run multiple environments on a Linux host with strong isolation between them, while bringing efficiency and flexibility. An architectural comparison of hypervisor-based and container-based virtualization is worth a quick glance. As the figure below shows, Docker-based containers bring portability across hosts, versioning, and reuse. No discussion of Docker containers is complete without mention of DevOps benefits. The Docker framework, together with tools such as Vagrant, aligns tightly with DevOps practices. With Docker, developers can focus on their code without worrying about the side effects of running it in production, and operations teams can treat the entire container as a single entity when managing deployments.

[Figure: hypervisor-based versus container-based virtualization]
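To see the “build once and run everywhere” idea in code, here is a small sketch using the Docker SDK for Python; the image name is illustrative, and it assumes a local Docker daemon is running:

```python
import docker  # Docker SDK for Python (pip install docker)

# Connect to the local Docker daemon using environment defaults
client = docker.from_env()

# The same image runs unchanged on a laptop, a test host, or a production server:
# that is the "build once, run everywhere" property in practice.
output = client.containers.run("alpine:3", ["echo", "hello from a container"], remove=True)
print(output.decode().strip())
```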

ACI and Containers

Cisco Application Centric Infrastructure (ACI) offers a common policy model for managing IT applications across the entire data center infrastructure. ACI is agnostic to the form factors on which applications are deployed: it supports bare-metal servers, virtual machines, and containers, and that portability makes it a natural fit for containers. In addition, ACI’s unified policy language offers customers a consistent security model regardless of how an application is deployed. With ACI, workloads running in existing bare-metal and VM environments can seamlessly integrate with, or migrate to, a container environment.

The consistency of ACI’s policy model is striking. In ACI, policies are applied across endpoint groups (EPGs), which are abstractions of network endpoints. The endpoints can be bare-metal servers, VMs, or containers. As a result of this flexibility, ACI can apply policies across a diverse infrastructure that includes Linux containers. The figure below illustrates the flexible ACI policy model applied to an application workload spanning bare-metal servers, VMs, and Docker containers.

[Figure: ACI policy model applied across bare-metal servers, VMs, and Docker containers]

You may recall that Cisco announced broad endorsement of the OpFlex protocol at Interop Las Vegas 2014. We are currently working on integrating OpFlex and Open vSwitch (OVS) with ACI to enforce policies across VMs and containers in the early part of the next calendar year.

As container adoption matures, managing large numbers of containers at scale becomes critical. Many open source initiatives are actively working on container scalability, scheduling, and resource management. OpenStack, Mesos, and Kubernetes are among the open source communities in which Cisco is actively engaged to advance ACI integration with open source tools and solutions.

With containers, we have seen only the tip of the iceberg. Docker containers are beginning to gain traction in private clouds and traditional data centers, and Cisco ACI plays a pivotal role in extending its unified policy model across a diverse infrastructure comprising bare-metal servers, VMs, and containers.

For more information, refer to:

http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-732697.html

http://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/dc-partner-red-hat/linux-containers-white-paper-cisco-red-hat.pdf
