
CCIE: ITD and RISE in CCIE Data Center

ITD and RISE are now part of CCIE Data Center:

Intelligent Traffic Director (ITD) is a hardware-based, multi-terabit Layer 4 load-balancing, traffic-steering and services-insertion solution on the Nexus 5k/6k/7k/9k series of switches.

Domain                                            Written Exam (%)   Lab Exam (%)
1.0 Cisco Data Center L2/L3 Technologies                24%              27%
2.0 Cisco Data Center Network Services                  12%              13%
  2.1 Design, Implement and Troubleshoot Service Insertion and Redirection
    • 2.1.a Design, Implement and Troubleshoot Service Insertion and Redirection, for example LB, vPATH, ITD, RISE
  2.2 Design, Implement and Troubleshoot network services
    • 2.2.a Design, Implement and Troubleshoot network services, for example policy-driven L4-L7 services
3.0 Data Center Storage Networking and Compute          23%              26%
4.0 Data Center Automation and Orchestration            13%              14%
5.0 Data Center Fabric Infrastructure                   18%              14%
6.0 Evolving Technologies                               10%              N/A


To learn about RISE (Remote Integrated Services Engine), please see:

To learn about ITD (Intelligent Traffic Director), please see:



Server Load Balancing with NAT, Using Nexus Switches: ITD

Server load balancers (SLB) have become very common in network deployments as data and video traffic expand at a rapid rate. There are various modes of SLB deployment today. Application load balancing with network address translation (NAT) has become a necessity because of the benefits it provides.

Cisco Intelligent Traffic Director (ITD) is a hardware-based, multi-terabit Layer 4 load-balancing and traffic-steering solution on the Nexus 5k/6k/7k/9k series of switches.

With our latest NX-OS Software 7.2(1)D1(1) (also known as Gibraltar MR), ITD supports SLB NAT on the Nexus 7k series of switches.

In an SLB NAT deployment, clients send traffic to a virtual IP address and do not need to know the IP addresses of the underlying servers. NAT provides additional security by hiding the real server IPs from the outside world. In virtualized server environments, this NAT capability provides increased flexibility to move real servers across different server pools without their clients noticing. With respect to health monitoring and traffic reassignment, SLB NAT helps applications work seamlessly without clients being aware of any IP change.
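To give a rough feel for what this looks like, here is a minimal sketch of an ITD SLB NAT service on a Nexus 7k. The device-group and service names, node and virtual IP addresses, and interface are hypothetical, and the exact command set (probe options, the nat destination keyword, and so on) varies by platform and NX-OS release, so treat this as an illustration rather than a verified configuration.

! Minimal ITD SLB NAT sketch (names, addresses and interface are hypothetical)
feature itd

! Pool of real servers, health-checked with ICMP probes
itd device-group WEB-FARM
  probe icmp
  node ip 10.10.10.11
  node ip 10.10.10.12
  node ip 10.10.10.13

! ITD service: clients connect to the VIP; ITD load-balances across the pool
! and NATs the VIP to the selected real server address
itd WEB-SLB
  device-group WEB-FARM
  virtual ip 172.16.1.100 255.255.255.255 tcp 80
  ingress interface Ethernet1/1
  nat destination
  failaction node reassign
  no shutdown

With a service along these lines, clients only ever see the virtual IP; ITD translates it to a real server on the way in and back to the VIP on the way out, and the health probes drive traffic reassignment when a node fails.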

ITD won the Best of Interop 2015 award in the Data Center category.


ITD provides:

  1. Zero-latency load-balancing.
  2. CAPEX savings: no service module or external L3/L4 load-balancer needed. Every Nexus port can be used as a load-balancer.
  3. IP-stickiness
  4. Resiliency (like resilient ECMP) and consistent hashing
  5. Bi-directional flow-coherency: traffic from A->B and B->A goes to the same node.
  6. Monitoring the health of servers/appliances.
  7. Handles an unlimited number of flows.

Documentation, slides, videos:

Email Query or

Connect on twitter: @samar4


Cisco UCS Integrated Infrastructure for OpenStack: Enabling Reliable OpenStack Cloud for Enterprises

Companies are going through a change in the way they conduct their businesses, and digital transformation is paving the way. IT services and applications are distributed and extend beyond the traditional boundaries of the data center. Cloud adoption is a big part of the transformation. Enterprises are looking at multiple cloud technologies from established and emerging players to handle the transformation.

More and more companies are adopting Fast IT as a standard to meet these challenges. Dev and test teams are looking to shorten the development life cycle, and innovative cloud developers are looking for platforms that are programmatic and automatable. Companies are looking at flexible, open options to meet these needs in-house. OpenStack is emerging as the leading open-source cloud computing platform, and customers are actively considering it for their businesses.

OpenStack has come a long way in the last few years; the broad OpenStack community is delivering new features and capabilities rapidly, and it has gained interest across various customer segments. However, it continues to be a challenge for businesses to adopt OpenStack because of the skill set needed to design, optimize and deploy an OpenStack-based cloud in their data centers. Another challenge is the support needed to maintain, troubleshoot and evolve with the industry.

Cisco Validated Designs and Implementation Guides offer proven processes and tested configurations that reduce the complexity of deploying OpenStack. Cisco works closely with ecosystem partners like Red Hat and Intel to develop validated solutions for standing up OpenStack based private clouds.

Earlier this year, we released ‘FlexPod Datacenter with Red Hat Enterprise Linux OpenStack Platform,’ a Cisco Validated Design (CVD) for running OpenStack on our trusted FlexPod architecture, composed of Cisco Unified Computing System, Cisco Nexus family of switches, and NetApp unified storage systems. For customers interested in running OpenStack with Ceph storage, another CVD is in the works.

Next week, we will release a new CVD jointly developed with Red Hat and Intel. The new validated solution combines the power of Cisco UCS Integrated Infrastructure with the most recent OpenStack distribution from Red Hat, so our customers can more easily and quickly deploy OpenStack private clouds.

Cisco UCS Integrated Infrastructure for OpenStack

Cisco UCS Integrated Infrastructure is a reliable, scalable, industry-leading platform that matches the needs of an agile business. Cisco’s innovative solutions for stateless computing, programmability and automation are enabled within the context of OpenStack through easily available, open source plugins.

Red Hat Enterprise Linux OpenStack Platform is a stable and tested distribution of OpenStack. OSP director is an integrated and centralized tool for deployment and management of OpenStack and Ceph.

Intel has made key contributions toward making OpenStack enterprise-ready, such as support for live migration and scalability.

All the capabilities and features of Cisco UCS and the contributions by Intel are delivered and deployed through Red Hat Enterprise Linux OpenStack Platform 7 (OSP 7) for a seamless and stable experience. With this solution, customers can manage compute, network, storage, hypervisors and virtual machines from the OpenStack environment. Our fundamental focus is to deliver an enterprise-ready OpenStack platform solution with a validated configuration, to increase speed of deployment and reduce risk.

For easy adoption of OpenStack, Cisco will be the single point of contact for installation and ongoing support of the entire solution, including the infrastructure and OpenStack. Cisco will work with Red Hat to provide coordinated support for faster resolution.

Simplifying OpenStack deployment is critical to our customers’ success and to IT adoption of the technology; we are committed to delivering that to our customers with our ecosystem partners.

Please join us at the OpenStack Summit in Tokyo, October 27-29, to learn about the solutions we are building to address customer needs, our participation in and contributions to OpenStack, or for any discussions on OpenStack.

Our Cisco Validated Design for deploying Red Hat Enterprise Linux OpenStack Platform on Cisco UCS Integrated Infrastructure will be available for download in the Cisco Design Zone.


Experience Day 0, 1, 2 and N Operations @ PuppetConf

Craig Huitema blogged about Cisco’s SDN strategy, and one of its key pillars is programmable networks. Cisco’s programmable networks are based on the Nexus operating system, NX-OS; Robb Boyd from TechWiseTV covers it here and goes into more depth on NX-API REST (object model) here and here.

Also, go here if you missed our September 25th SDxCentral DemoFriday, where we looked at use cases and demos related to NX-Toolkit and NX-API REST. The bottom line is driving operational agility in the data center by enabling IT admins to manage Nexus switches like Linux servers, with open interfaces and integration with DevOps tools.

One of those DevOps tools is Puppet. Integrating the Puppet Enterprise agent is an integral part of programmable networks, as I touched on in my previous blog.

As we break lifecycle management into Day 0, 1, 2 and N to install, configure, optimize and upgrade the network to meet application and user requirements, Puppet plays a key role in each step.

[Figure: Day 0, 1, 2 and N Operations]

Come visit Cisco’s booth at PuppetConf, October 7-9, to see demos and learn more about the integration of Puppet and its benefits on Day 0, 1, 2, and N. Also, visit our sponsor theater on Thursday, October 8 at 12:10 PM in the main exhibit hall, as well as our breakout session on Friday, October 9 at 2:30 PM. We will share how Cisco’s strategy of openness has helped the developer community.

To stay up to date with the latest version of the CiscoPuppet module source code, visit the GitHub repository, which allows network administrators to manage Cisco network elements using Puppet.


ITD: Load Balancing, Traffic Steering & Clustering using Nexus 5k/6k/7k/9k

Cisco Intelligent Traffic Director (ITD) is an innovative solution to bridge the performance gap between a multi-terabit switch and gigabit servers and appliances. It is a hardware-based, multi-terabit Layer 4 load-balancing, traffic-steering and clustering solution on the Nexus 5k/6k/7k/9k series of switches.

It allows customers to deploy servers and appliances from any vendor with no network or topology changes. With a few simple configuration steps on a Cisco Nexus switch, customers can create an appliance or server cluster and deploy multiple devices to scale service capacity with ease. The servers or appliances do not have to be directly connected to the Cisco Nexus switch.
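As a rough illustration of those configuration steps, here is a minimal sketch of an ITD service that clusters a set of firewalls behind a Nexus switch. The device-group and service names, node addresses, and interface are hypothetical, and the available options (probe types, load-balance methods, bucket counts) vary by platform and NX-OS release.

! Minimal ITD clustering sketch (names, addresses and interface are hypothetical)
feature itd

! Cluster of firewall nodes, health-checked with ICMP probes
itd device-group FW-CLUSTER
  probe icmp
  node ip 10.20.20.1
  node ip 10.20.20.2
  node ip 10.20.20.3
  node ip 10.20.20.4

! ITD service on the client-facing interface;
! source-IP hashing keeps each flow pinned to the same firewall
itd FW-STEER
  device-group FW-CLUSTER
  ingress interface Ethernet1/10
  load-balance method src ip
  failaction node reassign
  no shutdown

When a node fails its probe, only that node's traffic buckets are reassigned, so flows on the surviving nodes are not re-hashed.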

ITD won the Best of Interop 2015 award in the Data Center category.

With our patent-pending, innovative algorithms, ITD (Intelligent Traffic Director) supports IP-stickiness, resiliency, consistent hashing, exclude access-lists, NAT (EFT), VIPs, health monitoring, sophisticated failure-handling policies, N+M redundancy, IPv4, IPv6, VRFs, weighted load-balancing, bi-directional flow-coherency, and IP SLA probes including DNS. There is no service module or external appliance needed. ITD provides order-of-magnitude CAPEX and OPEX savings for customers. ITD is far superior to legacy solutions such as PBR, WCCP, ECMP, port-channels and Layer 4 load-balancer appliances.

ITD provides:

  1. Hardware-based multi-terabit/s L3/L4 load-balancing at wire speed.
  2. Zero-latency load-balancing.
  3. CAPEX savings: no service module or external L3/L4 load-balancer needed. Every Nexus port can be used as a load-balancer.
  4. Redirect line-rate traffic to any devices, for example web cache engines, Web Accelerator Engines (WAE), video caches, etc.
  5. Capability to create clusters of devices, for example Firewalls, Intrusion Prevention Systems (IPS), Web Application Firewalls (WAF), or Hadoop clusters.
  6. IP-stickiness
  7. Resiliency (like resilient ECMP) and consistent hashing
  8. VIP-based L4 load-balancing
  9. NAT (available for EFT/PoC). Allows non-DSR deployments.
  10. Weighted load-balancing
  11. Load-balancing to a large number of devices/servers
  12. ACLs along with redirection and load balancing simultaneously.
  13. Bi-directional flow-coherency: traffic from A->B and B->A goes to the same node.
  14. Order-of-magnitude OPEX savings: reduction in configuration and ease of deployment.
  15. Order-of-magnitude CAPEX savings: wiring, power, rack space and cost savings.
  16. The servers/appliances don’t have to be directly connected to the Nexus switch.
  17. Monitoring the health of servers/appliances.
  18. N+M redundancy.
  19. Automatic failure handling of servers/appliances.
  20. VRF support, vPC support, VDC support
  21. Supported on all linecards of the Nexus 9k/7k/6k/5k series.
  22. Supports both IPv4 and IPv6
  23. Cisco Prime DCNM support
  24. Exclude access-list
  25. No certification, integration, or qualification needed between the devices and the Cisco NX-OS switch.
  26. The feature does not add any load to the supervisor CPU.
  27. ITD uses orders of magnitude fewer hardware TCAM resources than WCCP.
  28. Handles an unlimited number of flows.

For example:

  • Load-balance traffic to 256 servers of 10Gbps each.
  • Load-balance to a cluster of firewalls. ITD is far superior to PBR.
  • Scale IPS, IDS and WAF by load-balancing to standalone devices.
  • Scale the NFV solution by load-balancing to low cost VM/container based NFV.
  • Scale the WAAS / WAE solution.
  • Scale the VDS-TC (video-caching) solution.
  • Scale the Layer-7 load-balancer, by distributing traffic to L7 LBs.
  • ECMP/port-channels cause re-hashing of flows. ITD is resilient and doesn’t re-hash flows on node add/delete/failure.

Documentation, slides, videos:

Email Query or

Please note that ITD is not a replacement for a Layer 7 load-balancer (URL, cookies, SSL, etc.). Please email: for further questions.

Connect on twitter: @samar4
