
ITD: Load Balancing, Traffic Steering & Clustering using Nexus 5k/6k/7k/9k

Cisco Intelligent Traffic Director (ITD) is an innovative solution that bridges the performance gap between a multi-terabit switch and gigabit servers and appliances. It is a hardware-based, multi-terabit Layer 4 load-balancing, traffic-steering, and clustering solution on the Nexus 5k/6k/7k/9k series of switches.

It allows customers to deploy servers and appliances from any vendor with no network or topology changes. With a few simple configuration steps on a Cisco Nexus switch, customers can create an appliance or server cluster and deploy multiple devices to scale service capacity with ease. The servers or appliances do not have to be directly connected to the Cisco Nexus switch.
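To give a feel for those "few simple configuration steps," here is a hedged sketch that pushes a minimal ITD service to a Nexus 9k through NX-API from Python. The switch address, credentials, device-group and service names are invented for this example, and exact ITD command syntax varies by platform and NX-OS release, so treat the command list as illustrative rather than authoritative.

```python
"""Illustrative sketch: configuring a basic ITD service over NX-API.

Assumptions (not from the blog post): NX-API is enabled on the switch
('feature nxapi'), the endpoint is https://10.0.0.1/ins, and the
credentials are admin/password. ITD command syntax differs across
platforms and NX-OS releases.
"""
import requests

# A minimal ITD service: a device group of two servers, health-checked
# with ICMP, load-balanced by source IP on one ingress interface.
commands = " ;".join([
    "feature itd",
    "itd device-group WEB-SERVERS",
    "node ip 10.10.10.11",
    "node ip 10.10.10.12",
    "probe icmp",
    "itd WEB-SVC",
    "device-group WEB-SERVERS",
    "ingress interface ethernet 1/1",
    "load-balance method src ip",
    "no shutdown",
])

payload = {
    "ins_api": {
        "version": "1.0",
        "type": "cli_conf",       # configuration-mode commands
        "chunk": "0",
        "sid": "1",
        "input": commands,
        "output_format": "json",
    }
}

resp = requests.post("https://10.0.0.1/ins", json=payload,
                     auth=("admin", "password"), verify=False)
resp.raise_for_status()
print(resp.json())
```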

ITD won a Best of Interop 2015 award in the Data Center category.

With our patent-pending algorithms, ITD (Intelligent Traffic Director) supports IP stickiness, resiliency, consistent hashing, exclude access-lists, NAT (EFT), VIPs, health monitoring, sophisticated failure-handling policies, N+M redundancy, IPv4, IPv6, VRFs, weighted load-balancing, bi-directional flow coherency, and IP SLA probes (including DNS). No service module or external appliance is needed. ITD delivers order-of-magnitude CAPEX and OPEX savings for customers, and it is far superior to legacy approaches such as PBR, WCCP, ECMP, port-channels, and Layer 4 load-balancer appliances.

ITD provides:

  1. Hardware-based multi-terabit/s L3/L4 load-balancing at wire speed.
  2. Zero-latency load-balancing.
  3. CAPEX savings: no service module or external L3/L4 load-balancer is needed. Every Nexus port can be used for load-balancing.
  4. Redirection of line-rate traffic to any device, for example web cache engines, Web Accelerator Engines (WAE), and video caches.
  5. Capability to create clusters of devices, for example firewalls, Intrusion Prevention Systems (IPS), Web Application Firewalls (WAF), or Hadoop clusters.
  6. IP stickiness.
  7. Resilient (like resilient ECMP) and consistent hashing.
  8. VIP-based L4 load-balancing.
  9. NAT (available for EFT/PoC), allowing non-DSR deployments.
  10. Weighted load-balancing.
  11. Load-balancing to a large number of devices/servers.
  12. ACLs alongside redirection and load balancing, simultaneously.
  13. Bi-directional flow coherency: traffic from A->B and B->A goes to the same node.
  14. Order-of-magnitude OPEX savings: reduced configuration and ease of deployment.
  15. Order-of-magnitude CAPEX savings: wiring, power, rack space, and cost.
  16. Servers/appliances that do not have to be directly connected to the Nexus switch.
  17. Health monitoring of servers/appliances.
  18. N+M redundancy.
  19. Automatic failure handling of servers/appliances.
  20. VRF, vPC, and VDC support.
  21. Support on all linecards of the Nexus 9k/7k/6k/5k series.
  22. Support for both IPv4 and IPv6.
  23. Cisco Prime DCNM support.
  24. Exclude access-lists.
  25. No certification, integration, or qualification needed between the devices and the Cisco NX-OS switch.
  26. No added load on the supervisor CPU.
  27. Orders of magnitude less hardware TCAM resource usage than WCCP.
  28. Handling of an unlimited number of flows.

For example:

  • Load-balance traffic to 256 servers of 10 Gbps each.
  • Load-balance to a cluster of firewalls. ITD is far superior to PBR.
  • Scale IPS, IDS, and WAF deployments by load-balancing to standalone devices.
  • Scale NFV solutions by load-balancing to low-cost VM- or container-based NFV.
  • Scale WAAS/WAE solutions.
  • Scale VDS-TC (video-caching) solutions.
  • Scale Layer 7 load-balancers by distributing traffic across multiple L7 LBs.
  • Avoid the re-hashing of flows that ECMP/port-channels cause: ITD is resilient and does not re-hash existing flows on node add/delete/failure (see the sketch below).
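To illustrate the resiliency and flow-coherency properties described above, here is a minimal Python sketch, assuming a fixed bucket table in the spirit of hardware hashing schemes. It is emphatically not Cisco's patent-pending algorithm, just a toy demonstrating the behavior the post describes: a symmetric hash keeps A->B and B->A on the same node, and a node failure remaps only the failed node's buckets.

```python
import hashlib

class ResilientHashLB:
    """Toy resilient load balancer: a fixed table of hash buckets, each
    owned by a node. Illustrative only; not Cisco's ITD algorithm."""

    NUM_BUCKETS = 256

    def __init__(self, nodes):
        self.nodes = list(nodes)
        # Fill the bucket table round-robin across the nodes.
        self.buckets = [self.nodes[i % len(self.nodes)]
                        for i in range(self.NUM_BUCKETS)]

    def _bucket(self, ip_a, ip_b):
        # Sort the address pair so A->B and B->A hash identically
        # (the bi-directional flow-coherency property).
        key = "|".join(sorted((ip_a, ip_b))).encode()
        return int(hashlib.md5(key).hexdigest(), 16) % self.NUM_BUCKETS

    def node_for(self, src_ip, dst_ip):
        return self.buckets[self._bucket(src_ip, dst_ip)]

    def fail_node(self, dead):
        # Reassign only the failed node's buckets; every other
        # flow-to-node mapping is untouched (no global re-hash).
        survivors = [n for n in self.nodes if n != dead]
        for i, owner in enumerate(self.buckets):
            if owner == dead:
                self.buckets[i] = survivors[i % len(survivors)]
        self.nodes = survivors

if __name__ == "__main__":
    lb = ResilientHashLB(["srv1", "srv2", "srv3", "srv4"])
    print(lb.node_for("10.0.0.5", "192.0.2.9"))   # forward direction
    print(lb.node_for("192.0.2.9", "10.0.0.5"))   # reverse: same node
    lb.fail_node("srv2")
    print(lb.node_for("10.0.0.5", "192.0.2.9"))   # unchanged unless on srv2
```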

Please note that ITD is not a replacement for a Layer 7 load-balancer (URL, cookies, SSL, etc.).

Connect on Twitter: @samar4


What is Cisco’s SDN Strategy for the Data Center?

Cisco has a broad spectrum of customers across a wide range of markets and geographies. These customers have a diverse set of requirements, operational models, and use cases, meaning that a one-size-fits-all SDN strategy will not work for all of them. As a result, we made a series of announcements earlier this summer (at Cisco Live San Diego) that continued to showcase how our SDN strategy provides customers with a high degree of choice and flexibility. This blog reviews the key elements of that strategy and provides a bit of background and context around them.

Cisco SDN in the DC

Cisco’s SDN strategy for the Data Center is built on three key pillars:

  • Application Centric Infrastructure (ACI)
  • Programmable Fabric
  • Programmable Network

This approach enables our customers to choose the implementation option that best meets their IT and business goals by extending the benefits of programmability and automation across the entire Nexus switching portfolio. Let’s consider each of these pillars.


Application Centric Infrastructure (ACI)

A lot has been said and written about ACI already, so I’ll keep this section brief. ACI is Cisco’s flagship SDN offering and the most comprehensive SDN solution in the industry. Based on an application-centric policy model, ACI provides automated, integrated provisioning of both underlay and overlay networks, L4-7 services provisioning across a broad set of ecosystem partners, and extensive telemetry for application-level health monitoring. These comprehensive capabilities deliver a solution that is agile, open, and secure, offering customers benefits no other SDN solution can.

I know the paragraph above was a bit of a mouthful. For a quick snapshot of what it all translates to in terms of actually helping a customer, check out this report from IDC. If you want to learn more about ACI, go here.

Programmable Fabric

This pillar is all about providing scale and simplicity for VXLAN overlays. Beyond that, it provides a clear path forward for the overall Nexus portfolio to participate in SDN and derive its benefits.

VXLAN has gained huge momentum across the industry for a wide variety of reasons that, in many cases, involve improvements over traditional technologies such as VLANs and Spanning Tree. These include more efficient bandwidth use via Equal Cost Multi-Pathing (ECMP), higher theoretical scalability with 16 million segments (thanks to VXLAN's 24-bit network identifier), and more flexibility through an overlay model upon which multi-tenant cloud networks can be built. As momentum for VXLAN networks grows, so does the demand for two key things:

  • A standards-based approach to scaling out VXLANs, and
  • Simplified provisioning and management of them.

Regarding a standards-based approach to scaling out VXLANs, Cisco now supports the Multiprotocol BGP (MP-BGP) EVPN control plane on Nexus switches. Why does this matter? Well, the original VXLAN spec (RFC 7348) relied on a multicast-based flood-and-learn mechanism without a control plane for certain key functions (e.g. VTEP peer discovery and remote end-host reachability). This is a suboptimal approach. To overcome these inherent limitations, the IETF developed the MP-BGP EVPN control plane as a standards-based control plane for VXLAN overlays. This reduces traffic flooding on the overlay network, yielding a more efficient and more scalable approach.
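To make the difference concrete, here is a toy Python model of why a control plane beats flood-and-learn. The class names and tables are invented for this sketch; it only illustrates the idea that with MP-BGP EVPN each VTEP learns remote host reachability from advertisements and can unicast immediately, instead of flooding unknown traffic to every peer.

```python
"""Toy contrast between flood-and-learn and an EVPN-style control plane.
All names and data structures here are invented for illustration."""

class Vtep:
    def __init__(self, name):
        self.name = name
        self.remote_hosts = {}   # host -> remote VTEP, fed by the control plane

    def evpn_advertise(self, peers, host):
        # Control plane: advertise this host's location to every peer,
        # as a BGP EVPN route would.
        for peer in peers:
            peer.remote_hosts[host] = self.name

    def forward(self, host, peers):
        vtep = self.remote_hosts.get(host)
        if vtep:
            return f"unicast one copy to {vtep}"
        # No control-plane entry: fall back to flood-and-learn (RFC 7348).
        return f"flood to all {len(peers)} peer VTEPs"

leaf1, leaf2, leaf3 = Vtep("leaf1"), Vtep("leaf2"), Vtep("leaf3")

print(leaf1.forward("host-A", [leaf2, leaf3]))      # flood: no route yet
leaf2.evpn_advertise([leaf1, leaf3], "host-A")      # EVPN route for host-A
print(leaf1.forward("host-A", [leaf2, leaf3]))      # now a single unicast
```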

As for the second item, simplified provisioning and management, Cisco announced an overlay management and provisioning system. This new solution, called Virtual Topology System (VTS), automates provisioning of the overlay network to enhance the deployment of cloud-based services. Through an automated overlay provisioning model and tight integration with third-party orchestration tools such as OpenStack and VMware vCenter, VTS simplifies overlay provisioning and management for both physical and virtual workloads by eliminating manually intensive network configuration tasks. These whiteboard sessions provide an overview and also a bit more technical detail, if you’re interested.

Programmable Network

Infrastructure programmability is a big deal because it drives automation, which drives speed, which is an obvious prerequisite for the success of just about any business dealing with digital disruption. As programmability evolves, Cisco continues to roll out more and more capabilities across the Nexus portfolio. We have a broad range of features in this space, including programmable open APIs, integration with third-party DevOps and automation tools, custom app development, and Bash shell commands. This set of capabilities within NX-OS is what the Programmable Network pillar is about. Let’s consider how this may be useful for you.

A while ago, a small number of customers with very large networks started shifting the way they operated. Their networks were growing very large because (not too surprisingly) the number of users, and thus servers, was growing very large. As server counts grew ever larger, ever faster, these customers realized they had a choice:

  • Hire a zillion new sys admins, or
  • Brutally overwork their existing sys admins, or
  • Deploy and manage servers in new and different ways.

The last option won out (in many cases, anyhow), and the revelation was automation. That is, tools that automated server deployment and management helped these sys admins and their employers scale the business. In the process, they paid close attention to metrics like the number of servers a given admin was managing. These “device to admin” ratios went up a lot, in some cases by orders of magnitude. With automation tools and other changes (to culture, process, etc.), some companies saw admins managing not tens or hundreds of servers, but thousands. They also started experimenting with and employing DevOps, a term that at this point has a multitude of meanings, but is defined here in simple English.

As these elements have converged, people across different silos have started to collaborate a bit more, and as a result, tips, tricks, and tools have started to spill across the silos. So, for example, as sys admins saw efficiency gains from using tools like Puppet and Chef to automate tasks on their servers, there was a desire to use the same tools on networks. In other cases, someone comfortable with Linux wanted to work from a Bash shell and use those commands for configuration and troubleshooting on the network as well as on servers. Others wanted APIs that would allow extraction of all sorts of arcane box info to be massaged and acted upon by scripts and other tools.
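As a hedged illustration of that last point, the sketch below pulls structured interface state from a Nexus switch over NX-API and acts on it in a script. The switch address and credentials are placeholders, and the exact JSON structure varies by NX-OS release, so treat it as a sketch rather than a verbatim recipe.

```python
"""Sketch: extracting "arcane box info" from a Nexus switch via NX-API.

Assumptions (not from the blog post): NX-API is enabled on the switch
('feature nxapi'), the endpoint is https://10.0.0.1/ins, and the
credentials are admin/password. The TABLE_/ROW_ key convention follows
commonly documented NX-OS JSON output, but fields vary by release.
"""
import requests

payload = {
    "ins_api": {
        "version": "1.0",
        "type": "cli_show",       # structured output for show commands
        "chunk": "0",
        "sid": "1",
        "input": "show interface brief",
        "output_format": "json",
    }
}

resp = requests.post("https://10.0.0.1/ins", json=payload,
                     auth=("admin", "password"), verify=False)
resp.raise_for_status()

body = resp.json()["ins_api"]["outputs"]["output"]["body"]
rows = body["TABLE_interface"]["ROW_interface"]
if isinstance(rows, dict):        # a single row can come back as a dict
    rows = [rows]

# Act on the data, e.g. flag interfaces that are not up.
for row in rows:
    if row.get("state") != "up":
        print("attention:", row.get("interface"), row.get("state"))
```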

Essentially, there was a need for more elements of the box to be more accessible and programmable in a wide variety of ways. It’s worth noting that although these trends started with a small subset of customers, many of the elements are working their way out to a much broader, more diverse cross section of customers. As this evolution has occurred, Cisco has been adding more programmability to the Nexus switches. This paper provides a more detailed view of various use cases and the functionality Nexus provides.

In summary, these three pillars of ACI, Programmable Fabric, and Programmable Network provide a wide range of capabilities to help our customers across the broad spectrum of challenges they face. In the coming weeks and months, we’ll provide more information, here as well as in other venues, to help you better understand the strategy and its components. If this blog was too geeky and you’re looking for higher-level info, we’ll have that. If it was too fluffy and you want more technical depth, we’ll have that as well. To punctuate this point, I’ll be hosting a webinar on September 15 that will cover the above in more detail. You can register here.


Why not Initiate a “Save to Invest” Program for your Data Center? (Part 1)

Save Some Money! As a Scot, I have a natural predisposition, almost a gene, for money-saving initiatives! As I’ve been researching new initiatives in my work for Cisco Services over the past few months, I’ve become aware of some huge sinks of your cash in today’s data centers (for the first time, I am ashamed to say, in some cases). In this two-part blog, I’ll share these with you, with the aim of encouraging you to invest in money-saving activities that free up funds to modernize your data center, transform your end-user experience, and improve your asset-utilization financial metrics. In fact, you could create your own “Save to Invest” program by following the 5 tips below. And while you’re reading, put on your headphones, turn up the volume, and listen to my “theme tune” for this article, “Money!” :-)



This week I’ll discuss:

(1) Identify, Turn Off and Remove Idle Servers

(2) Identify Un-used Enterprise Software Applications: Reduce Your Software Costs

(3) Get Rid of Dead Weight – Execute a Server Refresh



A Strong Partnership: Cisco and Microsoft

Cisco continues to develop its partnership with Microsoft, becoming a critical component of Microsoft data centers across the globe. 80% of data centers around the globe already include Cisco networking switches and routers, and more and more of these same data centers are also making the switch to Cisco UCS server platforms. There are many advantages to using Cisco UCS as your server platform. IDC recently completed a study interviewing many customers with Cisco UCS installations and determined that customers who install their applications on a Cisco UCS server platform gain the following business benefits.

[Chart: business benefits identified in the IDC study of Cisco UCS customers]

You can get firsthand knowledge of these benefits by visiting Cisco at several Microsoft events over the next couple of months.

  1. Cisco Application Centric Infrastructure (ACI) and Microsoft Virtual Trade Show.


Evolving Your Data Center to the Next Level

Keeping up with all of the changes in today’s data center technology can be daunting. Data center technologies are evolving quickly on a number of different fronts. This presentation will cover some of the latest trends in data center technology. You’ll see how they can impact your business and how you can best begin to incorporate them into your own infrastructure. The technologies discussed will include the cloud, the hybrid cloud, containers, consolidated management, software-defined networking, flash storage, as well as converged and hyperconverged infrastructure.

  2. Microsoft Most Valuable Professional (MVP) Days


This community initiative is the brainchild of several of Microsoft Canada’s top MVPs. It is our absolute pleasure to share our knowledge locally, allowing our communities to learn more and advance their technical knowledge base. You can follow Canadian MVPs on Twitter at #CDNMVP.

Vancouver, BC – September 21, 2015

Click here to Register for Vancouver

Calgary, AB – September 23, 2015

Click here to Register for Calgary

Edmonton, AB – September 25, 2015

Click here to Register for Edmonton

  3. SQL Saturday


SQL Saturdays are free one-day training events for SQL Server professionals that focus on local speakers and provide a variety of high-quality technical sessions. The community behind them is made up of SQL Server database administrators, database and application developers, business intelligence experts, and users from around the globe, represented by more than 285 local PASS chapters worldwide, 28 virtual chapters, and 120,000 members.

Orlando – October 10

Click here to Register for Orlando

Portland – October 24

Click here to Register for Portland

Join us at these different events and discover why the server platform on which you install these Microsoft solutions makes a difference. Hope to see you there.



The Long Road to the Cloud – Changes in Application Deployment Criteria

In today’s world, as more and more customers prepare to take advantage of cloud technologies, they are finding that private cloud and colocation services are essential options on their journey to the cloud.

We are lucky to have Dan Harrington as a guest blogger. Dan is a Research Director covering datacenter trends at 451 Research. His primary focus is managing 451’s Voice of the Enterprise: Datacenters study, which surveys thousands of enterprises a year about their datacenter strategies.

Drawing on the insights from his surveys, Dan has agreed to share:

  • What the most important criteria are when determining whether to deploy in your own datacenter, at a colocation provider, or in the cloud.
  • Where IT organizations are deploying their applications, today and in the future.
  • How security is often the most important criterion when determining deployment location.

If you believe what you hear from the mainstream media, the investment community, and the tech press, you may come to the conclusion that every application is being deployed to the cloud or an off-premises colocation datacenter, and that the very idea of deploying in a company-owned datacenter went out of fashion long ago. After all, Amazon Web Services is currently pulling in $6bn annually, which is quite impressive, though the entire IT industry is worth well over $1 trillion a year. However, if you look under the covers, you will find that IT organizations still care very much about attributes that don’t necessarily lend themselves well to an off-premises deployment. Learn more about which vendors are leading the market in IaaS and on-premises cloud platforms.

[Figure: application deployment criteria. N=416. Source: 451 Research Voice of the Enterprise: Datacenters, Q2 2015]

A respondent from a large (>1,000 employees) public-sector organization weighed in last quarter about what he considers when deploying a new version of Oracle:

“The most recent major application [workload implemented] is more of an upgrade to Oracle 12… There weren’t really any alternatives [about where to deploy it]. It was here or our colocation facility… Keeping it on [premise] is important, but I think one of the main issues would be just network reliability between here and the colo… We’ve got staff here that are ready and able to deal with any kind of network or server issue. But it would take us an hour or so to get out to the colo site.”

