
Disruption = Cisco UCS Integrated Infrastructure for Big Data + Efficiency + Speed


Data is the foundation of the digital business. You know it, I know it. We get it. Let’s move on… But now a new question arises: as a leader in your organization, are you fully leveraging your data to uncover new business insights through analysis? New operational efficiencies? New customer trends and patterns?

We can help… Cisco and our Big Data partners capture, organize, prepare, and handle your available data, while providing the speed, consistency and repeatability necessary for deploying and managing a successful Big Data and Analytics infrastructure and service. Become a disruptor in your market by unlocking the value hidden in your data through data management, data preparation and data analytics to create tomorrow’s trends. Manage diverse sets of data and technologies cohesively, while delivering the analytics and data access control required by your business.

Make the transformative power of Cisco’s Unified Computing System (UCS) Integrated Infrastructure for Big Data your foundation.

Our Cisco UCS® Integrated Infrastructure for Big Data – a Cisco Validated Design – offers comprehensive infrastructure and management capabilities for Big Data. The solution helps improve performance and capacity, and it is also available as complete solutions built with industry-leading partners such as Cloudera, Hortonworks, IBM, MapR, Platfora, and Splunk.


Cisco ACI at OpenStack 2015 in Tokyo

Because of the nature of SDN, and specifically the automation available with Cisco’s Application Centric Infrastructure (ACI), ACI works really well with cloud orchestration tools such as OpenStack. I was at the OpenStack Summit in Tokyo last week and gave a vBrownBag TechTalk about why Cisco ACI makes OpenStack even better.

So, how does ACI work with OpenStack and perhaps even make it better? First, ACI offers distributed and scalable networking. It supports Floating IP addresses in OpenStack. If you’re not familiar with Floating IPs, they are essentially a pool of publicly routable IP addresses that you purchase from an ISP and assign to instances. This would be especially useful for instances, or VMs, like web servers. It also saves CPU cycles by putting the actual networking in the Nexus 9000 switches that make up the ACI fabric. Since these switches are built to forward packets, and that’s what they’re good at, why not save the CPU cycles for other things like running instances?
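If you want to see what assigning a Floating IP looks like in practice, here is a minimal Python sketch using the openstacksdk client; the cloud name, external network, and server name are placeholders I made up for illustration, not part of any particular deployment.

```python
# Minimal sketch: allocate a Floating IP and attach it to an instance with
# openstacksdk. The cloud name, external network, and server name are
# illustrative placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")        # credentials come from clouds.yaml

ext_net = conn.network.find_network("public")    # the externally routable network
server = conn.compute.find_server("web01")       # the instance to expose

# Grab a Floating IP from the external network's pool
fip = conn.network.create_ip(floating_network_id=ext_net.id)

# Attach it so the instance is reachable from outside the cloud
conn.compute.add_floating_ip_to_server(server, fip.floating_ip_address)

print(f"Instance is reachable at {fip.floating_ip_address}")
```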

OpenStack doesn’t natively work with Layer 4-7 devices like firewalls and load balancers. With ACI we can stitch in these necessary network services in an automated and repeatable way, and we do it without sacrificing visibility. While it’s important that we’re able to automate things, especially in a private or public cloud that is constantly changing and updating, if we lose visibility, we lose the ability to troubleshoot easily. In the demo, shown in the video above, you will see just how easy it is to troubleshoot problems in ACI. We also get the ability to strike preemptively, before a problem causes issues on the network, thanks to easily interpreted health scores for the entire fabric, including hardware and endpoint groups.
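As a rough idea of how those health scores can be pulled out programmatically, here is a hedged Python sketch against the APIC REST API; the controller address and credentials are placeholders, and the fabricHealthTotal class name is my assumption for the fabric-wide health object, so check it against your APIC's object model.

```python
# Rough sketch: read fabric health scores from the APIC REST API.
# The APIC address and credentials are placeholders; the queried class name
# (fabricHealthTotal) is an assumption for the fabric-wide health object.
import requests

APIC = "https://apic.example.com"
session = requests.Session()

# Authenticate; the session keeps the token cookie returned by the APIC
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False)

# Query the fabric-wide health objects and print their current scores
resp = session.get(f"{APIC}/api/class/fabricHealthTotal.json", verify=False)
for obj in resp.json().get("imdata", []):
    attrs = obj["fabricHealthTotal"]["attributes"]
    print(attrs.get("dn"), "health:", attrs.get("cur"))
```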

ACI is also a very secure model. Not only does it use a white-list model where traffic is denied by default and only allowed when explicitly configured, it also provides stronger security for multi-tenancy. In a strict overlay solution, if a hypervisor is attacked or compromised, the multi-tenancy model could be deemed insecure. In the ACI fabric, security is enforced at the port level, so even if a hypervisor is attacked, the tenants remain safe.

In recent versions of ACI we are able to use OpFlex as a southbound protocol to communicate between OpenStack and ACI. By using OpFlex we get a deeper integration and more visibility into the virtual environment of OpenStack. Instead of attaching hypervisor servers to a physical domain in ACI we can attach them into a VMM (Virtual Machine Manager) domain. This allows us to learn which instances or VMs are on which physical server. It will also automatically learn IP addresses, MAC addresses, states and other information. We can also see which networks or portgroups contain which hypervisors and instances within our OpenStack environments.




Eliminating Congestion Problems in Storage Area Networks

Today’s storage area networks (SANs) face tremendous pressure from the phenomenal growth of digital information and the need to access it quickly and efficiently. Worldwide data is projected to multiply by an astonishing 1000 percent by 2020. It’s little wonder, then, that storage administrators rank slow drain and related SAN congestion issues as their number-one concern. If not addressed in a timely fashion, these can have a domino effect, even degrading the performance of totally unrelated applications.

A Slow Drain Device is a device that does not accept frames at the rate generated by the source. In the presence of slow devices, Fibre Channel networks are likely to lack frame buffers, resulting in switch port credit starvation and potentially choking Inter-Switch Links (ISLs). Frames destined for slow devices need to be carefully isolated in separate queues and switched to egress ports without congesting the backplane. A decision then needs to be made about whether the frames are considered stuck and when to drop them.

Cisco provides a Slow Drain Device Detection and Congestion Avoidance feature (referred to as Slow Drain) that helps detect, identify, and resolve the condition exhibited by slow devices.


Join us for this live 60-minute webcast and learn the common causes of slow drain and other typical SAN congestion issues. See how Cisco Nexus and MDS switches now include hardware-based congestion detection and recovery logic for precise, fast detection and automatic, real-time resolution.

Watch this video to learn more: it demonstrates slow drain diagnostics using Cisco Prime DCNM. See how DCNM can be used to detect Slow Drain devices and troubleshoot the situation within minutes in a large fabric with thousands of ports. Deploy Cisco Prime DCNM today to bring Slow Drain troubleshooting time down from days or weeks to minutes.

Reasons for Slow Drain include:

1) An edge device can be slow to respond for a variety of reasons:
   - Server performance problems: application or OS
   - Host bus adapter (HBA) problems: driver or physical failure
   - Speed mismatches: one fast device and one slow device
   - Non-graceful virtual machine exit on a virtualized server, resulting in packets held in HBA buffers
   - Storage subsystem performance problems, including overload
   - Poorly performing tape drives

2) Inter-Switch Links (ISLs) can be slow due to a lack of B2B credits for the distance the ISL is traversing, or the existence of slow drain edge devices in the fabric (a rough credit-sizing sketch follows this list).
Any device exhibiting such behavior is called a Slow Drain Device. Cisco MDS 9000 Family switches constantly monitor the network for symptoms of slow drain and can send alerts and take automatic actions to mitigate the situation. Read this whitepaper to learn more.
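To put rough numbers on the ISL point above, here is a back-of-the-envelope Python sketch using the common rule of thumb of about one full-size (roughly 2 KB) FC frame in flight per kilometer for every 2 Gbps of link speed; treat it as an illustration, not a Cisco sizing tool.

```python
# Back-of-the-envelope sketch: estimate the B2B credits a long-distance ISL
# needs so the link itself does not become a congestion point.
# Rule of thumb: ~1 full-size (2 KB) FC frame in flight per km per 2 Gbps.
import math

def required_b2b_credits(distance_km: float, speed_gbps: float) -> int:
    return math.ceil(distance_km * speed_gbps / 2)

for speed_gbps in (4, 8, 16):
    print(f"{speed_gbps} Gbps over 50 km: ~{required_b2b_credits(50, speed_gbps)} credits")
# A 16 Gbps ISL over 50 km needs on the order of 400 credits; with fewer,
# the link throttles and can mimic slow-drain symptoms in the fabric.
```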

Also attend this webinar: Nov 10th, 2015, 8:00 AM PST – Eliminating Congestion Problems in Storage Area Networks



Tony Antony
Sr. Marketing Manager




Cisco UCS Mini Customer Success Stories

When we launched UCS Mini, it pushed us into new territory with an exciting offering that bundles servers, storage, and networking into a single solution, much like traditional UCS, but in a smaller form factor and at a lower cost of entry. It allows customers of any size (SMB, ROBO, and distributed enterprises) to take advantage of UCS.

Building on the infoTECH Spotlight and Best of Interop Finalist awards, I wanted to catch you up on some recently posted customer success stories for UCS Mini. Most interesting is that they are five very different businesses from Australia, Belgium, Mexico, and the United States.



Server Load balancing with NAT, using Nexus switches: ITD

Server load balancing (SLB) has become very common in network deployments as data and video traffic expand at a rapid rate. There are various modes of SLB deployment today, and application load balancing with network address translation (NAT) has become a necessity because of the benefits it provides.

Cisco Intelligent Traffic Director (ITD) is a hardware-based, multi-terabit Layer 4 load-balancing and traffic-steering solution on the Nexus 5k/6k/7k/9k series of switches.

With our latest NX-OS Software 7.2(1)D1(1) (also known as Gibraltar MR), ITD supports SLB NAT on the Nexus 7k series of switches.

In an SLB-NAT deployment, clients send traffic to a virtual IP address and need not know the IP addresses of the underlying servers. NAT provides additional security by hiding the real server IPs from the outside world. In virtualized server environments, this NAT capability provides increased flexibility to move real servers across different server pools without their clients noticing. With respect to health monitoring and traffic reassignment, SLB NAT helps applications work seamlessly without the client being aware of any IP change.
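As a purely conceptual sketch of that idea (not how ITD implements it in hardware), the Python below shows a virtual IP fronting a pool of real servers, with the translation table ensuring clients only ever see the VIP; the addresses are made up for illustration.

```python
# Conceptual sketch only (not ITD's hardware implementation): a virtual IP
# fronts a pool of real servers, and NAT keeps the real IPs hidden.
VIP = "203.0.113.10"                        # the only address clients know
REAL_SERVERS = ["10.1.1.11", "10.1.1.12"]   # hypothetical pool behind the VIP

nat_table = {}   # (client_ip, client_port) -> real server chosen for that flow

def rewrite_to_server(client_ip: str, client_port: int) -> str:
    """Client sent a packet to the VIP: pick a pool member and record it.
    A real implementation would use a stable hardware hash, not Python's hash()."""
    server = REAL_SERVERS[hash((client_ip, client_port)) % len(REAL_SERVERS)]
    nat_table[(client_ip, client_port)] = server
    return server           # the frame is forwarded to this real address

def rewrite_to_client(client_ip: str, client_port: int) -> str:
    """Reply path: the source is translated back to the VIP, so the client
    never learns the real server's IP, even if the pool changes underneath."""
    return VIP

print(rewrite_to_server("198.51.100.20", 51512))
print(rewrite_to_client("198.51.100.20", 51512))
```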

ITD won the Best of Interop 2015 in Data Center Category.


ITD provides:

  1. Zero-latency load balancing.
  2. CAPEX savings: no service module or external L3/L4 load balancer needed. Every Nexus port can be used as a load balancer.
  3. IP stickiness.
  4. Resiliency (like resilient ECMP) and consistent hashing.
  5. Bi-directional flow coherency: traffic from A–>B and B–>A goes to the same node (see the sketch after this list).
  6. Monitoring the health of servers/appliances.
  7. Handles an unlimited number of flows.
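To illustrate what bucket selection with bi-directional flow coherency means (items 4 and 5 above), here is a small Python sketch, again not ITD's actual algorithm: hashing the sorted endpoint pair guarantees that A–>B and B–>A land on the same node.

```python
# Illustrative sketch only (not ITD's actual hashing): choose a node for a
# flow so that both directions of the conversation map to the same bucket.
import hashlib

NODES = ["10.1.1.11", "10.1.1.12", "10.1.1.13", "10.1.1.14"]   # hypothetical nodes

def node_for_flow(ip_a: str, ip_b: str) -> str:
    # Sorting the endpoints makes the key direction-independent,
    # which is what gives bi-directional flow coherency.
    key = "|".join(sorted((ip_a, ip_b))).encode()
    bucket = int(hashlib.md5(key).hexdigest(), 16) % len(NODES)
    return NODES[bucket]

assert node_for_flow("192.0.2.10", "198.51.100.7") == node_for_flow("198.51.100.7", "192.0.2.10")
print(node_for_flow("192.0.2.10", "198.51.100.7"))
```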


Email a query or connect on Twitter: @samar4
