Today’s storage area networks (SANs) face tremendous pressure from the phenomenal growth of digital information and the need to access it quickly and efficiently. Worldwide data is projected to grow tenfold by 2020. It’s little wonder, then, that storage administrators rank slow drain and related SAN congestion issues as their number-one concern. If not addressed in a timely fashion, these issues can have a domino effect, even degrading the performance of totally unrelated applications.
A Slow Drain Device is a device that does not accept frames at the rate generated by the source. In the presence of slow devices, Fibre Channel networks can run out of frame buffers, resulting in switch port credit starvation and potentially choking Inter-Switch Links (ISLs). Frames destined for slow devices need to be carefully isolated in separate queues and switched to egress ports without congesting the backplane. A decision then needs to be made about whether the frames are considered stuck and when to drop them.
Cisco provides a Slow Drain Device Detection and Congestion Avoidance (referred to as Slow Drain) feature that helps detect, identify, and resolve the condition exhibited by slow devices.
Join us for this live 60-minute webcast and learn the common causes of slow drain and other typical SAN congestion issues. See how Cisco Nexus and MDS switches now include hardware-based congestion detection and recovery logic for precise, fast detection and automatic, real-time resolution.
Watch this video to learn more: it demonstrates slow drain diagnostics using Cisco Prime DCNM. See how DCNM can be used to detect Slow Drain devices and troubleshoot the situation within minutes in a large fabric with thousands of ports. Deploy Cisco Prime DCNM today to bring Slow Drain troubleshooting time down from days or weeks to minutes.
Reasons for Slow Drain include:
1) An edge device can be slow to respond for a variety of reasons:
Server performance problems: application or OS
Host bus adapter (HBA) problems: driver or physical failure
Speed mismatches: one fast device and one slow device
Nongraceful virtual machine exit on a virtualized server, resulting in packets held in HBA buffers
Storage subsystem performance problems, including overload
Poorly performing tape drives
2) Inter-Switch Links (ISLs) can be slow due to:
Lack of buffer-to-buffer (B2B) credits for the distance the ISL traverses
The existence of slow drain edge devices in the fabric
Any device exhibiting such behavior is called a Slow Drain Device. Cisco MDS 9000 Family switches constantly monitor the network for symptoms of slow drain and can send alerts and take automatic action to mitigate the situation. Read this whitepaper to learn more.
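The credit-starvation dynamic described above can be illustrated with a toy model (a deliberate simplification for illustration only, not Cisco's detection logic): a port starts with a fixed pool of B2B credits, every transmitted frame consumes one credit, and a credit is returned only as the receiving device drains its buffers. When the device drains slower than the source transmits, the credit pool empties and the port stalls.

```python
def first_starvation_tick(initial_credits, arrival_rate, drain_rate, ticks):
    """Toy model of B2B credit starvation at a switch egress port.

    Each tick the source offers `arrival_rate` frames; each frame sent
    consumes one credit. The attached device returns at most `drain_rate`
    credits per tick (one per frame it actually accepts). Returns the
    first tick at which the port cannot send at the offered rate
    (credit-starved), or None if it never starves.
    """
    credits = initial_credits
    queued = 0  # frames held in the device's buffers, credits not yet returned
    for tick in range(ticks):
        sent = min(arrival_rate, credits)
        if sent < arrival_rate:          # out of credits: the port stalls
            return tick
        credits -= sent
        queued += sent
        returned = min(drain_rate, queued)  # slow device frees buffers slowly
        queued -= returned
        credits += returned
    return None
```

With 32 credits, a device draining 2 frames per tick against a source offering 8 starves within a handful of ticks, while a device that drains as fast as frames arrive never starves; that asymmetry is exactly why one slow edge device can back congestion up into the fabric.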
When we launched UCS Mini, it pushed us into new territory with an exciting offering that bundles servers, storage, and networking into a single solution, much like traditional UCS, but in a smaller form factor and at a lower cost of entry. It allows customers of any size (SMB, ROBO, and distributed enterprises) to take advantage of UCS.
Building off our infoTECH Spotlight and Best of Interop Finalist awards, I wanted to catch you up on some recently posted customer success stories for UCS Mini. Most interesting is that they are five very different businesses from Australia, Belgium, Mexico, and the United States.
Server load balancing (SLB) has become very common in network deployments as data and video traffic expand at a rapid rate. There are various modes of SLB deployment today. Application load balancing with network address translation (NAT) has become a necessity for its various benefits.
With our latest NX-OS Software release 7.2(1)D1(1) (also known as Gibraltar MR), ITD supports SLB NAT on Cisco Nexus 7000 Series switches.
In an SLB-NAT deployment, clients send traffic to a virtual IP address and need not know the IP addresses of the underlying servers. NAT provides additional security by hiding the real server IPs from the outside world. In virtualized server environments, this NAT capability provides increased flexibility in moving real servers across different server pools without their clients noticing. With respect to health monitoring and traffic reassignment, SLB NAT helps applications work seamlessly without clients being aware of any IP change.
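The rewrite behavior described above can be sketched in a few lines (the addresses, names, and dictionary-based "packets" here are hypothetical illustrations; on the Nexus platform ITD performs this translation in switch hardware): inbound traffic to the virtual IP is rewritten to a real server, and return traffic is rewritten back so the client only ever sees the virtual address.

```python
# Hypothetical addresses for illustration only.
VIP = "172.16.1.100"
REAL_SERVERS = ["10.1.1.1", "10.1.1.2", "10.1.1.3"]

def pick_server(client_ip: str) -> str:
    # A simple hash on the client address keeps a client pinned to one server.
    return REAL_SERVERS[hash(client_ip) % len(REAL_SERVERS)]

def nat_inbound(packet: dict) -> dict:
    """Client -> VIP: rewrite the destination to a real server."""
    if packet["dst"] == VIP:
        packet = dict(packet, dst=pick_server(packet["src"]))
    return packet

def nat_outbound(packet: dict) -> dict:
    """Server -> client: rewrite the source back to the VIP, so the
    real server address is never exposed to the client."""
    if packet["src"] in REAL_SERVERS:
        packet = dict(packet, src=VIP)
    return packet
```

Because both directions are rewritten at the same point, a real server can be moved or replaced behind the VIP without any client-visible address change, which is the flexibility the paragraph above describes.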
ITD won the Best of Interop 2015 in Data Center Category.
ITD provides:
Zero-latency load balancing.
CAPEX savings: no service module or external L3/L4 load balancer needed. Every Nexus port can be used as a load balancer.
Resilient, consistent hashing (like resilient ECMP).
Bi-directional flow coherency: traffic from A–>B and B–>A goes to the same node.
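One simple way to achieve the bidirectional flow coherency listed above is to hash the *sorted* endpoint pair, so the forward and reverse directions of a flow produce the same key. This is an illustrative sketch of that idea, not ITD's actual hardware hash:

```python
def symmetric_node(ip_a: str, ip_b: str, nodes: list) -> str:
    """Pick a node from the sorted endpoint pair, so that the A->B and
    B->A directions of the same flow always land on the same node."""
    key = tuple(sorted((ip_a, ip_b)))
    return nodes[hash(key) % len(nodes)]
```

Sorting the pair before hashing makes the function symmetric in its two address arguments, which is exactly the property a stateful appliance behind the load balancer (firewall, IPS, proxy) needs in order to see both halves of every conversation.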
Learning new skills and using new tools to automate your network can appear to be scary if you don’t have a coding background. But that doesn’t need to be the case…
In a previous blog post, I discussed Cisco’s SDN Strategy for the Data Center. I mentioned that it is built on 3 key pillars: Application Centric Infrastructure, Programmable Fabric, and Programmable Network. Regarding the 3rd pillar, I wrote that network programmability has largely been the domain of big Web SPs, and/or those whose propellers seem to spin faster than others. However, the reality is that tools are available that are useful for networks of pretty much any size, and the tools are within reach of pretty much everybody.
Rather than rattle off a list of cool features that are part of Programmable Network (some of which are summarized here), I thought it more useful to consider common things network people actually do on a daily basis, then show how we can apply programmability tools to do those things with, for lack of a better phrase, “the 3 S’s”:
Speed – enabling you to do things much faster;
Scale – enabling you to do things to a much larger group of devices; and
Stability – enabling you to make far fewer errors (thereby also increasing Security…oops, now that’s 4 S’s…)
In upcoming posts, we will consider use cases such as switch provisioning. For example, you need to put a bunch of VLANs on a bunch of switches. Unless you have a battalion of minions to carry out your wishes, this can be a tedious, time consuming task. There is a better way, and we’ll show you how.
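To make the VLAN example concrete, here is a minimal sketch of the idea (the function name, VLAN naming scheme, and command strings are illustrative; adjust the syntax for your platform): generate the configuration once, then push the same list of commands to every switch.

```python
def vlan_config(vlan_ids, trunk_ports):
    """Build typical IOS/NX-OS-style CLI lines that create VLANs and
    allow them on the given trunk interfaces."""
    lines = []
    for vid in vlan_ids:
        lines += [f"vlan {vid}", f" name auto_vlan_{vid}"]
    vlan_list = ",".join(str(v) for v in vlan_ids)
    for port in trunk_ports:
        lines += [
            f"interface {port}",
            f" switchport trunk allowed vlan add {vlan_list}",
        ]
    return lines
```

The same list of commands can then be looped over every switch in the fabric with a config-push library (netmiko's `send_config_set` is one commonly used option), turning an afternoon of repetitive typing into one short script, and, just as important for the Stability "S", guaranteeing that every switch gets exactly the same configuration.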
What’s that? You say you’re a network geek, but you moonlight as a server admin? You’ve been using Linux tools to monitor and troubleshoot servers and want to use the same tools for the network? Okay, we can cover that too because tools like ifconfig and tcpdump are all part of the party.
If you can’t wait for the future posts and/or you want to dive deep, this recorded webinar should tide you over.
Anyhow, I need to go carve a pumpkin now…Happy Halloween!
*For music aficionados…Yeah, I know – the link was Heavy Metal not Death Metal, but I used one of my own songs…and this is about as close to Death Metal as I get. That whole guttural screaming thing never worked for me…
“Did you say compostable infrastructure? That means using a biodegradable cardboard chassis that can go in the compost bin, right?” This conversation is more common than you think right now as people are introduced to this for the first time. So what exactly does composable infrastructure mean? Perhaps the best description I’ve heard comes from James Leach who recently told me “our customers need us to wrap code around the server, not sheet metal.” I think that concept gets at it pretty well, and no surprise since he’s one of the people behind our M-Series Modular Servers and Cisco System Link technology. Still, it’s early days for this concept in the industry and many customers we talk to haven’t been exposed to the term.
We took some time recently to interview Jed Scaramella from IDC to help explain it all. Here’s another segment in that series, this one focused on answering the question, “What is Composable Infrastructure?”
Composable infrastructure is emerging out of two trends: disaggregated servers and software-defined infrastructure. Both are prerequisite capabilities: you need to be able to take Humpty Dumpty apart AND put him together again. Disaggregation is where we unbind local shared storage and network I/O from the processor and memory. Subsystems are no longer bound by the server chassis or the traditional motherboard. Then, with a unified control plane and API, these physical and logical resources are pooled, and management software composes the resources on demand, so the system can be created to conform to the unique requirements of the workload. That’s the software-defined part.
Path to “Infrastructure as Code”
While many are just beginning to talk about composable infrastructure as a future strategy (“Houston, we have a vision…”), Cisco has been executing on disaggregated systems and software-defined infrastructure since the introduction of UCS, through three key areas of innovation: