The USS Cisco took off for the Gestalt IT Networking Tech Field Day 2 with Captain Omar Sultan (see picture below, courtesy of techfieldday.com), Data Center Solutions Sr. Marketing Manager, at the helm. Tech Field Day networking industry experts gathered on the bridge, cleverly disguised as the Cisco Cloud Innovation Center (CICC) Lab, for an informal, no-holds-barred conversation on recent Nexus portfolio announcements, the continued march towards automated provisioning of cloud services and ever-evolving VM networking technologies.
Captain Omar at Cisco Networking Tech Field Day 2
For those who weren’t at the event or haven’t seen the video recording yet, please excuse my unabashed geekiness, but you’ll have to watch the first minute of the video to get the above reference. As a new member of the Data Center Solutions Marketing team, this is also my first foray into the Cisco blog-o-sphere, so I hope to share some fresh viewpoints on the day’s events.
Several things were made very apparent during the Tech Field Day session:
Today we are making a significant announcement with several new innovations across our data center and switching portfolio that showcase how our customers can build large scale-up and scale-out data center networks. While the press release does a great job (thanks Lee!) of highlighting all the innovations across the Nexus Unified Fabric portfolio and the new ASA 1000v, two aspects of the announcement stand out quite prominently:
Cisco is delivering the highest density 10GbE modular switching platform in the industry
Cisco is delivering the most scalable fabric in the industry and, by extension, on the planet! (we’re told “planet” sounds much cooler)
No. 1 above is fairly straightforward. With our new second-generation F2 line card and Fabric 2 module, the flagship Nexus 7018 in a fully loaded configuration delivers 768 line-rate 10GbE ports running NX-OS, simply the epitome of switch scale.
No. 2 is where things get interesting, because we’re no longer thinking about just the “box” but rather about how we can weave different elements across the data center into a holistic “fabric”. This systems-based approach focuses on multi-dimensional scale that transcends the box and even the data center LAN to span between data centers, while providing feature-rich fabric capabilities. With 12,000+ 10GbE nodes supported as part of one FabricPath-enabled system, and with the ability to support Fabric Extender (FEX) technology (plus L2 and L3 capabilities), this approach redefines fabric scalability at 2X the scale and half the cost point of the next best claim in the industry. More importantly, it achieves this in an evolutionary manner for our 19,000+ NX-OS customers, offering investment protection for brownfield deployments while raising the bar for greenfield environments!
The Nexus platforms have been around for 3+ years, and over 500 customers have deployed FabricPath on the Nexus 7000 alone since its introduction about a year ago. It is a proven technology. With FabricPath now coming to the Nexus 5500 platforms, the momentum is likely to spike, spanning both size and scale. Like I said, things get interesting.
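For readers curious what turning on FabricPath actually involves, here is a minimal NX-OS configuration sketch. The interface and VLAN numbers are purely illustrative, not taken from any deployment described here:

```
! Install and enable the FabricPath feature set (Nexus 7000 / 5500)
install feature-set fabricpath
feature-set fabricpath

! Put the VLANs that should ride the fabric into FabricPath mode
vlan 100
  mode fabricpath

! Core-facing links become FabricPath core ports;
! spanning tree is not needed on these links
interface Ethernet1/1
  switchport mode fabricpath
  no shutdown
```

Edge ports facing servers stay in classic Ethernet mode, which is part of what makes FabricPath adoption evolutionary rather than rip-and-replace.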
To make it more fun, our technical experts from the product teams have taken a data-driven approach and compared Cisco’s new innovations, at both box and system scale, with others in the industry.
They looked at a couple of representative examples. The first: what it would take for any other vendor to build a non-blocking 768-port 10GbE “switch” with capabilities similar to what the Nexus 7000 can provide in a single chassis. The second: what it takes to build a “fabric”, with Cisco leveraging its Nexus portfolio and NX-OS to do so.
Take a look and let us know what you think. It is useful to note that most vendors in the industry today have no fabric capabilities to speak of, and the few that are attempting a systems approach have limited or no customer traction thus far. Our customers and key analysts tell us that Cisco has a multi-year innovation lead in this space, even as Cisco continues to focus on bringing network, compute, storage and application services together with integrated management to drive productivity and efficiency across traditional IT and organizational silos.
With the opening of the new Cisco Datacenter in RTP, I thought it would be cool to reach out to a few of the guys responsible for the design and ask them a few questions. So, I got together with Jag Kahlon (Cisco IT Architect) and John Banner (Cisco IT Network Engineer) for a quick chat.
Me: What were the primary objectives for the new datacenter?
While at Cisco Live I had the pleasure of meeting several people who were curious about Multihop FCoE but had the unfortunate experience of getting a lot of misinformation from several sources (yes, including some of Cisco’s competitors, but even some partners!). Some had already seen my article on FCoE and TRILL and wanted to know if I could help explain the relationship between FCoE and Quantized Congestion Notification (QCN, IEEE 802.1Qau), one of the standards in the IEEE Data Center Bridging (DCB) effort.
Even though we have a very good, short white paper on the subject, this is one of those subjects where, as soon as people ask about it, we break out the whiteboard, or, in the case of Cisco Live, the napkins. There are just some things that pictures help explain better.
Because of this, I’m going to try something different with this blog. It may work, or I may fall flat on my face; I suppose we shall find out.
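Whiteboard drawings don’t translate directly to a blog, but the basic FCoE plumbing on a Nexus 5500 can at least be sketched in configuration form. The VLAN, VSAN, and interface numbers below are made up for illustration:

```
feature fcoe

! Map a dedicated FCoE VLAN to a VSAN
vlan 200
  fcoe vsan 200
vsan database
  vsan 200

! Create a virtual Fibre Channel interface bound to
! the converged Ethernet port, and place it in the VSAN
interface vfc10
  bind interface Ethernet1/10
  no shutdown
vsan database
  vsan 200 interface vfc10

! The Ethernet port trunks both LAN and FCoE VLANs
interface Ethernet1/10
  switchport mode trunk
  switchport trunk allowed vlan 1,200
  no shutdown
```

The same vfc-over-Ethernet pattern repeats at each hop in a multihop design; the congestion-management story (PFC, and where QCN does or does not fit) is what the pictures are really for.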
Today I want to bring up a Data Center Interconnect (DCI) use case that I’ve been thinking about: capacity expansion. As you know, the purpose of DCI is to connect two or more data centers together so that they can share resources and deliver services. The capacity expansion use case covers temporary traffic bursts (cloud bursts), planned or unplanned, as well as maintenance windows, migrations, or really any temporary event that requires additional service capacity.
To start addressing the challenge of meeting these planned and unplanned cloud-burst and capacity expansion requirements, check out Dynamic Workload Scaling, the recently announced feature that combines ACE and OTV.
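As a rough illustration of the OTV side of such a setup, a minimal NX-OS overlay configuration might look like the following. The interface name, VLAN ranges, and multicast groups are hypothetical:

```
feature otv

! VLAN used by OTV edge devices at the same site
! to discover one another
otv site-vlan 99

interface Overlay1
  ! Physical uplink toward the DCI transport
  otv join-interface Ethernet2/1
  ! Multicast groups for OTV control and data planes
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  ! VLANs extended between the data centers
  otv extend-vlan 100-110
  no shutdown
```

With the VLANs extended this way, ACE can then steer client connections to workloads in either data center as load dictates, which is the essence of Dynamic Workload Scaling.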