It’s been a very busy few weeks. Between the Data Storage Innovations (DSI) conference, the Ethernet Summit, EMC World, and next week’s Cisco Live, I’ve been starting to talk about a new concept in data center storage networks called Dynamic FCoE. Understandably, there have been a lot of questions about it, and I wanted to get this blog out as quickly as possible.
The TL;DR version: Dynamic FCoE combines the best of Ethernet fabrics and traditional deterministic storage environments to create massively scalable and highly resilient FC-based fabrics. If you thought you knew what you could do with storage networks, this takes everything to a whole new level. Read More »
Tags: CLOS, Dynamic FCoE, East-West Traffic, Leaf/Spine, Multihop FCoE, networking, nexus
In a few days at the Moscone Center in San Francisco, we will be celebrating the 25th anniversary of Cisco Live. This year we are expecting record attendance exceeding 20,000 participants, 9 amazing keynotes, 600 sessions, live demos at the World of Solutions, a big analyst and partner presence, and, last but not least, the opportunity for you to meet and network with top minds in high tech. If you are new to Cisco Live and feel overwhelmed by the grandness of the event, let me assure you that you are not alone. I have been there before. I have set out in this blog to give you an easy walkabout of the Cisco Data Center highlights, particularly the key Cisco ACI activities over the duration of the event.
Much like you, I will also be eagerly looking to attend John Chambers’ majestic keynote, which starts the proceedings on Monday, May 19. John, in his unique style, will lead with the theme “Tomorrow Starts Here,” covering leading industry trends such as the Internet of Everything (IoE), Fast IT, and Application Centric Infrastructure (ACI), among many others. So do not miss this opportunity. Now I want to shift gears and take you on a fast cruise through the Cisco Data Center and Cisco ACI highlights at the event.
In less than a year since the announcement, Cisco ACI has taken the industry by storm, with a large customer base and several of the industry’s key partners, such as Microsoft, Red Hat, Citrix, and F5, endorsing it and building joint solutions. There is so much excitement around ACI at this year’s Cisco Live that I want to give some structure to how I cover the topic in this blog. Essentially, I group the activities into Cisco-led and partner-led.
Cisco has a packed agenda of ACI activities and announcements. Cisco APIC, which enables ACI Fabric mode on Nexus 9000 networks, will be available this summer, along with a robust Go-To-Market (GTM) strategy that includes additional ecosystem partners, Cisco Validated Designs (CVDs), additional platform support, and leading-edge hardware innovations across the portfolio. We are also introducing two new additions to the Nexus 9000 portfolio to meet the scalability, flexibility, and performance requirements of standalone and ACI mode deployments.
Executive ACI speaking sessions feature prominently this year, starting with Cisco President Rob Lloyd’s keynote “Infrastructure for the Agile Enterprise” on May 20 at 10:00 AM in the North Hall. Rob’s keynote also features Soni Jiandani, who will present how ACI delivers agility. Rob Soderbery and Soni Jiandani are also presenting a technology trends keynote (GENSK 1109) on May 21 at 8:30 AM, titled “Fast Track to Fast IT: Cisco’s Application Centric Infrastructure,” another choice from a catalog of exciting offerings.
Read More »
Tags: ACI, APIC, Ciscolive 2014, citrix, Device Package, Embrane, F5, IoE, netapp, Nexus9000, Splunk, UCS, VCE
The UCS Power Scripting Contest closed on May 11th and the five finalists were announced at Microsoft TechEd today. They are:
Read More »
This week has been the semi-annual OpenStack Summit in Atlanta, GA. In a rare occurrence I’ve been able to be here as an attendee, which has given me wide insight into a world of Open Source development I rarely get to see outside of some interpersonal conversations with DevOps people. (If you’re not sure what OpenStack is, or what the difference is between it and OpenFlow, OpenDaylight, etc., you may want to read an earlier blog I wrote that explains it in plain English).
On the first day of the conference there was an “Ask the Experts” session focused on storage. Since I’ve been trying to work my way into this world of programmability via my experience with storage and storage networking, I figured it would be an excellent place to start. Also, it was the first session of the conference.
During the course of the Q&A, John Griffith, the Project Technical Lead (PTL) of the Cinder project (Cinder is the core OpenStack project that deals with block storage), happened to mention that he believed Cinder represented software-defined storage as a practical application of the concept.
I’m afraid I have to respectfully disagree. At least, I would hesitate to give it that kind of association yet. Read More »
Tags: open source, OpenStack, programmability, SDN, SDS, Storage, storage networks
There has been a lot of recent online discussion about automation of the datacenter network, how we all may (or may not) need to learn programming, the value of a CCIE, and similar topics. This blog tries to look beyond all that. Assume network configuration has been automated. How does that affect network design?
Automation can greatly change the network landscape, or it may change little. It depends on what you’re presently doing for design. Why? Because the programmers probably assumed you’ve built your network in a certain way. As an example, Cisco DFA (Dynamic Fabric Automation) and ACI (Application Centric Infrastructure) are based on a Spine-Leaf CLOS tree topology.
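Part of what makes a well-defined topology like spine-leaf so friendly to automation is that its wiring rule is uniform: every leaf connects to every spine. A minimal sketch of that rule (the node names and fabric size are hypothetical, purely for illustration):

```python
# A toy illustration of the spine-leaf wiring rule an automation tool
# can rely on: each leaf has exactly one uplink to each spine.
def leaf_spine_links(leaves, spines):
    """Return the full list of (leaf, spine) uplinks for a Clos fabric."""
    return [(f"leaf{l}", f"spine{s}")
            for l in range(leaves)
            for s in range(spines)]

links = leaf_spine_links(4, 2)
print(len(links))  # 4 leaves x 2 spines = 8 uplinks
print(links[0])
```

Because the rule is the same everywhere, a tool can both generate the cabling plan and verify compliance against it, which is much harder with arbitrary topologies.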
Yes, some OpenFlow vendors have claimed to support arbitrary topologies. Arbitrary topologies are just not a great idea. Supporting them makes the programmers work harder to anticipate all the arbitrary things you might do. I want the programmers to focus on key functionality. Building the network in a well-defined way is a price I’m quite willing to pay. Yes, some backwards or migration compatibility is also desirable.
The programmers probably assumed you bought the right equipment and put it together in some rational way. The automated tool will have to tell you how to cable it up, or it might check your compliance with the recommended design. Plan on this when you look to automation for sites, a datacenter, or a WAN network.
The good news here is that the Cisco automated tools are likely to align with Cisco Validated Designs. The CVDs provide a great starting point for any network design, and they have recently been displaying some great graphics. They’re a useful resource if you don’t want to re-invent the wheel — especially a square wheel. While I disagree with a few aspects of some of them, over the years most of them have been great guidelines.
The more problematic part of this is that right now, many of us are (still!) operating in the era of hand-crafted networks. What do the machine era and the assembly line bring with them? We will have to give up one-off designs and some degree of customization. The focus will shift to repeated design elements and components — namely, the type of design the automated tool can work with.
Some network designers are already operating in such a fashion. Their networks may not be automated, but they follow repeatable standards, like an early factory working with interchangeable parts. Such sites have likely created a small number of design templates and then used them repeatedly. Examples: “small remote office”, “medium remote office”, “MPLS-only office”, or “MPLS with DMVPN backup office”.
However you carve things up, there should only be a few standard models, including “datacenter” and perhaps “HQ” or “campus”. If you know the number of users (or size range) in each such site, you can then pre-size WAN links, the approximate number of APs, licenses, and so on. You can also pre-plan your addressing, with, say, a large block of /25s for very small offices, /23s for medium, etc.
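This kind of addressing pre-planning is easy to script. A minimal sketch using Python’s standard `ipaddress` module — the parent blocks (10.20.0.0/16 and 10.30.0.0/16) are hypothetical, not a recommendation:

```python
import ipaddress

# Hypothetical parent block reserved for very small offices,
# carved into /25s (126 usable hosts each).
small_block = ipaddress.ip_network("10.20.0.0/16")
small_office_subnets = list(small_block.subnets(new_prefix=25))

# A separate hypothetical block for medium offices, carved into /23s.
medium_block = ipaddress.ip_network("10.30.0.0/16")
medium_office_subnets = list(medium_block.subnets(new_prefix=23))

print(small_office_subnets[0])    # first /25 ready to assign
print(len(small_office_subnets))  # how many small offices the block supports
print(medium_office_subnets[0])
```

With the subnets pre-generated, assigning the next free block to a new site becomes a lookup rather than a design decision.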
On the equipment side, a small office might have one router with both MPLS and DMVPN links, one core switch, and some small number of access switches. A larger office might have one router for MPLS and one for DMVPN, two core switches, and more access switches. Add APs, WAAS, and other finishing touches as appropriate. Degree of criticality is another dimension you can add to the mix: critical sites would have more redundancy, or be more self-contained. Whatever you do, standardize the equipment models as much as possible, updating every year or two (to keep the spares inventory simple).
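Once templates are standardized, expanding one into an equipment list for a new site is mechanical. A minimal sketch, where the template names, model numbers, and sizing rule are all hypothetical placeholders:

```python
# Hypothetical standard site templates; model names are placeholders,
# not product recommendations.
SITE_TEMPLATES = {
    "small-office": {
        "routers": ["edge-router (MPLS + DMVPN)"],  # one router carries both links
        "core_switches": 1,
        "access_switches_per_50_users": 1,
        "redundant": False,
    },
    "large-office": {
        "routers": ["edge-router (MPLS)", "edge-router (DMVPN)"],
        "core_switches": 2,
        "access_switches_per_50_users": 1,
        "redundant": True,
    },
}

def bill_of_materials(template_name, users):
    """Expand a template into an equipment count for a site of a given size."""
    t = SITE_TEMPLATES[template_name]
    access = -(-users // 50) * t["access_switches_per_50_users"]  # ceiling division
    return {
        "routers": len(t["routers"]),
        "core_switches": t["core_switches"],
        "access_switches": access,
    }

print(bill_of_materials("small-office", 120))
```

The point is not the specific numbers but that sizing becomes a repeatable function of (template, user count) — exactly the shape of input an automated tool can consume.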
It takes some time to think through and document such internal standards. But probably not as much as you think! And then you win when you go to deploy, because everything becomes repeatable.
Read More »
Tags: ACI, automation, Cisco, cisco champion, cisco live, data center, DFA, OpenFlow, programming, SDN