There has been a lot of recent online discussion about automation of the datacenter network, how we all may (or may not) need to learn programming, the value of a CCIE, and similar topics. This blog tries to look beyond all that. Assume network configuration has been automated. How does that affect network design?
Automation can greatly change the network landscape, or it may change little. It depends on what you’re presently doing for design. Why? Because the programmers probably assumed you’ve built your network in a certain way. As an example, Cisco DFA (Dynamic Fabric Automation) and ACI (Application Centric Infrastructure) are based on a spine-leaf Clos tree topology.
Yes, some OpenFlow vendors have claimed to support arbitrary topologies. Arbitrary topologies are just not a great idea. Supporting them makes the programmers work harder to anticipate all the arbitrary things you might do. I want the programmers to focus on key functionality. Building the network in a well-defined way is a price I’m quite willing to pay. Yes, some degree of backwards or migration compatibility is also desirable.
The programmers probably assumed you bought the right equipment and put it together in some rational way. The automated tool will have to tell you how to cable it up, or it might check your compliance with the recommended design. Plan on this when you look to automation for sites, a datacenter, or a WAN network.
The good news here is that the Cisco automated tools are likely to align with Cisco Validated Designs. The CVDs provide a great starting point for any network design, and they have recently been displaying some great graphics. They’re a useful resource if you don’t want to re-invent the wheel — especially a square wheel. While I disagree with a few aspects of some of them, over the years most of them have been great guidelines.
The more problematic part of this is that right now, many of us are (still!) operating in the era of hand-crafted networks. What does the machine era and the assembly line bring with it? We will have to give up one-off designs and some degree of customization. The focus will shift to repeated design elements and components — namely, the type of design an automated tool can work with.
Some network designers are already operating in such a fashion. Their networks may not be automated, but they follow repeatable standards, like an early factory working with interchangeable parts. Such sites have likely created a small number of design templates and then used them repeatedly. Examples: “small remote office”, “medium remote office”, “MPLS-only office”, or “MPLS with DMVPN backup office”.
However you carve things up, there should only be a few standard models, including “datacenter” and perhaps “HQ” or “campus”. If you know the number of users (or size range) in each such site, you can then pre-size WAN links, the approximate number of APs, licenses, whatever. You can also pre-plan your addressing, with, say, a large block of /25s for very small offices, /23s for medium offices, etc.
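As a rough sketch of how such pre-planned addressing might be scripted, the example below carves per-site blocks out of a parent range with Python’s standard `ipaddress` module. The parent block, site-model names, and prefix lengths are all hypothetical, chosen only to illustrate the idea:

```python
import ipaddress

# Hypothetical parent block reserved for remote-office addressing.
PARENT = ipaddress.ip_network("10.0.0.0/12")

# Standard site models mapped to a pre-sized prefix length
# (mirroring the /25-for-small, /23-for-medium example above).
PREFIX_FOR_MODEL = {
    "very small office": 25,
    "medium office": 23,
}

def carve(parent, prefix_len, count):
    """Return the first `count` subnets of `prefix_len` bits inside `parent`."""
    gen = parent.subnets(new_prefix=prefix_len)
    return [next(gen) for _ in range(count)]

# Pre-allocate blocks for the first three very small offices.
small_blocks = carve(PARENT, PREFIX_FOR_MODEL["very small office"], 3)
print([str(n) for n in small_blocks])
# ['10.0.0.0/25', '10.0.0.128/25', '10.0.1.0/25']
```

In a real plan you would, of course, reserve disjoint sub-ranges per site model so the /25s and /23s never overlap; the point is only that a handful of standard models makes the whole address plan computable in advance.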
On the equipment side, a small office might have one router with both MPLS and DMVPN links, one core switch, and some small number of access switches. A larger office might have one router for MPLS and one for DMVPN, two core switches, and more access switches. Add APs, WAAS, and other finishing touches as appropriate. Degree of criticality is another dimension you can add to the mix: critical sites would have more redundancy, or be more self-contained. Whatever you do, standardize the equipment models as much as possible, updating every year or two (to keep the spares inventory simple).
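One way to capture standard site models like these is as data that an automation or provisioning tool could consume. A minimal sketch — the model names, roles, and counts here are illustrative, not taken from any Cisco tool:

```python
# Illustrative standard site models; names and quantities are hypothetical.
SITE_MODELS = {
    "small office": {
        "routers": ["combined MPLS+DMVPN"],
        "core_switches": 1,
        "access_switches": 2,
    },
    "large office": {
        "routers": ["MPLS", "DMVPN"],
        "core_switches": 2,
        "access_switches": 8,
    },
}

def bill_of_materials(model_name):
    """Summarize the equipment a standard site model calls for."""
    m = SITE_MODELS[model_name]
    return {
        "router_count": len(m["routers"]),
        "switch_count": m["core_switches"] + m["access_switches"],
    }

print(bill_of_materials("large office"))
# {'router_count': 2, 'switch_count': 10}
```

Keeping the templates as data rather than prose is what makes the “deploy and win” repeatability possible: sizing, ordering, and even cabling checks can all be derived from the same small set of models.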
It takes some time to think through and document such internal standards. But probably not as much as you think! And then you win when you go to deploy, because everything becomes repeatable.
Welcome back to an amazing episode of Engineers Unplugged, in which Alan Renouf (@alanrenouf) and Patrick Carmichael (@vmcarmichael) demystify automation in the modern data center in less than 10 minutes: built-in, scripts, workflow, and policy-based. Answers to your most asked questions about how to start, where to simplify, and how to eliminate human error. Don’t miss this tutorial.
Cisco and Microsoft have been working closely to integrate our data center solutions to provide agile, secure and scalable platforms for private cloud, hybrid IT and modern business applications. The Cisco team is looking forward to showcasing these solutions at Microsoft TechEd 2014, May 12-15, in Houston, Texas.
We have a full line-up of demos, sessions and events that will highlight the unique benefits of the Cisco Unified Data Center for Microsoft environments and applications. If you’ll be in Houston for TechEd, drop by the Cisco booth to speak with Cisco experts and learn how you can take advantage of deep integrations between the Cisco Unified Computing System (UCS) and Microsoft Windows Server, Hyper-V and System Center, to deliver Microsoft applications in private or hybrid cloud environments.
Connect with Cisco in Booth 701
Learn about Cisco Data Center products and talk to Cisco solution experts in booth 701. We’ll be conducting live solution demonstrations on:
3-D UCS demos featuring FlexPod and VSPEX for Microsoft Private Cloud and Applications
UCS Management with Microsoft System Center
Network Visualization with Nexus for Hyper-V
Cisco InterCloud Fabric
UCS Invicta Series Solid State Systems
Application Centric Infrastructure
You’ll come for the demos, but you won’t leave empty-handed. We’ll have exceptionally cool Cisco hats for visitors to our booth.
Wednesday, May 7: 11:00 am – 11:30 am, #CiscoChampion Jonas Rosland (@virtualswede) presents High-Performance Splunk on EMC Scale-Out Storage, Cisco UCS Servers, and VMware in the Cisco Booth Theater
#EngineersUnplugged: Take your shot at Internet fame! 2 Engineers, 1 Whiteboard, 10 Minutes of Tech. We’ll be shooting episodes Monday – Wednesday. Or drop by for a lightning challenge with the all-new 60SecondTech:
Demonstrations at Cisco Booth #202 include the following:
Management and Automation for Integrated Infrastructures
Cisco Solutions for VSPEX
Data Center Networking
Multilayer Data Switching Solutions
Unified Data Center Rack
VCE Vblock Rack
Cisco UCS 3D Virtual Display
Last but not least, did we mention the remote-controlled boat racing? Yes, you read that correctly. When the learning for Tuesday is done, join us for the first annual Geek Regatta!
The Geek Regatta Customer Appreciation Reception
Date: Tuesday, May 6, 2014
Time: 6:30–10 p.m.
Boat race time: 7–9:30 p.m.
Location: Tao Beach | The Venetian Hotel, Las Vegas
This event is invite-only, and you’ll need your event badge and a sticker to attend. Visit the booth for registration information if you haven’t already signed up! Don’t worry, we have field-tested the boats, and they are fully operational.
No geese were harmed in the making of this video.
Looking forward to seeing you in Vegas! Follow us @CiscoDC or Tweet my way @CommsNinja if you’re there or watching virtually.
In our previous blog, we provided an overview of the critical use cases and innovations we included in our new Business Continuity and Workload Mobility Solution for Private Cloud. This blog highlights the critical trends and challenges driving new multi-site Cloud designs.
Two important trends are driving CTOs and CIOs to deploy new multi-site Cloud solutions that provide better Business Continuity, Workload Mobility, and Disaster Recovery:
More workloads are moving to the Private and Public Cloud versus the traditional data center.
Cloud Data Centers have a higher density of workloads per server than traditional data centers due to increased virtualization.
This ever-increasing volume of Cloud-hosted workloads is placing serious pressure on operations teams to manage larger-scale data centers and ensure that they keep these workloads up and running, avoiding costly downtime or a nightmare service outage. Many of the CTOs and CIOs we’ve worked with are re-assessing their multi-site strategy to ensure they can answer some tough questions:
What are the common weak points of multi-site Cloud designs that could prevent us from achieving our Business Continuity goals for our critical apps? Can we avoid them?
How can we provide Workload Mobility between sites to provide a more agile Cloud environment?
In the event of a site outage, can our Private Cloud reduce the time it takes to recover critical applications to a new site?
How can our Private Cloud deliver these critical services (Business Continuity, Workload Mobility, and Disaster Recovery) with lower cost and complexity?