This week has been the semi-annual OpenStack Summit in Atlanta, GA. In a rare occurrence I’ve been able to be here as an attendee, which has given me wide insight into a world of Open Source development I rarely get to see outside of some interpersonal conversations with DevOps people. (If you’re not sure what OpenStack is, or what the difference is between it and OpenFlow, OpenDaylight, etc., you may want to read an earlier blog I wrote that explains it in plain English).
On the first day of the conference there was an “Ask the Experts” session focused on storage. Since I’ve been trying to work my way into this world of programmability via my experience with storage and storage networking, I figured it would be an excellent place to start. Also, it was the first session of the conference.
During the course of the Q&A, John Griffith, the Program Technical Lead (PTL) of the Cinder project (Cinder is the name of the core project within OpenStack that deals with block storage) happened to mention that he believed that Cinder represented software-defined storage as a practical application of the concept.
I’m afraid I have to respectfully disagree. At least, I would hesitate to give it that kind of association yet. Read More »
Tags: open source, OpenStack, programmability, SDN, SDS, Storage, storage networks
The programming of network resources is not just a trend, but also a way to future-proof IT and business needs. This blog series examines how infrastructure programmability is providing a faster time to competitive advantage and highlights the differences between programmable infrastructure and traditional infrastructure, and what programmability means for your entire IT infrastructure.
To read the first post in this series that defines infrastructure programmability, click here.
To read the second post in this series that discusses benefits of network programmability, click here.
According to a recent Network Computing article, changes in network virtualization (overlaying virtual networks over a physical infrastructure) and network programmability (provisioning and controlling its behavior) are causing some to wonder what’s in store for the networking profession.
These changes mean that our skill sets will evolve and our jobs will get more interesting. As the need to build more agility into IT systems becomes more urgent, we are looking for ways to reduce complexity, simplify operations, and cut costs so that we can invest in new initiatives that are critical to the business. We must free up resources so that IT can build new capabilities and deliver faster time to new business competitiveness. How can we do this? With a new model for IT – one that is simple, smart and secure.
The programming of network resources is not just a trend, but also a way to future-proof IT and business needs. View Executive Perspectives.
Read More »
Tags: #FutureOfIT, Cisco, cloud, infrastructure, infrastructure programmability, network, Network Computing, Network programmability, SDN, SDN2014, software defined, Tom Hollingsworth
There has been a lot of recent online discussion about automation of the datacenter network, how we all may (or may not) need to learn programming, the value of a CCIE, and similar topics. This blog tries to look beyond all that. Assume network configuration has been automated. How does that affect network design?
Automation can greatly change the network landscape, or it may change little. It depends on what you’re presently doing for design. Why? Because the programmers probably assumed you’ve built your network in a certain way. As an example, Cisco DFA (Dynamic Fabric Automation) and ACI (Application Centric Infrastructure) are based on a spine-leaf Clos topology.
Yes, some OpenFlow vendors have claimed to support arbitrary topologies. Arbitrary topologies are just not a great idea: supporting them makes the programmers work harder to anticipate all the arbitrary things you might do. I want the programmers to focus on key functionality. Building the network in a well-defined way is a price I’m quite willing to pay. Yes, some backwards or migration compatibility is also desirable.
The programmers probably assumed you bought the right equipment and put it together in some rational way. The automated tool will have to tell you how to cable it up, or it might check your compliance with the recommended design. Plan on this when you look to automation for sites, a datacenter, or a WAN network.
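Compliance checking of that kind is straightforward to picture. Here is a minimal sketch, not any vendor tool, of validating that a cabling plan forms a strict spine-leaf fabric: every leaf connects to every spine, and there are no leaf-to-leaf or spine-to-spine links. The device names and link representation are hypothetical.

```python
def is_spine_leaf(spines, leaves, links):
    """Check that `links` (pairs of device names) is exactly a full
    spine-leaf mesh: one link per (spine, leaf) pair and nothing else."""
    required = {frozenset((s, l)) for s in spines for l in leaves}
    return set(map(frozenset, links)) == required

spines = {"spine1", "spine2"}
leaves = {"leaf1", "leaf2", "leaf3"}

# A correct cabling plan, and the same plan with one cable missing.
good = [(s, l) for s in spines for l in leaves]
print(is_spine_leaf(spines, leaves, good))        # True
print(is_spine_leaf(spines, leaves, good[:-1]))   # False
```

A real tool would also check link speeds, port counts, and model numbers against the validated design, but the core idea is the same: the expected topology is computable, so conformance is too.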
The good news here is that the Cisco automated tools are likely to align with Cisco Validated Designs. The CVDs provide a great starting point for any network design, and they have recently been displaying some great graphics. They’re a useful resource if you don’t want to re-invent the wheel, especially a square wheel. While I disagree with a few aspects of some of them, over the years most of them have been great guidelines.
The more problematic part is that right now, many of us are (still!) operating in the era of hand-crafted networks. What do the machine era and the assembly line bring with them? We will have to give up one-off designs and some degree of customization. The focus will shift to repeated design elements and components: namely, the type of design the automated tool can work with.
Some network designers are already operating in such a fashion. Their networks may not be automated, but they follow repeatable standards, like an early factory working with interchangeable parts. Such sites have likely created a small number of design templates and then used them repeatedly. Examples: “small remote office”, “medium remote office”, “MPLS-only office”, or “MPLS with DMVPN backup office”.
However you carve things up, there should only be a few standard models, including “datacenter” and perhaps “HQ” or “campus”. If you know the number of users (or size range) in each such site, you can then pre-size WAN links, the approximate number of APs, licenses, whatever. You can also pre-plan your addressing, with, say, a large block of /25s for very small offices, /23s for medium ones, etc.
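Pre-planned addressing like this is easy to automate. Below is a minimal sketch using Python’s standard `ipaddress` module to carve fixed-size prefixes for each site class out of a parent block; the parent block and counts are made-up illustrations, not a recommendation.

```python
import ipaddress

def preallocate(parent, prefixlen, count):
    """Carve `count` subnets of size /`prefixlen` from `parent`,
    in order, for assignment to sites of one standard size class."""
    subnets = ipaddress.ip_network(parent).subnets(new_prefix=prefixlen)
    return [next(subnets) for _ in range(count)]

# Three /25s for very small offices out of a hypothetical 10.10.0.0/16 block.
small_offices = preallocate("10.10.0.0/16", 25, 3)
print([str(n) for n in small_offices])
# ['10.10.0.0/25', '10.10.0.128/25', '10.10.1.0/25']
```

In practice you would reserve separate parent blocks per size class so that a medium office’s /23 never collides with the small-office range.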
On the equipment side, a small office might have one router with both MPLS and DMVPN links, one core switch, and some small number of access switches. A larger office might have one router for MPLS and one for DMVPN, two core switches, and more access switches. Add APs, WAAS, and other finishing touches as appropriate. Degree of criticality is another dimension you can add to the mix: critical sites would have more redundancy, or be more self-contained. Whatever you do, standardize the equipment models as much as possible, updating every year or two (to keep the spares inventory simple).
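Once site classes are standardized, they become data rather than documents. A sketch of what that might look like, with entirely hypothetical site classes and bill-of-materials fields:

```python
# Standard site templates an automated tool could consume directly.
# The classes and fields here are illustrative, not a Cisco design.
SITE_TEMPLATES = {
    "small_office":  {"routers": 1, "core_switches": 1,
                      "wan": ["MPLS", "DMVPN"]},
    "medium_office": {"routers": 2, "core_switches": 2,
                      "wan": ["MPLS", "DMVPN"]},
}

def bill_of_materials(site_class, access_switches):
    """Expand a site template into a per-site equipment list."""
    bom = dict(SITE_TEMPLATES[site_class])
    bom["access_switches"] = access_switches
    return bom

print(bill_of_materials("small_office", 2))
```

The payoff is exactly the one the paragraph above describes: deployment, spares planning, and eventually automation all read from the same small set of templates.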
It takes some time to think through and document such internal standards. But probably not as much as you think! And then you win when you go to deploy, because everything becomes repeatable.
Read More »
Tags: ACI, automation, Cisco, cisco champion, cisco live, data center, DFA, OpenFlow, programming, SDN
By Mike McKeown -- Director of Business Development for Service Provider Video at Cisco, EMEAR
It may be a month of bank holidays in Europe, but there’s no standing still for the video industry in May. We’re proud to say that it started with an announcement from Synergy Research (at the end of April) that we are the leading provider of video technology solutions to the industry.
How, you might ask, do you follow that?
With two of the industry’s most prominent events -- first NCTA’s Cable Show in LA, and now ANGACOM in Cologne.
As with every year, NCTA provided a platform for the US cable industry to demonstrate and discuss the latest trends affecting some of the world’s largest cable operators.
On May 20th through 22nd, we’ll undoubtedly be having similar discussions at ANGACOM, but with a specific focus on Read More »
Tags: ANGA 2014, cloud, cmts, docsis, hfc, qam, SDN, Service Provider, videoscape, virtualization
In recent years, there have been a number of discussions around the subject of orchestration as a key enabler for different Cloud technologies.
The ETSI NFV Management and Network Orchestration (MANO) working group is defining the main interfaces for resource orchestration, a fundamental layer in management.
It is important to define standard interfaces, but it is equally important to understand the main capabilities of an orchestration (or choreography) solution. We can gain more insight by revisiting previous work, particularly in the domain of Grid computing.
Personally, I find the work done by Ian Foster and Steven Tuecke around IT as a Service (back in 2005, nine years ago!) still extremely relevant. It is fascinating to see how applicable this work continues to be, apart perhaps from REST services having since replaced general SOA services. We should pay special attention to their definition of Grid Infrastructure: “enable the horizontal integration across diverse physical resources”. I see their work as applicable beyond the physical layer, to logical resources and their composition into services. Quoting the paper, the Grid Infrastructure’s capabilities should be:
- Resource modeling: describes available resources, their capabilities, and the relationships between them to facilitate discovery, provisioning, and quality-of-service management.
- Monitoring and notification: provides visibility into the state of resources to enable discovery and maintain quality of service.
- Allocation: assures quality of service across an entire set of resources for the lifetime of their use by an application.
- Accounting and auditing: tracks the usage of shared resources and provides mechanisms for transferring costs among user communities and for charging for resource use by applications and users.
- Provisioning, life-cycle management and decommissioning: enables an allocated resource to be configured automatically for application use, manages the resource for the duration of the task at hand, and restores the resource to its original state for future use. Read More »
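To make those capabilities concrete, here is a toy sketch of resource modeling plus the provision/decommission life cycle, where a resource declares its capabilities and relationships and is restored to its original state for future use. All names are hypothetical illustrations, not part of ETSI MANO or the Foster/Tuecke paper.

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    """A modeled resource: capabilities and relationships support
    discovery; `state` supports life-cycle management."""
    name: str
    capabilities: dict = field(default_factory=dict)
    depends_on: list = field(default_factory=list)
    state: str = "available"

    def provision(self):
        # Allocation: the resource is dedicated for the task at hand.
        assert self.state == "available", "resource already in use"
        self.state = "in_use"

    def decommission(self):
        # Restore the resource to its original state for future use.
        self.state = "available"

vm = Resource("vm-1", capabilities={"vcpus": 4}, depends_on=["net-1"])
vm.provision()
vm.decommission()
print(vm.state)  # available
```

A real orchestrator layers monitoring, accounting, and quality-of-service guarantees on top of this skeleton, but the model/allocate/restore cycle is the core of it.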
Tags: cloud, Cloud Computing, innovation, NFV, orchestration, SDN, Service Provider, virtualization