Out with the old and in with the new, and honestly I couldn’t be happier with the new that’s coming in. What is the new I’m talking about? The Nexus 1000V REST API, of course.
I just finished writing scripts to manage (create, modify, delete) VLANs and port-profiles on a Nexus 1000V using expect. The scripts work fine; PowerShell is the main script, and it calls out to expect and ssh running in a Cygwin environment. Still, it would be nicer to use the REST API and do everything from PowerShell, or the language of your choice.
The customer I did the work for has multiple 1000V deployments and wanted to automate some aspects of the 1000V administration. Vlan provisioning and port-profile creation seemed to be obvious choices.
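To show roughly what REST-driven VLAN provisioning could look like instead of driving expect over ssh, here is a minimal sketch. The endpoint path and payload keys are illustrative assumptions, not the documented Nexus 1000V API; the actual resource names should come from the VSM's API reference.

```python
import json

# Sketch of VLAN provisioning over REST instead of expect/ssh.
# NOTE: the "/api/vlan" path and the payload keys are assumptions for
# illustration only; check the Nexus 1000V REST API documentation.

def build_vlan_request(vsm_host, vlan_id, vlan_name):
    """Return the URL and JSON body for a hypothetical create-VLAN call."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be between 1 and 4094")
    url = "https://%s/api/vlan" % vsm_host      # assumed endpoint
    body = json.dumps({"id": vlan_id, "name": vlan_name})
    return url, body
```

From PowerShell, the equivalent call could then be a single `Invoke-RestMethod` against the same URL, with no Cygwin or expect in the loop.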
Read More »
Tags: 1000, 1000V, nexus, UCS
We’ve been getting a lot of great questions about ACI since our launch as people try to better understand the value of an application-oriented approach. I got the following questions on my blog post about the Application Virtual Switch, probing some of the thinking behind an application-aware architecture and why now was the right time to release it (after all, John Chambers called it the most disruptive Cisco innovation in a decade!). Anyway, on to the Q&A:
I’d like to know more about the path that Cisco pursued to evolve towards an “application aware” architecture. This back-story (how Cisco arrived at this juncture) would be very helpful to industry analysts, customers and institutional investors. Here are some of the key questions on my mind.
- What were the primary roadblocks that inhibited the adoption of this innovative approach in the past?
I would say that Application Centric Infrastructure (ACI) was a combination of a Eureka! moment (something people had just never thought of before) and an insightful evolution from early SDN technology. So it might be fair to say that SDN had to come along first, and then we realized there might be a better way to program the network: with an application-oriented model rather than a network-centric model.
That might be another way of saying that the lack of SDN as a precursor to ACI was a roadblock. But I think of it this way: networks were built on hardware that was optimized to pass packets and perform other very specific tasks. The limitations of historical networking protocols and traditional network designs, coupled with very limited ways to manage a network and tell it what to do, all served as roadblocks to implementing anything like ACI. So the roadblocks that had to be cleared included the ability to program switches through software interfaces, and to use centrally managed software applications or controllers to orchestrate the broader network rather than an individual device. Those are some of the things SDN brought along.
Read More »
Tags: ACI, APIC, application centric infrastructure, Cisco ONE, nexus, onePK, OpenFlow, SDN, XNC Controller
Earlier this month the OpenStack community came out with its twice-yearly OpenStack release, Havana. According to the OpenStack Foundation, not only did Havana add close to 400 new features across Compute (Nova), Storage (Swift), Networking (Neutron), and other core services, it also provided users with more application-driven capabilities and more enterprise features. Two new projects, Heat (orchestration) and Ceilometer (metering), were integrated into OpenStack during the Havana release as well.
One area of focus in Havana for Cisco was the Neutron project. This included contributions to enhance the Neutron Cisco plugin framework, feature additions to the Nexus plugin for physical Cisco Nexus switches, the introduction of the new Cisco Nexus 1000V virtual switch plugin, and active leadership and participation in the design of the Neutron Modular Layer 2 (ML2) plugin framework. This datasheet captures more information on the new features of the Cisco Nexus Neutron plugin (for physical switches) in OpenStack Havana. Cisco’s contributions in these and other areas, such as Layer 3, firewall, and VPN network services, are reflected in this Stackalytics report of Neutron contributions for the Havana release.
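One reason backend plugins slot in cleanly is that the Neutron v2.0 REST API looks the same to tenants regardless of which plugin programs the switches underneath. A small sketch of assembling the standard create-network call (the endpoint and token values are placeholders; in a real deployment they come from Keystone):

```python
import json

# Assemble a standard Neutron v2.0 create-network request. The same call
# works whether the Nexus plugin, the 1000V plugin, or ML2 is configured
# server-side. The URL and token here are placeholder values.

def build_create_network(neutron_url, token, name, admin_state_up=True):
    """Return URL, headers, and JSON body for POST /v2.0/networks."""
    url = "%s/v2.0/networks" % neutron_url.rstrip("/")
    headers = {"X-Auth-Token": token, "Content-Type": "application/json"}
    body = json.dumps({"network": {"name": name,
                                   "admin_state_up": admin_state_up}})
    return url, headers, body
```

The plugin choice lives entirely in the server-side Neutron configuration, so automation written against this API carries over across backends.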
We are now just a few days away from the OpenStack Icehouse Summit taking place in Hong Kong. Cisco is a premier sponsor of the Summit and is also participating in several sessions and panels to make the Summit a success. To secure a slot in the General Session track, interested candidates, including Cisco’s OpenStack team, submitted speaking proposals in August that went through an OpenStack community voting process. The details of the proposals can be found in this blog. Based on the results, Cisco’s team is now leading or participating in 10 session and panel discussions. The following table (sorted by session time) captures details of the accepted sessions –
In addition to the above General Session tracks, the Cisco OpenStack team is also leading design sessions in the Neutron project on Connectivity Group extensions for applications, the Modular Layer 2 plugin, Network Function Virtualization with service VMs, and the Services Framework. An enhanced constraint-based solver scheduler will also be discussed with the community within the Nova project. The schedule for the general sessions is here and for the design sessions here. If you are interested in attending any of the general or design sessions, be sure to mark your calendar.
Finally, we are showcasing in the demo theater “Scaling OpenStack with Cisco UCS and Nexus” on Wednesday, November 6th 12:40pm-12:55pm and will be present at the Cisco booth (booth B6 in the exhibit hall) with the following demos –
- OpenStack UCS demo
- N1KV demo on OpenStack
- Seamless-Cloud on OpenStack demo
- Constraint-based Smarter Scheduler for OpenStack demo (short demo here)
- Tuesday, November 5th from 10:45am to 6:00pm
- Wednesday, November 6th from 10:45am to 6:00pm
- Thursday, November 7th from 8:00am to 4:00pm
We are excited to be at the OpenStack Hong Kong Summit and we hope to see you there as well! For the latest information, visit us here.
Tags: Cisco, datacenter, Havana, HongKong, icehouse, nexus, OpenSource, OpenStack, UCS
A long time ago I got asked to write about how to use Fibre Channel over Ethernet (FCoE) for distance. After all, we were getting the same question over and over:
What is the distance limitation for FCoE?
Now, the short answer can be found by checking the various data sheets for the Nexus 2000, Nexus 5500, Nexus 6000, Nexus 7000, or MDS 9X00 product lines. But that doesn’t answer the most obvious follow-up questions: “Why?” and “How?”
The problem is that whenever you start talking about extending your storage connectivity over distance, there are many things to consider, including some that storage administrators (or architects) may not always remember to think about. The more I thought about this (and the longer it took to write down the answers), the more I realized there needed to be a good explanation of how this works.
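Part of the “why” comes down to flow control: a Fibre Channel sender needs buffer-to-buffer credits to transmit, and a credit isn’t returned until the frame crosses the link, so the credits required grow with round-trip time. A back-of-the-envelope sketch under simplifying assumptions (roughly 5 µs/km propagation in fiber, full-size ~2148-byte frames); real sizing should follow the platform data sheets:

```python
import math

# Rough buffer-to-buffer credit sizing for storage links over distance.
# Assumptions (not from the post): ~5 microseconds per km in fiber,
# full-size Fibre Channel frame of ~2148 bytes. This is a rule-of-thumb
# estimate; consult the product data sheets for supported limits.

FIBER_DELAY_S_PER_KM = 5e-6
FRAME_BYTES = 2148

def bb_credits_needed(distance_km, line_rate_bps):
    """Credits to keep the link full: round-trip time divided by the
    serialization time of one full-size frame, plus one in flight."""
    frame_time = FRAME_BYTES * 8 / line_rate_bps
    round_trip = 2 * distance_km * FIBER_DELAY_S_PER_KM
    return math.ceil(round_trip / frame_time) + 1
```

Run the numbers and the distance limits in those data sheets start to make sense: the faster the link and the longer the span, the more buffering the switch port has to dedicate to keep it busy.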
Generally speaking, the propeller spins the ‘other way’ when it comes to storage distance.
To that end, I began writing down the things that affect the choice for selecting a distance solution, which involves more than just a storage protocol. And so the story grew. And grew. And then grew some more. And if you’ve ever read any blogs I’ve written on the Cisco site you’ll know I’m not known for my brevity to begin with! So, bookmark this article as a reference instead of general “light reading,” and with luck things will be clearer than when we started. Read More »
Tags: distance, FCIP, FCoE, Fibre Channel, iSCSI, MDS, nexus, Storage
At this year’s Hadoop Summit 2013, I presented “The Data Center and Hadoop,” which built upon the past two years of testing the effects of Hadoop on data center infrastructure. What makes Hadoop an important framework to study in the data center is that it is a distributed system combining a distributed file system (HDFS) with an execution framework (Map/Reduce). Further, it builds upon itself and can support real-time or key/value stores (HBase), along with many other possibilities. Each component comes with its own set of infrastructure requirements, including throughput-sensitive pieces and latency-sensitive pieces. In the data center, understanding how all these components work together is key to optimized deployments.
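The execution framework half of that pairing can be illustrated with a toy word count in plain Python, mirroring the map, shuffle, and reduce phases (no Hadoop involved; this just sketches the execution model):

```python
from collections import defaultdict

# Toy word count mirroring Hadoop's Map/Reduce flow: map emits key/value
# pairs, shuffle groups values by key, reduce aggregates each group.
# Plain Python only; illustrates the model, not Hadoop itself.

def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    return {key: sum(values) for key, values in grouped.items()}

counts = reduce_phase(shuffle(map_phase(["big data big network", "big data"])))
# counts == {"big": 3, "data": 2, "network": 1}
```

In a real cluster, the shuffle step is where map output crosses the network between nodes, which is exactly why this phase shows up so prominently when you measure Hadoop’s effect on the data center fabric.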
After studying many of these components and their effects, the very data we were analyzing became a topic of many of our discussions. We combined application performance data, application logs, compute data AND network data to build a complete picture of what is happening in the data center.
With the advent of programmable networks (aka “Software Defined Networking”), it is important not only to make the network more application aware, but also to know where and how to analyze and make the right connections between the application and the network.
Tags: Big Data, Cisco Nexus, data center, Hadoop, Hadoop Summit, nexus, SDN, software defined networking