Do you have a need for automated provisioning of your data center? Cisco Prime Data Center Network Manager (DCNM) might just provide that solution.
DCNM is designed to help you efficiently implement, visualize, and manage the Cisco Unified Fabric. What the data center needs today is a comprehensive management platform that delivers visibility into, and control of, every element within the Unified Fabric, which in turn significantly simplifies troubleshooting, maintenance, and provisioning of the entire fabric. Watch the video below to find out more.
Out with the old and in with the new and honestly I couldn’t be happier with the new that’s coming in. What is the new that I’m talking about? The Nexus 1000V REST API of course.
I just finished writing scripts to manage (create, modify, delete) VLANs and port-profiles on a Nexus 1000V using expect. The scripts work fine: PowerShell is the main script, and it calls out to expect and ssh running in a Cygwin environment. However, it would be nice to use the REST API and do everything from PowerShell, or the language of your choice.
The customer I did the work for has multiple 1000V deployments and wanted to automate some aspects of 1000V administration. VLAN provisioning and port-profile creation seemed to be obvious choices.
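As a rough illustration of where a REST-based approach could go, here is a minimal Python sketch of building VLAN and port-profile provisioning requests. Note that the resource paths and payload field names below are hypothetical placeholders for illustration only, not the documented Nexus 1000V REST API schema; the real resource names would come from the VSM's API documentation.

```python
import base64
import json

# NOTE: the resource paths and payload keys below are illustrative
# placeholders, not the documented Nexus 1000V REST API schema.

def basic_auth_header(user, password):
    """Build an HTTP Basic auth header of the kind most REST endpoints expect."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}",
            "Content-Type": "application/json"}

def vlan_request(vlan_id, name):
    """Build the (hypothetical) URL path and JSON body to create a VLAN."""
    path = f"/api/vlan/{vlan_id}"          # placeholder path
    body = json.dumps({"id": vlan_id, "name": name})
    return path, body

def port_profile_request(profile, vlan_id):
    """Build the (hypothetical) request to create an access port-profile."""
    path = f"/api/port-profile/{profile}"  # placeholder path
    body = json.dumps({"name": profile,
                       "switchport_mode": "access",
                       "access_vlan": vlan_id})
    return path, body

# The actual HTTP call could then be made with urllib.request (or the
# requests library) against the VSM's management address; it is omitted
# here so the sketch stays self-contained and offline.
path, body = vlan_request(100, "web-tier")
```

With something like this in place, the PowerShell wrapper could invoke a single script over HTTPS instead of shelling out to expect and ssh under Cygwin.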
We’ve been getting a lot of great questions about ACI since our launch as people try and better understand the value of an application-oriented approach. I got the following questions on my blog post about the Application Virtual Switch that probed on some of the thinking behind an application-aware architecture, and why now was the right time to release it (after all, John Chambers called it the most disruptive Cisco innovation in a decade!). Anyway, on to the Q&A:
I’d like to know more about the path that Cisco pursued to evolve toward an “application aware” architecture. This back-story (how Cisco arrived at this juncture) would be very helpful to industry analysts, customers, and institutional investors. Here are some of the key questions on my mind.
- What were the primary roadblocks that inhibited the adoption of this innovative approach in the past?
I would say that Application Centric Infrastructure (ACI) was a combination of a Eureka! moment (people just never thought of it before) and an insightful evolution from early SDN technology. So it might be fair to say that SDN had to come along first, and then we realized there might be a better way to program the network: with an application-oriented model rather than a network-centric model.
That might be another way of saying that the lack of SDN as a precursor to ACI was a roadblock. But I think of it this way: networks were built on hardware that was optimized to pass packets and perform other very specific tasks. The limitations of historical networking protocols and traditional network designs, coupled with very limited ways to manage a network and tell it what to do, all served as roadblocks to implementing anything like ACI. So the roadblocks that had to be cleared included the ability to program switches through software interfaces, and to centrally manage them with software applications or controllers that orchestrate the broader network rather than an individual device. Those are some of the things SDN brought along.
Earlier this month the OpenStack community came out with its biannual OpenStack release, Havana. According to the OpenStack Foundation, not only did Havana add close to 400 new features across Compute (Nova), Storage (Swift), Networking (Neutron), and other core services, it also provided users with more application-driven capabilities and more enterprise features. Two new projects, Heat (orchestration) and Ceilometer (metering), were integrated into OpenStack during the Havana release as well.
One area of focus in Havana for Cisco was the Neutron project. This included contributions to enhance the Neutron Cisco plugin framework, feature additions to the Nexus plugin for physical Cisco Nexus switches, the introduction of the new Cisco Nexus 1000V virtual switch plugin, and active leadership and participation in the design of the Neutron Modular Layer 2 (ML2) plugin framework. This datasheet captures more information on the new features of the Cisco Nexus Neutron plugin (for physical switches) for OpenStack Havana. Cisco’s contributions in these and other areas, such as Layer 3, firewall, and VPN network services, are reflected in this Stackalytics report of Neutron contributions for the Havana release.
We are now just a few days away from the OpenStack IceHouse Summit taking place in Hong Kong. Cisco is a premier sponsor of the Summit and is also participating in several sessions and panels to make the Summit a success. To secure a slot in the General Session track at the Summit, interested candidates, including Cisco’s OpenStack team, submitted speaking proposals in August that went through an OpenStack community voting process. The details of the proposals can be found in this blog. Based on the results, Cisco’s team is now leading or participating in 10 sessions and panel discussions. The following table (sorted by session time) captures details of the accepted sessions:
November 8, 11:00am-11:40am
Expo Breakout Room 2 (AsiaWorld-Expo)
In addition to the above General Session tracks, the Cisco OpenStack team is also leading the design sessions in the Neutron project on Connectivity Group extensions for applications, the Modular Layer 2 plugin, Network Function Virtualization with Service VMs, and the Services Framework. An enhanced constraint-based solver scheduler will also be discussed with the community within the Nova project. The schedule for the general sessions is here and for the design sessions here. If you are interested in attending any of the general or design sessions, be sure to mark your calendar.
Finally, we are showcasing “Scaling OpenStack with Cisco UCS and Nexus” in the demo theater on Wednesday, November 6th, 12:40pm-12:55pm, and will be present at the Cisco booth (booth B6 in the exhibit hall) with the following demos:
OpenStack UCS demo
N1KV demo on OpenStack
Seamless-Cloud on OpenStack demo
Constraint-based Smarter Scheduler for OpenStack demo (short demo here)
Tuesday, November 5th from 10:45am to 6:00pm
Wednesday, November 6th from 10:45am to 6:00pm
Thursday, November 7th from 8:00am to 4:00pm
We are excited to be at the OpenStack Hong Kong Summit and hope to see you there as well! For the latest information, visit us here.
Problem is, whenever you start talking about extending your storage connectivity over distance, there are many things to consider, including some things that many storage administrators (or architects) may not always remember to think about. The more I thought about this (and the longer it took to write down the answers), the more I realized that there needed to be a good explanation for how this worked.
Generally speaking, the propeller spins the ‘other way’ when it comes to storage distance.
To that end, I began writing down the things that affect the choice of a distance solution, which involves more than just a storage protocol. And so the story grew. And grew. And then grew some more. And if you’ve ever read any blogs I’ve written on the Cisco site you’ll know I’m not known for my brevity to begin with! So, bookmark this article as a reference instead of general “light reading,” and with luck things will be clearer than when we started.