OpenStack sure has come a long way since the first Design Summit in San Antonio back in November 2010. As my team prepares to attend the OpenStack Summit in Hong Kong this week, it’s hard to believe that just three years ago, 250 people at the first public OpenStack Design Summit kicked off what has become one of the fastest-growing open source projects ever. This week, more than 4,000 attendees are expected at the Summit, representing more than 500 companies and nearly 50 countries. What makes this Summit just as exciting as the first is the progress we’ve all made delivering on the mission laid out back in 2010:
“To produce the ubiquitous open source Cloud Computing platform that will meet the needs of public and private clouds regardless of size, by being simple to implement and massively scalable.”
The OpenStack community continues to innovate at an even greater pace with 910 contributors to the new Havana release, a more than 70 percent increase from the Grizzly release six months ago. More than 145 OpenStack ecosystem members employ developers who contributed to this release. While there’s still more work to do, most of us feel OpenStack has reached the level of maturity and deployment success that’s needed for production deployment by organizations of just about any size.
Tags: cloud, OpenStack, UCS
In three short years, OpenStack has become a cloud management platform that is “Too Big to Fail” (according to Citi Research). Whether or not that is true, OpenStack is definitely gaining traction and is making a profound impact, not only as a viable cloud management option but also on the software economics of cloud solutions.
Cloud computing is rapidly transforming businesses and organizations by providing access to flexible, agile, and cost-effective IT infrastructure. These elastic capabilities help accelerate the delivery of infrastructure, applications, and services with the right quality of service (QoS) to increase revenue. Cisco’s approach—innovative and unified data center infrastructure that provides the underlying foundation for OpenStack technology—enables the creation of massively scalable infrastructure that delivers on the promise of the cloud.
Cisco Common Cloud Architecture, built on the Cisco Unified Computing System (UCS) with OpenStack, provides the foundation for flexible, elastic cloud solutions that enable speed and agility. As the saying goes, every skyscraper is built on a strong foundation of pillars; likewise, the OpenStack platform places core requirements on the underlying infrastructure: simplification, rapid provisioning, a self-service consumption model, and elastic resource allocation. Cisco UCS uniquely provides a policy-based resource management model that simplifies operations by integrating compute, networking, and storage, with the ability to scale and automate deployment.
This foundation addresses every stage of cloud deployment, whether for private or public cloud offerings. Some of the primary workloads targeted for OpenStack-based deployments are:
- Self-service development and test environments
- Massively scalable software-as-a-service (SaaS) solutions
- High-performance, scale-out storage
- Web server, multimedia, big data, and cluster-aware applications
- Applications with extensive computing power requirements and mixed I/O workloads
To accelerate these cloud infrastructure deployments, Cisco has developed starter configurations focused on compute-intensive, mixed (heterogeneous), and storage-intensive workloads. The server nodes in each configuration are typically sized to cover the OpenStack controller, compute, Ceph storage, Swift proxy, and Swift storage roles.
Cisco UCS Solution Accelerator Paks for Cloud Infrastructure Deployments
Scaling beyond 160 servers can be achieved by interconnecting multiple UCS domains using Nexus 3000/5000/6000/7000 Series switches. Such deployments can grow to thousands of servers and hundreds of petabytes of storage, managed from a single pane of glass with UCS Central, whether in one data center or distributed globally, as shown in the figure.
Tags: Cisco UCS, cloud, Cloud Computing, data center, OpenStack, UCS, virtualization
Earlier this month, the OpenStack community delivered its biannual release, Havana. According to the OpenStack Foundation, not only did Havana add close to 400 new features across Compute (Nova), Storage (Swift), Networking (Neutron), and other core services, it also provided users with more application-driven capabilities and more enterprise features. Two new projects, Heat (orchestration) and Ceilometer (metering), were integrated into OpenStack in the Havana release as well.
One area of focus in Havana for Cisco was the Neutron project. This included contributions to enhance the Cisco Neutron plugin framework, feature additions to the Nexus plugin for physical Cisco Nexus switches, introduction of the new Cisco Nexus 1000V virtual switch plugin, and active leadership and participation in the design of the Neutron Modular Layer 2 (ML2) plugin framework. This datasheet captures more information on the new features of the Cisco Nexus Neutron plugin (for physical switches) in OpenStack Havana. Cisco’s contributions in these and other areas, such as Layer 3, firewall, and VPN network services, are reflected in this Stackalytics report of Neutron contributions for the Havana release.
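For readers curious what wiring Nexus switches into Neutron looks like in practice, here is a minimal, hypothetical sketch of an ML2 configuration enabling a Cisco Nexus mechanism driver alongside Open vSwitch. The section layout follows the general Havana-era ML2 conventions, but the exact option names, the switch IP, and the credentials shown are illustrative assumptions; consult the plugin documentation for your release.

```ini
# ml2_conf.ini -- illustrative sketch only; the option names and the
# per-switch section layout are assumptions, not a verified configuration.
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch,cisco_nexus

[ml2_type_vlan]
# Physical network name and the VLAN range Neutron may allocate from
network_vlan_ranges = physnet1:100:199

# One section per managed Nexus switch (IP and credentials are placeholders)
[ml2_mech_cisco_nexus:192.0.2.10]
username = admin
password = secret
```

The appeal of the ML2 design is visible even in this sketch: the Open vSwitch driver programs the virtual switches on the hosts while the Nexus driver trunks the same tenant VLANs on the physical switch ports, from a single Neutron API call.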
We are now just a few days away from the OpenStack Icehouse Summit taking place in Hong Kong. Cisco is a Premier Sponsor of the Summit and is also participating in several sessions and panels to help make it a success. To secure a slot in the General Session track, interested candidates, including Cisco’s OpenStack team, submitted speaking proposals in August that went through an OpenStack community voting process. The details of the proposals can be found in this blog. Based on the results, Cisco’s team is now leading or participating in 10 sessions and panel discussions. The following table (sorted by session time) captures details of the accepted sessions –
In addition to the General Session tracks above, the Cisco OpenStack team is also leading design sessions in the Neutron project on connectivity group extensions for applications, the Modular Layer 2 plugin, network function virtualization with service VMs, and the services framework. An enhanced constraint-based solver scheduler will also be discussed with the community within the Nova project. The schedule for the general sessions is here and for the design sessions here. If you are interested in attending any of the general or design sessions, be sure to mark your calendar.
Finally, we are showcasing “Scaling OpenStack with Cisco UCS and Nexus” in the demo theater on Wednesday, November 6th, 12:40pm-12:55pm, and will be present at the Cisco booth (booth B6 in the exhibit hall) with the following demos –
- OpenStack UCS demo
- N1KV demo on OpenStack
- Seamless-Cloud on OpenStack demo
- Constraint-based Smarter Scheduler for OpenStack demo (short demo here)
- Tuesday, November 5th from 10:45am to 6:00pm
- Wednesday, November 6th from 10:45am to 6:00pm
- Thursday, November 7th from 8:00am to 4:00pm
We are excited to be at the OpenStack Hong Kong Summit and we hope to see you there as well! For the latest information, visit us here.
Tags: Cisco, datacenter, Havana, HongKong, icehouse, nexus, OpenSource, OpenStack, UCS
Needle and thread. Fire and wood. Peanut butter and jelly. Just a few pairings that belong together: one lets you sew, one keeps you warm, and one, well, is just yummy. So what happens when the data center-class server blade for the branch meets applications? That’s the topic discussed in the second episode of the Inside the Branch: UCS E-Series series.
Last week was the series premiere of our five-part series on UCSE. Hugo and Jay discussed the basics of the product and some key facts we should know. In this episode, Hugo met with Vidya, our guru in charge of Cisco applications for UCSE.
Tags: Cisco, enterprise networks, insidebranch, ISR, ISR G2, UCS, UCS-E Series, UCSE
I recently wrote a blog titled Blade Server TCO and Architecture – You Cannot Separate Them and thought a little more on the architecture side would be a good thing.
With so much misinformation (dis-information?) about UCS running around in the ether, I thought the straightforward comparison offered here would be valuable. It is important to dispel myths and analyze reality before making the important decisions around server and networking refreshes and upgrades, which by necessity affect long-term data center architecture. I hope you will find this presentation -- Cisco UCS, HP and IBM -- A Blade Architecture Comparison -- useful in your decision-making process.
For me, there are three primary drivers that differentiate the Cisco UCS architecture from everyone else’s designs and they can be divided into the buckets below:
You could, and probably should, ask what was left out. That’s pretty easy: I did not specifically call out Performance and TCO, for a good reason. If you can execute on the three bullets above the way Cisco UCS does, Performance and TCO are the natural derivatives; you shouldn’t have to target them separately. It’s kind of an “If you build it, they will come” scenario. That’s why I made the statements in the TCO and Architecture blog that “…Server cost is irrelevant (to OpEx) because: changing its contribution to total TCO has a vanishingly small impact….” and “…It [architecture] is the single most important component of OpEx…” For more on this and how server cost and TCO intersect, please check out that blog -- Blade Server TCO and Architecture – You Cannot Separate Them. It takes a look at the OpEx and CapEx components of TCO, and how altering either of them affects the actual total 3-year TCO. You may be surprised.
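The arithmetic behind the “server cost is irrelevant” claim is easy to check for yourself. The sketch below uses entirely hypothetical dollar figures (not Cisco pricing or published TCO data) just to show the shape of the effect: when multi-year OpEx dominates the total, even a large cut in server CapEx barely moves the 3-year TCO.

```python
# Illustrative only: every dollar figure here is a made-up assumption,
# chosen to demonstrate the OpEx-dominates-TCO effect, nothing more.
def three_year_tco(server_capex, other_capex, annual_opex):
    """3-year TCO = one-time CapEx + three years of operating expense."""
    return server_capex + other_capex + 3 * annual_opex

baseline = three_year_tco(server_capex=100_000, other_capex=50_000, annual_opex=120_000)
cheaper  = three_year_tco(server_capex=80_000,  other_capex=50_000, annual_opex=120_000)

# A 20% cut in server price moves the 3-year total by only about 4% here,
# because OpEx (which is driven by architecture) dominates the total.
savings_pct = 100 * (baseline - cheaper) / baseline
print(f"3-year TCO savings from 20% cheaper servers: {savings_pct:.1f}%")
```

Swap in your own numbers; unless OpEx is a small fraction of the total, the conclusion is hard to escape.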
Cisco is providing trade-in credits for customers’ old generation servers and blade chassis, helping ease the transition and upgrade to a new UCS blade architecture. The UCS Advantage presentation below has more details on this fantastic program that can further enhance the already compelling TCO benefit of upgrading to Cisco UCS.
Special note: For more on the benefits Cisco UCS delivers for I/O and throughput, I suggest a great blog by Amit Jain -- How to get more SAN mileage out of UCS FI. Amit does an excellent compare-and-contrast of FC and FCoE technologies (“…8 Gb FC yields 6.8 Gb throughput while 10 Gb FCoE yields close to 10 Gb throughput…”).
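Those throughput numbers fall straight out of the line encodings the two technologies use, and a quick back-of-the-envelope check confirms them: 8G Fibre Channel signals at 8.5 Gbaud with 8b/10b encoding (80% efficient), while 10G Ethernet carrying FCoE signals at 10.3125 Gbaud with the far leaner 64b/66b encoding.

```python
# Back-of-the-envelope check of the quoted FC vs. FCoE throughput numbers.
# 8G FC: 8.5 Gbaud line rate, 8b/10b encoding -> 8/10 of the bits are data.
fc_8g_throughput = 8.5 * (8 / 10)          # usable Gb/s over 8G Fibre Channel

# 10G Ethernet (FCoE): 10.3125 Gbaud, 64b/66b encoding -> 64/66 are data.
fcoe_10g_throughput = 10.3125 * (64 / 66)  # usable Gb/s over 10G Ethernet

print(f"8G FC:     {fc_8g_throughput:.1f} Gb/s")
print(f"10G FCoE:  {fcoe_10g_throughput:.1f} Gb/s")
```

The 8b/10b scheme burns 20% of the wire on encoding overhead, while 64b/66b gives up about 3%, which is exactly why a nominally “10 Gb” FCoE link delivers close to 10 Gb of real payload bandwidth.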
Tags: blade architecture, blade architecture comparison, blade server, blade server architecture, blade server TCO, capex, Cisco, Cisco UCS, data center, data center TCO, HP blades, HP BladeSystem, IBM blades, IBM Flex Fabric, opex, server, server TCO, tco, technology, UCS