Guest Blog by Mark Voelker, Technical Lead, Cisco http://blogs.cisco.com/author/MarkVoelker/
Today, the OpenStack@Cisco team is in a celebratory mood: OpenStack 2014.2 (“Juno”) has been released! The 10th release of OpenStack contains hundreds of new features and thousands of bugfixes and is the result of contributions from over 1400 developers. You can find out more about Cisco’s contributions to Juno here. What’s more, in just a few short weeks we’ll be joining the rest of the OpenStack Community in Paris for the OpenStack Summit, where plans for the next release (“Kilo”) will be laid. We think that OpenStack’s appeal has never been higher, and are excited to see continued growth forecast for the OpenStack market. Since OpenStack continues to see new growth, we thought this would be a good time to take a step back and review a few basics for those of you that are just beginning to get acquainted with today’s dominant open source cloud platform.
First, a bit of history. OpenStack was founded in the summer of 2010 as an open source project driven primarily by Rackspace Hosting (who contributed a scalable object storage system that is today known as OpenStack Swift) and NASA (who contributed a compute controller that is today known as OpenStack Nova). The announcement quickly attracted attention, and in September of 2012 the OpenStack Foundation was created as an independent body to promote the development, distribution, and adoption of the OpenStack platform. Since then, the Foundation has grown to over 18,800 members spanning over 140 countries and representing over 400 supporting companies.
Simply put, OpenStack is “Open source software for creating private and public clouds.” Not only is it developed by a wide variety of corporate and individual contributors, it is also used by hundreds of companies (including Cisco!) for a variety of purposes. You can find a sampling at the OpenStack User Stories and OpenStack SuperUser websites. The software itself is a set of loosely coupled distributed systems composed of several discrete pieces of software, with a focus on supporting multi-tenancy and scalability for on-demand resources. Whereas OpenStack originally contained just two major components, today’s integrated Juno release contains 11: Nova (compute), Swift (object storage), Cinder (block storage), Neutron (networking), Glance (images), Keystone (identity), Horizon (dashboard), Heat (orchestration), Ceilometer (telemetry), Trove (databases), and Sahara (data processing).
Sorry … I did not mean to steal the title of Hillary Clinton’s book. It just so happened that we had to deal with “hard choices” of our own when deciding on the management approach for our new M-Series platform. In the first blog of the UCS M-Series Modular Servers journey series, Arnab briefly alluded to the value our customers place on UCS Manager. As we started to have more customer conversations, we recognized a clear demarcation when it came to infrastructure management. One group of customers simply would not take any offering from us that was not managed by UCS Manager. On the other hand, a few customers who had built their own management frameworks were more enamored with the disaggregated server offering that we intended to build. Among that second set of customers, there was a strong perception that UCS Manager did not add much value to their operations. We were faced with a very difficult choice: release the platform with UCS Manager, or provide standalone management. After multiple rounds of discussions, we made a conscious decision to launch M-Series as a UCS Manager managed platform only. Ironically enough, it was one such customer discussion that vindicated our decision. This happened to be a customer deploying large cloud-scale applications who did not care much for UCS Manager. During the conversation, they talked about some BIOS issues in their very large web farm that had surfaced a couple of years back. Almost two years later, they were still rolling out the BIOS updates!
UCS Manager is the industry’s first tool to elegantly break down the operational silos in the datacenter by introducing policy-based management of disparate infrastructure elements. This was made possible by the concept of Service Profiles, which paved the way for the rapid adoption of converged infrastructure. Service Profiles abstract all elements associated with a server’s identity, rendering the underlying servers essentially stateless. This enables rapid server re-purposing and workload mobility, and makes it easy to enforce operational policies such as firmware updates. And the whole offering is built on a foundation of XML APIs, which makes it extremely easy to integrate with other datacenter management, automation, and orchestration tools. You can learn more about UCS Manager by clicking here.
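Because everything in UCS Manager is exposed through that XML API, common operations reduce to posting small XML documents to the manager. Below is a minimal sketch of building two such requests, a login (`aaaLogin`) and a class query (`configResolveClass`). The method and attribute names follow the published UCS XML API, but the credentials and usage shown are illustrative assumptions, not a tested integration:

```python
# Sketch of constructing UCS Manager XML API requests.
# Method/attribute names follow the public UCS XML API docs;
# credentials and the cookie value are placeholders.
import xml.etree.ElementTree as ET

def build_login(username, password):
    """Build an aaaLogin request, which returns a session cookie on success."""
    el = ET.Element("aaaLogin", inName=username, inPassword=password)
    return ET.tostring(el, encoding="unicode")

def build_class_query(cookie, class_id):
    """Build a configResolveClass query, e.g. for all computeBlade objects."""
    el = ET.Element("configResolveClass", cookie=cookie,
                    classId=class_id, inHierarchical="false")
    return ET.tostring(el, encoding="unicode")

login_xml = build_login("admin", "password")
query_xml = build_class_query("<cookie-from-login>", "computeBlade")
```

In practice these documents would be POSTed to the UCS Manager endpoint over HTTP(S), and the cookie returned by `aaaLogin` would be carried in every subsequent request.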
UCS M-Series Modular Servers are the latest addition to the infrastructure that can be managed by UCS Manager. M-Series is targeted at cloud-scale applications, which are deployed across thousands, if not tens of thousands, of nodes. In such environments, automated policy enforcement is even more critical than in traditional datacenter deployments. Managing groups of compute elements as a single entity, fault aggregation, BIOS updates, and firmware upgrades are a few key UCS Manager features that surfaced repeatedly during customer conversations. That was one of the primary drivers in our decision to release this platform with UCS Manager.
In the cloud-scale space, the ability to deploy many servers at a time almost instantaneously is a critical requirement. Moreover, all of the nodes are deployed as essentially identical compute elements, so standardizing configuration across all of the servers is essential. UCS Manager makes it extremely easy to create service profile templates ahead of time (making use of the UCS Manager emulator) and to create any number of service profile clones literally at the push of a button. Associating the service profiles with the underlying infrastructure is also done with a couple of clicks. Net-net: you rack, stack, and cable once; then re-provision and re-deploy to meet your workload needs without having to make any physical changes to your infrastructure.
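The template-and-clone idea above can be sketched in ordinary code: stamp out many identical profiles from a single template, each with a unique name. This is only an illustration of the concept; the dictionary fields below are invented for the example and are not the actual UCS Manager object model:

```python
# Conceptual sketch of service-profile cloning: one template, many
# independent copies. Field names here are illustrative only.
import copy

TEMPLATE = {"boot_policy": "local-disk", "bios_policy": "cloud-scale",
            "firmware_pack": "2.2.3", "vnic": {"mtu": 9000}}

def clone_profiles(template, prefix, count):
    """Produce `count` independent profiles named prefix-0001, prefix-0002, ..."""
    profiles = []
    for i in range(1, count + 1):
        p = copy.deepcopy(template)       # each clone is fully independent
        p["name"] = f"{prefix}-{i:04d}"
        profiles.append(p)
    return profiles

farm = clone_profiles(TEMPLATE, "web", 1000)  # a 1000-node web farm
```

The deep copy matters: changing one clone (say, during a staged firmware rollout) must not silently alter its siblings, which mirrors why each service profile carries its own state once instantiated from a template.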
Storage Profiles are the most notable enhancement to UCS Manager in support of M-Series. This feature allows our customers to slice and dice the SSDs in the M-Series chassis into smaller virtual disks. Each of these virtual disks is then served up as if it were a local PCIe device to the server nodes within the compute cartridges plugged into the chassis. Steve explained that concept in detail in the previous blog. In the next edition, we will go into more detail about Storage Profiles and other pertinent UCS Manager features for M-Series.
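Conceptually, a Storage Profile carves a physical SSD's capacity into fixed-size virtual disks that are then presented to individual server nodes. The sketch below illustrates just that slicing arithmetic; the sizes and names are hypothetical, and this is not UCS Manager code:

```python
# Illustrative sketch (not UCS Manager code): carving one SSD's capacity
# into fixed-size virtual disks, as a Storage Profile does for the
# server nodes in the chassis. Sizes and names are hypothetical.
def slice_disk(total_gb, vd_size_gb):
    """Return the virtual-disk allocations that fit on the SSD."""
    disks = []
    offset = 0
    while offset + vd_size_gb <= total_gb:
        disks.append({"name": f"vd{len(disks)}",
                      "size_gb": vd_size_gb,
                      "offset_gb": offset})
        offset += vd_size_gb
    return disks

vds = slice_disk(1600, 200)  # a 1.6 TB SSD sliced into 200 GB virtual disks
```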
SAP recently announced that they have expanded their SAP HANA Tailored Data Center Integration (TDI) to include networking. So what does this mean? It means that if a SAP customer installs SAP HANA, and that same customer has enough capacity on their existing networking equipment to satisfy the SAP HANA certification requirements for networking, then the customer can utilize their existing networking architecture for SAP HANA without having to purchase additional equipment to meet those requirements.
In this week’s episode of Engineers Unplugged, John Griffith (@jdg_8) and Kenneth Hui (@hui_kenneth) discuss Cinder, the service that abstracts and provides block storage inside of OpenStack. Great info with practical applications in this second installment of our series on OpenStack leading up to the OpenStack Summit in Paris.
And let there be whiteboards and unicorns!
Cinder + OpenStack + Unicorns (courtesy of John Griffith and Kenneth Hui!)
**Want to be Internet Famous? Act now! Join us for our next shoot: NetApp Insight. Tweet me @CommsNinja!**
This is Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:
Episodes will publish weekly (or as close to it as we can manage)
We’ve reached the 8th installment of our blog series on Cisco’s Big Data and Analytics vision (beginning with Scott Ciccone’s blog on September 23). No doubt by now you have either seen or heard about Cisco’s broad data and analytics portfolio presented at Strataconf in New York on Oct. 15. And if you missed our October 21st executive webcast ‘Unlock Your Competitive Edge with Cisco Big Data and Analytics Solutions,’ please check it out. Now you’re probably eager to know how to make the most of our approach to data analytics. How can you benefit the most—and the most quickly—from data analysis in your organization?
Customers come to us to ask for support in extracting valuable and actionable business insights from their large stocks of network data. Their goal is always to drive both operational efficiencies and new revenue opportunities. Rapid changes in the business environment increase pressure on time-to-value: savings and revenues need to be brought in as quickly as possible. But traditional ways to extract value from data, complicated by volume, velocity and variety issues, often have a very long time-to-value. In fact, data analytics consulting projects historically take a year or longer to complete. Customers get handed large scale implementation plans and, by the time the program is implemented, the wind has changed: the market opportunity has closed, and the business has moved on.
That’s why for some time now I’ve been a student of accelerating time to value for data analytics. Our job is not just to show our customers the hidden business value of their data, but also to bring that value to them fast. We have developed a rapid prototyping, iterative approach that continuously develops actionable insights from network and other sources of data. Our approach contains four steps to help our customers quickly develop, test, and implement business ideas and processes:
Step One: We start by working with customers to identify key use cases through an “Internet of Everything” iterative planning approach. Our experts don’t just present an idea, but a complete, ready-to-test hypothesis, using visualization techniques and an analytics design approach to discover new ways to do business based on analytics insights.
Step Two: We use a rapid data extraction approach to capture the data needed to test that hypothesis. We fully leverage Cisco’s Connected Analytics platform, enabling automated data collection and simple exploration of correlations.
Step Three: Once we have the data we need, we apply a data science approach to build an “analytics sandbox” in which we test the proposed use cases and measure their outcomes. We use rapid prototyping to test theories, quickly working through iterations to develop a truly working business model for each customer’s unique situation. In the process, we identify new insights that become the basis for the next use cases.
Step Four: The result is a set of modular Business Insights, which we interpret and thoroughly test, and turn into an actionable plan that we execute. This makes it relatively easy for our experts to integrate insights and actions into our customers’ transformation initiatives—and in a fraction of the time of traditional data-driven solutions.
The world of top down, outside-in consulting, where value comes from individuals’ experience, is gone. Value today is enabled by the capability of companies like Cisco to extract and interpret data about our customers’ core business, enabling agile decision making and rapid process transformation.
As the Internet of Everything becomes a pervasive reality, we see that analytics is what creates value from all of these connections. To learn more about Cisco’s vision for the Internet of Everything, read Joseph Bradley’s blog on Thursday, October 23! #UnlockBigData