As cloud technology and organizations mature, customers are shifting their focus from provisioning individual servers to richer cloud-based application platform stacks. Why? Servers usually do not exist as standalone entities; they are designed to run something tangible for the business. Multi-tier application platform stacks, for example, are built from multi-server elements such as database, application, and web servers.
In this era of the cloud, creating golden templates for each of the elements required to configure these multi-tier stacks, and for the servers they reside on, is not only unwieldy for IT to maintain and manage, it is also monolithic: if a single element changes, the whole golden image must be revised. Golden images are not configurable and frequently require additional manual configuration to complete installation.
What’s the solution? It begins with the concept of DevOps.
DevOps is a software development method that enables closer collaboration between software development and IT operations, so that these multi-tier application stacks can be consumed in the cloud without human intervention. There are a number of disciplines under the DevOps umbrella, but this blog focuses on configuration management.
Puppet and Chef are two of the leading configuration management vendors in the DevOps segment, delivering the following benefits:
• Elastic and continuous configuration
• Increased productivity when managing hundreds to thousands of nodes
• Improved IT responsiveness through faster deployment of changes
• Elimination of configuration drift and fewer outages
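To make these benefits concrete, here is a minimal sketch of the idea behind tools like Puppet and Chef: you declare the desired state of a node, and the tool converges the node to that state idempotently, so repeated runs change nothing once the node is compliant. This toy example is plain Python rather than either vendor's DSL, and it assumes a Debian-style host with systemd; the package and service names are placeholders for illustration only.

```python
import subprocess

# Desired state for a node: packages that must be installed and
# services that must be running. (Names are illustrative placeholders.)
DESIRED_PACKAGES = ["ntp"]
DESIRED_SERVICES = ["ntp"]

def ensure_package(name: str) -> None:
    """Install the package only if it is not already present (idempotent)."""
    installed = subprocess.run(
        ["dpkg", "-s", name], capture_output=True
    ).returncode == 0
    if not installed:
        subprocess.run(["apt-get", "install", "-y", name], check=True)

def ensure_service(name: str) -> None:
    """Start the service only if it is not already running (idempotent)."""
    active = subprocess.run(
        ["systemctl", "is-active", "--quiet", name]
    ).returncode == 0
    if not active:
        subprocess.run(["systemctl", "start", name], check=True)

if __name__ == "__main__":
    # Running this repeatedly does nothing once the node matches the
    # desired state -- which is how configuration drift is avoided.
    for pkg in DESIRED_PACKAGES:
        ensure_package(pkg)
    for svc in DESIRED_SERVICES:
        ensure_service(svc)
```

Real configuration management tools express this declaratively, track dependencies between resources, and apply it continuously across hundreds or thousands of nodes, which is where the productivity and drift-elimination benefits above come from.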
There is a lot of buzz about this capability. How much buzz? Watch this video from Cisco Live Orlando.
Within the next month, Cisco will be releasing a cloud accelerator that delivers configuration management of multi-tier application stacks. Using a TOSCA-modeled graphical user interface, customers work on a canvas that simplifies the design of these stacks into templates. Each element (server, network device, and storage) is represented on the canvas by a graphical icon, and behind each icon are the configuration details for that component. For example, a network device's configuration may include firewall rules and load-balancing algorithms. For servers, Cisco leverages Puppet, Chef, or home-grown scripts. The result is a blueprint that allows the complete application stack to be consumed by end users, on demand, delivered by the cloud.
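The accelerator itself presents this through the TOSCA-modeled GUI, but as a rough mental model only (not the product's actual format), a blueprint boils down to a structured description of each element on the canvas and the configuration attached to it. The types, names, and fields below are hypothetical, purely for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# A toy model of a multi-tier blueprint: each element on the canvas
# carries its own configuration details. (All names are hypothetical.)

@dataclass
class Element:
    name: str
    kind: str                              # "server", "network", or "storage"
    config: Dict[str, object] = field(default_factory=dict)

@dataclass
class Blueprint:
    name: str
    elements: List[Element] = field(default_factory=list)

three_tier = Blueprint(
    name="web-app-db",
    elements=[
        Element("web", "server", {"config_mgmt": "puppet", "class": "apache"}),
        Element("app", "server", {"config_mgmt": "chef", "recipe": "tomcat"}),
        Element("db", "server", {"config_mgmt": "script", "script": "install_db.sh"}),
        Element("lb", "network", {"algorithm": "round-robin",
                                  "firewall_rules": ["allow 443/tcp"]}),
    ],
)
```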
So now we have blueprints. Where’s the real advantage?
Cisco Intelligent Automation for Cloud (IAC) is the golden key that gives you the advantage, because it unlocks this new approach to cloud efficiency. Blueprints for multi-tier application stacks do nothing on their own if they cannot be ordered by customers from a standardized menu of services and acted upon by an orchestrator that automatically deploys the entire configuration. Extending functionality for DevOps is just another example of Cisco IAC's ability to go beyond IaaS without requiring customers to rip and replace their solution or do major heavy lifting.
Why just provision servers and continue to increase IT costs with manual “last mile” provisioning?
Cisco IAC and the configuration management accelerator simplify the delivery of multi-tier application stacks through self-service ordering and repeatable delivery. Cloud accelerators are designed to follow the vision and strategy of Cisco IAC, eliminating code islands that become problematic when you upgrade to the next-generation Cisco IAC edition.
To browse the current cloud accelerators, go here. First-time visitors will need to register.
If you would like to learn more or comment, tweet us at: http://twitter.com/ciscoum
Please be aware that this product is no longer sold.
As Jason Schroedl announced (http://blogs.cisco.com/datacenter/announcing-the-new-cisco-intelligent-automation-for-cloud-starter-edition), Cisco's Intelligent Automation Solutions Business Unit, in conjunction with the Unified Computing System team, has just announced a solution for UCS and vCenter customers that want a cloud automation system capable of both physical and virtual server provisioning. It is called the Starter Edition for a reason: we find that many customers are not sure what they want from their cloud and are looking for a great place to start. This is not what I would call the "starship enterprise" of clouds; it is the first step a company takes on its cloud journey.
Cloud Expo was indeed a very interesting juxtaposition of people espousing the value of cloud and how their stuff is really cloudy. One group of presenters and expo-floor booths talked about their open API and how that is the future of cloud. Then you have the other camp telling us how their special mix of functions is so much better than that. All of this is a very interesting dialog. APIs are indeed very important: if your technology truly delivers a cloud operating model, then you must have an API. Solutions like Cisco's Intelligent Automation for Cloud rely on those APIs to orchestrate cloud services. But APIs are not the be-all and end-all. The reality is that while cloud discussions tend to center on the API and the model behind it, the real change enabling the move toward cloud is the operating model of the users who are leveraging the cloud as a completely fresh game plan for their businesses.
James Urquhart's recent blog (http://gigaom.com/cloud/what-cloud-boils-down-to-for-the-enterprise-2/) highlights that the real change for users of the cloud is modifying how they do development, test, capacity management, production operations, and disaster recovery. My last blog talked about the world before cloud management and automation, and the move from the old-world model to the new models of dev/test and DevOps that force application architects, developers, and QA folks to radically alter how they work. Those who adopt the cloud without changing their "software factory" model from one that Henry Ford would recognize may not get the value they are looking for out of the cloud.
At Cloud Expo I saw a lot of very interesting software packages. Some of them went really deep into a specific use-case area, while others covered a lot of functional use cases that were only about an inch deep. As product teams build out software packages for commercial use, they face a very interesting and critical decision point that will drive the value proposition of the product. It seems to me that within two years, just about all entrants in the cloud management and automation marathon will begin to converge on a simple, focused, yet broad set of use cases. Each competitor will either drive their product to that point directly, or be forced there by customers voting with their wallets. Interestingly enough, this whole process drives competition and will yield great value for the VPs of Operations and Applications at companies moving their applications to the cloud.
Early in my career I moved quite a bit. New job, growing family, whatever the reason, it seemed like every two or three years we were packing up, moving to a new place, and meeting our new neighbors.
Each new place had its own protocol for getting to know the neighbors: sometimes they came to us, other times we had to walk around the block with the kids in tow to make the connection. The benefits of knowing your neighbors are many: who'll lend you tools, who will help move furniture, and so on.
Knowing the device neighbors in your network is just as important, and fortunately there is a protocol for that: Cisco Discovery Protocol (CDP). This article is a guide to getting to know your UCS Fabric Interconnects' neighbors, both manually and in an automated way.
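The manual way is simply to log in to the Fabric Interconnect CLI and run show cdp neighbors. For the automated way, one possible sketch (not necessarily the method the full article uses) is to issue the same command over SSH with a library such as Netmiko and collect the output; the connection details and device type below are placeholder assumptions.

```python
from netmiko import ConnectHandler  # assumes the netmiko package is installed

# Placeholder connection details -- replace with your own environment's values.
fabric_interconnect = {
    "device_type": "cisco_nxos",   # assumed: the FI CLI is NX-OS-like; adjust if needed
    "host": "fi-a.example.com",
    "username": "admin",
    "password": "changeme",
}

def cdp_neighbors(device: dict) -> str:
    """Return the raw 'show cdp neighbors detail' output from the device."""
    conn = ConnectHandler(**device)
    try:
        return conn.send_command("show cdp neighbors detail")
    finally:
        conn.disconnect()

if __name__ == "__main__":
    # Print the neighbor table so it can be captured for inventory purposes.
    print(cdp_neighbors(fabric_interconnect))
```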
Earlier in my career, I ran a corporate IT and managed-services tooling team. I wish it had been garage-type tools, but it was IT operational management tools. My team was responsible for developing and integrating a set of roughly 20 applications that were the "IT for the IT guys." It was a great training ground for the 120 of us; we worked on the bleeding edge and we were loving it. We did everything from product management, development, test, and quality engineering to deployment, production, and operational support. It was indeed an example of eating your own cooking. Applications were king in our group. We had .NET, J2EE, Java, C, C++, and other languages. We had custom-built and COTS (commercial off-the-shelf) software applications.
On a fateful Friday night, with my teenagers happily asleep way past midnight (I guess that made it Saturday), I was biting my nails at 2 AM on a concall with my management and technical team, wondering what went wrong. We were 5 hours into a major yearly upgrade and Murphy was my co-pilot that night. I had DBAs, architects, Tomcat experts, QA, load-testing gurus, infrastructure jockeys, and everyone else on the phone. We had deployed 10 new servers that night and were simultaneously upgrading the software stack. I think we had 7 time zones covered on our concall. At least for my compatriots in France it was not too bad; they were having their morning coffee. Our composite application was taking 12 seconds to process transactions; it should have taken no more than 1.5 seconds. The big question: could we fix this by Sunday at 10 PM, when our user base in EMEA showed up for work, or do we (don't say this to management) roll back the systems and application…. I ran out of nails at this point…. My wife came into my dark home office and wondered what the heck was going on…..