Cloud Expo was a very interesting juxtaposition of people espousing the value of cloud and explaining how their stuff is really cloudy. One camp of presenters and expo-floor booths talked about their open API and how that is the future of cloud; the other camp told us how their special mix of functions is so much better than that. All of this makes for a very interesting dialog. APIs are indeed very important: if your technology is a cloud operating model, then you must have an API, and solutions like Cisco’s Intelligent Automation for Cloud rely on those APIs to orchestrate cloud services. But APIs are not the be-all and end-all. The reality is that while cloud discussions tend to center on the API and the model behind it, the real change enabling the move toward cloud is the operating model of the users who are leveraging the cloud for a completely fresh game plan for their businesses.
James Urquhart’s recent blog (http://gigaom.com/cloud/what-cloud-boils-down-to-for-the-enterprise-2/) highlights that the real change for users of the cloud is modifying how they do development, test, capacity management, production operations, and disaster recovery. My last blog talked about the world before cloud management and automation, and the move from that old-world model to the new dev/test and dev/ops models that force application architects, developers, and QA folks to radically alter how they work. Those who adopt the cloud without changing their “software factory” from a model Henry Ford would recognize may not get the value they are looking for out of the cloud.
At Cloud Expo I saw a lot of very interesting software packages. Some of them went really deep into a specific use case area, while others covered a lot of functional use cases that were only about an inch deep. As product teams build out software packages for commercial use, they face a critical decision point that will drive the value proposition of the product. It seems to me that within 2 years, just about all entrants in the cloud management and automation marathon will begin to converge on a simple, focused, yet broad set of use cases. Each competitor will either drive their product directly to that point, or be forced to that spot by the practical reality of customers voting with their wallets. Interestingly enough, this whole process drives competition and will yield great value for the VP of Operations and VP of Applications of companies moving their applications to the cloud.
Read More »
Tags: API, application, automated provisioning, cloud, data center provisioning, devops, devtest, intelligent automation, monitoring, private cloud, service assurance
Early in my career I moved quite a bit: new job, growing family, whatever the reason, it seemed like every two or three years we were packing up, going to a new place, and meeting our new neighbors.
Each new place had its own protocol for getting to know the neighbors; sometimes they came to us, other times we had to walk around the block with the kids in tow to make that connection. The benefits of knowing your neighbors are many: who’ll lend you tools, who will help move furniture, and so on.
Knowing the device neighbors in your network is just as important, and fortunately there is a protocol for that: Cisco Discovery Protocol (CDP). This article is a guide to getting to know your UCS Fabric Interconnects’ neighbors, both manually and in an automated way.
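As a taste of the automated approach, here is a minimal Python sketch that parses captured `show cdp neighbors detail` output into structured records. The sample text and field layout are illustrative assumptions (abbreviated from typical NX-OS-style output, not an exact Fabric Interconnect transcript); in practice you would feed it output collected over an SSH or Expect/TCL session.

```python
import re

# Illustrative sample of "show cdp neighbors detail" output
# (abbreviated; device names and addresses are made up).
SAMPLE = """\
Device ID:N5K-AGG-01(SSI12345678)
IP address: 10.0.1.5
Platform: N5K-C5548UP, Capabilities: Switch IGMP Filtering Supported
Interface: Ethernet1/17, Port ID (outgoing port): Ethernet1/3
-------------------------
Device ID:N5K-AGG-02(SSI87654321)
IP address: 10.0.1.6
Platform: N5K-C5548UP, Capabilities: Switch IGMP Filtering Supported
Interface: Ethernet1/18, Port ID (outgoing port): Ethernet1/4
"""

def parse_cdp_neighbors(text):
    """Split CDP detail output into one dict per neighbor."""
    neighbors = []
    # Neighbor records are separated by a line of dashes.
    for block in re.split(r"^-+$", text, flags=re.M):
        device = re.search(r"Device ID:\s*(\S+)", block)
        if not device:
            continue
        ip = re.search(r"IP address:\s*(\S+)", block)
        local = re.search(r"Interface:\s*(\S+?),", block)
        remote = re.search(r"Port ID \(outgoing port\):\s*(\S+)", block)
        neighbors.append({
            "device": device.group(1),
            "ip": ip.group(1) if ip else None,
            "local_port": local.group(1) if local else None,
            "remote_port": remote.group(1) if remote else None,
        })
    return neighbors

for n in parse_cdp_neighbors(SAMPLE):
    print(n["device"], "->", n["local_port"], "via", n["remote_port"])
```

Once the neighbor table is structured data, feeding it into an inventory system or comparing it against expected cabling becomes a one-liner rather than an eyeball exercise.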
Read More »
Tags: application, automated provisioning, cloud, devops, devtest, expect, intelligent automation, server provisioning, TCL
Earlier in my career, I ran a corporate IT and managed-services tooling team. I wish it were garage-type tools, but it was IT operational management tools. My team was responsible for developing and integrating a set of ~20 applications that were the “IT for the IT guys”. It was a great training ground for the 120 of us; we worked on the bleeding edge and we were loving it. We did everything from product management, development, test, and quality engineering to deployment, production, and operational support. It was indeed an example of eating your own cooking. Applications were king in our group. We had .NET, J2EE, Java, C, and C++, among other languages, and we had both custom-built and COTS (commercial off-the-shelf) software applications.
One fateful Friday, my teenagers happily asleep way past midnight (I guess that made it Saturday), I was biting my nails at 2 AM with my management and technical team on a concall, wondering what went wrong. We were 5 hours into a major yearly upgrade and Murphy was my co-pilot that night. I had DBAs, architects, Tomcat experts, QA, load testing gurus, infrastructure jockeys, and everyone else on the phone. We had deployed 10 new servers that night and were simultaneously doing an upgrade to the software stack. I think we had 7 time zones covered on our concall. At least for my compatriots in France it was not too bad; they were having morning coffee in their time zone. Our composite application was taking 12 seconds to process transactions; it should have taken no more than 1.5 seconds. The big question: could we fix this by Sunday at 10 PM, when our user base in EMEA showed up for work, or did we (don’t say this to the management) roll back the systems and application… I ran out of nails at this point… My wife came into my dark home office and wondered what the heck was going on…
Read More »
Tags: application, automated provisioning, cloud, devops, devtest, intelligent automation, orchestration, server provisioning
Recently, a customer asked me: what is the value of using automation to operate a private cloud? It was a good question. Working in the middle of the reality distortion field of the cloud industry, I take it for granted that everyone knows automation’s benefits.
Fundamentally, automation tools help reduce labor costs, rationalize consumption, and increase utilization.
Costs are lower because the labor required to configure and deploy is eliminated. This automation is made possible by creating standard infrastructure offerings. Standard infrastructure offerings make possible a new operational model: moving from the artisanal approach to delivering infrastructure, where every system and configuration is unique, to an industrialized approach that ensures repeatability, quality, and agility. It’s the difference between custom tailoring and standardized sizes at The Gap. Both have their place, but one costs more.
Read More »
Tags: Cisco Intelligent Automation for Cloud, Cloud Management, intelligent automation, orchestration, Service Orchestration, unified management
What provisioning the Cloud infrastructure and cooking have in common…
I like to cook. Sometimes, I’ll grab whatever ingredients I have on hand, put them in a Dutch oven, throw in a few spices, and make a delicious casserole that can never be repeated. At other times, I’ll follow a recipe to the letter, measure and weigh everything that goes in, and produce a great meal that I can repeat consistently each time.
When provisioning servers and blades for a Cloud infrastructure, the same two choices exist: follow your instinct and build a working (but not repeatable) system, or follow a recipe that ensures systems are built in an exacting fashion, every time. Without a doubt, the latter method is the only way to proceed.
Enter the Cisco Tidal Server Provisioner (an OEM from www.linmin.com), an integral component of Cisco Intelligent Automation for Cloud and Cisco Intelligent Automation for Compute. TSP lets you create “recipes” that can be deployed onto physical systems and virtual machines with repeatability and quality, every time. These recipes can range from the simple (install a hypervisor or an operating system) to the very complex: install an operating system, then install applications, run startup scripts, configure the system, access remote data, register services, and so on.
Once you have a recipe (we call it a Provisioning Template), you can apply it to any number of physical systems or virtual machines without having to change the recipe. Some data centers use virtualization for sandbox development and prototyping, and use physical servers and blades for production. Some data centers do the opposite: prototype on physical systems, then run the production environment in a virtualized environment. And of course, some shops are “all physical” or “all virtual”. Being able to deploy a recipe-based payload consistently on both physical and virtual systems provides the ultimate flexibility. Yes, once you’ve created a virtual machine, you’ll likely use VMware vSphere services to deploy, clone and move VMs, but as long as you’re using TSP to create that “first VM”, you have the assurance that you have a known-good, repeatable way of generating the golden image. When the time comes to update the golden image, don’t touch the VM: instead, change the recipe, provision a new VM, and proceed from there.
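To make the “one recipe, many targets” idea concrete, here is a hypothetical Python sketch. This is not TSP’s actual template format or API; the template fields and step names are invented for illustration. The point it demonstrates is that the same recipe yields identical build steps for every target, physical or virtual.

```python
# Hypothetical provisioning template: one recipe, applied unchanged
# to any number of targets. NOT the Cisco Tidal Server Provisioner
# format; field names here are assumptions for illustration only.
TEMPLATE = {
    "name": "web-tier-golden",
    "os": "rhel-6.1-x86_64",
    "packages": ["httpd", "php"],
    "post_install": ["configure_vhosts.sh", "register_with_monitoring.sh"],
}

def provision(template, target):
    """Render the ordered build steps the provisioner would run for one target."""
    steps = [f"install {template['os']} on {target}"]
    steps += [f"install package {p}" for p in template["packages"]]
    steps += [f"run {s}" for s in template["post_install"]]
    return steps

# Every target gets the same recipe; only the target name differs,
# whether it is a blade, a rack server, or the "first VM".
for host in ["blade-01", "blade-02", "vm-golden-01"]:
    for step in provision(TEMPLATE, host):
        print(step)
```

Updating the golden image then means editing `TEMPLATE` and re-provisioning, never hand-editing a built machine, which is exactly the repeatability argument made above.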
Read More »
Tags: Cloud Computing, data center provisioning, disk imaging, intelligent automation, job scheduling, linmin, orchestration, self-service, server provisioning