Earlier in my career, I ran a corporate IT and managed services tooling team. I wish they were garage-type tools, but they were IT operational management tools. My team was responsible for developing and integrating a set of ~20 applications that were the “IT for the IT guys”. It was a great training ground for the 120 of us; we worked on the bleeding edge and we were loving it. We did everything from product management, development, test, and quality engineering to deployment, production, and operational support. It was indeed an example of eating your own cooking. Applications were king in our group. We had .NET, J2EE, Java, C, C++, and other languages. We had custom-built and COTS (commercial off-the-shelf) software applications.
One fateful Friday night, my teenagers happily asleep way past midnight (I guess that made it Saturday), I was biting my nails at 2 AM on a concall with my management and technical team, wondering what went wrong. We were five hours into a major yearly upgrade, and Murphy was my co-pilot that night. I had DBAs, architects, Tomcat experts, QA, load-testing gurus, infrastructure jockeys, and everyone else on the phone. We had deployed 10 new servers that night and were simultaneously upgrading the software stack. I think our concall covered seven time zones. At least for my compatriots in France it was not too bad; they were having their morning coffee. Our composite application was taking 12 seconds to process transactions; it should have taken no more than 1.5 seconds. The big question: could we fix this by Sunday at 10 PM, when our user base in EMEA showed up for work, or would we (don’t say this to the management) roll back the systems and application? I ran out of nails at this point… My wife came into my dark home office and wondered what the heck was going on.
Recently, a customer asked me what the value of using automation to operate a private cloud was. It was a good question. Working in the middle of the reality distortion field of the cloud industry, I take it for granted that everyone knows automation’s benefits.
Fundamentally, automation tools help to reduce labor costs, rationalize consumption, and increase utilization.
Costs are lower because the labor required to configure and deploy is eliminated. This automation is made possible by creating standard infrastructure offerings. Standard infrastructure offerings make possible a new operational model: moving from the artisanal approach to delivering infrastructure, where every system and configuration is unique, to an industrialized approach that ensures repeatability, quality, and agility. It’s the difference between custom tailoring and standardized sizes at The Gap. Both have their place, but one costs more.
What provisioning cloud infrastructure and cooking have in common…
I like to cook. Sometimes, I’ll grab whatever ingredients I have on hand, put them in a Dutch oven, throw in a few spices, and make a delicious casserole that can never be repeated. At other times, I’ll follow a recipe to the letter, measure and weigh everything that goes in, and produce a great meal that I can repeat consistently each time.
When provisioning servers and blades for a cloud infrastructure, the same two choices exist: follow your instinct and build a working (but not repeatable) system, or follow a recipe that ensures systems are built in an exacting fashion, every time. Without a doubt, the latter method is the only way to proceed.
Enter the Cisco Tidal Server Provisioner (TSP, an OEM from www.linmin.com), an integral component of Cisco Intelligent Automation for Cloud and Cisco Intelligent Automation for Compute. TSP lets you create “recipes” that can be easily deployed onto physical systems and virtual machines with repeatability and quality, every time. These recipes can range from the simple, e.g., install a hypervisor or an operating system, to the very complex: install an operating system, then install applications, run startup scripts, configure the system, access remote data, register services, etc.
Once you have a recipe (we call it a Provisioning Template), you can apply it to any number of physical systems or virtual machines without having to change the recipe. Some data centers use virtualization for sandbox development and prototyping, and use physical servers and blades for production. Some data centers do the opposite: prototype on physical systems, then run the production environment in a virtualized environment. And of course, some shops are “all physical” or “all virtual”. Being able to deploy a recipe-based payload consistently on both physical and virtual systems provides the ultimate flexibility. Yes, once you’ve created a virtual machine, you’ll likely use VMware vSphere services to deploy, clone, and move VMs, but as long as you’re using TSP to create that “first VM”, you have the assurance that you have a known-good, repeatable way of generating the golden image. When the time comes to update the golden image, don’t touch the VM: instead, change the recipe, provision a new VM, and proceed from there.
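To make the idea concrete, here is a minimal sketch of recipe-based provisioning in Python. The class and step names are hypothetical illustrations of the concept, not the actual TSP API: a template is just an ordered, declarative list of actions, so applying it to any target, physical or virtual, produces exactly the same sequence every time.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical model of a "recipe" (Provisioning Template).
# Names are illustrative only; they are not the real TSP interface.

@dataclass
class ProvisioningTemplate:
    name: str
    os_image: str
    steps: List[str] = field(default_factory=list)  # post-install actions, in order

    def apply(self, target: str) -> List[str]:
        """Return the ordered action log for provisioning one target.

        The same template yields the same actions whether the target
        is a physical blade or a VM -- that repeatability is the whole
        point of recipe-based provisioning.
        """
        log = [f"{target}: install {self.os_image}"]
        log += [f"{target}: {step}" for step in self.steps]
        return log

# One recipe, many targets: the steps never vary between hosts.
web_tier = ProvisioningTemplate(
    name="web-tier",
    os_image="linux-base-image",
    steps=["install application server", "run startup scripts", "register services"],
)

for host in ["blade-01", "vm-golden-image"]:
    for line in web_tier.apply(host):
        print(line)
```

Updating the golden image then means editing the template’s `steps` and re-provisioning a fresh target, rather than hand-modifying an existing VM.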
With this week’s announcement, Cisco continues its innovation and leadership by bringing unmatched architectural flexibility and revolutionary scale to meet the diverse requirements of massively scalable data centers, big data environments, cloud-based architectures, and bare-metal deployments – with one evolutionary network: Unified Fabric.
To drive the point home, the real economics of networking reveal that, for many organizations, approximately 70% of network TCO is incurred after the initial equipment purchase. So why is this important?
In case you missed it, the Cisco Intelligent Automation team was at Oracle OpenWorld a couple weeks ago. This fall has been packed with events for our team, ranging from major partner shows like SAP TechEd and VMworld to local Cisco Tech Days – and we’re at VMworld in Copenhagen this week.
That’s because our Intelligent Automation software solutions are relevant across the entire IT landscape. The more resources and applications that Cisco Intelligent Automation manages, the more our customers achieve efficiencies in their data center – including for Oracle applications and database management.
The Oracle event was a success for Intelligent Automation. We had three theater presentations and two demo pods about Cisco Intelligent Automation for Cloud and Cisco Tidal Enterprise Scheduler for Oracle enterprise applications running on Cisco UCS. We had great discussions about the heterogeneous adapter framework built into these solutions and showed our self-service provisioning and cross-application workload automation capabilities.
Here’s the presentation highlighting the Intelligent Automation solution at Oracle OpenWorld: