Where were the application clouds when I needed them…

November 7, 2011 - 2 Comments

Earlier in my career, I ran a corporate IT and managed services tooling team.  I wish it had been garage-type tools, but it was IT operational management tools.  My team was responsible for developing and integrating a set of ~20 applications that were the “IT for the IT guys.”  It was a great training ground for the 120 of us; we worked on the bleeding edge and we loved it.  We did everything from product management, development, test, and quality engineering to deployment, production, and operational support.  It was indeed an example of eating your own cooking.  Applications were king in our group.  We had .NET, J2EE, Java, C, C++, and other languages.  We had custom-built and COTS (commercial off the shelf) software applications.

One fateful Friday night, with my teenagers happily asleep way past midnight (I guess that made it Saturday), I was biting my nails at 2 AM on a concall with my management and technical team, wondering what had gone wrong.  We were 5 hours into a major yearly upgrade, and Murphy was my co-pilot that night.  I had DBAs, architects, Tomcat experts, QA, load testing gurus, infrastructure jockeys, and everyone else on the phone.  We had deployed 10 new servers that night and were simultaneously upgrading the software stack.  I think we had 7 time zones covered on our concall.  At least for my compatriots in France it was not too bad; they were having morning coffee in their time zone.  Our composite application was taking 12 seconds to process transactions; it should have taken no more than 1.5 seconds.  The big question: could we fix this by Sunday at 10 PM, when our user base in EMEA showed up for work, or would we (don’t say this to management) roll back the systems and application…  I ran out of nails at this point.  My wife came into my dark home office and wondered what the heck was going on…

It was that night that I realized I needed automation to do deployment and configuration (and production look-a-like servers and software in a staging environment).

This was the world right before virtualization.  Everything was physical, and everything depended on this guy named “Bob” who did the entire web server configuration, “Fred” who did the queue configuration in our enterprise message bus, “George” who configured our COTS core application, and “Sam” who racked, stacked, and configured our load balancers.  This was my worst nightmare: configuration errors masking other configuration errors.

If we had had a cloud automation system for these tools, with automated application provisioning and configuration management, I can tell you that fateful night would not have happened.  Technology like Cisco’s Intelligent Automation for Cloud (http://www.cisco.com/en/US/products/ps11869/index.html) with rPath Cloud Engine (www.rpath.com) for application provisioning would have saved the day.  Cisco Intelligent Automation for Cloud, in a virtualized world, would have created production look-alike environments when we needed them and ensured configuration deployments were exactly the way we wanted them for production.  We might have chosen a devtest or even a devops operating model.  rPath would have automated deployment, update, and especially rollback from a model.  Instead of converting Bob, Fred, and Sam’s activities into delicate scripts, rPath version-controls a complete model of how systems should look, and automates the software and configuration changes required to move those systems forward or backward.
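The core idea behind model-based provisioning can be sketched in a few lines of code. This is a simplified illustration, not rPath’s actual API: every change to the system is committed as a new immutable version of the model (packages plus configuration), so “rollback” is simply re-applying an earlier version rather than hand-writing undo scripts.

```python
# Hypothetical sketch of a version-controlled system model.
# Names (SystemModel, ModelStore) are illustrative, not a real product API.
from dataclasses import dataclass


@dataclass(frozen=True)
class SystemModel:
    """One immutable snapshot of how a system should look."""
    version: int
    packages: dict   # package name -> package version
    config: dict     # config key -> value


class ModelStore:
    """Keeps the full history of models so any version can be re-applied."""

    def __init__(self):
        self._history = []

    def commit(self, packages, config):
        # Every change produces a new version; nothing is edited in place.
        model = SystemModel(len(self._history) + 1, dict(packages), dict(config))
        self._history.append(model)
        return model

    def rollback(self, version):
        # Rolling back is just retrieving an earlier model verbatim --
        # the automation then drives the systems to match it.
        return self._history[version - 1]


store = ModelStore()
store.commit({"tomcat": "5.5"}, {"pool_size": "50"})    # version 1: pre-upgrade
store.commit({"tomcat": "6.0"}, {"pool_size": "200"})   # version 2: the big upgrade
restored = store.rollback(1)                            # back to the known-good state
assert restored.packages == {"tomcat": "5.5"}
```

The design choice that matters is immutability: because version 1 is never modified, moving the fleet backward is as mechanical as moving it forward, which is exactly what we lacked on that concall.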

What did actually happen that night?  Did we roll back or sweat it out until Sunday evening?

Let me know what you would have done and leave a comment.   Tell me your own story of where an application cloud would have helped you…

After a week, I will spill the beans on what happened that night and why…



  1. Our answer that night was to leave the application as is and work the performance issues over the weekend. We got the multi-tenant performance up to snuff by Monday morning in Europe. We did have some lingering issues but that night taught us a lot and made us fundamentally question our assumption of “we tested it so it must deploy perfectly…”. It was a dawn of new era for our team.

  2. ………automated configuration!…..how sweet it is Wayne.