Leading IT shops like to have a single pane of glass that serves as the IT storefront for all employees. This is a noble goal. Having worked at a few large companies, I can say it is indeed a moving target, as supporting the end user employee can mean many different entry points, contexts, and presentation technologies. When it comes to having a central location for ordering services, it is very important to onboard all of the employee-based and data center services in a consistent fashion. Some of the key use cases include employee onboarding (and offboarding), virtual desktops, virtual machines and physical servers in the datacenter, and access to applications. A typical IT department may have several hundred orderable services, many of which are bundled (think of employee onboarding).
Interestingly, some organizations first drive toward a common catalog and then automate what they can afterwards. At first you can take orders through the service catalog and then fulfill each request through manual process tracking. Alternatively, I have seen some shops say that they will only put services in the catalog that can be automated. Then there are all the intermediate cases. Organizations deploying automated request management have many issues to consider and standards to set.
Can we declare victory when a process is mostly manual yet orderable from a catalog in four clicks? Perhaps…
Your end users are happy. They can see where their request is in the process flow, kind of like going to fedex.com and watching that DVD on its journey to your house. But that package still took three days to arrive.
Now consider an automated fulfillment or provisioning process. In the analogy above, you are no longer dealing with DVDs shipping to your house but with on-demand video streaming. A simple click sets into motion many automated processes that deliver the movie to your device. For end user services, this means your remote access is provisioned with a single click, and your Linux server and application stack is delivered, ready for use, in less than 15 minutes. Key to making that happen is a fully automated process. Is that achievable in all cases? Perhaps…
In most cases, what we are provisioning requires a northbound API (a programming interface above the fulfillment system) to accomplish the instantiation of the service. Oftentimes, in legacy environments, the target system is so dated or underinvested that an API does not exist. It is pretty hard to automate a process that can only occur through a human interfacing with the system.
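As a rough sketch of the wrapping idea (the class and method names here are hypothetical illustrations, not any specific product's API), a thin northbound wrapper can expose a single callable entry point that drives the legacy system's only available interface behind the scenes:

```python
class LegacyUserAdmin:
    """Stand-in for a dated system whose only interface is an
    interactive command session (simulated here with a dict; in
    reality this might be screen-scraping or an expect script)."""

    def __init__(self):
        self._accounts = {}

    def run_interactive_command(self, command: str) -> str:
        verb, _, user = command.partition(" ")
        if verb == "ADDUSER":
            self._accounts[user] = "active"
            return f"OK {user}"
        if verb == "QUERY":
            return self._accounts.get(user, "unknown")
        return "ERR"


class NorthboundAPI:
    """Thin wrapper that gives the fulfillment system an
    automatable programming interface into the legacy environment."""

    def __init__(self, legacy: LegacyUserAdmin):
        self._legacy = legacy

    def provision_user(self, username: str) -> bool:
        reply = self._legacy.run_interactive_command(f"ADDUSER {username}")
        return reply.startswith("OK")

    def status(self, username: str) -> str:
        return self._legacy.run_interactive_command(f"QUERY {username}")


api = NorthboundAPI(LegacyUserAdmin())
print(api.provision_user("jdoe"))  # True
print(api.status("jdoe"))          # active
```

The point is not the toy implementation but the shape: once the human-only interaction is hidden behind `provision_user`, the fulfillment system can call it like any other automated step.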
People ask me the question: so what? We have found that by automating processes we can save on average 30% of the process cost. Multiply that by tens of thousands of requests and it really adds up.
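To make that back-of-the-envelope math concrete (the per-request cost and request volume below are made-up illustrative numbers; only the 30% figure comes from the text above):

```python
savings_rate = 0.30       # the ~30% average process-cost savings cited above
cost_per_request = 50.0   # hypothetical manual fulfillment cost, in dollars
annual_requests = 50_000  # hypothetical "tens of thousands" of requests

annual_savings = savings_rate * cost_per_request * annual_requests
print(f"${annual_savings:,.0f}")  # $750,000
```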
Investing in self-service requires investing in automation and, in some cases, wrapping an API around a legacy environment in order to get the desired result: IT as a Service, delivered at the speeds our end users need.
If you got the chance to be at EMC World, you probably saw an interesting demo shared by Cisco, EMC, and VCE about Mobility and Business Continuance. If you didn’t, Cisco Live San Diego will be another opportunity to see it.
Today I am pleased to have EMC's Colin Durocher bringing his perspective on the best way to address a critical challenge for a lot of IT organizations.
Next week I will post a second part (here), with a video about the demo itself.
Colin Durocher (on Twitter @OtherColin) is a Principal Product Manager with the RecoverPoint VPLEX Business Unit. He has been working with the VPLEX product for over 10 years in several capacities, including QA, software development, systems engineering, and product management.
He is a father of two, a professional engineer, and is currently pursuing an MBA.
Colin is based out of Montreal, Canada.
“Life Inside the Datacenter Silo
The traditional approach to IT is characterized by datacenter silos. Within each silo, we have our operations down to a science:
We use server clustering, redundant network fabrics, and RAID storage to protect against unplanned local failures.
We maintain spare capacity to absorb failures and workload spikes.
We don’t think twice about moving data between tiers, or even between arrays to optimize cost and performance.
We commonly move virtual machines non-disruptively from server to server to load balance or perform maintenance.
As far as mobility and availability needs are concerned, life is good… Within the silo.
Crossing the Chasm (Between Silos)
When it comes to protecting against site failures, we use array replication to maintain a copy of all our data in a secondary (often passive) datacenter. We maintain scripts to automate our failover in case we ever need to declare a disaster. We practice our DR plan at least once a year. Don’t we? Moving applications between datacenters is complicated enough that we really just try not to do it. When we do, it often entails a professional services engagement.
All this has worked reasonably well for us up to now. But IT budgets are being squeezed, and IT administrators need to eliminate waste, reduce complexity, and find ways to increase their operational efficiency. It isn’t optional. Consider the IDC Digital Universe study (2011), which estimates that by 2020 the amount of information under management will increase by a factor of 50, while the number of IT staff managing it will grow by a factor of only 1.5.
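A quick sketch of what those two factors imply for the load on each administrator (using only the 50x and 1.5x figures from the study cited above):

```python
data_growth = 50.0   # information under management grows 50x by 2020
staff_growth = 1.5   # IT staff managing it grows only 1.5x

# Ratio of the two growth factors: how much more data each
# administrator will be responsible for, all else being equal.
load_per_admin = data_growth / staff_growth
print(f"~{load_per_admin:.0f}x more data per administrator")
```

In other words, each administrator would be managing roughly 33 times more data than today, which is the gap the next paragraph argues technology must fill.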
That gap will need to be filled by different technologies. Let me introduce one to you – EMC VPLEX Metro. For hundreds of customers, it is breaking down the barriers between datacenters, bringing new levels of efficiency, simplicity, and availability.
A quick report from EMC World 2012 in Las Vegas
It was a pretty busy day this Tuesday, with a lot of topics covered by Cisco experts and partners.
Desktop virtualization: an interesting conversation between EMC's Josh Mello (@joshmello), Presidio's Steve Kaplan (@ROIdude), and Cisco's Ravi Balakrishnan, who addressed major questions in this panel such as common barriers to adoption, architectural innovations, and the value proposition brought by each company.
If I become a hiring manager for a Data Center team, I’m asking candidates whether they have Tetris skills. Anyone who can neatly fill a space with odd-shaped blocks falling at ever-increasing speed can oversee the rack-and-stack activities in my Data Centers.
I talked in my last two posts – on preparing for and then executing a Data Center move – about planning where you want to place your Data Center hardware. That’s a good idea even if you’re not moving your server environment, because how you deploy your equipment affects how efficiently rack space is used, how air flows through the room, and more.
This is a must read for those who want to deeply understand the philosophy behind Cisco’s automation product portfolio
It should not be news to you that Cisco has invested in software products to drive the management and automation of clouds, datacenters, and applications. Intelligent Automation is the name that we have for the management and orchestration solutions in the Intelligent Automation Solutions Business Unit in Cisco’s Cloud and Systems Management Technology Group.
What is so intelligent about Cisco’s automation products? Besides the official marketing and product management answers, I polled our Business Unit and Advanced Services teams and got the following responses (which I distilled a bit). Oh, and by the way, one constraint was that we could not use "Intelligent" in the definition of Intelligent Automation (harder than you might think).
The top winners for the best contributions are: Oleg Danilov (Solution Architect), Mynul Hoda (Technical Leader), Peter Charpentier (Solution Architect), Frank Contrepois (Network Consulting Engineer) and Devendran Rethinavelu (QA Engineer).