Rebecca Jacoby, Cisco Senior Vice President and Chief Information Officer, discusses Cisco’s journey to the cloud. Cisco is running a private cloud as a utility and is moving toward an inter-cloud approach. This capability will give Cisco the business process opportunity to source services from multiple places and deliver them seamlessly to employees in a flexible, cost-effective manner.
To view the complete interactive, please visit http://www.cisco.com/go/ciscoit
For more information on Cisco IT’s journey to the cloud:
Cloudy With a Chance of Data Center Savings
Parting Clouds and Being Real About Virtualization
ITIaaS Clouds Over ITIL
Tags: aaron chiles, Borderless Networks, CIO, cisco on cisco, cloud, coc-business-of-it, coc-data-center, data center, inside cisco it, private cloud, Public Cloud, Rebecca Jacoby, virtualization
Leading IT shops like to have a single pane of glass that serves as the IT storefront for all employees. This is a noble goal. Having worked at a few large companies, I know it is a moving target, since supporting end users can mean many different entry points, contexts, and presentation technologies. When it comes to having a central location for ordering services, it is very important to onboard all of the employee-based and data center services in a consistent fashion. Key use cases include employee onboarding (and offboarding), virtual desktops, virtual machines and physical servers in the data center, and access to applications. A typical IT department may have several hundred orderable services, many of which are bundled (think of employee onboarding).
Interestingly, some organizations drive toward a common catalog first and then automate what they can afterward. At first you can take orders through the service catalog and then fulfill each request through manual process tracking. Alternatively, I have seen some shops say they will only put services in the catalog that can be automated. Then there are all the intermediate cases. Organizations deploying automated request management have many issues to consider and standards to set.
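The catalog-first approach above can be sketched in a few lines. This is a hypothetical illustration, not an actual Cisco IT implementation; all names (`CatalogItem`, `place_order`, the queue) are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class CatalogItem:
    """Hypothetical service-catalog entry; names are illustrative only."""
    name: str
    automated: bool  # False => fulfilled through manual task tracking

manual_queue = []  # tasks a human still has to work

def place_order(item: CatalogItem, requester: str) -> dict:
    # Route the request: automated items go straight to fulfillment,
    # everything else becomes a tracked manual task.
    if item.automated:
        return {"item": item.name, "requester": requester, "status": "fulfilling"}
    manual_queue.append((item.name, requester))
    return {"item": item.name, "requester": requester, "status": "queued"}
```

The point of the sketch: both kinds of services live in one catalog with one ordering experience, and automation can be added item by item without changing the storefront.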
Can we declare victory when a process is mostly manual yet orderable from a catalog in four clicks? Perhaps…
Your end users are happy. They can see where their request is in the process flow, kind of like going to fedex.com and seeing where that DVD is on its journey to your house. But that package still took three days to arrive.
Now consider an automated fulfillment or provisioning process. In the analogy above, you are no longer dealing with DVDs shipped to your house but with on-demand video streaming. A simple click sets into motion many automated processes that deliver the movie to your device. For end-user services, this means your remote access is provisioned with a single click, and your Linux server and application stack is delivered, ready for use, in less than 15 minutes. Key to making that happen is a fully automated process. Is that achievable in all cases? Perhaps…
In most cases, what we are provisioning requires a northbound API (a programming interface above the fulfillment system) to accomplish the instantiation of the service. Oftentimes, in legacy environments, the target system is so dated or underinvested in that an API does not exist. It is pretty hard to automate a process that can only occur through a human interfacing with the system.
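One common workaround is to wrap the legacy system's command-line interface in a thin northbound API that a fulfillment engine can call. A minimal sketch in Python, with the caveat that `legacy-tool` and every flag here are assumptions, not a real CLI:

```python
import subprocess

class LegacyProvisioner:
    """Hypothetical northbound API wrapper around a CLI-only legacy system.
    All names here are illustrative, not an actual Cisco IT interface."""

    def __init__(self, cli_path="legacy-tool", dry_run=False):
        self.cli_path = cli_path
        self.dry_run = dry_run  # when True, return the command instead of running it

    def provision_server(self, hostname: str, os_image: str):
        # Script the steps an operator would otherwise type by hand.
        cmd = [self.cli_path, "create", "--host", hostname, "--image", os_image]
        if self.dry_run:
            return cmd
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError(f"provisioning failed: {result.stderr.strip()}")
        return {"host": hostname, "status": "provisioned"}
```

The wrapper does not make the legacy system any smarter, but it gives the orchestration layer something programmable to call, which is the precondition for end-to-end automation.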
People ask me: so what? We have found that by automating processes we can save, on average, 30% of the process cost. Multiply that by tens of thousands of requests and it really adds up.
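To make that concrete with illustrative numbers (only the 30% savings rate comes from the post; the per-request cost and volume are assumptions):

```python
# Assumed figures for illustration; only the 30% savings rate is from the post.
cost_per_request = 50.0      # manual process cost per request, in dollars (assumed)
requests_per_year = 40_000   # "tens of thousands of requests" (assumed)
savings_rate = 0.30          # average saving from automating the process

annual_savings = cost_per_request * requests_per_year * savings_rate
print(f"${annual_savings:,.0f} saved per year")  # $600,000 saved per year
```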
Investing in self-service requires investing in automation and, in some cases, wrapping an API around a legacy environment to get the desired result: IT as a Service, delivered at the speed our end users need.
Tags: automated provisioning, data center, intelligent automation, orchestration
If you had the chance to be at EMC World, you probably saw an interesting demo shared by Cisco, EMC, and VCE about mobility and business continuance. If you didn't, Cisco Live San Diego will be another opportunity to see it.
Our favorite bloggers Jake Howering and Omar Sultan recently wrote about DCI (Data Center Interconnect) and OTV (Overlay Transport Virtualization), i.e., DCI as an enabling framework for both workload mobility and disaster recovery.
Today I am pleased to have EMC's Colin Durocher bring his perspective on the best way to address a critical challenge for many IT organizations.
Next week I will post a second part (here), with a video about the demo itself.
Colin Durocher (on Twitter @OtherColin) is a Principal Product Manager with the RecoverPoint VPLEX Business Unit. He has been working with the VPLEX product for over 10 years in several capacities, including QA, software development, systems engineering, and product management.
He is a father of two, a professional engineer, and is currently pursuing an MBA.
Colin is based out of Montreal, Canada.
Life Inside the Datacenter Silo
The traditional approach to IT is characterized by datacenter silos. Within each silo, we have our operations down to a science:
- We use server clustering, redundant network fabrics, and RAID storage to protect against unplanned local failures.
- We maintain spare capacity to absorb failures and workload spikes.
- We don’t think twice about moving data between tiers, or even between arrays to optimize cost and performance.
- We commonly move virtual machines non-disruptively from server to server to load balance or perform maintenance.
As far as mobility and availability needs are concerned, life is good… Within the silo.
Crossing the Chasm (Between Silos)
When it comes to protecting against site failures, we use array replication to maintain a copy of all our data in a secondary (often passive) datacenter. We maintain scripts to automate our failover in case we ever need to declare a disaster. We practice our DR plan at least once a year. Don’t we? Moving applications between datacenters is complicated enough that we really just try not to do it. When we do, it often entails a professional services engagement.
All this has worked reasonably well for us up to now. But IT budgets are being squeezed, and IT administrators need to eliminate waste, reduce complexity, and find ways to increase their operational efficiency. It isn't optional. Consider the IDC Digital Universe study (2011), which estimates that by 2020 the amount of information under management will increase by a factor of 50, while the number of IT staff managing it will grow by a factor of only 1.5.
That gap will need to be filled by new technologies. Let me introduce one to you: EMC VPLEX Metro. For hundreds of customers, it is breaking down the barriers between datacenters, bringing new levels of efficiency, simplicity, and availability.
Read More »
Tags: Business Continuance, Cisco, data center, disaster recovery, EMC, mobility, VCE
A quick report from EMC World 2012 in Las Vegas
It was a pretty busy Tuesday, with a lot of topics covered by Cisco experts and partners.
An interesting conversation took place between EMC's Josh Mello (@joshmello), Presidio's Steve Kaplan (@ROIdude), and Cisco's Ravi Balakrishnan, who addressed major questions in this panel such as common barriers to adoption, architectural innovations, and the value proposition brought by each company.
More about VDI from Steve Kaplan here, and from Cisco's Tony Paikeday and Jonathan Gilad.
This Tuesday was also an opportunity to meet Nexus's Colin McNamara (@colinmcnamara) and EMC's Damian Karlson (@sixfootdad) to talk about VSPEX awareness and potential.
Stay tuned for a video blog in the following days
Meanwhile, you may want to check this to-the-point blog from Colin: VSPEX, EMC's Flexible Reference Architecture Explained.
Read More »
Tags: Big Data, Cisco, data center, EMC World, vdi
If I ever become the hiring manager for a Data Center team, I'm asking candidates whether they have Tetris skills. Anyone who can neatly fill a space with odd-shaped blocks falling at ever-increasing speed can oversee the rack-and-stack activities in my Data Centers.
I talked in my last two posts, on preparing for and then executing a Data Center move, about planning where you want to place your Data Center hardware. That's a good idea even if you're not moving your server environment, because how you deploy your equipment affects rack space efficiency, airflow patterns, and more. Read More »
Tags: Cisco, coc-data-center, data center, datacenterdeconstructed, hardware, hardware deployment, relocation, Servers, space planning, Tetris