Today’s announcement that Citrix is dropping support for OpenStack has reverberated through the clouderati sphere like a new Justin Bieber song through my niece’s third grade class. Super important but will not matter much when the next idol arrives.
In any case, a lot of smart people have written about it. I’ll leave them to explain the whole thing.
But the post that most caught my attention came from Thorsten at RightScale. We have something in common: we both build products that connect to cloud APIs, including those of vendors whose APIs claim to be compatible with EC2. This experience, I think, provides a useful point of view when thinking about API compatibility. Not to mention it creates a jaundiced view of the human soul.
I’ve said it many times and I’ll repeat it again: it’s the semantics of the resources in the cloud that matter, not the syntax of the API. This means that “API compatibility” has to reach very, very deep to be meaningful. Let me give you a couple of examples around EC2.
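To make the semantics-versus-syntax point concrete, here is a minimal sketch (all class and method names are hypothetical, not a real SDK): two "clouds" accept the exact same calls, but disagree about what termination means. EC2's actual behavior is that a terminated instance lingers in the `terminated` state and stays visible for a while; a clone that merely matches the call signatures can still break every tool built against that behavior.

```python
# Hypothetical sketch: syntactic API compatibility is not semantic compatibility.
# Both "clouds" accept the same run/terminate/describe calls, but disagree
# about what happens to an instance after termination.

class EC2Like:
    """Mimics EC2 semantics: a terminated instance lingers in the
    'terminated' state and remains visible to describe_instances."""
    def __init__(self):
        self.instances = {}
        self._next = 0

    def run_instances(self):
        self._next += 1
        iid = f"i-{self._next:08x}"
        self.instances[iid] = "running"
        return iid

    def terminate_instances(self, iid):
        self.instances[iid] = "terminated"

    def describe_instances(self, iid):
        return self.instances.get(iid)  # terminated instances still listed

class SyntacticClone(EC2Like):
    """Identical method signatures, different semantics: terminated
    instances vanish immediately instead of entering 'terminated'."""
    def terminate_instances(self, iid):
        del self.instances[iid]

for cloud in (EC2Like(), SyntacticClone()):
    iid = cloud.run_instances()
    cloud.terminate_instances(iid)
    print(type(cloud).__name__, "->", cloud.describe_instances(iid))
```

The clone is "API compatible" call for call, yet a management tool that polls `describe_instances` waiting for the `terminated` state would hang against it forever. That is the kind of difference no syntax checker will ever catch.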
Some people say that in the next few years Infrastructure as a Service cloud deployments will be focused mostly on private clouds, and that enterprises will migrate to public clouds only after they have become “experienced” in running a cloud. About a year ago I could really see this story playing out. Now, fifteen months after we introduced Cisco Intelligent Automation for Cloud, I have a different point of view. I would have thought that by now private cloud architectures would have begun to converge on a few standard patterns. This has not happened. The world is still diverging when it comes to both private and public cloud architectures.
I do see patterns arising in successful cloud deployments and here are some of the key ones:
#5: Pragmatic Approach: IT shops that come with a long list of RFP requirements and questions take a long time to source a technology provider and to achieve production success. Others that are pragmatic (dare I say Agile?) in their approach get to cloud quicker and learn from their successes and missteps alike.
#4: They Have a Cloud Instance Roadmap: After a cloud deployment, some IT organizations think that is it, they are done, next project, my move to cloud is complete. Hold it right there: did you know that cloud is not a single step where you throw a switch, but a succession of deployments, each growing in scope from one step to the next? A roadmap is needed that covers: hardware, network, and storage infrastructure; virtualization technology and release version; management and orchestration software version; and finally the services that you are offering to end users and how the service catalog changes over time. Those that have a roadmap roughed out are generally more successful than those with a big-bang perspective.
#3: Appreciation for the Challenge of Change Management: Moving to cloud is a big change in an operating model; careers are created and new roles are defined. How does an organization move to the new model with different technology, processes, and people? When a team proactively manages the non-technical side of the change, it ensures long-term success. It is not just about self-service, cloud catalogs, orchestration, domain management, and virtualization. It is more about service designers, automation authors, and changes in operational processes.
#2: Rise of the Cloud Architect: Since cloud is a new operating model, a new position and role is needed. If you have a cloud project and do not have a cloud architect tying it all together, from cost models to hypervisors to orchestration and orderable service definitions, you need an organizational role tune-up ASAP.
#1: A Service-Centric Approach: Most people get this one right away. Service-centric projects are the key focus for ITaaS. However, I can’t tell you how many times, when I am talking to an IT team, the opening bell results in a speeds-and-feeds conversation about provisioning this piece of infrastructure and that virtualization API. If you ask what services they want to offer their end users for self-service ordering, you will get a request for more time to answer the question. Service-centric IT shops take the time to start with the business requirements and the end user’s point of view. Transform your cloud project into a service-centric, agile project and you will go far.
We’re in the sporting and cultural capital of Australia this week for Cisco Live! Did you know that Melbourne is the only city in the world that has five international standard sporting facilities surrounding its central business district?
Cisco Intelligent Automation for Cloud is a cloud management and orchestration software solution that complements Cisco UCS and Nexus to provide self-service on-demand provisioning of IT resources. This new solution is becoming as ubiquitous as the sporting facilities in Melbourne. Cisco partners including Alphawest / Optus, CSC, and VCE are also showcasing our Intelligent Automation for Cloud software in action at their booths.
Essentially, this solution will help you tackle the challenge of deploying infrastructure-as-a-service – and adopt an IT-as-a-Service (ITaaS) strategy. Here’s a short analyst video on delivering ITaaS with Cisco Intelligent Automation:
You are probably thinking that CITEIS is a typo – but it’s not. In fact, CITEIS stands for Cisco IT Elastic Infrastructure Services and it’s the name that Cisco’s IT department coined for our internal private cloud.
You can read more about CITEIS here, including an explanation of the two options: CITEIS “Express” for on-demand access to virtual compute resources from a shared pool of resources; and CITEIS “VDC” (Virtual Data Center) to provision your own virtual data center with a reserved pool of compute, storage, and network capacity.
We recently recorded a brief demo video of the Express version so you can see how it works:
They say that data about your data is more important than the data itself. Having the right data in the data warehouse at the right time, or loaded up for Hadoop analysis, is critical. I have heard stories where the wrong product was sent to the wrong store because of incorrect conclusions about what was selling best, all due to reports and decisions being made on the wrong data. That can be a resume-impacting decision in this modern world of data-driven product placement around the globe. In a previous blog post about Enterprise Job Scheduling (aka Workload Automation) http://blogs.cisco.com/datacenter/workload-automation-job-scheduling-applications-and-the-move-to-cloud/ I discussed the basic uses of automating and scheduling batch workloads. Business intelligence, data warehousing, and Big Data initiatives all need to aggregate data from different sources and load it into very large data warehouses.
Let’s look into the life of the administrator and operator of a workload automation tool. The typical enterprise may have thousands, if not tens of thousands, of job definitions. Those are the individual jobs that get run: look for this file in a drop box, FTP data from that location, extract this specific set of data from an Oracle database, connect to that Windows server and launch this process, load this data into a data warehouse using Informatica PowerCenter, run this process chain in SAP BW and move that information to this location. All of this occurs to get the right data in the right place at the right time. These jobs are then strung together into sequences that we in the Intelligent Automation Solutions Business Unit at Cisco call Job Groups. These groups can represent business processes that are automated. They may have tens to hundreds of steps. Each job may depend on other jobs completing, and jobs may be waiting for resources to become available. This all leads to a very complex execution sequence. These job groups run every day; some run multiple times a day, and some only run at the end of the quarter.
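The dependency structure described above is essentially a directed acyclic graph. Here is a hedged sketch (the job names and graph are made up for illustration, not taken from any real scheduler) of how such a job group resolves into an execution order, using Python’s standard-library topological sorter:

```python
# Hypothetical job group expressed as a dependency graph: each job maps
# to the list of jobs that must complete before it can start.
from graphlib import TopologicalSorter  # Python 3.9+

job_group = {
    "ftp_pull":       [],                               # fetch the nightly drop file
    "extract_oracle": [],                               # pull rows from the Oracle DB
    "load_warehouse": ["ftp_pull", "extract_oracle"],   # warehouse load needs both feeds
    "run_sap_chain":  ["load_warehouse"],               # SAP BW process chain
    "publish_report": ["run_sap_chain"],                # final downstream deliverable
}

# static_order() yields one valid execution sequence respecting every dependency.
order = list(TopologicalSorter(job_group).static_order())
print(order)
```

A real scheduler layers resource waits, calendars, and retries on top of this, but the core problem it solves every night is exactly this ordering, at the scale of thousands of jobs.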
The typical IT operations team has a group of people who design, test, and implement these job groups by working with the business IT people who design and implement business processes. Often these job groups need to finish by a certain time to meet the needs of the business. If you are a stock exchange, some job groups have to finish within so many hours after the market closes. If you have to get your data to a downstream business partner (or customer) by a certain time, you become very attached to watching those jobs execute. No pun intended: your job may be on the line.
A new technology has hit the scene for our customers of the Cisco Tidal Enterprise Scheduler. It is called JAWS Historical and Predictive Analytics. http://www.termalabs.com/products/cisco-tidal-enterprise-scheduler.html . These modules take all of the historical and real-time performance data from the Scheduler and, through a set of algorithms, produce historical, real-time, and predictive business analytics. This is the data about the data I mentioned previously. Our customers can do what-if analyses and get early indication that a particular job group will not be able to finish in time, so administrators can take action before it is too late. This is critical to getting the data in the right place so that analytics can be performed correctly, and therefore not sending 1,000 units of the wrong product to the wrong store location. Thanks to our partners at Terma Software Labs http://info.termalabs.com/cisco-systems-and-terma-software-labs-to-join-forces-for-more-sla-aware-workload-processing/ .
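To give a feel for the idea (this is a simplified sketch of the general technique, not Terma’s actual algorithm, and all of the numbers are invented), predictive analytics on scheduler history boils down to projecting a job group’s finish time from past runtimes and raising a flag before the SLA deadline is blown:

```python
# Hypothetical sketch: flag an at-risk job group by projecting its total
# runtime from history (mean plus two standard deviations as a pessimistic
# estimate) and comparing against the time left before the SLA cutoff.
from statistics import mean, stdev

history_minutes = [42, 45, 44, 51, 47, 49, 46]  # past end-to-end runtimes
minutes_elapsed = 30                            # how long tonight's run has taken so far
minutes_to_deadline = 20                        # time remaining before the SLA cutoff

mu, sigma = mean(history_minutes), stdev(history_minutes)
predicted_total = mu + 2 * sigma                # pessimistic runtime estimate
at_risk = (predicted_total - minutes_elapsed) > minutes_to_deadline

print(f"predicted total: {predicted_total:.1f} min, at risk: {at_risk}")
```

The value of doing this inside the scheduler is the early warning: an operator sees the flag while there is still time to add resources or reorder work, rather than after the downstream warehouse load has already missed its window.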