In this week’s episode of Engineers Unplugged, David Hartman and Tim Cerling discuss Fast Track 4.0 Solutions, which promote fast and efficient deployment of private clouds with Cisco, EMC, and Microsoft solutions.
How many engineers does it take to straighten a whiteboard? (Answer: 5) Behind the scenes on #EngineersUnplugged
If you would like to become Internet Famous, and strut your unicorn talents, join us for our next filming session at VMworld 2014. Tweet me for details!
This is Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:
Episodes will publish weekly (or as close to it as we can manage)
Recently, I had the opportunity to join a discussion regarding the #FutureOfCloud in the #InnovateThinkTweet Chat. One of the questions that came up revolved around the process typically used to associate a workload with a specific cloud deployment model. That is an important question and top of mind whenever we speak with customers.
One of the most appealing qualities of the cloud is the variety of ways in which it can be delivered and consumed. A successful cloud strategy will let you take advantage of a full range of consumption models for cloud services to meet your specific business needs. In reality, when we think about it, the process is very similar to what any company in virtually any industry goes through when shaping its business strategy. For each area of the business, inevitably the question arises: Build, Buy or Partner?
Build versus Buy
When formulating their sourcing strategies, IT organizations repeatedly face very similar service-by-service, “build-versus-buy” decisions. The predisposition of IT organizations is to create and build IT services on their own. That is what many IT professionals want to do: create new services, invent ‘new things’. And that may very well be the best option. However, many customers also realize that it is often beneficial to adopt best-in-class capabilities to remain competitive, even if this requires outsourcing select portions of the IT value chain. Hence the emerging role of IT as a broker of IT services, which we discussed in the past (for more information, please visit our web site). And this requires a paradigm shift for many IT organizations.
Solving the ‘Equation’
To solve the “build versus buy” equation when sourcing IT services, IT needs to evaluate cost, risk, and agility requirements to determine the best strategy for the business. IT needs a plan and a set of governance principles to evaluate each service based on its strategic profile, along with a collaborative approach between business and IT. For example: Is the service core to the business? What is the business value associated with it (e.g., strategic importance, sustainable differentiation, time-to-market requirements)? What are the cost implications (CapEx vs. OpEx), risk profile, security, SLA, data privacy, and regulatory compliance requirements? And do you have the expertise to plan, build, and manage the new IT service while meeting the expectations of your business counterparts?
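The evaluation above can be sketched as a simple weighted scoring exercise. The criteria, weights, ratings, and decision threshold below are illustrative assumptions, not a formal methodology; the point is only that governance principles can be made explicit and applied service by service:

```python
# Illustrative build-vs-buy scoring sketch. Criteria, weights, and the
# decision threshold are hypothetical examples, not a prescribed framework.

def score_service(ratings, weights):
    """Weighted average of 1-5 ratings; a higher score favors building in-house."""
    total_weight = sum(weights.values())
    return sum(ratings[c] * weights[c] for c in weights) / total_weight

weights = {
    "strategic_differentiation": 3,  # is the service core to the business?
    "in_house_expertise": 2,         # can we plan, build, and run it ourselves?
    "time_to_market_slack": 1,       # low urgency favors building
    "compliance_sensitivity": 2,     # strict data rules favor keeping it in-house
}

# Hypothetical ratings for one candidate service.
crm_ratings = {
    "strategic_differentiation": 2,
    "in_house_expertise": 2,
    "time_to_market_slack": 1,
    "compliance_sensitivity": 3,
}

score = score_service(crm_ratings, weights)
decision = "build" if score >= 3.0 else "buy/partner"  # 3.0 = midpoint of scale
print(f"score={score:.2f} -> {decision}")
```

In practice the weights themselves are the governance conversation: business and IT must agree on them before the scores mean anything.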
Hybrid Cloud Rapidly Emerging as the New ‘Normal’
Not surprisingly, in my experience, customers that operate in regulated industries or that are concerned about security, and more specifically the privacy of their data, tend to favor private cloud deployments. For example, I was talking to a compliance manager at a global financial institution, and as soon as I uttered ‘public cloud’ his reaction was quite predictable: he shook his head, got serious, and quipped, “Public cloud … I do not think so.” Real or perceived, security concerns remain top of mind and a major barrier to cloud adoption, and this is validated by market research data.
The predictability of the application with respect to resource consumption is also a factor. Applications that have high elasticity requirements are well positioned to benefit from the economics, agility, and scale that public clouds can offer. Infrastructure capacity planning and optimization is a big task for most IT organizations, so having the ability to burst into the public cloud represents an appealing option. This is also why hybrid cloud is ultimately becoming the new normal, and the results of the 2014 North Bridge Future of Cloud Computing Survey support that view.
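The bursting pattern described above can be sketched as a simple placement policy: serve steady-state demand from the private cloud and overflow peaks to a public cloud. The capacity figure and threshold below are hypothetical:

```python
# Hypothetical cloud-bursting policy sketch: keep steady-state load on the
# private cloud and overflow peak demand to a public cloud.

PRIVATE_CAPACITY = 1000   # illustrative units of compute the private cloud serves
BURST_THRESHOLD = 0.80    # start bursting above 80% private utilization

def placement(demand):
    """Split demand between private capacity and public-cloud overflow."""
    private_limit = PRIVATE_CAPACITY * BURST_THRESHOLD
    private = min(demand, private_limit)
    public = max(0.0, demand - private_limit)
    return private, public

for demand in (500, 900, 1500):
    private, public = placement(demand)
    print(f"demand={demand}: private={private:.0f}, public={public:.0f}")
```

The economics follow directly: the private cloud is sized for the predictable baseline, while the elastic tail is paid for only when it occurs.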
2014 Future of Cloud Computing - Annual Survey Results
The Power of Choice
Arguably the most important thing your IT organization can do is diversify its choice of cloud providers, simply because without choice you do not really have a strategy, or any contingency plans to go along with it.
What do you think?
If you want to learn more about Cisco Cloud, you can watch this video or visit our web site.
If you need help with your cloud strategy, please consider the Cisco Domain Ten framework.
According to GigaOM, the use of cloud-based resources is what’s “next” for IT, setting the stage for an in-depth look at the infrastructure that will drive the next decade of application development.
At the recent Structure event, GigaOM tapped into the minds of cloud-technology industry leaders, seeking insight into the “Top 5 Questions for the Titans of Cloud.”
In this post, Gee Rittenhouse, Vice President/General Manager, Cloud and Virtualization Group at Cisco, provides answers and insight on cloud infrastructure, exchange, data security and more.
Top Cloud Question #1: “When will all the major clouds support the same set of APIs?”
Today, there is a three-horse race among two proprietary APIs (Amazon Web Services and VMware’s vCloud API) and one open API (OpenStack). For now, the two proprietary APIs will continue to be the dominant players, leveraging their large public cloud (in the case of AWS) and private cloud (in the case of VMware) deployments.
But, as an increasing number of service providers and enterprises adopt and deploy OpenStack cloud solutions across both public and private models, the balance will shift, more than likely over the next two to four years.
Cisco’s approach is different from other, more infrastructure-centric public cloud offers. We believe that OpenStack’s open API model will eventually be the dominant cloud API model and will ultimately become the de facto standard.
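To make the API discussion concrete, here is a minimal sketch of the request body an OpenStack client sends to the Identity service (Keystone, API v3) to obtain a token before calling Compute, Network, or other services. The credentials, project, and endpoint are placeholder assumptions for illustration:

```python
# Sketch of an OpenStack Identity (Keystone) v3 token request body.
# Username, password, project, and domain below are placeholders.
import json

def keystone_auth_body(username, password, project_name, domain="default"):
    """Build the JSON body POSTed to /v3/auth/tokens to obtain a scoped token."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"id": domain},
                        "password": password,
                    }
                },
            },
            "scope": {
                "project": {"name": project_name, "domain": {"id": domain}}
            },
        }
    }

body = keystone_auth_body("demo", "secret", "demo-project")
print(json.dumps(body, indent=2))
# A real client would POST this to https://<keystone>/v3/auth/tokens, read the
# token from the X-Subject-Token response header, and reuse it across services.
```

The appeal of an open API is exactly this portability: the same request shape works against any OpenStack deployment, public or private, regardless of vendor.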
Looking to the future, beyond just the hybrid cloud conversation, is the Intercloud: an interconnected global cloud of clouds, built with a commitment to open standards and based on OpenStack, featuring APIs to connect any cloud or hypervisor to any other cloud or hypervisor.
Acxiom is a well-known Software-as-a-Service (SaaS) company providing data analytics and data processing solutions to Fortune 100 companies for running and analyzing their marketing campaigns. Recently, Cisco spoke to Acxiom’s senior managers Kamal Kharrat and Chuck Crane about Cisco’s Application Centric Infrastructure (ACI) strategy and how it helps them address their data center challenges. In this blog, I will present a brief summary of our discussions.

Acxiom is experiencing exponential growth in its customer base, running millions of transactions every week in its hybrid cloud-based data centers. But this growth has brought several challenges in its wake. First, Acxiom stores confidential, compliance-driven data in its private data center infrastructure and is currently facing elastic scalability problems. Second, it wants to transition from a high-CAPEX, fixed infrastructure utilization model toward a dynamic model in which workloads can be seamlessly moved across private and public infrastructures. Finally, Acxiom has a heterogeneous mix of L4-L7 vendor devices, multiple hypervisors, and security systems, and has a pressing need for an open, policy-based, extensible foundation for their AOS SAAS to bring these services together.
Acxiom is excited about Cisco ACI as the best solution to address these problems and is looking to automate its compute, storage, and security infrastructure provisioning, achieving in its private cloud the elasticity it already gets from the public cloud. Acxiom also plans to move workloads in and out of compute and storage platforms while changing security zones on demand, increasing resource utilization to upwards of 80%. Chuck Crane is quick to point out that Acxiom makes more than 20,000 network and security configuration changes every year, and he feels the only way to keep up with the growing customer base is to eliminate the labor-intensive man-hours and the costs that go with them; he hopes to achieve a significant reduction in these inefficient processes via automation. He says ACI is the key to arming network operations with automation and ultimately attaining the competitive advantage of agile IT, resulting in faster time to market and the ability to capitalize on new revenue opportunities.
Today, depending on the solution, full provisioning of resources takes about 7 days to 3 weeks, and the goal is to bring provisioning time down to hours. With ACI, Acxiom aims to achieve a 24-hour turnaround in end-to-end infrastructure provisioning for application deployments, and it will realize a significant reduction in OPEX with this automation.
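As a rough illustration of the kind of automation described here, the sketch below builds the JSON payloads a script might send to the APIC (the ACI controller) REST API to log in and create a tenant. The controller address, credentials, and tenant name are hypothetical, and a production deployment would use a maintained client library with TLS verification rather than hand-built payloads:

```python
# Sketch of automating one provisioning step against the APIC REST API.
# The controller address, credentials, and tenant name are placeholders.
import json

APIC = "https://apic.example.com"   # hypothetical controller address

def login_payload(user, pwd):
    """Body for POST {APIC}/api/aaaLogin.json, which returns a session token."""
    return {"aaaUser": {"attributes": {"name": user, "pwd": pwd}}}

def tenant_payload(name):
    """Body for POST {APIC}/api/mo/uni.json to create a tenant managed object."""
    return {"fvTenant": {"attributes": {"name": name, "status": "created"}}}

print(json.dumps(login_payload("admin", "password"), indent=2))
print(json.dumps(tenant_payload("acme-prod"), indent=2))
```

Scripting changes this way, instead of applying them by hand device by device, is what turns 20,000 yearly configuration changes from a staffing problem into a pipeline.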
Finally, let us look at how ACI’s openness helps Acxiom’s data center operations. When repatriating an application (Figure 2) into a private data center, one of the critical challenges is the ability to port the same tools and automation from the public to the private cloud, and the network infrastructure is a critical layer in realizing this goal. Because ACI is based on open standards, it helps Acxiom use its existing tools and expertise across public and private clouds, building infrastructure quickly and achieving the business goals of faster time to market and increased revenue potential.
In conclusion, the Acxiom executives assert that ACI allows their private data centers to integrate best-of-breed technologies with their existing infrastructure and achieve full automation seamlessly, using service stitching from compute through load balancing to the security platforms, all from a single point of control. This helps Acxiom optimize costs, reduce turnaround times, and at the same time work seamlessly across private and public clouds.
In my previous blog, I provided an overview of the critical use cases and innovations we included in our new Business Continuity and Workload Mobility Solution for Private Cloud. This blog highlights the critical trends and challenges driving new multi-site cloud designs.
Two important trends are driving CTOs and CIOs to deploy new multi-site cloud solutions that provide better Business Continuity, Workload Mobility, and Disaster Recovery:
More workloads are moving to the Private and Public Cloud than to the traditional data center.
Cloud Data Centers have a higher density of workloads per server than traditional data centers due to increased virtualization.
This ever-increasing volume of cloud-hosted workloads is placing serious pressure on operations teams to manage larger-scale data centers and ensure that these workloads stay up and running, avoiding costly downtime or a nightmare service outage. Many of the CTOs and CIOs we’ve worked with are re-assessing their multi-site strategy to ensure they can answer some tough questions:
What are the common weak points of multi-site Cloud designs that could prevent us from achieving our Business Continuity goals for our critical apps? Can we avoid them?
How can we provide Workload Mobility between sites to provide a more agile Cloud environment?
In the event of a site outage, can our Private Cloud reduce the time it takes to recover critical applications to a new site?
How can our Private Cloud deliver these critical services (Business Continuity, Workload Mobility, and Disaster Recovery) with lower cost and complexity?
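Answering these questions usually starts with concrete, per-application recovery objectives. The sketch below checks measured failover results against hypothetical RTO (maximum tolerable downtime) and RPO (maximum tolerable data loss) targets; all figures are invented for illustration:

```python
# Hypothetical recovery-objective check: compare measured failover results
# for each application against its RTO and RPO targets (minutes).

apps = {
    # app: (rto_target, rpo_target, measured_rto, measured_rpo)
    "payments":  (15,  5,  12,  4),
    "analytics": (240, 60, 300, 30),
}

def meets_objectives(rto_target, rpo_target, rto_actual, rpo_actual):
    """True when a failover test stayed within both recovery objectives."""
    return rto_actual <= rto_target and rpo_actual <= rpo_target

for app, (rto_t, rpo_t, rto_a, rpo_a) in apps.items():
    ok = meets_objectives(rto_t, rpo_t, rto_a, rpo_a)
    print(f"{app}: {'PASS' if ok else 'FAIL'} "
          f"(RTO {rto_a}/{rto_t} min, RPO {rpo_a}/{rpo_t} min)")
```

A multi-site design that cannot be tested this way, application by application, is a hope rather than a Business Continuity strategy.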