Recently, I had the opportunity to join a discussion regarding the #FutureOfCloud in the #InnovateThink Tweet Chat. One of the questions that came up revolved around the process typically used to associate a workload with a specific cloud deployment model. That is an important question and top of mind whenever we speak with customers.
One of the most appealing qualities of the cloud is the variety of ways in which it can be delivered and consumed. A successful cloud strategy will let you take advantage of a full range of consumption models for cloud services to meet your specific business needs. In reality, the process is very similar to what any company in virtually any industry goes through when shaping its business strategy. For each area of the business, the question inevitably arises: Build, Buy or Partner?
Build versus Buy
When formulating their sourcing strategies, IT organizations repeatedly face very similar service-by-service, “build-versus-buy” decisions. The predisposition of IT organizations is to create and build IT services on their own. That is what many IT professionals want to do … create new services, invent ‘new things’. And that may very well be the best option. However, many customers also realize that it is often beneficial to adopt best-in-class capabilities to remain competitive, even if this requires outsourcing select portions of the IT value chain. Hence the emerging role of IT as a broker of IT services, which we discussed in the past (for more information, please visit our web site). And this requires a paradigm shift for many IT organizations.
Solving the ‘Equation’
To solve the “build versus buy” equation when sourcing IT services, IT needs to evaluate cost, risk, and agility requirements to determine the best strategy for the business. IT needs a plan and a set of governance principles to evaluate each service based on its strategic profile, along with a collaborative approach between business and IT. For example: Is the service core to the business? What is the business value associated with it (e.g., strategic importance, the sustainable differentiation it can provide, time-to-market requirements, etc.)? What are the cost implications (CapEx vs. OpEx), risk profile, security, SLA, data privacy and regulatory compliance requirements? And … do you have the expertise to plan, build and manage the new IT service while meeting the expectations of your business counterparts?
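To make this concrete, the questions above can be thought of as a weighted scorecard. The sketch below is purely illustrative: the criteria names, weights, scores, and thresholds are hypothetical examples of how a governance team might structure the evaluation, not a prescribed methodology.

```python
# Illustrative sketch only: a simple weighted scorecard for the
# "build versus buy (or partner)" evaluation described above.
# All criteria, weights, and thresholds are hypothetical.

CRITERIA_WEIGHTS = {
    "core_to_business": 0.30,      # strategic importance / differentiation
    "time_to_market": 0.20,        # how quickly the service is needed
    "cost_profile": 0.20,          # CapEx vs. OpEx implications
    "risk_and_compliance": 0.20,   # security, SLAs, data privacy, regulation
    "in_house_expertise": 0.10,    # ability to plan, build and manage it
}

def recommend(scores: dict) -> str:
    """Score each criterion 0 (favors buy) to 10 (favors build)."""
    total = sum(CRITERIA_WEIGHTS[name] * scores[name] for name in CRITERIA_WEIGHTS)
    if total >= 6.5:
        return "build"
    if total <= 3.5:
        return "buy"
    return "partner"  # middle ground: source select portions externally

# Example: a service that is core and differentiating,
# but where in-house expertise is thin.
example = {
    "core_to_business": 9,
    "time_to_market": 4,
    "cost_profile": 5,
    "risk_and_compliance": 7,
    "in_house_expertise": 3,
}
print(recommend(example))  # → partner
```

The point is not the arithmetic but the discipline: making the criteria and their relative weights explicit forces the business and IT to have the collaborative conversation in one place.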
Hybrid Cloud Rapidly Emerging as the New ‘Normal’
Not surprisingly, my experience when talking to customers that operate in regulated industries or that are concerned about security – and the privacy of their data more specifically – is that they tend to favor private cloud deployments. For example, I was talking to a compliance manager at a global financial institution, and as soon as I uttered ‘public cloud’ his reaction was quite predictable … he shook his head, got serious and quipped, “Public cloud … I do not think so …” Real or perceived, security concerns remain top of mind and a major barrier to cloud adoption, and this is validated by market research data.
The predictability of an application’s resource consumption is also a factor. Applications with high elasticity requirements are well positioned to benefit from the economics, agility and scale that public clouds can offer. Infrastructure capacity planning and optimization is a big task for most IT organizations, so having the ability to burst into the public cloud represents an appealing option. This is also why hybrid cloud is ultimately becoming the new normal, and the results of the 2014 North Bridge Future of Cloud Computing Survey support that view.
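The bursting idea above can be sketched as a toy placement rule: serve demand from fixed private capacity until a utilization threshold is reached, then overflow to the public cloud. The capacities and threshold below are hypothetical, and real hybrid cloud schedulers are far more sophisticated; this only illustrates the economics of the decision.

```python
# Illustrative sketch only: a toy "burst to public cloud" decision for
# an elastic workload on fixed private capacity. The capacity figure
# and burst threshold are hypothetical.

PRIVATE_CAPACITY = 100   # units of compute the private cloud can serve
BURST_THRESHOLD = 0.85   # start bursting above 85% private utilization

def place_workload(current_load: int, demand: int) -> dict:
    """Split incoming demand between private capacity and public cloud."""
    headroom = int(PRIVATE_CAPACITY * BURST_THRESHOLD) - current_load
    to_private = max(0, min(demand, headroom))
    to_public = demand - to_private
    return {"private": to_private, "public": to_public}

# Steady-state demand stays private; a spike bursts to the public cloud.
print(place_workload(current_load=60, demand=10))  # all 10 units private
print(place_workload(current_load=80, demand=20))  # 5 private, 15 public
```

Sizing the private cloud for steady-state load and renting public capacity only for the spikes is exactly the capacity-planning trade-off that makes hybrid attractive.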
2014 Future of Cloud Computing – Annual Survey Results
The Power of Choice
Arguably the most important thing your IT organization can do is diversify its choice of cloud providers … simply because without choice you do not really have a strategy, and no contingency plans to go along with it.
What do you think?
Tags: capacity planning, Cisco Cloud strategy, Cisco Domain Ten, cloud, cloud workloads, Hybrid Cloud, InterCloud, private cloud, Public Cloud
First, the Internet of Things:
Consider these impressive stats shared in a keynote from Cisco’s CTO and CSO Padmasree Warrior last week at Cisco Live, London:
- 50 billion “things” – including trees, vehicles, traffic signals and other devices – will be connected by 2020 (vs. 1,000 devices connected in 1984)
- More information was created in 2012 than in the previous 5,000 years combined!
- Two-thirds of the world’s mobile data will be video by 2015.
These statistics may seem a bit surprising, but the fact is, they cannot be ignored by CIOs and others chartered with the responsibility of managing IT infrastructure.
Impact on Enterprise and SP Infrastructure strategies
Further, these trends are not siloed and are certainly not happening in a vacuum. For example, Bring-Your-Own-Device (BYOD) and the exponential growth of video endpoints may be happening at the access layer, but they are causing a ripple effect upstream in data center and cloud environments; coupled with new application requirements, they are prompting CIOs across large enterprises and service providers to rapidly evolve their IT infrastructure strategies.
It is much the same with cloud infrastructure strategies. Even as enterprises have aggressively pursued the journey to private cloud, their preference for hybrid clouds, where they can enjoy the “best of both worlds” of public and private, has grown as well. However, the move to hybrid clouds has been somewhat hampered by the challenges outlined in my previous blog: Lowering barriers to hybrid cloud adoption – challenges and opportunities.
The Fabric approach
To address many of these issues, Cisco has long advocated a holistic data center fabric, the heart of its Unified Data Center philosophy. The fundamental premise of breaking down disparate technology silos across network, compute and storage is what makes this approach so compelling. At the heart of it is the Cisco Unified Fabric, serving as the glue.
As we continue to evolve this fabric, we’re making three industry-leading announcements today that help make the fabric more scalable, extensible and open.
Let’s talk about SCALING the fabric first:
- Industry’s highest-density L2/L3 10G/40G switch: Building upon our previous announcement of redefining fabric scale, this time we introduce a new Nexus 6000 family with two form factors: the 6004 and the 6001. We expect these switches to be positioned to meet increasing bandwidth demands, for spine/leaf architectures, and for 40G aggregation in fixed switching deployments. We expect the Nexus 6000 to be complementary to Nexus 5500 and Nexus 7000 series deployments; it is not to be confused with the Catalyst 6500 or the Nexus fabric interconnects.
The Nexus 6000 is built with Cisco’s custom silicon and delivers 1-microsecond port-to-port latency. It carries forward some of the architectural successes of the Nexus 3548, the industry’s lowest-latency switch, which we introduced last year. Clearly, as in the past, Cisco’s ASICs have differentiated themselves from the lowest-common-denominator approach of merchant silicon by delivering both better performance and greater value through tight integration with the software stack.
The Nexus 5500, incidentally, gets 40G expansion modules and is accompanied by a brand-new Fabric Extender, the 2248PQ, which comes with 40G uplinks as well. All of these, along with the 10G server interfaces, help pair 10G server access with 40G server aggregation.
Also, as a first step in making the physical Nexus switches services-ready in the data center, a new Network Analysis Module (NAM) on the Nexus 7000 brings in performance analytics, application visibility and network intelligence. This is the first services module, with others to follow, and it brings parity with the new vNAM functionality as well.
- Industry’s simplest hybrid cloud solution: Over the last few years, we have introduced several technologies that help extend the fabric. The Fabric Extender (FEX) solution is very popular for extending the fabric to the server/VM, as are Data Center Interconnect technologies such as Overlay Transport Virtualization (OTV) and Locator/ID Separation Protocol (LISP), among others. Obviously, each has its benefits.
The Nexus 1000V InterCloud takes these to the next level by allowing the data center fabric to be extended to provider cloud environments in a secure, transparent manner, while preserving L4-7 services and policies. This is meant to help lower the barriers to hybrid cloud deployments and is designed to be a multi-hypervisor, multi-cloud solution. It is expected to ship in the summer timeframe, by 1H CY13.
This video does a good job of explaining the concepts of the Intercloud solution:
Tags: Andre Kindness, Ayman Sayed, Cisco Cloud strategy, Cisco Controller, Cisco Data Center strategy, Cisco ONE, Cisco Open Network Environment, David Ward, David Yen, GDIT, Greg Sanchez, Internet of Things (IoT), Kerby Lyons, Matt Davy, NAM, Nexus 1000V InterCloud, Nexus 6000, onePK, OpenFlow, padmasree warrior, Shashi Kiran, SunGard Availability Services, Unified Data Center, Unified Fabric