Why should you put a virtualized content delivery network (CDN) in the cloud?
This is not just a theoretical question. It has come from our customers. At our recent Cisco Live event in Milan, we demonstrated how our continued CDN technical leadership can answer this question.
First, some history, as you can’t just begin with the cloud.
At Cisco, we’ve been working hard over the years to evolve our Videoscape Distribution Suite (VDS) platform: from its roots in hardware-based appliances, to software applications powered by our data center hardware, and more recently to virtual machine implementations that can run on our own or third-party hardware. Each technological advance to the VDS platform has netted gains for our customers in their CDN deployments, whether through more flexible deployment thanks to greater hardware independence, faster time-to-market when implementing VDS software applications, or reduced total cost of ownership thanks to server-based virtualization that optimizes footprint and power/cooling requirements.
If you are reading this blog hoping to get a universal recipe for your cloud strategy, I believe you will be disappointed. But then, you already know there are no ‘universal’ cloud strategies. You have to formulate a cloud strategy that best fits your business objectives and IT priorities (among a number of other factors). Our Cisco services team for Cloud Strategy, Management and Operations has various tools, including our Cisco DomainTen™ framework, that can help you formulate the right cloud strategy for your organization. Parag’s blog is a great source of information in this regard.
This blog series will instead offer a set of perspectives on how I view the evolution of the World of Many Clouds™ and the steps we are taking to align our cloud strategy to capitalize on it. This first post puts our strategy in context, outlining our point of view in light of some important market dynamics.
The primary market research study that we conducted in collaboration with Intel, along with additional secondary market research studies, clearly indicates that Line of Business (LoB) leaders are playing an increasingly important role in driving requirements for IT solutions and services. The reasons behind this trend are many, including but not limited to increasing market and competitive pressures, an uncertain business climate, variability in macroeconomic factors, and a relentless need to innovate at a faster pace to stay ahead of the competition. What’s more, LoBs now have greater ability to access IT solutions (such as Software as a Service) outside the traditional enterprise IT value chain, creating “shadow IT” initiatives. In response, IT organizations are looking for new ways to retain their leadership, control, and at times even relevancy. Furthermore, IT organizations are now expected to support strategic business objectives and enable business growth while also harnessing new technology trends, leading to innovation and new customer experiences. To remain relevant to the business, IT must become a “change agent” and be perceived as a true strategic enabler. The question is how?
We envision IT organizations transitioning to new roles as trusted ‘brokers of IT services’. This model enables IT to add value to one or more public or private cloud services on behalf of its users. IT does this by dynamically bringing together, integrating, and tailoring the delivery of cloud services to best meet the needs of the business.
In a wide-ranging study, Cisco, in partnership with Intel®, sought to pinpoint just how these powerful trends are impacting IT. The “Impact of Cloud on IT Consumption Models” study surveyed 4,226 IT leaders in 18 industries across nine key economies, developed as well as emerging: Brazil, Canada, China, Germany, India, Mexico, Russia, United Kingdom, and the United States. The study supports our point of view. Up to 76% of the survey respondents signaled that IT will act as a “broker” of cloud services across internal and external clouds for LoBs.
In other words, when formulating their sourcing strategies, IT organizations repeatedly face service-by-service “build-versus-buy” decisions. IT therefore needs a plan and a set of governance criteria that support the consistent evaluation of its sourcing options: time to market, value, the sustainable differentiation the service can provide, SLAs, cost, risk profile, and the experience the IT department already has with that particular service, among others.
This “IT services sourcing flexibility” enables greater levels of business agility, transparency, and speed of deployment to help LoB leaders unlock innovation and achieve core business objectives.
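To make such governance criteria concrete, here is a minimal sketch of one way a service-by-service evaluation could be scored. The criteria, weights, and scores below are purely illustrative assumptions, not a Cisco methodology:

```python
# Illustrative sketch: scoring a "build vs. buy" decision for one IT service.
# Criteria, weights, and scores (1 = poor, 5 = excellent) are hypothetical.

CRITERIA_WEIGHTS = {
    "time_to_market": 0.25,
    "sustainable_differentiation": 0.20,
    "sla_fit": 0.15,
    "cost": 0.20,
    "risk": 0.10,
    "in_house_experience": 0.10,
}

def score_option(scores: dict) -> float:
    """Weighted sum of criterion scores for one sourcing option."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

build = {"time_to_market": 2, "sustainable_differentiation": 5, "sla_fit": 4,
         "cost": 2, "risk": 3, "in_house_experience": 4}
buy = {"time_to_market": 5, "sustainable_differentiation": 2, "sla_fit": 3,
       "cost": 4, "risk": 4, "in_house_experience": 2}

decision = "build" if score_option(build) > score_option(buy) else "buy"
print(decision, round(score_option(build), 2), round(score_option(buy), 2))
```

The value of an exercise like this is less the final number than forcing the same criteria to be applied to every service, so decisions can be compared and revisited.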
However, let’s step back and see how this all fits together. We introduced the concept of the World of Many Clouds™ a couple of years ago. You can view the evolution of this world as the outcome of the intersection and progressive integration of traditional IT environments and IT services offered by public cloud providers. The roads, in our metaphor, are converging. Lines are blurring. In theory, nothing prevents a company that consumes IT services from becoming a cloud provider itself (public or private).
I also believe the debate over private versus public cloud is over. It is about having both at the same time, and being able to bridge and take advantage of both: hybrid cloud is the new ‘normal.’
In turn, the ability to combine and dynamically aggregate cloud services from private and public clouds can truly occur only if IT organizations can rely on an open and secure hybrid cloud environment. And for that to take place, you need the ability to move your cloud workloads (and more broadly your IT services), both data and applications, between environments.
You can easily envision a scenario in which a workload, based on a set of specifications, ‘automatically discovers’ the best infrastructure to run on. An exchange could facilitate the allocation process. An XML-based standard could emerge, along with a set of processes used by exchanges to match demand and supply of IT services based on SLAs, costs, data locality requirements, and so on. On the supply side, you can also envision federation or capacity aggregation among suppliers of cloud services, enabling increased economies of scale, consistency, and a broader set of choices.
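As a thought experiment only, the matching step such an exchange might perform could look something like this sketch. The field names and the policy (cheapest offer satisfying SLA and data-locality constraints wins) are assumptions for illustration, not a proposed standard:

```python
# Hypothetical sketch of an "IT services exchange" matching workloads to
# provider offers. All names and fields are invented for illustration.

from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    region: str
    sla_uptime: float       # e.g. 0.999 = "three nines"
    price_per_hour: float

@dataclass
class Workload:
    name: str
    required_region: str    # data-locality requirement
    min_sla_uptime: float
    max_price_per_hour: float

def match(workload: Workload, offers: list) -> Offer:
    """Return the cheapest offer meeting all constraints, or None."""
    eligible = [o for o in offers
                if o.region == workload.required_region
                and o.sla_uptime >= workload.min_sla_uptime
                and o.price_per_hour <= workload.max_price_per_hour]
    return min(eligible, key=lambda o: o.price_per_hour, default=None)

offers = [
    Offer("cloud-a", "eu", 0.9999, 0.12),
    Offer("cloud-b", "eu", 0.999, 0.08),
    Offer("cloud-c", "us", 0.9999, 0.05),
]
wl = Workload("analytics-batch", "eu", 0.999, 0.10)
best = match(wl, offers)
print(best.provider if best else "no match")
```

A real exchange would of course negotiate over many more dimensions (compliance, egress costs, burst capacity), but the core is the same: declarative workload requirements matched against advertised supply.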
Ok … coming back to earth … our Cloud strategy intends to capitalize on some of these market dynamics and enable IT to retain control, relevance and increase its strategic profile by leveraging the evolution of the World of Many Clouds. In my next blog I will provide an overview of the actual strategy and begin focusing on it in more detail. But first I wanted to share the context.
Wireless momentum continues worldwide, and mobile data traffic is expected to increase nearly 11-fold
Today, Cisco released its latest Visual Networking Index (VNI) Global Mobile Forecast, 2013–2018, projecting future mobile data traffic over cellular networks (2G, 3G, and 4G) and Wi-Fi offload traffic. Our detailed research, including region-specific and some country-specific data, can be found in the complete white paper. Some key takeaways are highlighted below for your convenience.
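As a quick sanity check on the headline number, an 11-fold increase over the five years from 2013 to 2018 implies a compound annual growth rate of roughly 61 percent:

```python
# Back-of-the-envelope: what annual growth rate does an 11x increase
# over 5 years imply? CAGR = multiple^(1/years) - 1.
growth_multiple = 11
years = 5
cagr = growth_multiple ** (1 / years) - 1
print(f"{cagr:.1%}")
```

In other words, traffic would need to grow by more than half again every single year of the forecast period.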
The OpenDaylight Project today announced that its first open source software release, Hydrogen, is now available for download. As the community’s first simultaneous code release, it includes contributions from fifty organizations and comprises over one million lines of code. Yes. ODL > 1MLOC. For those of you interested, that’s approximately two hundred and thirty man-years of work completed in less than twelve months.
It was around this time last year that the media started to pick up on rumors that something might be in the works with software-defined networking and controllers. I remember our first meeting at Citrix, where the community started to collaborate on The OpenDaylight Project and find common ground on how to start something this large. We had multiple companies and academics in the room and many ideas of where we wanted this project to go, but there was one thing we had in common: the belief and vision to drive networking software innovation to the Internet in a new way and accelerate SDN in the open, transparently and with diverse community support. Each of us had notions of what we could bring to the table, from controller offerings to virtualization solutions, SDN protocol plugins, and apps to solve IT problems. Over two days at Citrix we looked at things from a customer perspective, a developer perspective, and ultimately, and arguably most important, a community perspective. From there The OpenDaylight Project emerged under the Linux Foundation. As I look back, I want to applaud and thank the companies, partners, developers, community members, and the Linux Foundation for driving such a large vision from concept to reality in less than twelve months, an incredible feat in itself.
Hydrogen is truly a community release. Use cases span enterprise, service provider, academia, data center, transport, and NFV. Multiple southbound protocols are abstracted to a common northbound API for cross-vendor integration and interoperability, and three editions have been created to ensure multi-domain support and application delivery, as well as deployment modularity and flexibility for different domain-specific configurations. These packages share a consistent environment yet are tailored to the domain- and role-based needs of network engineers, developers, and operators.
The Base Edition includes a scalable, multi-vendor SDN controller framework based on OSGi, the latest (and backward-compatible) OpenFlow 1.3 plugin and protocol library, OVSDB, NETCONF/YANG model-driven support, and Java-based YANG tooling for model-driven development.
The Virtualization Edition includes the Base Edition and adds the Affinity Metadata Service (essentially APIs to express workload relationships and service levels), Defense4All (DDoS detection and mitigation), Open DOVE, OpenStack Neutron northbound API support, and VTN, a virtual tenant network offering.
The Service Provider Edition, again including the Base Edition, also offers the Affinity Metadata Service and Defense4All, and adds BGP-LS and PCEP, LISP Flow Mapping, and SNMP4SDN to manage routers, gateways, and switches.
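To illustrate the idea of multiple southbound protocols presented behind one common northbound API, here is a toy sketch. This is not OpenDaylight code; all class and method names are invented for illustration:

```python
# Conceptual sketch: a controller hides per-device southbound protocols
# (OpenFlow, NETCONF, ...) behind a single northbound call.

from abc import ABC, abstractmethod

class SouthboundPlugin(ABC):
    @abstractmethod
    def push_flow(self, device: str, match: dict, action: str) -> str:
        """Translate an abstract flow request into a protocol message."""

class OpenFlowPlugin(SouthboundPlugin):
    def push_flow(self, device, match, action):
        return f"OFPT_FLOW_MOD to {device}: {match} -> {action}"

class NetconfPlugin(SouthboundPlugin):
    def push_flow(self, device, match, action):
        return f"<edit-config> on {device}: {match} -> {action}"

class Controller:
    """Northbound API: callers never see which protocol a device speaks."""
    def __init__(self):
        self.plugins = {}  # device name -> southbound plugin

    def register(self, device, plugin):
        self.plugins[device] = plugin

    def program_flow(self, device, match, action):
        return self.plugins[device].push_flow(device, match, action)

ctl = Controller()
ctl.register("switch-1", OpenFlowPlugin())
ctl.register("router-1", NetconfPlugin())
print(ctl.program_flow("switch-1", {"dst": "10.0.0.2"}, "output:2"))
print(ctl.program_flow("router-1", {"dst": "10.0.0.2"}, "output:2"))
```

The payoff of this pattern is that applications written against the northbound API keep working as new southbound plugins are added, which is exactly the cross-vendor portability the editions above are built around.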
More information on the releases and the projects themselves can be found on the website.
I want to stress how well the vision has been delivered to date. I’ve been involved in multiple standards bodies and open source discussions in the past, but this is truly one of the largest undertakings I’ve seen come together in my entire career. OpenDaylight developers have been coding day and night to get this release out the door, and it’s amazing to see the collaboration and coherence of the team as we unite to deliver the industry’s first cross-vendor SDN and NFV platform. In addition, and frequently not mentioned, many of the protocols listed in the editions above were being standardized at organizations like the IETF during the same period. Code and specs at the same time. It’s been a long time since rough consensus and running code was the norm.
Over here at Cisco we’re fully committed to OpenDaylight. We’re currently using it as a core component in our WAN Orchestration offering for service providers, enabling intelligent network placement and automated capacity and workload planning. The ACI team (formerly Insieme) collaborated with IBM, Midokura, and Plexxi on an OpenDaylight project that defines a northbound API for setting policy across a wide range of network devices. And of course we’re bringing components of the OpenDaylight codebase into our own controllers and ensuring application portability for customers, partners, and developers alike. I would expect to see more code donations going into the community moving forward as well. We made several announcements last week about our campus/branch controller, which includes OpenDaylight technology.
At the end of the day, an open source project is only as strong as its developers, its community, and its code. As we move forward with OpenDaylight, I expect it to become stronger, with more members joining, new project proposals, and new code contributors coming onboard from different industries. As I look at our roadmap and upcoming release schedule, I’m pumped for what’s next and so happy the project has catalyzed a developer community around networking.
Please do visit the site, download the code, and take Hydrogen for a test drive. We want to hear feedback on what we can make better, what features to add, or how you plan to use it. Moreover, we’d love you to participate. It’s a kick-ass community, and I think you’ll have fun. Best of all, you’ll see your hard work unleashed on the Internet and across multiple communities.
By Maywun Wong, Service Provider Mobility Marketing Manager
Mobility is here to stay. According to a recent survey, 70% of all consumers use public Wi-Fi, spending an average of 44 minutes connected to the Internet. It’s not just people connecting, but processes, things, and data too, leading to the revolution called the Internet of Everything (IoE). At CES this year, John Chambers showed how the Internet of Everything could represent as much as $19 trillion in global opportunity over the next decade. For the public sector alone, IoE could generate up to $4.6 trillion in value in the next eight years.
Cities around the world are using IoE to provide a better experience for their citizens and visitors. They are installing Wi-Fi across town so everyone can connect with friends and family. They are also placing sensors in parking spaces, bus stations, light poles, and elsewhere so that the public knows what resources are available. Wim Elfrink, Cisco EVP of Industry Solutions and chief globalization officer, recognizes that “cities have the opportunity to transform the way citizens experience urban life.”
The city of Barcelona has embraced IoE as a true Smart+Connected City.