
4 Key Requirements to Scale the Internet of Things

April 15, 2014 at 8:00 am PST

Today the Internet of Things (IoT) is everywhere: you can easily see smart meters on houses, parking sensors in the ground, cameras attached to traffic posts, and people wearing smart wristbands and glasses -- all of them connected to the Internet. And this is only the tip of the iceberg: while you are reading this blog post, factories, trains, and trucks around the world are also being connected to the Internet.

Many traditional industries have historically turned to different types of engineers to improve their processes and gain efficiency. Now they are asking us, the Internet engineers, to help solve new industrial-world challenges by connecting billions of new devices.

The more ambitious part of this journey is the integration of the two worlds: Information Technology (IT) and Operational Technology (OT). That integration demands a systems approach, one that scales the existing Internet infrastructure to accommodate IoT use cases while making IT technology easy for OT operators to adopt. We are facing a historic opportunity to converge massive-scale systems in a way we have never seen before, and such an effort will unlock a multibillion-dollar business.

Scaling IoT

To be ready to capture this opportunity and to scale in a sustainable manner, four requirements must be met:



Virtualization Meets Video Processing at NAB 2014

If anything is certain about the video business, it’s this: the volume of change is daunting and every change tends to make life more complicated, not less.

This is certainly true at the sharp end of the business -- digital video processing -- where "multiscreen" video, new video formats and new video technologies are together creating a perfect storm of complexity. Once there was SD over MPEG-2 delivered to TVs. Now there are SD, various flavors of HD and, soon, 4K; MPEG-2, AVC and now HEVC; a wealth of encapsulation schemes and DRMs; and ever more screen sizes and resolutions as the number of devices to be supported grows ever larger.

The number of permutations of all these options is truly dizzying. Every permutation is a potential video "workflow" to be implemented, and the number of permutations is expanding rapidly; the growth is multiplicative, not additive. Today Cisco deals with some media companies that have over 80 video workflows for their content. Add one more video format -- 4K, for instance -- and that potentially doubles to 160. Add another compression scheme -- HEVC, perhaps -- and now we have 320. And so on, as the sketch below illustrates.
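
To see why each new option multiplies the count rather than adding to it, treat a workflow as one choice from each dimension. A minimal sketch follows; the option lists are made-up assumptions chosen for round numbers, not any operator's actual catalog:

```python
from itertools import product

# Toy illustration of the multiplication at work. All option lists below
# are assumptions, not a real deployment's catalog.
codecs      = ["MPEG-2", "AVC", "HEVC"]
resolutions = ["SD", "HD-720", "HD-1080", "4K"]
packaging   = ["MPEG-TS", "HLS", "DASH"]   # encapsulation schemes
drm_systems = ["DRM-A", "DRM-B"]           # placeholder DRM names

workflows = list(product(codecs, resolutions, packaging, drm_systems))
print(len(workflows))  # 3 * 4 * 3 * 2 = 72 distinct workflows

# Each new option multiplies the total: adding a fourth codec takes the
# count from 72 to 96 in one stroke, and a new resolution or DRM scheme
# compounds on top of that.
```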

Keeping track of all these "workflows" is one thing, but …


Cisco UCS Five Years On: The Right Solution at the Right Time

March 25, 2014 at 12:06 pm PST

I knew we were on to something good when a customer told me “This is so easy, it’s CTO proof.”

Early in the business, I was talking to a front-line server admin who had found that Cisco UCS made server deployment so reliable, automated and simple that he was convinced even his CTO could pull it off without breaking anything.  The enthusiasm was real, and infectious, and it changed the face of the data center market.

Thinking back five years to March of 2009, when Cisco introduced UCS, the economy was still spiraling into the worst recession of our lifetime.  IT budgets were being slashed.   Many wondered if it was the right time for Cisco to enter a new market with deeply entrenched competitors.

As it turns out, it was the perfect time.  Because change occurs fastest when times are hard.

Looking for Change

In the decade leading up to 2009, computing innovation had stalled. The incumbents still had tunnel vision on the power and cooling challenges that arose out of multi-core processing in the mid-2000s. Innovation was essentially focused on mechanical packaging: blade servers for mainstream IT and "skinless" boxes for the hyperscale crowd. Overlooked was the real problem for the vast majority of customers: operational complexity.

Remember that server virtualization was rapidly spreading in nearly every data center. It, too, was originally a response to a hardware problem, processor utilization, but as everyone recognized the operational benefits, virtualization took hold very fast. As did cloud. Combine all this with the disaggregation of data storage from the server, which had already moved out onto the network as NAS and SAN many years before, and you had a perfect storm of complexity threatening to outpace the capacity of many IT organizations.

The individual technologies in the data center were not overwhelmingly complex, but tying them all together, into a system where you could land and scale an application in a very secure and available way, became the all-consuming job of the customer. Collectively, the industry had failed. In 2009, more than ever, customers needed something to help them slash OPEX in the data center and free people up to face the challenges of the day. This was the innovation vacuum that UCS had been designed to fill.

Think of UCS as the Turducken of the data center: the sum is much, much greater (and tastier) than the parts. A lot of true innovation has gone into UCS in the areas of server I/O and in fundamental advancements to server management technology. The latter is especially critical, because what is often overlooked in virtualization and cloud discussions is the underlying issue of deploying, managing and scaling the physical infrastructure itself (details, details…). The advent of UCS completed the total abstraction and automation of hardware in crucial ways that hypervisor and cloud technology still can't achieve on their own. API-controlled data center hardware is a foundational element of modern IT innovation, and UCS started it all. This may be Cisco's greatest contribution to the industry, and it charted the course for Cisco ACI in the broader data center.
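
For a concrete taste of what "API-controlled hardware" means, here is a minimal sketch against the UCS Manager XML API. The aaaLogin, configResolveClass and aaaLogout methods are part of that public API; the address and credentials are placeholders, and real code would validate certificates and handle errors:

```python
import requests
import xml.etree.ElementTree as ET

UCSM = "https://ucsm.example.com/nuova"   # placeholder UCS Manager address

# Authenticate; the response carries a session cookie (outCookie) that
# every subsequent request must present.
login = requests.post(
    UCSM,
    data='<aaaLogin inName="admin" inPassword="example-password" />',
    verify=False,  # lab sketch only; validate certificates in production
)
cookie = ET.fromstring(login.text).attrib["outCookie"]

# Enumerate the physical compute blades with a class query.
query = requests.post(
    UCSM,
    data=f'<configResolveClass cookie="{cookie}" classId="computeBlade" />',
    verify=False,
)
for blade in ET.fromstring(query.text).iter("computeBlade"):
    print(blade.attrib.get("dn"), blade.attrib.get("operState"))

# Close the session.
requests.post(UCSM, data=f'<aaaLogout inCookie="{cookie}" />', verify=False)
```

The point of the sketch is the model, not the specific calls: the entire physical estate, from blade inventory to firmware and service profiles, is addressable as objects behind one programmable interface.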

[Embedded presentation: "Cisco Unified Computing System: Five Years of Data Center Innovation" from Cisco Data Center]


Cisco’s not stopping. In the intervening five years, new innovation opportunities have appeared. The most recent is the addition of flash systems to Unified Computing in the form of UCS Invicta, which opens a whole new chapter for what customers will be able to achieve with the System. UCS Director is taking on a pivotal role for automation across Cisco solutions and the integrated infrastructures we construct with our storage partners. The future is so bright, our partners need sunglasses.

[Interactive timeline: UCS milestones]

The team has put together this interactive timeline that commemorates many of the milestones in the first five years of UCS.   Looking back over it, I can only feel proud and humbled to be associated with the team here at Cisco, our technology and channel partners, and most importantly with our customers, who have clearly proven that UCS was (and is) the right solution at the right time.


Cloud Services to Move the Internet of Everything (IoE) – and the SP Business – Forward, Faster

In my recent meetings with Service Provider customers at Cisco Live Milan and Mobile World Congress in Barcelona, two themes kept recurring (and there were a lot of meetings: my peers and I hosted nearly 1,000 of them in the four days of MWC alone).

The first was the power and promise of the cloud. Whether carriers were leveraging Cisco’s advanced capabilities in Network Functions Virtualization (NFV) or various virtualization, orchestration and automation capabilities -- all with the goal of increasing revenue, reducing OpEx and enhancing agility -- each Service Provider was keenly interested in the impact clouds can and will have on their businesses. That’s why the Evolved Services Platform announcement we made resonated so well.

The second was the heightened level of discussion around the dramatic changes Service Providers are seeing in the way people, process, data and things are being connected -- essentially the Internet of Everything (IoE) -- and the resulting need to leverage advanced capabilities. While Cisco has spoken about this for the past year, the IoE is now recognized as having moved beyond vision to actual opportunity for providers, who sit at the center of it all. The recurring question was how to seize that opportunity, and what the best path forward was for their businesses to create value and differentiation amid so much change happening so fast.

This is where the two themes come together.  This is where Cisco Cloud Services come into play.

At the Cisco Partner Summit today, we are announcing Cisco Cloud Services. Designed as a suite of Cisco application- and network-centric cloud services on a truly open, global public cloud infrastructure made up of many different clouds tied together -- an Intercloud, if you will -- it gives any of our global service providers and partners cloud capabilities they can leverage quickly. Cisco Cloud Services combine the flexibility, efficiency and scalability of a public cloud with the security and control of a private cloud, and with the scale and reach that only Cisco and its partners can enable.

It also …


The Three Mega Trends in Cloud and IoT

A consequence of the Moore-Nielsen prediction is the phenomenon known as Data Gravity: big data is hard to move around, so it is much easier for the smaller applications to come to it. Consider this: it took mankind over 2,000 years, until 2012, to produce 2 exabytes (2×10¹⁸ bytes) of data; now we produce that much in a day! And the rate will only go up from here. With data production far exceeding the capacity of the network, particularly at the Edge, there is only one way to cope, which I call the three mega trends in networking and (big) data in Cloud computing scaled to IoT -- or, as some say, Fog computing:

  1. Dramatic growth in applications specialized and optimized for analytics at the Edge: Big Data is hard to move around (data gravity), and we cannot move it to the analytics fast enough, so we must move the analytics to the data. This will cause dramatic growth in applications specialized and optimized for analytics at the edge. Yes, our devices have gotten smarter; yes, P2P traffic has become the largest portion of Internet traffic; and yes, M2M has arrived as the Internet of Things. There is no way to make progress but to make the devices smarter, safer and, of course, better connected.
  2. Dramatic growth in the computational complexity to ETL (extract-transform-load) essential data from the Edge to be data-warehoused at the Core: Currently, most open standards and open source efforts are buying us some time, letting us squeeze as much information as possible, in as little time as possible, through limited connection paths to billions of devices; soon enough we will realize there is a much more pragmatic approach to all of this. A jet engine produces more than 20 terabytes of data for an hour of flight; the back-of-the-envelope sketch after this list shows why that data can never simply be shipped to the core. Imagine the computational complexity we already have that boils it down to routing and maintenance decisions in such complex machines, and imagine the consequences of ignoring such a capability, which can already be made available at rather trivial cost.
  3. The drive to instrument data to be “open” rather than “closed”, with all the information we create, and all of its associated ownership and security concerns, addressed: Open Data challenges have already surfaced, and there comes a time when we realize that an Open Data interface, and guarantees about its availability and privacy, need to be made and enforced. This is what drives the essential tie today between Public, Private and Hybrid cloud adoption (nearly one third each). With the ever-growing amount of data at the Edge, the questions of who “owns” it and how access to it is “controlled” become ever more relevant and important. At the end of the day, the producer/owner of the data must be in charge of its destiny, not some gatekeeper or web farm. This should be no different from the very same rules that govern open source or open standards.
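
To make item 2 concrete, here is the promised back-of-the-envelope sketch. The 20 TB/hour figure comes from the post; the uplink speed is an assumption (a generous air-to-ground link), and the conclusion holds for any plausible value:

```python
# Why the jet-engine data cannot simply be shipped to the core.
# 20 TB/hour is the figure from the post; the uplink is an assumed,
# fairly generous air-to-ground link.

data_per_hour_tb = 20                      # terabytes produced per flight hour
uplink_mbps = 100                          # assumed link capacity, megabits/s

data_bits = data_per_hour_tb * 1e12 * 8    # terabytes -> bits
transfer_hours = data_bits / (uplink_mbps * 1e6) / 3600

print(f"Uploading one flight hour of data takes ~{transfer_hours:,.0f} hours")
# ~444 hours of transmission per hour of flight: the data outruns the
# network, so the analytics (routing and maintenance decisions) must run
# at the edge, and only the distilled results move to the core.
```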

Last week I addressed these topics at the IEEE Cloud event at Boston University with wonderful colleagues from BU, Cambridge, Carnegie Mellon, MIT, Stanford and elsewhere, plus, of course, industry colleagues from all of today’s popular commercial web farms. I was pleasantly surprised to see not just that the first two trends are already top of mind, but that the third has emerged and is actually recognized. We have just started to sense the importance of this third wave, which has huge implications for Cloud compute. My thanks to Azer Bestavros and Orran Krieger (Boston University), Mahadev Satyanarayanan (Carnegie Mellon University) and Michael Stonebraker (MIT) for their outstanding drive and leadership in addressing these challenges. I found Project Olive intriguing. We are happy to co-sponsor the BU Public Cloud Project, and, having just wrapped up EclipseCon 2014 this week, I am very happy to see we are already walking the talk with Project Krikkit in Eclipse M2M. I made a personal prediction last week: just as most Cloud turned out to be Open Source, IoT software will all be Open Source. Eventually. The hard part is the Data, or should I say, Data Gravity…
