Today’s networks are an essential part of business, education, government, and home communications. Many residential, business, and mobile Internet Protocol (IP) networking trends are driven largely by the combination of video, social networking, and advanced collaboration applications, collectively termed “visual networking.” In fact, total Internet traffic has grown dramatically in the past decade alone. Take a look at this interactive infographic from Cisco, which shows key trends and forecasts the growth of global IP traffic from 2013 to 2018; you can choose a category and filter the geographic regions in the map to view the impact of global IP traffic. According to Cisco’s Visual Networking Index (VNI), part of Cisco’s ongoing effort to forecast and analyze the growth and use of IP networks worldwide, there will be 20.6 billion networked devices globally by 2018, up from 12.4 billion in 2013. VNI also forecasts that global IP traffic will nearly triple over the next five years, driven by more Internet users and devices, faster broadband speeds, and increased video viewing. Global IP traffic for fixed and mobile connections is expected to reach an annual run rate of 1.6 zettabytes, more than one and a half trillion gigabytes per year, by 2018.
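The growth figures quoted above can be sanity-checked with a quick back-of-envelope calculation. This sketch applies the standard compound annual growth rate formula to the VNI numbers in the text; the formula itself is the only thing added here.

```python
# Back-of-envelope check of the Cisco VNI figures quoted above,
# using the standard compound annual growth rate (CAGR) formula.

def cagr(start, end, years):
    """Compound annual growth rate as a fraction."""
    return (end / start) ** (1 / years) - 1

# Networked devices: 12.4 billion (2013) -> 20.6 billion (2018)
device_growth = cagr(12.4e9, 20.6e9, 5)

# "nearly triple over the next five years" for global IP traffic
traffic_growth = cagr(1.0, 3.0, 5)

# 1.6 zettabytes per year expressed in gigabytes (decimal SI units):
# 1 ZB = 10^21 bytes, 1 GB = 10^9 bytes
zb_in_gb = 1.6e21 / 1e9

print(f"device CAGR:  {device_growth:.1%}")   # roughly 11% per year
print(f"traffic CAGR: {traffic_growth:.1%}")  # roughly 25% per year
print(f"1.6 ZB = {zb_in_gb:.2e} GB per year")
```

A tripling in five years works out to roughly 25% annual growth, and 1.6 zettabytes is indeed 1.6 trillion gigabytes, matching the "more than one and a half trillion" claim in the text.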
So who and what are responsible for the projected increase in overall internet traffic?
Read More »
Tags: broadband, Cisco, global ip traffic, Internet of Everything, IoE, ip traffic, M2M, Machine to Machine, Service Provider, visual networking index, vni, wi-fi, wifi, zettabyte
In my Internet of Things keynote at LinuxCon 2014 in Chicago last week, I touched upon a new trend: the rise of a new kind of utility or service model, the so-called IoT-specific service provider model, or IoT SP for short.
I had a recent conversation with a team of physicists at the Large Hadron Collider at CERN. I told them they would be surprised to hear how computer scientists talk these days about Data Gravity. Programmers are notorious for overloading common words, adding connotations galore, messing with meanings entrenched in our natural language.
We all laughed and then the conversation grew deeper:
- Big data is very difficult to move around: it takes energy, time, and bandwidth, and is therefore expensive. And it is growing exponentially larger at the outer edge, with tens of billions of devices producing it at an ever-faster rate, from an ever-increasing set of places on our planet and beyond.
- As a consequence of the laws of physics, we know we have an impedance mismatch between the core and the edge. I coined this the Moore-Nielsen paradigm (described in my talk as well): data accumulates at the edge faster than the network can push it into the core.
- Therefore, big data accumulated at the edge will attract applications (little data, or procedural code), so apps will move to data, not the other way around, behaving as if data has “gravity.”
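The impedance mismatch in the second bullet can be made concrete with a small model. The growth rates here are assumptions, not from the text: edge data generation doubling every 18 months per Moore's law, and network capacity growing about 50% per year per Nielsen's law of Internet bandwidth. The starting values are arbitrary; only the compounding gap matters.

```python
# Sketch of the Moore-Nielsen impedance mismatch described above.
# Assumed rates: data generation doubles every 1.5 years (Moore),
# while network capacity grows ~50%/year (Nielsen).

MOORE_RATE = 2 ** (1 / 1.5) - 1   # ~58.7% per year
NIELSEN_RATE = 0.50               # 50% per year

data_at_edge = 1.0      # normalized: data produced at the edge per year
network_capacity = 1.0  # normalized: data the network can move per year

for year in range(1, 11):
    data_at_edge *= 1 + MOORE_RATE
    network_capacity *= 1 + NIELSEN_RATE
    movable_fraction = network_capacity / data_at_edge
    print(f"year {year:2d}: fraction of edge data the network can move = "
          f"{movable_fraction:.2f}")

# The fraction shrinks every year, so an ever-growing share of data
# must be processed where it is produced -- apps move to the data.
```

Even with rates this close (59% vs. 50%), the movable fraction drops to well under two-thirds within a decade, which is the core of the data-gravity argument.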
Therefore, the notion of a very large centralized cloud controlling the massive rise of data spewing from tens of billions of connected devices runs up against both the laws of physics and Open Source, not to mention the thirst for freedom (no vendor lock-in) and privacy (no data lock-in). The paradigm has shifted; we have entered the third big wave (after the mainframe’s decentralization to client-server, which in turn centralized to cloud): the move to a highly decentralized compute model, where intelligence shifts to the edge as apps come to the data, at much larger scale, machine to machine, with little or no human interface or intervention.
The age-old dilemma pops up again: do we go vertical (domain specific) or horizontal (application development or management platform)? The answer has to be based on necessity, not fashion; we have to do this well, and hence vertical domain knowledge is overriding. With the declining cost of computing, we finally have the technology to move to a much more scalable and empowering model: the new opportunity in our industry, the mega trend.
Very reminiscent of the early ’90s and the beginning of the ISP era, isn’t it? This time it is much more vertical, with deep domain knowledge: connected energy, connected manufacturing, connected cities, connected cars, connected home, safety and security. These innovation hubs all share something in common: an Open and Interconnected model, made easy by dramatically lower compute costs and the ubiquity of open source, to overcome all barriers of adoption, including the previously weak security and privacy models predicated on a central core. We can divide and conquer, dealing with data in motion differently than we deal with data at rest.
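The divide-and-conquer idea above can be sketched as an edge-aggregation pattern: handle data in motion locally, and ship only compact summaries to the core, where they become data at rest. All the names and numbers here are hypothetical, chosen only to illustrate the shape of the pattern.

```python
# Hypothetical sketch of edge processing: summarize data in motion
# locally, so only a small aggregate travels to the core.

from statistics import mean

def edge_aggregate(readings, threshold):
    """Summarize a window of raw sensor readings at the edge.

    Only the summary (plus any anomalous raw values) is forwarded
    to the core, instead of the full raw stream.
    """
    anomalies = [r for r in readings if r > threshold]
    return {
        "count": len(readings),
        "mean": mean(readings),
        "max": max(readings),
        "anomalies": anomalies,  # raw values kept only for outliers
    }

# A window of 1,000 raw readings stays at the edge;
# a four-field summary goes to the core.
window = [20.0 + (i % 7) * 0.1 for i in range(1000)]
summary = edge_aggregate(window, threshold=25.0)
print(summary["count"], round(summary["mean"], 2))
```

The design choice is the one the paragraph describes: bandwidth is spent on a fixed-size aggregate rather than on the raw stream, which is what makes tens of billions of devices tractable without a central core seeing everything.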
The so-called “wheel of computer science” has completed one revolution, just as its socio-economic observation predicted: the next generation has arrived, ready to help evolve or replace its aging predecessor. Which one, or which vertical, will be first?
Tags: Big Data, big data analytics, CERN, cloud, Data Gravity, Fog computing, gravity, IoT, IoTSP, ISP, keynote, LHC, Linux, LinuxCon, M2M, Moore’s law, Nielsen's Law, open source, SP
The Internet of Everything (IoE) describes machine-to-machine (M2M) compute entities that track and measure real-time data, building a data history for analytics that can be used to improve quality of life. The opportunity is represented by devices in a person’s everyday life that are connected to the Internet, can learn that person’s consumption behavior, and aim to improve the efficacy of how services and goods are delivered and consumed. Cisco Systems CEO John Chambers says that the Internet of Everything could be a $19 trillion opportunity. Read More »
Tags: #ciscochampion, cloud, Internet of Everything, M2M, Machine to Machine
Did you know in Japan, 90% of mobile phones are waterproof because youngsters use them even in the shower?
Did you know that Japan consists of over 6,800 islands?
Did you know Japan suffers 1,500 earthquakes every year?
In Japan, mobile data traffic grew 92% in 2012, and 66% from 3Q 2012 to 3Q 2013, according to Japan’s Ministry of Internal Affairs and Communications.
According to GSMA estimates for machine-to-machine (M2M) connections, ten countries accounted for 70% of all M2M connections as of year-end 2013: China, the US, Japan, Brazil, France, Italy, the UK, Russia, Germany, and South Africa.
So what is the problem? Well, as you can see, the people of Japan take and use their mobile devices anywhere and at any time. The country is geographically dispersed, and earthquakes occur all the time (mostly very small ones). All the while, mobile traffic is growing at an astounding rate with no signs of slowing down, and the M2M industry is just beginning. So what is an operator like NTT Docomo supposed to do?
What’s the solution? NTT Docomo has Read More »
Tags: Catalyst 4900, Cisco Prime Fulfillment, Cisco Quantum™ Virtualized Packet Core, EPC, esp, evolved services platform, IOT Cloud Connect, Jim O’Leary, LTE, M2M, QvPC, Service Provider, UCS, Virtualized packet core, vni
Wow, how time flies: three months ago there was snow and cold weather in Boston, yet at the same time it was quite hot at the 2014 MWC. Hot from all the discussions on Network Functions Virtualization (NFV) and Orchestration, which were on everyone’s minds as both mobile operators and vendors tried to answer the question: how do we handle Machine-to-Machine (M2M), the Internet of Things (IoT), and the 50 billion new devices expected by 2020?
Fast forward to today, and NFV and Orchestration Proofs of Concept (POCs) are being requested by, or inserted into, every mobile operator’s network. Many operators will publicly launch in specific application areas like M2M soon, and several are evaluating insertion into their main networks sooner than expected.
Cisco Quantum™ Virtualized Packet Core (Quantum vPC), the industry’s most complete, fully virtualized EPC (Evolved Packet Core) that Read More »
Tags: Cisco Quantum™ Virtualized Packet Core, EPC, esp, Jim O’Leary, LTE, M2M, PMB, Premium Mobile Broadband, QSB, QvPC, Service Provider, Virtualized packet core