In my Internet of Things keynote at LinuxCon 2014 in Chicago last week, I touched upon a new trend: the rise of a new kind of utility or service model, the so-called IoT specific service provider model, or IoT SP for short.
I had a recent conversation with a team of physicists at the Large Hadron Collider at CERN. I told them they would be surprised to hear how computer scientists talk these days, about Data Gravity. Programmers are notorious for overloading common words, adding connotations galore, messing with meanings entrenched in our natural language.
We all laughed and then the conversation grew deeper:
Big data is very difficult to move around: it takes energy, time, and bandwidth, and is therefore expensive. And it is growing exponentially larger at the outer edge, with tens of billions of devices producing it at an ever-faster rate, from an ever-increasing set of places on our planet and beyond.
As a consequence of the laws of physics, we know we have an impedance mismatch between the core and the edge. I coined this the Moore-Nielsen paradigm (also described in my talk): data accumulates at the edges faster than the network can push it into the core.
Therefore, big data accumulated at the edge will attract applications (little data, or procedural code); apps will move to the data, not the other way around, behaving as if data has “gravity.”
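The data-gravity argument above can be made concrete with a back-of-envelope calculation: compare the time to push a large edge dataset into the core against the time to push a small application out to the edge. The dataset size, code size, and link speed below are illustrative assumptions, not figures from the talk:

```python
# Back-of-envelope illustration of "data gravity": moving big data to a
# central cloud versus moving a small application to the data.
# All figures below are illustrative assumptions, not measurements.

def transfer_seconds(payload_bytes, link_bits_per_sec):
    """Time to push a payload over a network link, ignoring protocol overhead."""
    return payload_bytes * 8 / link_bits_per_sec

PB = 10**15          # one petabyte of sensor data accumulated at the edge
MB = 10**6           # a few megabytes of application code
GBPS = 10**9         # a 1 Gbit/s uplink from the edge site

data_days = transfer_seconds(PB, GBPS) / 86400
code_secs = transfer_seconds(10 * MB, GBPS)

print(f"Moving 1 PB of data to the core: {data_days:.0f} days")
print(f"Moving a 10 MB app to the edge:  {code_secs:.2f} seconds")
```

Under these assumptions the data takes roughly three months to move while the application moves in a fraction of a second, which is why the apps come to the data.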
Therefore, the notion of a very large centralized cloud controlling the massive rise of data spewing from tens of billions of connected devices runs against the laws of physics and against Open Source, not to mention the thirst for freedom (no vendor lock-in) and privacy (no data lock-in). The paradigm has shifted; we have entered the third big wave (after the mainframe's decentralization to client-server, which in turn centralized to the cloud): the move to a highly decentralized compute model, where intelligence shifts to the edge as apps come to the data, at much larger scale, machine to machine, with little or no human interface or intervention.
The age-old dilemma pops up again: do we go vertical (domain specific) or horizontal (application development or management platform)? The answer has to be based on necessity, not fashion; we have to do this well, so vertical domain knowledge is overriding. With the declining cost of computing, we finally have the technology to move to a much more scalable and empowering model: the new opportunity in our industry, the mega trend.
Very reminiscent of the early 90s and the beginning of the ISP era, isn't it? This time it is much more vertical, with deep domain knowledge: connected energy, connected manufacturing, connected cities, connected cars, connected home, safety and security. These innovation hubs all share something in common: an open and interconnected model, made practical by dramatically lower compute costs and the ubiquity of open source, to overcome the barriers to adoption, including the previously weak security and privacy models predicated on a central core. We can divide and conquer, dealing with data in motion differently than we deal with data at rest.
The so-called “wheel of computer science” has completed one revolution, just as its socio-economic observation predicted; the next generation has arrived, ready to help evolve or replace its aging predecessor. Which one, or which vertical, will be first?
If you're like me, you probably remember the days when computers meant oversized monitors, loud, humming power supplies, and more cables than you knew what to do with. Thanks to Moore's Law, those days are long gone. With devices getting smaller, less costly, and more power-efficient, people and businesses of today and tomorrow have more opportunity to connect to the Internet of Everything (IoE).
Take the Raspberry Pi, for example. This low-cost computer was developed to provide computer science learning experiences for children around the world. For $35, the device features USB ports for a keyboard and mouse and an HDMI port to hook up to a monitor. The Raspberry Pi Foundation officially launched the device in February 2012. By September, more than half a million had been sold, and thousands were being manufactured each day, making computing accessible to everyone.
But even more interesting: when the Raspberry Pi went on sale, hackers and experimenters ordered them by the handful to create special-purpose applications. They dedicated a whole low-cost computer to a single task and moved the computing function to the edge of the network, shifting how we solve the computing problem. So once again we have a Moore's Law phenomenon. As computers get smaller, more energy efficient, and less expensive, we rethink where we put the computing in the network and whether it is centralized or at the edge. Moore's Law enables this natural progression, allowing us to recentralize through the web and distribute through the cloud.
The Nest Thermostat is a great example of this. Through a combination of sensors, algorithms, machine learning, and cloud computing, Nest learns behaviors and preferences and begins to adjust the temperature up or down. It can be controlled from your laptop, smartphone, or tablet, and it starts to recognize your preferences, automatically adjusting faster and faster and becoming more and more efficient. You have an entire computer (the thermostat) on the wall, a classic convergence of more and more things being connected.
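A minimal sketch can show the flavor of the learning loop described above: keep an exponentially weighted setpoint per hour of day and blend in each manual adjustment. This is an illustrative model, not Nest's actual algorithm; the class, its parameters, and the learning rate are all assumptions made for this example:

```python
# Illustrative model of a learning thermostat: an exponentially weighted
# setpoint per hour of the day, updated whenever the user adjusts the dial.
# This is a sketch for explanation only, not Nest's real algorithm.

class LearningThermostat:
    def __init__(self, default_temp=20.0, learning_rate=0.3):
        self.learning_rate = learning_rate
        # One learned setpoint per hour of the day, starting at a default.
        self.setpoints = [default_temp] * 24

    def record_adjustment(self, hour, chosen_temp):
        """Blend a manual adjustment into the learned schedule."""
        old = self.setpoints[hour]
        self.setpoints[hour] = old + self.learning_rate * (chosen_temp - old)

    def target(self, hour):
        """Temperature the thermostat would set at this hour."""
        return self.setpoints[hour]

t = LearningThermostat()
for _ in range(10):              # the user turns it up to 22 degrees every evening
    t.record_adjustment(19, 22.0)
print(round(t.target(19), 1))    # the learned 7 p.m. setpoint converges toward 22.0
```

The more consistently you adjust at a given hour, the closer the learned setpoint gets to your preference, which matches the "faster and faster, more and more efficient" behavior described above.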
This, in turn, changes what’s happening in the data center and the cloud, because having more entry points enables us to connect more things. Sensor technology is also being affected, becoming smaller and less expensive. Texas Instruments now makes a chip that runs an IPv6 stack for connectivity, has built-in wireless, and only costs ninety-nine cents. Moore’s Law has led to a low-powered, low-cost chip, giving us yet another opportunity to rethink and innovate the use of computing.
With these growing ubiquitous opportunities, we can connect more and learn more. As more devices are added to the network, the power and potential of what they make possible will continue to grow exponentially. Anything you can measure will be measured. Anything you can sense will be sensed. The economics now make the case: nearly anything can be measured at almost no cost. This shift will help connect the 99 percent of things in the world that are still unconnected, creating real value for the IoE.
How will the amazing possibilities enabled by the IoE affect you? I’d love to know your thoughts. Send me a tweet @JimGrubb.
“Everywhere we go in the world, the things that we come across aren’t intelligent. Like this wall that I’m looking at, it’s just separating the room from the other side. In actuality, that wall should be intelligent.”
He goes on to say, “The next 10 years [will be] nuts.” I couldn’t agree more.
Cisco defines IoE as bringing together people, process, data, and things to make networked connections more relevant and valuable than ever before—turning information into actions that create new capabilities, richer experiences, and unprecedented economic opportunity for businesses, individuals, and countries.
To help more people “get it,” I thought it would be useful to provide more detail about each of the components—people, process, data, and things—that make up IoE.
Data is one of your most important assets. Fast Data implies real-time analytics: the ability to analyze your business and marketplace in real time. Fast Data can give you a significant competitive advantage by helping you predict and respond to situations instantaneously. Today we announced that the combination of Cisco UCS and the Actian Vectorwise analytical database can provide this capability at a much lower cost of ownership than traditional solutions. As demonstrated by industry-standard TPC-H publications, the extreme performance comes from Vectorwise's ability to scale extremely well in multi-core environments and from the effective caching of large data sets enabled by Cisco UCS extended memory technology, which eliminates I/O bottlenecks.
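To make "real-time analytics" concrete, here is a toy aggregation of the kind a TPC-H workload exercises, loosely modeled on its pricing-summary report. SQLite stands in here for an analytical engine such as Vectorwise, purely for illustration; the schema and data are invented for this sketch:

```python
# Toy analytic aggregation, loosely modeled on the TPC-H pricing-summary
# report (Q1). sqlite3 is used only so the example is self-contained; a
# real Fast Data deployment would run this on an analytical database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE lineitem (
    status TEXT, quantity REAL, price REAL, discount REAL)""")
conn.executemany(
    "INSERT INTO lineitem VALUES (?,?,?,?)",
    [("F", 10, 100.0, 0.05),   # invented sample rows
     ("F", 20, 250.0, 0.10),
     ("O",  5,  75.0, 0.00)])

# Summarize quantities and discounted revenue per order status.
cur = conn.execute("""
    SELECT status,
           SUM(quantity)               AS sum_qty,
           SUM(price * (1 - discount)) AS revenue,
           COUNT(*)                    AS order_count
    FROM lineitem
    GROUP BY status
    ORDER BY status""")
for row in cur:
    print(row)
```

Scans and group-bys like this over billions of rows are exactly where multi-core scaling and large in-memory caches pay off, which is what the benchmark results above measure.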
These two benchmark results achieved the best performance and price-performance for 2-socket servers at the 100 GB and 300 GB scale factors, new additions to the 60+ world-record performance results on Cisco UCS since its introduction just three years ago.