In my Internet of Things keynote at LinuxCon 2014 in Chicago last week, I touched upon a new trend: the rise of a new kind of utility or service model, the so-called IoT-specific service provider model, or IoT SP for short.
I recently had a conversation with a team of physicists at the Large Hadron Collider at CERN. I told them they would be surprised to hear how computer scientists talk these days about Data Gravity. Programmers are notorious for overloading common words, adding connotations galore and messing with meanings entrenched in our natural language.
We all laughed and then the conversation grew deeper:
Big data is very difficult to move around; it takes energy, time and bandwidth, and is therefore expensive. And it is growing exponentially larger at the outer edge, with tens of billions of devices producing it at an ever faster rate, from an ever increasing set of places on our planet and beyond.
As a consequence of the laws of physics, we know we have an impedance mismatch between the core and the edge. I coined this the Moore-Nielsen paradigm (described in my talk as well): data gets accumulated at the edges faster than the network can push it into the core.
Therefore, big data accumulated at the edge will attract applications (little data, or procedural code); apps will move to the data, not the other way around, behaving as if data has “gravity”.
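To make the asymmetry concrete, here is a rough back-of-envelope sketch in Python. The dataset size, application size and uplink capacity are purely illustrative assumptions, not measured figures; the point is only the orders-of-magnitude gap between shipping the data to the core and shipping the code to the edge.

```python
# Back-of-envelope comparison: move the data to the app, or the app to the data?
# All figures below are illustrative assumptions, not measured values.

DATA_SIZE_TB = 500             # data accumulated at an edge site (assumed)
APP_SIZE_MB = 200              # size of the application/code to ship (assumed)
UPLINK_GBPS = 1                # edge-to-core uplink capacity (assumed)

def transfer_seconds(size_bytes, gbps):
    """Seconds needed to push a payload over a link of the given capacity."""
    return size_bytes * 8 / (gbps * 1e9)

data_bytes = DATA_SIZE_TB * 1e12
app_bytes = APP_SIZE_MB * 1e6

print(f"Moving the data to the core: {transfer_seconds(data_bytes, UPLINK_GBPS) / 3600:,.0f} hours")
print(f"Moving the app to the edge:  {transfer_seconds(app_bytes, UPLINK_GBPS):.1f} seconds")
```

With these assumed numbers the data takes over a thousand hours to move while the application takes seconds, which is the intuition behind "apps come to the data."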
Therefore, the notion of a very large centralized cloud that would control the massive rise of data spewing from tens of billions of connected devices is pitched against the laws of physics and against Open Source, not to mention the thirst for freedom (no vendor lock-in) and privacy (no data lock-in). The paradigm has shifted, and we have entered the third big wave (after the mainframe's decentralization to client-server, which in turn centralized to cloud): the move to a highly decentralized compute model, where intelligence shifts to the edge as apps come to the data, at much larger scale, machine to machine, with little or no human interface or intervention.
The age-old dilemma pops up again: do we go vertical (domain specific) or horizontal (application development or management platform)? The answer has to be based on necessity, not fashion; we have to do this well, so vertical domain knowledge is overriding. With the declining cost of computing, we finally have the technology to move to a much more scalable and empowering model: the new opportunity in our industry, the mega trend.
Very reminiscent of the early '90s and the beginning of the ISP era, isn't it? This time it is much more vertical, with deep domain knowledge: connected energy, connected manufacturing, connected cities, connected cars, connected home, safety and security. These innovation hubs all share something in common: an open and interconnected model, made easy by dramatically lower compute cost and the ubiquity of open source, to overcome all barriers to adoption, including the previously weak security and privacy models predicated on a central core. We can divide and conquer, and deal with data in motion differently than we deal with data at rest.
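As an illustration of that last distinction, the toy Python sketch below contrasts a one-shot computation over data at rest with an incrementally maintained aggregate over data in motion. The readings and window size are assumptions invented for the example.

```python
# Illustrative contrast (assumed scenario): a running aggregate over "data in
# motion" versus a one-shot query over "data at rest".

from collections import deque

class RollingAverage:
    """Incrementally maintained average over the last n readings (data in motion)."""
    def __init__(self, n):
        self.window = deque(maxlen=n)

    def update(self, value):
        self.window.append(value)
        return sum(self.window) / len(self.window)

# Data at rest: the whole history is available up front.
history = [21.0, 21.4, 22.1, 23.5, 22.9]
batch_average = sum(history) / len(history)

# Data in motion: each reading is processed as it arrives at the edge.
stream = RollingAverage(n=3)
for reading in history:
    current = stream.update(reading)

print(batch_average, current)
```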
The so-called “wheel of computer science” has completed one revolution, just as its socio-economic observation predicted, the next generation has arrived, ready to help evolve or replace its aging predecessor. Which one, or which vertical will it be first…?
The worlds of Digital Analytics and Marketing Analytics have frequently led somewhat independent lives, with the Digital Analyst spending time looking at digital channels (web/mobile/social), reading out metrics, understanding conversion rates, and focusing on conversion funnels, A/B and multivariate testing and the like, while the Marketing Analyst was more concerned with survey analysis, developing "what-if" simulators for product features, and measuring ROI from campaigns.
There has been an inevitability to the growth in popularity of the digital medium as more and more content is consumed through digital channels -- and quite naturally, the marketing and advertising dollars have followed suit. This graphic from the IAB captures this rapid growth:
We don’t invoke the term innovation lightly at Cisco. As Frank Palumbo recently talked about, change is the only constant, and our data center customers need to stay in front of that change. What we’re hearing from them often centers on three critical concepts:
1. We need a common operating environment that spans from the data center to the very edge. “Edge” in this sense is used to describe the many worlds that exist beyond the walls of the data center, where the demand for computing power is inexorably growing. For service providers that can mean IT infrastructure located at the customer premises. For large enterprise and public sector IT teams, the Edge is found in the branch offices, retail locations and remote sites where innovation is exploding with dynamic customer experiences and new ways of doing business. It's at the wind farm and the end of the drill bit miles below the oil rig. It's in the “fog” of connected sensors and smart objects in connected cities. And it's in the handheld devices that billions of people are using today to consume and generate unprecedented volumes of data and insight, and in the 50 billion people and things that Cisco estimates will be connected by 2020.
2. We need a stronger engine to accelerate core applications and power data-intensive analytics. (AKA, “you’re going to need a bigger boat”) The imperative for faster and better decisions has never been greater, and the tools to extract the signal from the noise in the data deluge require big horsepower. Recommendation engines, real-time price optimization, personalized location-based offers, improved fraud detection… the list goes on in terms of opportunity created by Big Data and the IoE. All while IT continues to deliver the core applications that keep business running -- uninterrupted and faster than before.
3. We need a common operating environment that spans traditional and emerging applications. Complexity is the bane of innovation and the bane of IT. In addition to the familiar workloads, which are well understood in terms of bare-metal scalability and virtual encapsulation, there is growing use of applications architected for massive horizontal scale. In-memory, scale-up analytics are being utilized right alongside cloud-scale technologies like MapReduce to tackle different elements of business problems in different ways, as in the sketch below. Very different architectures, with very different demands on computing infrastructure. The conditions for complexity loom. Will a hero emerge?
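For readers who want that contrast made concrete, here is a minimal, illustrative Python sketch of the two styles: a single-process, in-memory aggregation versus the same computation expressed as map and reduce phases that could be spread across many machines. It is a teaching toy, not a representation of any particular product.

```python
# Minimal sketch (illustrative only) of the two styles mentioned above:
# an in-memory, scale-up aggregation versus a MapReduce-style formulation
# that can be spread over many machines.

from collections import Counter
from functools import reduce

docs = ["edge data keeps growing", "data moves to the edge", "apps follow the data"]

# Scale-up: one process holds everything in memory.
in_memory_counts = Counter(word for doc in docs for word in doc.split())

# Scale-out: map each document independently, then merge the partial results.
def map_phase(doc):
    return Counter(doc.split())

def reduce_phase(a, b):
    return a + b

scale_out_counts = reduce(reduce_phase, map(map_phase, docs))

assert in_memory_counts == scale_out_counts
```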
When UCS was born it shook up many of the fundamental assumptions of what data center infrastructure should be expected to do and what IT could do to accelerate business. With this launch, history repeats itself, as we work to help customers future-proof the data center for change tomorrow and transformation today. Our development team has taken the next stride in the journey of re-inventing computing at the most fundamental levels, to power applications at every scale.
I hope you will join us for the event on 9/4 to see how we’re taking our strategy forward in the data center. We have a bit of a baseball theme in the launch since we’re delighted to be joined by Major League Baseball’s Joe Inzerillo at our event in New York. So follow the conversation as it unfolds over the coming weeks with #UCSGrandSlam and #CiscoUCS. The bases are loaded.
I recently returned from my seventh annual Boulder BI Brain Trust presentation. The BBBT, as everyone likes to call it, is unique in the business intelligence, data and analytics industry.
Since 2006, the BBBT has advanced this industry by organizing half-day vendor presentations for its more than 140 members. During these presentations, vendors such as Cisco’s Data and Analytics organization update BBBT members on new strategies, evolving technologies, customer adoption and more. In return, the vendors get valuable feedback from the BBBT’s global network of analysts, consultants and academics.
Cisco’s Expanded Data and Analytics Portfolio
Mike Flannagan, General Manager of Cisco’s Data and Analytics Business Group, led off this year by identifying four key trends creating new business opportunities for our customers, as well as disrupting their traditional data management approaches.
Increased speed of business and rising customer expectations
Data is the new competitive battlefield
Data is increasingly distributed
Data volumes at the edge are extreme
Mike then discussed how Cisco’s data and analytics portfolio has come together over the past year to comprehensively address these trends. These solutions include:
Cisco Prime Analytics, the former Truviso products.
Cisco Data In Motion, from the TigerMe acquisition.
Cisco Connected Analytics, a set of packaged analytics applications targeted for specific market segments including retail, healthcare, service provider, city infrastructure, call center, and more.
Billions of Devices Generating Even Bigger Data
Following Mike, Jim Green, CTO for Mike’s group, discussed the data and analytics implications that will result as 30 billion additional devices connect over the network within the next five years.
The business outcome and analytics opportunities from these devices are endless. However, the data volumes generated will make even today’s big data seem small. And how all of this comes together in an already complex data landscape is an Internet of Everything challenge everyone will soon face.
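A quick, purely hypothetical piece of arithmetic shows why: even a very modest data rate per device adds up at this scale. The per-device figure below is an assumption chosen only for illustration.

```python
# Rough arithmetic (illustrative assumptions) on what 30 billion additional
# connected devices could mean for data volume.

DEVICES = 30e9                  # additional connected devices (from the post)
BYTES_PER_DEVICE_PER_DAY = 1e6  # assumed: a modest 1 MB per device per day

daily_bytes = DEVICES * BYTES_PER_DEVICE_PER_DAY
print(f"~{daily_bytes / 1e15:.0f} PB per day, ~{daily_bytes * 365 / 1e18:.0f} EB per year")
```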
Data Virtualization Advances
Kevin Ott, General Manager of the Data Virtualization Business Unit, and I closed out this year’s BBBT with updates on data virtualization market dynamics, customer adoption trends and our product strategy for maintaining product leadership in this increasingly important foundation technology. Join us at Data Virtualization Day on October 1, 2014, in New York City where Cisco, our customers and prominent analysts will share more on these topics. Sign up soon as space is limited. #DVDNYC
Gain a BBBT Insider’s View
Check out these three sources to gain an insider’s view on Cisco’s BBBT presentation:
Listen to Mike Flannagan and Jim Green’s podcast with BBBT co-founder Claudia Imhoff.
Read acknowledged data warehousing pioneer and BBBT member Barry Devlin’s blog.
Review over 100 tweets from BBBT members by filtering on #BBBT.
To learn more about Cisco Data Virtualization, check out our page.
While certainly exciting, buying a new house can also serve as a revealing exercise in understanding data science.
A couple of weeks ago I went to my bank to investigate my financial options for buying a new house. To my surprise, my account manager gave me a stack of paperwork to fill out—and I soon realized that my bank was already in possession of 90 percent of the information I was being asked to provide. So why was I having to take the time to fill in information the bank already had, or could easily acquire? And more importantly, why couldn’t my account manager quickly access information about my client status and my personal preferences, and immediately provide a tailored offering, decreasing the chance that I would look elsewhere for this service?
Figure 1. Centralized, Decentralized, and Distributed Networks. A distributed, virtualized approach to database management enables quick combination and analysis of large volumes of data—where and when it is needed.
Source: Paul Baran, Rand Corporation.
I wrote in one of my recent blogs about the issues and solutions related to quickly combining data that comes in large volumes, focusing on data virtualization and cloud. This can enable seamless customer interactions and decrease client churn, be it in financial services or in the telecom sector. But what is required at an organizational level so that people, process, data, and things come together to enable a superior customer experience and create entirely new revenue possibilities?
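For readers new to the idea, here is a toy Python sketch of what data virtualization does conceptually: it answers a question by combining two live sources at query time rather than copying everything into a single store first. The source names, fields and figures are hypothetical, invented only for illustration.

```python
# Toy illustration (hypothetical data and field names) of the data
# virtualization idea: answer a question by joining two live sources at
# query time instead of first copying everything into one store.

crm_source = {                      # e.g., an account profile system
    "C123": {"name": "A. Customer", "segment": "retail banking"},
}
transactions_source = [             # e.g., core banking transactions
    {"customer_id": "C123", "amount": 1200.0},
    {"customer_id": "C123", "amount": -300.0},
]

def customer_view(customer_id):
    """Federated 'view' that combines both sources on demand."""
    profile = crm_source[customer_id]
    balance = sum(t["amount"] for t in transactions_source
                  if t["customer_id"] == customer_id)
    return {**profile, "balance": balance}

print(customer_view("C123"))
```

In the mortgage scenario above, a view like this is what would let an account manager see client status and preferences in one place instead of handing the customer a stack of paperwork.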