Cisco Blogs: Data Center and Cloud

Why ISVs Must Transform in the SMAC Environment

In today’s era of SMAC (Social, Mobile, Analytics, and Cloud) solutions, pay-per-use licensing, and the DevOps software development methodology, Independent Software Vendors (ISVs) are facing major challenges on many fronts. ISVs strive to differentiate themselves from their competitors and win new customers, as well as retain existing customers and generate additional revenue from them. This shift is happening throughout the software developer market and has surfaced both technological and business changes for ISVs.

Read More »


Next Generation Applications and Data Analytics

I was speaking with a customer today at VMworld, and unlike many discussions, which focus on the infrastructure (servers, storage, networking), this one turned primarily on the application. He was describing to me his need to match servers to a new set of applications he is being asked to support, and then to decide what to do with all the data being generated. With much of the conversation at the show focusing on virtualization of resources, he made the point that consideration of the architecture itself, how servers, storage, and networking are leveraged, is still critical to mapping the requirements of the application back to the infrastructure that application lives on.

This is a trend we’re seeing more and more. A new breed of applications, along with the increasing density of data, is driving a new way of thinking about the underlying infrastructure. Often these applications are developed internally, leveraging many of the toolkits available on the market today, and delivered through a private or public cloud. These applications can be run from Read More »


Paradigm Shift with Edge Intelligence

In my Internet of Things keynote at LinuxCon 2014 in Chicago last week, I touched on a new trend: the rise of a new kind of utility or service model, the so-called IoT-specific service provider model, or IoT SP for short.

I had a recent conversation with a team of physicists at the Large Hadron Collider at CERN. I told them they would be surprised to hear how computer scientists talk these days about Data Gravity. Programmers are notorious for overloading common words, adding connotations galore, messing with meanings entrenched in our natural language.

We all laughed and then the conversation grew deeper:

  • Big data is very difficult to move around: it takes energy, time, and bandwidth, and is therefore expensive. And it is growing exponentially at the outer edge, with tens of billions of devices producing it at an ever-faster rate, from an ever-increasing set of places on our planet and beyond.
  • As a consequence of the laws of physics, we know we have an impedance mismatch between the core and the edge. I coined this the Moore-Nielsen paradigm (described in my talk as well): data accumulates at the edges faster than the network can push it into the core.
  • Therefore, big data accumulated at the edge will attract applications (little data, or procedural code), so apps will move to the data, not the other way around, behaving as if data has “gravity”; a rough back-of-the-envelope sketch of the mismatch follows below.
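
To make the edge-versus-core mismatch concrete, here is a rough back-of-the-envelope sketch in Python. The device count, per-device data rate, and uplink bandwidth are purely hypothetical numbers chosen for illustration, not figures from the keynote.

```python
# Rough illustration of the edge-vs-core impedance mismatch:
# how fast data accumulates at an edge site versus how fast a
# WAN uplink could ship it to a central cloud.
# All numbers are hypothetical and chosen only for illustration.

devices = 10_000                 # sensors at one edge site (assumed)
rate_per_device_mbps = 0.5       # average output per device (assumed)
uplink_mbps = 1_000              # WAN uplink from site to core (assumed)

produced_mbps = devices * rate_per_device_mbps
backlog_mbps = produced_mbps - uplink_mbps

print(f"Produced at the edge: {produced_mbps:,.0f} Mbps")
print(f"Uplink to the core:   {uplink_mbps:,.0f} Mbps")

if backlog_mbps > 0:
    # Data piles up locally; per the data-gravity argument it is
    # cheaper to move the application to the data than vice versa.
    tb_per_day = backlog_mbps / 8 / 1e6 * 86_400   # Mbps -> TB/day
    print(f"Backlog growing at ~{tb_per_day:.1f} TB/day at the edge")
else:
    print("The uplink keeps pace; the data could be centralized")
```

With these made-up numbers the site generates five times what the uplink can carry, which is the sense in which applications get pulled toward the data rather than the data toward a central cloud.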

Therefore, the notion of a very large centralized cloud that would control the massive rise of data spewing from tens of billions of connected devices runs up against the laws of physics and against Open Source, not to mention the thirst for freedom (no vendor lock-in) and privacy (no data lock-in). The paradigm has shifted, and we have entered the third big wave (after the mainframe’s decentralization to client-server, which in turn centralized to cloud): the move to a highly decentralized compute model, where intelligence shifts to the edge as apps come to the data, at much larger scale, machine to machine, with little or no human interface or intervention.

The age-old dilemma, whether to go vertical (domain-specific) or horizontal (application development or management platform), pops up again. The answer has to be based on necessity, not fashion; we have to do this well, so vertical domain knowledge is overriding. With the declining cost of computing, we finally have the technology to move to a much more scalable and empowering model: the new opportunity in our industry, the mega trend.

Very reminiscent of the early ’90s and the beginning of the ISP era, isn’t it? This time it is much more vertical, with deep domain knowledge: connected energy, connected manufacturing, connected cities, connected cars, connected home, safety and security. These innovation hubs all share something in common: an Open and Interconnected model, made easy by dramatically lower compute costs and the ubiquity of open source, to overcome all barriers to adoption, including the previously weak security and privacy models predicated on a central core. We can divide and conquer, dealing with data in motion differently than we deal with data at rest.

The so-called “wheel of computer science” has completed one revolution, just as its socio-economic observation predicted; the next generation has arrived, ready to help evolve or replace its aging predecessor. Which one, or which vertical, will be first…?


Marketing Analytics Adding Heft to Digital Analytics

The worlds of Digital Analytics and Marketing Analytics have frequently led somewhat independent lives, with the Digital Analyst spending time looking at digital channels (web/mobile/social), reading out metrics, understanding conversion rates, and focusing on conversion funnels, A/B and multivariate testing, and the like, while the Marketing Analyst was more concerned with survey analysis, developing “what-if” simulators for product features, and measuring ROI from campaigns.
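
As a concrete illustration of the digital-analyst side of that split, here is a minimal sketch of a conversion-rate read-out and a simple A/B comparison. The visitor and conversion counts are made up for illustration, and the two-proportion z-test shown is just one common way such tests are evaluated.

```python
# Minimal sketch: conversion rates and a two-proportion z-test
# for an A/B test. All counts below are hypothetical.
from math import sqrt, erf

def conversion_rate(conversions, visitors):
    return conversions / visitors

def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Compare variant B against variant A with a two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical funnel numbers for two landing-page variants
z, p = ab_z_test(conv_a=120, n_a=10_000, conv_b=155, n_b=10_000)
print(f"Variant A conversion rate: {conversion_rate(120, 10_000):.2%}")
print(f"Variant B conversion rate: {conversion_rate(155, 10_000):.2%}")
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```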

There has been an inevitability to the growth in popularity of the digital medium as more and more content is consumed through digital channels, and quite naturally, the marketing and advertising dollars have followed suit. This graphic from the IAB captures this rapid growth: Read More »


The Bases are Loaded for a Cisco UCS Grand Slam

“The innovation pipeline is very strong, and you can expect to see announcements in the fall that will continue to accelerate our momentum with UCS and add to our competitive advantage.”

Those are comments from Cisco’s earnings call last week, and on September 4th I hope you will join us for the unveiling of the next wave of Unified Computing that John Chambers was speaking of.

We don’t invoke the term innovation lightly at Cisco. As Frank Palumbo recently discussed, change is the only constant, and our data center customers need to stay in front of that change. What we’re hearing from them often centers on three critical concepts:

 1. We need a common operating environment that spans from the data center to the very edge. “Edge” in this sense describes the many worlds that exist beyond the walls of the data center, where the demand for computing power is inexorably growing. For service providers, that can mean IT infrastructure located at the customer premises. For large enterprise and public sector IT teams, the Edge is found in the branch offices, retail locations, and remote sites where innovation is exploding with dynamic customer experiences and new ways of doing business. It’s at the wind farm and at the end of the drill bit miles below the oil rig. It’s in the “fog” of connected sensors and smart objects in connected cities. And it is the handheld devices that billions of people are using today to consume and generate unprecedented volumes of data and insight, and the 50 billion people and things that Cisco estimates will be connected by 2020.

 2. We need a stronger engine to accelerate core applications and power data-intensive analytics (AKA, “you’re going to need a bigger boat”). The imperative for faster and better decisions has never been greater, and the tools to extract the signal from the noise in the data deluge require big horsepower. Recommendation engines, real-time price optimization, personalized location-based offers, improved fraud detection… the list goes on in terms of the opportunity created by Big Data and the IoE. All while IT continues to deliver the core applications that keep the business running, uninterrupted and faster than before.

 3. We need a common operating environment that spans traditional and emerging applications. Complexity is the bane of innovation and the bane of IT. In addition to the familiar workloads, which are well understood in terms of bare-metal scalability and virtual encapsulation, there is growing use of applications architected for massive horizontal scale. In-memory, scale-up analytics are being used right alongside cloud-scale technologies like MapReduce to tackle different elements of business problems in different ways (a toy illustration of the contrast follows below). These are very different architectures, with very different demands on computing infrastructure. The conditions for complexity loom. Will a hero emerge?
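
To make the architectural contrast in point 3 concrete, here is a toy sketch of the same aggregation done two ways: as a straightforward in-memory computation, and in the map/shuffle/reduce style that scales out horizontally. It is purely illustrative and not tied to any particular UCS, Hadoop, or in-memory product.

```python
# Toy contrast: in-memory aggregation vs. the same job expressed
# in map/shuffle/reduce style. Data and keys are made up.
from collections import defaultdict

events = [("web", 3), ("mobile", 5), ("web", 2), ("store", 7), ("mobile", 1)]

# 1) Scale-up / in-memory: everything fits on one large machine.
in_memory = defaultdict(int)
for channel, amount in events:
    in_memory[channel] += amount

# 2) Scale-out / MapReduce style: map, shuffle by key, then reduce.
def map_phase(record):
    channel, amount = record
    yield channel, amount            # emit (key, value) pairs

def reduce_phase(key, values):
    return key, sum(values)          # combine all values for one key

shuffled = defaultdict(list)
for record in events:                # the map phase would run on many nodes
    for key, value in map_phase(record):
        shuffled[key].append(value)  # the shuffle groups values by key

scale_out = dict(reduce_phase(k, vs) for k, vs in shuffled.items())

print(dict(in_memory))   # {'web': 5, 'mobile': 6, 'store': 7}
print(scale_out)         # same answer, very different execution model
```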

When UCS was born, it shook up many of the fundamental assumptions about what data center infrastructure should be expected to do and what IT could do to accelerate business. With this launch, history repeats itself, as we work to help customers future-proof the data center for change tomorrow and transformation today. Our development team has taken the next stride in the journey of re-inventing computing at the most fundamental levels, to power applications at every scale.


I hope you will join us for the event on 9/4 to see how we’re taking our strategy forward in the data center. We have a bit of a baseball theme in the launch since we’re delighted to be joined by Major League Baseball’s Joe Inzerillo at our event in New York. So follow the conversation as it unfolds over the coming weeks with #UCSGrandSlam and #CiscoUCS. The bases are loaded.
