
#EngineersUnplugged S5|Ep4: Big Data

March 26, 2014 at 10:56 am PST

In this week’s episode of Engineers Unplugged, Floris Grandvarlet (Cisco) and Richard Pilling (Intel) take on Big Data across the proverbial pond, at Cisco Live Milan. Where are we now, and how do we fish for information in the ever-growing ocean of data? This is a great overview of the challenges and of how the approaches are evolving.

Let’s watch and see what they propose to address the challenges:

It’s our very first seahorse--outsmarted once more.

**The next Engineers Unplugged shoot is at EMC World, Las Vegas, May 2014! Contact me now to become internet famous.**


The Three Mega Trends in Cloud and IoT

A consequence of the Moore-Nielsen prediction is the phenomenon known as Data Gravity: big data is hard to move around, so it is much easier for smaller applications to come to it. Consider this: it took mankind over 2,000 years, up to 2012, to produce 2 exabytes (2×10^18 bytes) of data; now we produce that much in a single day, and the rate will only go up from here. With data production far exceeding the capacity of the Network, particularly at the Edge, there is only one way to cope, which I call the three mega trends in networking and (big) data in Cloud computing scaled to IoT, or, as some say, Fog computing:

  1. Dramatic growth in applications specialized and optimized for analytics at the Edge: Big Data is hard to move around (data gravity), and we cannot move it to the analytics fast enough, so we need to move the analytics to the data. This will drive dramatic growth in applications specialized and optimized for analytics at the edge. Yes, our devices have gotten smarter; yes, P2P traffic has become the largest portion of Internet traffic; and yes, M2M has arrived as the Internet of Things. There is no way to make progress except by making the devices smarter, safer and, of course, better connected.
  2. Dramatic growth in the computational complexity needed to ETL (extract-transform-load) essential data from the Edge to be data-warehoused at the Core: Most open standards and open source efforts today are buying us time, squeezing as much information as possible, in as little time as possible, through limited connection paths to billions of devices; soon enough we will realize there is a much more pragmatic approach to all of this. A jet engine produces more than 20 Terabytes of data for an hour of flight, yet we already have the computational capability to boil that down to routing and maintenance decisions for such complex machines (a rough sketch after this list works through the arithmetic). Imagine the consequences of ignoring such capability, which can already be made available at rather trivial cost.
  3. The drive to instrument the data we create to be “open” rather than “closed”, with its associated ownership and security concerns addressed: Open Data challenges have already surfaced, and there comes a time when we realize that an Open Data interface, with guarantees about its availability and privacy, needs to be defined and enforced. This is what drives the essential tie today between Public, Private and Hybrid cloud adoption (nearly one third each); with the ever-growing amount of data at the Edge, the questions of who “owns” it and how access to it is “controlled” become ever more relevant and important. At the end of the day, the producer/owner of the data must be in charge of its destiny, not some gatekeeper or web farm. This should be no different from the rules that govern open source or open standards.
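
Trends 1 and 2 lend themselves to a quick back-of-the-envelope sketch. The minimal Python below works through the jet-engine arithmetic and shows the idea of reducing a raw sensor stream to a compact summary at the edge. The 20 TB-per-flight-hour figure comes from the post; the 1 Gbit/s uplink, the sensor field names, and the maintenance threshold are assumptions made purely for illustration.

```python
# Back-of-the-envelope illustration of data gravity and edge reduction.
# The 20 TB per flight-hour figure comes from the post; the 1 Gbit/s uplink,
# the sensor fields, and the alert threshold are assumptions for illustration.

from statistics import mean

TB = 10**12                      # bytes in a terabyte (decimal)
LINK_BYTES_PER_SEC = 10**9 / 8   # assumed 1 Gbit/s uplink, in bytes per second

raw_bytes = 20 * TB              # one engine, one hour of flight
hours_to_ship = raw_bytes / (LINK_BYTES_PER_SEC * 3600)
print(f"Shipping one flight-hour of raw data over 1 Gbit/s takes ~{hours_to_ship:.0f} hours")

# Moving the analytics to the data instead: reduce the raw stream at the edge
# to a small record that is cheap to send to the core for warehousing.
def summarize(window):
    """Collapse a window of raw readings into a compact, shippable summary."""
    temps = [r["temp_c"] for r in window]       # hypothetical field names
    vibs = [r["vibration"] for r in window]
    return {
        "samples": len(window),
        "temp_mean": round(mean(temps), 2),
        "temp_max": max(temps),
        "vibration_max": max(vibs),
        "maintenance_alert": max(vibs) > 0.8,   # assumed threshold
    }

# A synthetic window of raw sensor readings standing in for the real stream.
window = [{"temp_c": 640 + i % 5, "vibration": 0.01 * (i % 90)} for i in range(1000)]
print(summarize(window))
```

The point is not the specific numbers: even with a generous link, it is the compact summary, not the raw stream, that can realistically travel from the Edge to the Core.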

Last week I addressed these topics at the IEEE Cloud event at Boston University with wonderful colleagues from BU, Cambridge, Carnegie Mellon, MIT, Stanford and other institutions, plus, of course, industry colleagues from today’s popular commercial web farms. I was pleasantly surprised to see not just that the first two trends are already top-of-mind, but that the third one has emerged and is actually recognized. We have just started to sense the importance of this third wave, with huge implications for Cloud computing. My thanks to Azer Bestavros and Orran Krieger (Boston University), Mahadev Satyanarayanan (Carnegie Mellon University) and Michael Stonebraker (MIT) for their outstanding drive and leadership in addressing these challenges. I found Project Olive intriguing. We are happy to co-sponsor the BU Public Cloud Project, and, most importantly, having just wrapped up EclipseCon 2014 this week, very happy to see we are already walking the talk with Project Krikkit in Eclipse M2M. I made a personal prediction last week: just as most of the Cloud turned out to be Open Source, IoT software will all be Open Source. Eventually. The hard part is the Data, or should I say, Data Gravity…

Open Source is just the other side, the wild side!

March is a rather event-laden month for Open Source and Open Standards in networking: the 89th IETF, EclipseCon 2014, RSA 2014, the Open Networking Summit, the IEEE International Conference on Cloud (where I’ll be talking about the role of Open Source as we morph the Cloud down to Fog computing) and my favorite, the one and only Open Source Think Tank, where this year we dive into the not-so-small world (there is plenty of room at the bottom!) of machine-to-machine (M2M) and Open Source, which some call the Internet of Everything.

There is a lot more to March Madness, of course; in the case of Open Source, it is a good time to celebrate the first anniversary of “Meet Me on the Equinox”, the fleeting moment when daylight conquers the night, and the day that Project Daylight became OpenDaylight. As I reflect on how quickly it started and grew from the hearts and minds of folks more interested in writing code than in talking about standards, I think about how much the Network, previously dominated, as it should be, by Open Standards, is now beginning to run with Open Source, as it should. We captured that dialog with our partners and friends at the Linux Foundation in this webcast, which I hope you’ll enjoy. I hope you’ll join us this month at one of these neat places.

As Open Source has become dominant in just about everything (Virtualization, Cloud, Mobility, Security, Social Networking, Big Data, the Internet of Things, the Internet of Everything, you name it), we get asked how to get the balance right. How does one work with the rigidity of Open Standards and the fluidity of Open Source, particularly in the Network? There is only one answer: think of it as the Yang of Open Standards and the Yin of Open Source. They need each other; they cannot function without each other, particularly in the Network. Open Source is just the other side, the wild side!

RSA 2014 Live Broadcast – Recap

Last week at RSA 2014, Chris Young and I joined a Live Social Broadcast from the Cisco Booth to discuss our announcements of Open Source Application Detection and Control and of Advanced Malware Protection, and to answer questions from you, our partners and customers, about the trends, challenges, and opportunities we’ve seen in the security industry this year.

Below is a link to view the recording of the broadcast. If you have any questions that didn’t get answered, please leave them in the comments, and Chris or I will get back to you.

http://newsroom.cisco.com/feature-content?type=webcontent&articleId=1346930

Cisco Announces OpenAppID – the Next Open Source ‘Game Changer’ in Cybersecurity

One of the big lessons I learned during the early days, when I was first creating Snort®, was that the open source model was an incredibly strong way to build great software and attack difficult problems in a way that the user community rallied around. I still see this as one of the chief strengths of the open source development model and why it will be with us for the foreseeable future.

As most every security professional knows, cloud applications are one of the most prevalent attack vectors exploited by hackers and some of the most challenging to protect. There are more than 1,000 new cloud-delivered applications per year, and IT is dependent on vendors to create new visibility and threat detection tools and keep up with the accelerating pace of change. The problem is that vendors can’t always move fast enough and IT can’t afford to wait. Countless custom applications pile on even more complexity.

So today, Cisco is announcing OpenAppID, an open, application-focused detection language and processing module for Snort that enables users to create, share, and implement application detection. OpenAppID puts control in the hands of users, allowing them to control application usage in their network environments and eliminating the risk that comes with waiting for vendors to issue updates. Practically speaking, we’re making it possible for people to build their own open source Next-Generation Firewalls.
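
To give a feel for what application-focused detection means in practice, here is a minimal, illustrative sketch. It is deliberately not OpenAppID syntax (real detectors are written in OpenAppID’s own detector language and loaded by Snort’s processing module); the pattern table, field names, and policy below are invented purely to show the concept of mapping traffic metadata to an application identity and acting on it.

```python
# Illustrative analogue of application-focused detection (not OpenAppID syntax).
# Real OpenAppID detectors are written in OpenAppID's own detector language and
# run inside Snort's processing module; the pattern table, field names, and
# policy here are invented purely to show the concept.

APP_PATTERNS = {
    "example-webmail": b"mail.example.com",   # hypothetical app -> HTTP Host pattern
    "example-crm": b"crm.example.net",
}

BLOCKED_APPS = {"example-crm"}                # hypothetical usage policy

def detect_app(http_host: bytes):
    """Return the name of the application whose pattern matches the Host header."""
    for app, pattern in APP_PATTERNS.items():
        if pattern in http_host:
            return app
    return None

def verdict(http_host: bytes) -> str:
    """Turn a detection into an allow/block decision under the policy above."""
    app = detect_app(http_host)
    if app is None:
        return "allow (unknown application)"
    return f"block ({app})" if app in BLOCKED_APPS else f"allow ({app})"

print(verdict(b"mail.example.com"))   # allow (example-webmail)
print(verdict(b"crm.example.net"))    # block (example-crm)
```

What OpenAppID adds over a hard-coded table like this is that the detectors themselves are open: users can write, share, and update them without waiting for a vendor release, which is exactly the point of the announcement.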
