
Back to the Future: Do Androids Dream of Electric Sheep?

As information consumers who depend so much on the Network or Cloud, we sometimes indulge in imagining what will happen when we really begin to feel the combined effects of Moore’s Law and Nielsen’s Law at the edges: the amount of data, let alone our ability to stream it to the edge and consume it, is simply too much for our minds to process. We have already begun to experience this today: how much information can you actually take in on a daily basis from the collective of your so-called “smart” devices, your social networks and other networked services, and how much more data is left behind? The same goes for machine to machine: a jet engine produces terabytes of data about its performance in just a few minutes; it would be impossible to ship all of it to some remote computer or network and still act on the engine locally in time.  We already know Big Data is not just growing, it is exploding!

The conclusion is simple: one day we will no longer be able to cope unless the information is consumed differently, locally. Our brains may no longer be enough; we hope to get help. Artificial Intelligence comes to the rescue and M2M takes off, but the new system must be highly decentralized in order to stay robust, or else it will crash like some dystopian event out of H2G2. Is it any wonder that even today a large portion, if not the majority, of the world’s Internet traffic is in fact already P2P, and that the majority of the world’s downloaded software is Open Source delivered over P2P? Just think of Bitcoin and how it captures the imagination of the best or bravest developers and investors (and how ridiculous one of those categories could look for not realizing its current potential flaw, to the supreme delight of its developers, who will undoubtedly develop the fix, but that’s the subject of another blog).

Consequently, centralized, high-bandwidth compute will break down at the bleeding edge; the cloud as we know it won’t scale, and a new form of computing emerges: fog computing, a direct consequence of Moore’s and Nielsen’s Laws combined. Fighting this trend equates to fighting the laws of physics; I don’t think I can say it any simpler than that.
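
To put a rough number on that claim, here is a back-of-envelope sketch of my own (using the commonly quoted rules of thumb for the two laws, not measurements): data and compute at the edge growing per Moore’s Law, roughly doubling every 18 months, against access bandwidth growing per Nielsen’s Law, roughly 50% per year.

    # Back-of-envelope: Moore's Law (edge data/compute, ~2x every 18 months)
    # versus Nielsen's Law (user bandwidth, ~1.5x per year). The rates are
    # the usual rules of thumb, not measurements.

    MOORE_ANNUAL = 2 ** (12 / 18)   # ~1.59x per year
    NIELSEN_ANNUAL = 1.5            # ~1.50x per year

    def data_to_bandwidth_gap(years: int) -> float:
        """How much faster edge data grows than the bandwidth to backhaul it."""
        return (MOORE_ANNUAL / NIELSEN_ANNUAL) ** years

    for years in (5, 10, 20):
        print(f"after {years:2d} years the gap is ~{data_to_bandwidth_gap(years):.1f}x")
    # after  5 years the gap is ~1.3x
    # after 10 years the gap is ~1.8x
    # after 20 years the gap is ~3.1x

Even with these conservative numbers, the data born at the edge steadily outruns the bandwidth available to haul it back, which is the whole argument for processing it where it is produced.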

Thus the compute model has already begun to shift: we will want our Big Data analyzed, visualized, private, secure and ready when we are, and we finally begin to realize how vital it has become: can you live without your network, data, connection, friends or social network for more than a few minutes? Hours? Days? And when you rejoin it, how does it feel? And if you can’t, are you convinced that one day you must be in control of your own persona, your personal data, or else? Granted, we shouldn’t worry too much about a Blade Runner dystopia or the H2G2 Krikkit story from Life, the Universe and Everything, but there are some interesting things one could be doing, beyond just asking, as Philip K. Dick once did, whether androids dream of electric sheep.

To enable this new beginning, we started in Open Source, looking to incubate a project or two. The first, in Eclipse M2M, is one of a dozen-or-so dots we’d like to connect in the days and months to come; we call it krikkit. The possibilities afforded by this new compute model are endless. One of them could be the ability to put us back in control of our own local and personal data, rather than leaving it with some central place, service or bot currently sold as a matter of convenience, fashion or scale. I hope that with the release of these new projects we will begin to solve that together. What better way to collaborate than in the open? Perhaps this is what the Internet of Everything and data in motion should be about.
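
To make the idea a little more tangible, here is a purely illustrative Python sketch of what “consuming the data locally” could look like; the names (EdgeNode, register_rule) and the rule format are mine, invented for illustration, and are not the krikkit API. A rule pushed to the edge device decides what tiny summary, if anything, travels upstream, while the raw data never leaves the node.

    from statistics import mean

    class EdgeNode:
        """Toy edge device: the rules live here, and so does the raw data."""

        def __init__(self, name):
            self.name = name
            self.rules = []          # list of (predicate, summarize) pairs

        def register_rule(self, predicate, summarize):
            # Push the rule to the edge instead of pulling raw data to the cloud.
            self.rules.append((predicate, summarize))

        def ingest(self, readings):
            # Process locally; forward only the distilled result upstream.
            for predicate, summarize in self.rules:
                interesting = [r for r in readings if predicate(r)]
                if interesting:
                    self.forward(summarize(interesting))

        def forward(self, summary):
            print(f"{self.name} -> cloud: {summary}")   # a tiny summary, not terabytes

    # Example: keep the engine vibration telemetry local, raise only the anomaly.
    node = EdgeNode("engine-sensor-7")
    node.register_rule(
        predicate=lambda r: r["vibration"] > 0.8,
        summarize=lambda rs: {"alert": "high vibration",
                              "mean": round(mean(r["vibration"] for r in rs), 2)},
    )
    node.ingest([{"vibration": v} for v in (0.2, 0.9, 0.95, 0.3)])

The design choice is the same one the jet-engine example demands: move the question to the data, not the data to the question.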


Snort 2.9.6.0 from Sourcefire, now a part of Cisco

Yesterday, the Snort team here at Sourcefire made its first major release of Snort since we became part of the Cisco family: Snort 2.9.6.0.  You can read more about this release over on the Snort.org Blog.

This version brings a lot of new features: features that have been requested by our community, and features that pave the way for further innovation and work here at Sourcefire, now a part of Cisco.  We’re extremely proud of this release and always look forward to hearing your feedback about how we are doing!

As Marty said in his initial blog posts during the acquisition, we are committed to keeping Sourcefire’s Open Source projects and its Open Source culture alive, and we’re hoping you’ll download the new version of Snort and give the new features a try!

I’m still the Open Source manager, and you can always reach me via my email or the mailing lists here:  http://www.snort.org/community/mailing-lists

My Top 7 Predictions for Open Source in 2014

My 2014 predictions are finally complete.  If Open Source equals collaboration or credibility, 2013 was nothing short of spectacular.  As an eternal optimist, I believe 2014 will be even better:

  1. Big data’s biggest play will be in meatspace, not cyberspace.  There is so much data we produce and give away that the great opportunity for analytics is in the real world.
  2. Privacy and security will become ever more important, particularly when built on Open Source rather than closed source. Paradoxically, this is actually good news: Open Source shows us once again that transparency wins, and, just as we see in biological systems, the most robust mechanisms get by with fewer secrets than we think.
  3. The rise of “fog” computing as a consequence of the Internet of Things (IoT) will unfortunately be driven by fashion for now (wearable computers). It will make us think again about what we have done in giving up our data, and make us re-read #1 and #2 above with a different and more open mind. Again!
  4. Virtualization will have its biggest year yet in networking.  Just as the hypervisor rode Moore’s Law in server virtualization and found a neat application in #2 above, a different breed of projects, like OpenDaylight, will emerge. But the drama is greater here because the network scales very differently than CPU and memory; it is a much more challenging problem. Thus, networking vendors embracing Open Source may fare well.
  5. Those that didn’t quite “get” Open Source as the ultimate development model will re-discover it as Inner Source (ACM, April 1999), as the only long-term viable development model.  Or so they think, as the glamor of new-style Open Source projects (OpenStack, OpenDaylight, AllSeen) with big budgets, big marketing, big drama, may in fact be too seductive.  Only those that truly understand the two key things that make an Open Source project successful will endure.
  6. AI, recently morphed, will make a comeback: not just robotics, but something AI did not anticipate a generation ago, something now called cognitive computing, perhaps indeed the third era in computing!  The story of Watson going beyond obliterating Jeopardy contestants and looking to open up and find commercial applications is a truly remarkable thing to observe in our lifetime.  This may in fact be a much more noble use of big data analytics (and other key Open Source projects) than #1 above. But can it exist without it?
  7. Finally, Gen Z developers discover Open Source and embrace it just like their Millennial (Gen Y) predecessors did. The level of sophistication and interaction rises, and projects ranging from Bitcoin to qCraft become intriguing, presenting a different kind of challenge.  More importantly, the previous generation can now begin to relax, knowing the gap is closing and the ultimate development model is in good hands, and can begin to give back more than ever before. Ah, the beauty of Open Source…


The Age of Open Source Video Codecs

The first time I met Jim Barton (DVR pioneer and TiVo co-founder), I was a young man looking at the hottest company in Silicon Valley of the day: SGI, the place Michael Jackson and Steven Spielberg had just arrived to visit, in the same building in Mountain View as it were, that same week in late spring 1995.

The second question Jim asked me that day was whether I knew H.263, a fledgling new specification promising to make video ubiquitous and affordable over any public or private network. Oh, those ’90s seem so far away…

For a hard-core database, kernel and compiler hacker, that was a bit too much telco chit-chat. But remembering that this was supposed to be an interview, that the person who asks the questions is in control, and that I did not know the answer, I managed to mumble a question instead of an answer.  Jim liked the conversation and obliged me with an equally cryptic explanation: that one day we would have these really cool, ubiquitous players on all sorts of video devices, not just “geometry engines” running workstations in “Jurassic Park” post-production studios (actually, come to think of it, the scene itself), but on all sorts of networked devices, and that maybe this was a great opportunity to dive in and change the world.

Open standards and open source live in an entangled relationship, or so I wrote years ago: the Yang of Open Standards, the Yin of Open Source.  Never has that relationship been more intertwined, and more challenging, than in the case of H.264, MPEG-4 and the years-old saga of so-called “standard” video codecs.

Almost a generation later, even though H.263 and its eventual successors H.264 and MPEG-4 have come a long way, we still don’t have a truly standard, open source implementation of such a video codec, though we are hoping to change that now!

My colleagues announced today that we are open sourcing our H.264 codec.  We still have a bit of work left to do as we start this new open source project and I am counting on both communities to receive it with “open” arms.  It is meant to remove all barriers, to be truly free and open, as open source was meant to be.

Please join us this morning in a Twitter chat covering this event.  We are convinced that, no matter how one looks at this, it is a positive move for the industry.


Why I Chose the Open Source Model I did for OpenDaylight

Now that OpenDaylight has arrived, it’s time to explain why I made the Open Source choices eventually embraced by its Founders and the community at large.  One doesn’t often see such leaders as Cisco, IBM, Intel, HP, Juniper, Red Hat, VMware, NEC, Microsoft and others agree, share and collaborate on such key technologies, let alone the latter engaging in a Linux Foundation-based community (some thought hell would freeze over before that would ever happen, though it did get pretty cold at times last Spring).

For those of you not familiar with OpenDaylight (see “Meet Me On The Equinox”, not an homage to Death Cab for Cutie or my Transylvanian homeland), IBM and Cisco actually started this with an amazing set of partners, very nearly on that ephemeral Equinox this year (~11am, March 20th), though we couldn’t quite brag about it until all our partners saw the daylight, which by now we’re hoping everyone does.  It was hard not to talk about all this as we watched those half-baked, speculative stories appear before the Equinox. It is amazing how the information flew, distorted as it were, but it did; I wish source code moved that “rapidly”, we’d all be so much better for it…

The Open Source model for OpenDaylight is simple; it has only two parts: the community is hosted at the Linux Foundation, and the license is Eclipse (the Eclipse Public License).  The details are neatly captured in a white paper we wrote and published with the Linux Foundation.  Dan Frye, my friend and counterpart at IBM, and I came up with the main points after two short meetings.  It would have been one, but when you work for such giants as our parent companies and soon-to-be OpenDaylight partners, one has to spend a little more time getting everyone to see the daylight.  It boils down to two things, which I am convinced are the quintessential elements of any successful open source project.

1) Community.  Why?  Because it trumps everything: code, money and everything else.  A poor community with great code equals failure (plenty of examples of that).  A great community with poor (or any) code equals success (plenty of examples of that too).  Why? Because open source equals collaboration of the highest kind: I share with you, and you with me, whatever I have; I contribute my time, my energy, my intellectual property, my reputation, and more. And ultimately it becomes “ours”.  And the next generation’s.  Open Source is not a technology; it’s a development model.  With more than 10 million open source developers worldwide, it happens to be based on collaboration at a scale and diversity that humanity has never experienced before.  Just think about what made this possible and the role some of the OpenDaylight partners have already played in it since the dawn of the Internet.  Dan Frye and I agreed that the Linux Kernel community is the best in the world, so we picked the closest thing to it to model and support ours: the Linux Foundation.

2) Fragmentation, or anti-fragmentation, actually.  Why?  The biggest challenge of any open source project is how to avoid fragmentation (the opposite of collaboration).  Just ask Andy Rubin and the Android guys what they fear the most.  Just ask any open source project’s contributors, copyright holders or high priests how much they appreciate an open source parasite that won’t give back.  Though we would have liked to go deeper, we settled on Eclipse, largely because of the actual language and technology we dealt with in the OpenDaylight Controller: most, if not all, of the initial code is Java, and though some are worried about that, I’m sure James Gosling is proud (btw, I’m not sure the Controller has to stay that way; I actually agree with Amin Vahdat), but we had to start somewhere.  Plus, having a friendlier language northbound (NB, as in the applications that run on top of the Controller) is such a cool thing that we think the #1 open source IDE (Eclipse) and the #1 commercial IDE (Microsoft) are going to be very good to it, so why not?  There are more reasons that pointed in the Eclipse direction, and other reasons for such wonderful alternatives (such as APL or MPL, perhaps the subject of another post some day).  But when it comes to understanding the virtues of them all, no one understands them better than the amazing founders of these license models, most of them from IBM, of course (I wish they had done that when I was there).

What happened between the Equinox and the Solstice is a fascinating saga within the OpenDaylight community, one which I think played out in the spirit of total and complete openness, inclusion, diversity, respect for the individual and the community, and, most of all, the principle that code rules; we do believe in running code and community consensus.  I tip my hat to all my colleagues who learned these two things along the way, to the enormous talent at the Eclipse and Linux Foundations that helped us launch, and even to the analysts who tried (and did incredibly well at times) to speculate about the secret reasons why these partners came up with the model we did: there is no secret at all, my friends; we are simply creating a community that is truly open, diverse, inclusive and never fragmented.  Just like a big, happy family.  Welcome to OpenDaylight, we hope you’ll stay!
