
OpenDaylight Unleashes Hydrogen to the Masses

The OpenDaylight Project today announced that its first open source software release, Hydrogen, is now available for download. As the community’s first simultaneous code release, it has contributions from across fifty organizations and includes over one million lines of code. Yes. ODL > 1MLOC. For those of you keeping score, that’s approximately two hundred and thirty man-years of work completed in less than twelve months.

It was around this time last year that the media started to pick up on rumors that something might be in the works with software-defined networking and controllers. I remember our first meeting at Citrix, where the community started to collaborate on The OpenDaylight Project and find common ground on how to start something this large. We had multiple companies and academics in the room and many ideas of where we wanted this project to go, but there was one thing we had in common: the belief and vision to drive networking software innovation to the Internet in a new way and to accelerate SDN in the open, transparently and with diverse community support. Each of us had notions of what we could bring to the table, from controller offerings to virtualization solutions, SDN protocol plugins and apps to solve IT problems. Over two days at Citrix we looked at things from a customer perspective, a developer perspective and, ultimately and arguably most important, a community perspective. From there The OpenDaylight Project emerged under the Linux Foundation. As I look back, I want to applaud and thank the companies, partners, developers, community members and the Linux Foundation for driving such a large vision from concept to reality in less than twelve months, an incredible feat in itself.

Hydrogen is truly a community release. Use cases span enterprise, service provider, academia, data center, transport and NFV. Multiple southbound protocols are abstracted to a common northbound API for cross-vendor integration and interoperability, and three editions have been created to provide multi-domain support and application delivery as well as deployment modularity and flexibility for different domain-specific configurations. These packages share a consistent environment yet are tailored to the domain- and role-based needs of network engineers, developers and operators.

  • The Base Edition includes a scalable, multi-vendor SDN platform based on OSGi, the latest (and backward-compatible) OpenFlow 1.3 Plugin and Protocol Library, OVSDB, NETCONF/YANG model-driven SDN and Java-based YANG tooling for model-driven development.
  • The Virtualization Edition includes the Base Edition and adds the Affinity Metadata Service (essentially APIs to express workload relationships and service levels), Defense4All (DDoS detection and mitigation), Open DOVE, VTN (virtual tenant networking) and OpenStack Neutron northbound API support.
  • The Service Provider Edition, again including the Base Edition, also offers the Affinity Metadata Service and Defense4All, and adds BGP-LS, PCEP, LISP Flow Mapping and SNMP4SDN to manage routers, gateways and switches.
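
If you want to poke at that common northbound API yourself, here is a minimal sketch in Python of querying a Hydrogen controller over REST. The base URL, the "default" container name, the resource paths and the admin/admin credentials are assumptions based on a typical out-of-the-box install, so adjust them to match your deployment.

import requests

# Assumed defaults for a local Hydrogen controller; change as needed.
BASE = "http://localhost:8080/controller/nb/v2"
AUTH = ("admin", "admin")

def get_json(path):
    """GET a northbound resource and return the parsed JSON body."""
    resp = requests.get(
        BASE + "/" + path,
        auth=AUTH,
        headers={"Accept": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Switches discovered via the southbound plugins (OpenFlow, OVSDB,
    # NETCONF, ...), exposed through the same northbound interface.
    print(get_json("switchmanager/default/nodes"))

    # Links in the topology for the default container.
    print(get_json("topology/default"))

The same pattern applies to the other northbound resources (flow programming, statistics and so on); the point is that the application code stays the same regardless of which southbound protocol the devices underneath actually speak.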

More information about the release and the individual projects can be found on the OpenDaylight website.

I want to stress how well the vision has been delivered to date. I’ve been involved in multiple standards bodies and open source discussions in the past, but this is truly one of the largest undertakings I’ve seen come together in my entire career. OpenDaylight developers have been coding day and night to get this release out the door, and it’s amazing to see the collaboration and coherency of the team as we unite to deliver the industry’s first cross-vendor SDN and NFV platform. In addition, and frequently not mentioned, many of the protocols listed in the editions above were being standardized at organizations like the IETF during the same period. Code and specs at the same time. It’s been a long time since rough consensus and running code was the norm.

Over here at Cisco we’re fully committed to OpenDaylight. We’re currently using it as a core component in our WAN Orchestration offering for service providers, enabling intelligent network placement and automated capacity and workload planning. The ACI team (formerly Insieme) collaborated with IBM, Midokura and Plexxi on an OpenDaylight project that defines a northbound API for setting policy across a wide range of network devices. And of course we’re bringing components of the OpenDaylight codebase into our own controllers and ensuring application portability for customers, partners and developers alike, so I would expect to see more code donations going into the community moving forward as well. We made several announcements last week about our campus/branch controller that includes OpenDaylight technology.

At the end of the day an open source project is only as strong as its developers, its community and its code. As we move forward with OpenDaylight, I expect it to grow stronger as more members join, new project proposals arrive and new code contributors come on board from different industries. As I look at our roadmap and upcoming release schedule, I’m pumped for what’s next and thrilled that the project has catalyzed a developer community around networking.

Please do visit the site, download the code and take Hydrogen for a test drive. We want to hear feedback on what we can make better, what features to add or how you’re going to use it. Moreover, we’d love you to participate. It’s a kick-ass community, I think you’ll have fun, and the best part: you’ll see your hard work unleashed on the Internet and across multiple communities too.


Back to the Future: Do Androids Dream of Electric Sheep?

As information consumers that depend so much on the Network or Cloud, we sometimes indulge in thinking about what will happen when we really begin to feel the combined effects of Moore’s Law and Nielsen’s Law at the edges: the amount of data, and our ability to consume it (let alone stream it to the edge), is simply too much for our minds to process. We have already begun to experience this today: how much information can you consume on a daily basis from the collective of your so-called “smart” devices, your social networks and other networked services, and how much more data is left behind? The same holds for machine-to-machine: a jet engine produces terabytes of data about its performance in just a few minutes; it would be impossible to ship that data to some remote computer or network and still act on the engine locally in time. We already know Big Data is not just growing, it is exploding!

The conclusion is simple: one day we will no longer be able to cope unless the information is consumed differently, locally. Our brains may no longer be enough; we hope to get help. Artificial Intelligence comes to the rescue, M2M takes off, but the new system must be highly decentralized in order to stay robust, or else it will crash like some kind of dystopian event from H2G2. Is it any wonder that even today a large portion, if not the majority, of the world’s Internet traffic is in fact already P2P, and that the majority of the world’s software downloads are Open Source delivered over P2P? Just think of Bitcoin and how it captures the imagination of the best or bravest developers and investors (and how ridiculous one of those categories could look for not realizing its potential current flaw, to the supreme delight of its developers, who will undoubtedly develop the fix, but that’s the subject of another blog).

Consequently, centralized, high-bandwidth compute will break down at the bleeding edge, the cloud as we know it won’t scale, and a new form of computing emerges: fog computing, a direct consequence of Moore’s and Nielsen’s Laws combined. Fighting this trend equates to fighting the laws of physics; I don’t think I can say it more simply than that.

Thus the compute model has already begun to shift: we will want our Big Data analyzed, visualized, private, secure and ready when we are, and finally we begin to realize how vital it has become. Can you live without your network, data, connection, friends or social network for more than a few minutes? Hours? Days? And when you rejoin it, how does it feel? And if you can’t, are you convinced that one day you must be in control of your own persona, your personal data, or else? Granted, while we shouldn’t worry too much about a Blade Runner dystopia or the Krikkit story in H2G2’s Life, the Universe and Everything, there are some interesting things one could be doing, and more than just asking, as Philip K. Dick once did, whether androids dream of electric sheep.

To enable this new beginning, we started in Open Source, looking to incubate a project or two. The first one, in Eclipse M2M, is among a dozen or so dots we’d like to connect in the days and months to come; we call it krikkit. The possibilities afforded by this new compute model are endless. One of them could be the ability to put us back in control of our own local and personal data, rather than some central place, service or bot currently sold as a matter of convenience, fashion or scale. I hope that with the release of these new projects we will begin to solve that together. What better way to collaborate than in the open? Perhaps this is what the Internet of Everything and data in motion should be about.


Snort 2.9.6.0 from Sourcefire, now a part of Cisco

Yesterday, the Snort team here at Sourcefire shipped its first major release of Snort since becoming part of the Cisco family: Snort 2.9.6.0. You can read more about this release over on the Snort.org Blog.

This version brings a lot of new features: features that have been requested by our community, and features that pave the way for further innovation and work here at Sourcefire, now a part of Cisco. We’re extremely proud of this release and always look forward to hearing your feedback about how we are doing!

As Marty said in his initial blog posts during the acquisition, we are committed to keeping Sourcefire’s Open Source projects and its Open Source culture alive, and we’re hoping you’ll download the new version of Snort and give the new features a try!

I’m still the Open Source manager, and you can always reach me via my email or the mailing lists here:  http://www.snort.org/community/mailing-lists

My Top 7 Predictions for Open Source in 2014

My 2014 predictions are finally complete. If Open Source is any measure of collaboration or credibility, 2013 was nothing short of spectacular. As an eternal optimist, I believe 2014 will be even better:

  1. Big data’s biggest play will be in meatspace, not cyberspace. We produce and give away so much data that the great opportunity for analytics is in the real world.
  2. Privacy and security will become ever more important, particularly with Open Source rather than closed. Paradoxically, this is actually good news: Open Source shows us again that transparency wins, and just as we see in biological systems, the most robust mechanisms get by with fewer secrets than we think.
  3. The rise of “fog” computing as a consequence of the Internet of Things (IoT) will unfortunately be driven by fashion for now (wearable computers). It will make us think again about what we have done to give up our data, and to re-read #1 and #2 above with a different and more open mind. Again!
  4. Virtualization will have its biggest year yet in networking. Just as the hypervisor rode Moore’s Law in server virtualization and found a neat application in #2 above, a different breed of projects like OpenDaylight will emerge. But the drama here is greater because the network scales very differently than CPU and memory; it is a much more challenging problem. Thus, networking vendors embracing Open Source may fare well.
  5. Those that didn’t quite “get” Open Source as the ultimate development model will re-discover it as Inner Source (ACM, April 1999), as the only long-term viable development model. Or so they think, as the glamor of new-style Open Source projects (OpenStack, OpenDaylight, AllSeen) with big budgets, big marketing and big drama may in fact be too seductive. Only those that truly understand the two key things that make an Open Source project successful will endure.
  6. AI, recently morphed, will make a comeback: not just robotics, but something different that AI did not anticipate a generation ago, something now called cognitive computing, perhaps indeed the third era in computing! The story of Watson going beyond obliterating Jeopardy contestants to open up and find commercial applications is a truly remarkable thing to observe in our lifetime. This may in fact be a much more noble use of big data analytics (and other key Open Source projects) than #1 above. But can it exist without it?
  7. Finally, Gen Z developers discover Open Source and embrace it just like their Millennial (Gen Y) predecessors. The level of sophistication and interaction rises, and projects ranging from Bitcoin to qCraft become intriguing, presenting a different kind of challenge. More importantly, the previous generation can now begin to relax knowing the gap is closing and the ultimate development model is in good hands, and can begin to give back more than ever before. Ah, the beauty of Open Source…


The Age of Open Source Video Codecs

The first time I met Jim Barton (DVR pioneer and TiVo co-founder), I was a young man looking at the hottest company in Silicon Valley of the day: SGI, the place Michael Jackson and Steven Spielberg had just arrived to visit, in the same building in Mountain View as it were, that same week in late spring 1995.

The second question Jim asked me that day was whether I knew H.263, a fledgling new specification promising to make video ubiquitous and affordable over any public or private network. Oh, those ’90s seem so far away…

For a hard-core database, kernel and compiler hacker, that was a bit too much telco chit-chat for me. But remembering that this was supposed to be an interview, that the person asking the questions is in control, and not knowing the answer, I managed to mumble a question instead of an answer. Jim liked the conversation and obliged me with an equally cryptic explanation: that one day we would have these really cool, ubiquitous players on all sorts of video devices, not just “geometry engines” running workstations in “Jurassic Park” post-production studios (actually, come to think of it, the scene itself), but across all sorts of networked devices, and that this might be a great opportunity to dive in and change the world.

Open standards and open source live in an entangled relationship, or so I wrote years ago: the Yang of Open Standards, the Yin of Open Source. Never has that relationship been more intertwined, and somewhat more challenging, than in the case of H.264, MPEG-4 and the years-old saga of so-called “standard” video codecs.

Almost a generation later, even though H.263 and its eventual successors H.264 and MPEG-4 have come a long way, we still don’t have a truly standard, open source implementation of such a video codec, though we are hoping to change that now!

My colleagues announced today that we are open sourcing our H.264 codec.  We still have a bit of work left to do as we start this new open source project and I am counting on both communities to receive it with “open” arms.  It is meant to remove all barriers, to be truly free and open, as open source was meant to be.

Please join us this morning in a Twitter chat covering this announcement. We are convinced that, no matter how one looks at this, it is a positive move for the industry.
