Growing Up Open Source
“I guess I got tired of waiting around for someone else to do it for me?” ~ Young Frank Walker.
This quote comes from the movie “Tomorrowland.” It was the innocent response young Frank Walker gave when asked why he built a jet pack. I couldn’t find a better answer to the common question, “How did Open Source Software come into existence?”
Open Source reflects an open mind. It does not merely describe a type of software license; it is bigger than that: it is a culture. One way we can understand this culture is to observe and recognize it through its impact on an individual’s life, and that is what this post is about.
Twenty years ago, in my teenage years, I was struck by Cupid’s arrow, for the internet. A network with the power to connect the world was too strong to miss. It wasn’t long before I sacrificed a sound system capable of playing the nastiest rap tunes that were so dear to me to buy a computer and get on the internet bandwagon.
Just like every other teenage kid, my jaw dropped every time I came across news of teenagers breaking the security of sophisticated systems from their bedrooms. I was intrigued and wanted to begin, but where to start? I headed down to where these genius teenagers would hang out: the IRC chat rooms of EFNet.
In these chat rooms, ambiguous technical terms were continuously scrolling by; the one that caught my eye ended with the letter X (Linux). It took me some time to realize that Linux is not really a hacking tool; it is nothing but an operating system.
Completely clueless but extremely determined, my mission in life was installing Slackware Linux on my desktop. Bear in mind that this was 1994, so installing Linux was not as easy as it is today. I remember holding three floppy disks, not knowing what to do with them: one had the kernel, one the master boot record, and one the shell. After nights on end of head banging, “Linux” was installed, with only a black terminal and a blinking cursor. A 13-year-old kid could not be happier.
Ten years later (2004), I found myself responsible for securing the network of an Internet Service Provider serving thousands of subscribers. Such a responsibility would never have been possible for a young guy of my age without the Open Source exposure of my teenage years. This time the challenge was different, but the solution was the same: find a way to stop Denial of Service attacks on the network without paying top dollar for fancy solutions.
The IRC world of Open Source enthusiasts turned into serious business. I found myself sitting in meetings with executives, explaining why our internet gateway was receiving millions of malicious packets filling our pipes, and how I had come across an ‘experimental’ piece of software with a funny name (Zazu) to stop these attacks. As you would expect, Zazu is Open Source. The author and I collaborated to modify it and ultimately ended up with a solution tailored to our needs.
Fast forward another ten years (2015), and Open Source has gone mainstream. The culture is expanding into new frontiers. Check out OPNFV (Open Platform for NFV). This is not your common Open Source project: it’s not about writing code, it’s about system integration, but in an Open Source fashion.
The Linux foundation is working with Network Operators and Vendors on OPNFV, a community-driven effort to integrate NFV and SDN projects.
Cisco is heavily involved in OPNFV and other Open Source initiatives such as OpenDaylight and OpenStack. The Cisco team of contributors and I attended the first OPNFV Summit in San Francisco (November 2015), where we gave presentations on different projects. Mine demonstrated a use case showing that, with the Open Source components included in OPNFV, building a fancy cloud-based service is no longer a daunting task.
OPNFV produced its first release (Arno) as a lab-ready reference platform integrating OpenStack, OpenDaylight, and OVS. Previously, that required a complex setup taking weeks or even months: these are big projects that demand lots of integration work. They also offer easy programmable interfaces (REST APIs) that the community can leverage to build valuable applications. The second release of OPNFV is called Brahmaputra.
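To give a feel for those REST APIs, here is a rough sketch of pulling the topology node list out of an OpenDaylight controller. The parsing is done against an abbreviated sample reply; the RESTCONF URL and default credentials noted in the comments are lab-install assumptions, not something specific to OPNFV.

```python
# On a live controller you would fetch the JSON with an HTTP GET, e.g.:
#   GET http://<controller>:8181/restconf/operational/network-topology:network-topology
#   (HTTP basic auth; a default lab install often uses admin/admin)

def node_ids(topology_json):
    """Return every node-id found in a network-topology reply."""
    topologies = topology_json.get("network-topology", {}).get("topology", [])
    return [node["node-id"]
            for topo in topologies
            for node in topo.get("node", [])]

# Abbreviated sample of what the controller returns:
sample = {
    "network-topology": {
        "topology": [
            {"topology-id": "flow:1",
             "node": [{"node-id": "openflow:1"}, {"node-id": "openflow:2"}]}
        ]
    }
}

print(node_ids(sample))  # ['openflow:1', 'openflow:2']
```

A dozen lines of parsing on top of a single GET is all it takes to start building an application against the platform.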
During the OPNFV Summit, it was particularly interesting to see the cross-vendor collaboration. Setting aside commercial and technical competitiveness, I saw people from competing vendors sit together to discuss progress, hack code, and practice slides. I found this spirit too good to miss, so I signed up for one of the OPNFV projects (Functest), and I am happy to be back to the IRC-style meetings I used to enjoy 20 years ago.
In conclusion, over the span of 20 years, it is obvious that Open Source was, and still is, the major contributor to my career development, no matter how different my scope: child’s play, a network operator, or a product vendor. The Open Source culture finds a way to get involved, solve problems, and encourage collaboration and networking between people and bits/bytes.
Curious about getting started in Open Source? Here is a great example from OPNFV.
Guest Blog by:
Keep the conversation going on Twitter!
Tags: Cisco, cloud, Linux, open source, opendaylight, OpenStack, OPNFV
We’re off to another edition of VMworld with some great technology to showcase. Our theme this year is that your data center is everywhere: the center of innovation. Speaking of innovation, we will be demoing some recent work around our desktop virtualization solution with VMware. We will have demo pods showcasing our UCS solutions for Virtual SAN and VDI, as well as a demo showing graphics acceleration for VDI with Cisco UCS and NVIDIA GRID on Horizon. By the way, did you catch NVIDIA’s announcement today about GRID 2.0? This is pretty exciting news, as graphics acceleration is key to a seamless user experience when you virtualize desktops or applications. With Cisco UCS a market leader in blade servers, we’re looking forward to bringing virtualized GPUs to our B-Series servers. As we see more and more graphics-intensive applications being virtualized to protect intellectual property by keeping data safely in the data center, Cisco is working with NVIDIA and ISVs to qualify hardware configurations for specific applications. Read about recent Esri ArcGIS Pro testing on the Cisco UCS C240 M4 and NVIDIA K2 GPU. Read More »
Tags: desktop virtualization, GPU, Integrated infrastructure, Linux, UCS, vdi, vmworld
Only on TechWiseTV
This is the first in a multi-part series covering ‘programmability’ for networking. The idea is to fully review the programming options now available inside the Nexus switches (3000, 9000). This first episode covers new access with Linux tools, NX-API, and more. Further shows will dive into the details around Object Models and orchestration partners.
The primary point for any of these is to understand how Cisco Open NX-OS extensibility exposes greater programmability and automation capabilities. It is fascinating and full of new learning opportunities. It does not come without a few career questions of course…usually, something along the lines of: do network engineers need to become programmers now too?
Two answers: Yes. It depends.
Networking knowledge and skill should not be undersold here. Programming capabilities should be additive: they are useful in just about any tech career and are obviously affecting the networking space. I think it’s foolish to ever quit learning, but it does depend on your aspirations, your current level of satisfaction, and perhaps how narrowly defined your skill set might be.
Full disclosure: I am not a programmer. I have been learning the fundamentals of Python and a few other languages as I work on this series, but I am not hire-able for this skill by any means. But the distinct feeling I get, and the feedback I hear from you guys: it’s not that hard. You are probably well versed in scripting various CLI operations…take it up a few notches and work on some of these ‘readable’ languages with similar syntax. This will let you judge the appeal of what we are offering with ACI and other solutions much more credibly…and I guarantee you will find ways to get rid of redundant crap and stupid errors you may be fighting with yourself or your team.
JOIN US AT THE WORKSHOP
Live, interactive, never dull.
September 21, 2015
Programmable networks will forever change the way you manage infrastructure, enabling you to dramatically accelerate configuration and deployment of your network, automate time-consuming manual tasks, and allocate IT resources far more efficiently. Are you ready for the revolution?
Discover how to create a programmable network as we discuss and demonstrate the NX-API and NX-API REST (Object Model) in detail. Understand how Cisco Open NX-OS extensibility exposes greater programmability and automation capabilities that eliminate costly manual errors.
– You can sign up at the workshop tab when the date gets a bit closer: http://www.techwisetv.com
Nicolas Delecroix in the TechWiseTV Lab
Two great experts on this episode.
Six Key Points: What OPEN means for NX-OS
Shane Corban shares Six Key Points: What OPEN means for NX-OS
Changes made across the software stack to address Extensibility, Openness, and Programmability:
- Auto Deployment (Bootstrap and Provisioning)
- Added support for PXE server, operationalize NX-OS software to match an existing server environment
- Extensibility – how we package software
- Previously, we did not expose much beyond a Bash shell
- Now you can install native RPMs and third-party applications, running processes as they would on a Linux server
- Open Interfaces
- We are now adding support to leverage Linux-like tools for debugging, configuration, and troubleshooting…manipulate those front-panel ports as native Linux interfaces within our switch software stack.
- Application Integration (Adaptable SDK)
- Published an SDK, a build environment you can install on any Linux server: download the build agent, put your source into the directory structure, build it into an RPM, and install and run it natively.
- Build your own custom automation apps, monitoring agents, and have them run natively on our platform
- Programmability Tool Choice
- We have a native Python shell today, with a native Cisco library that you can utilize for automation
- NX-API – the ability to embed CLI commands and structured data (JSON, XML) for execution on the switch via an HTTP/HTTPS interface, getting structured data back from show commands.
- Management Tools
- Support for Chef and Puppet
- Agents will be publicly available on the enterprise sites
- Support for OpenStack Neutron
NX-OS is now more modular, more open, and more capable of third-party integration, providing a wide variety of programmability choices ideal for DevOps environments.
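The NX-API bullet above can be sketched in a few lines. This is a rough illustration of the JSON request body for a structured ‘show’ call; the field names follow what the NX-API sandbox emits, but verify them against your NX-OS release before relying on them.

```python
import json

def nxapi_payload(command):
    """Build the JSON body for an NX-API 'cli_show' request."""
    return {
        "ins_api": {
            "version": "1.0",
            "type": "cli_show",       # ask for structured output, not raw ASCII
            "chunk": "0",
            "sid": "1",
            "input": command,         # the CLI command to run on the switch
            "output_format": "json",
        }
    }

body = json.dumps(nxapi_payload("show version"))
print(body)
# POST this to http(s)://<switch>/ins with Content-Type: application/json and
# the switch credentials; structured results come back under
# ins_api -> outputs -> output -> body.
```

Because the reply is structured JSON rather than screen-scraped text, scripts stay readable and don’t break when the CLI formatting changes.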
Five case study examples
Nicolas provides five case study examples.
- Checking Software Version
- Using a Python script with NX-API and JSON to pull version numbers
- A Python script to query multiple switches and check compliance against a specific version
- VLAN Provisioning
- Checking for proper VLAN provisioning
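The version-compliance case study above boils down to a simple comparison once the data is in hand. In this sketch the switch names, versions, and target release are invented for illustration; in a live script each entry would come from an NX-API ‘show version’ call per switch.

```python
# Hypothetical approved release; not a recommendation for any real deployment.
TARGET = "7.0(3)I2(1)"

def non_compliant(versions, target=TARGET):
    """Return the switches whose reported version differs from the target."""
    return sorted(name for name, ver in versions.items() if ver != target)

# Faked inventory standing in for per-switch NX-API query results:
inventory = {
    "leaf-1": "7.0(3)I2(1)",
    "leaf-2": "7.0(3)I1(2)",
    "spine-1": "7.0(3)I2(1)",
}

print(non_compliant(inventory))  # ['leaf-2']
```

Run against a few hundred switches, this replaces an afternoon of logging in box by box with a loop and one comparison.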
Special thanks behind the scenes to Rami Rammaha and Mark Jackson
Cisco Nexus 9000 Programmability Guide
Matt Oswalt is a great writer. You should follow his blog, Keeping it Classless. I enjoy his angles on things. Read up on his blog entries: Evolution of Network Programmability, Nexus 9000 NX-API, NX-API Update.
Some Learning Basics:
What do you think still needs to be covered? I would love any thoughts on how the rest of this series should be shaped. Leave your comments below and, just to make sure, tag me on Twitter. We are diving into Object Models (taping next week) and then some angle with the orchestration partners. Case in point: Puppet Labs is today making available a native Puppet NX-OS agent and Cisco Puppet Module.
Let me know!
Tags: ACI, Awesome, Insieme, JSON, Linux, nexus, NX-OS, Open, Programmable, python, RPC, TechWiseTV, XML
Linux containers, as a lighter virtualization alternative to virtual machines, are gaining momentum. The High Performance Computing (HPC) community is eyeing Linux containers with interest, hoping that they can provide the isolation and configurability of Virtual Machines, but without the performance penalties.
In this article, I will show a simple example of libvirt-based container configuration in which I assign the container one of the ultra-low latency (usNIC) enabled Ethernet interfaces available in the host. This allows bare-metal performance of HPC applications, but within the confines of a Linux container.
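A minimal sketch of what such a libvirt LXC domain definition can look like is below. The container name, memory size, and the interface name (eth3) are illustrative placeholders, and the hostdev-based interface assignment shown assumes libvirt's capabilities-mode network passthrough for LXC; check your libvirt version's documentation for the exact supported syntax.

```xml
<!-- Sketch of a libvirt LXC container that is handed one of the host's
     usNIC-enabled Ethernet interfaces; names and sizes are placeholders. -->
<domain type='lxc'>
  <name>hpc-container</name>
  <memory unit='GiB'>4</memory>
  <os>
    <type>exe</type>
    <init>/bin/sh</init>
  </os>
  <devices>
    <!-- Move the host interface into the container's network namespace -->
    <hostdev mode='capabilities' type='net'>
      <source>
        <interface>eth3</interface>
      </source>
    </hostdev>
    <console type='pty'/>
  </devices>
</domain>
```

Because the interface is moved into the container rather than bridged or emulated, the HPC application inside the container talks to the usNIC device with no virtualization layer in the data path.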
Read More »
Tags: HPC, Linux, Linux Containers, USNIC
In my Internet of Things keynote at LinuxCon 2014 in Chicago last week, I touched upon a new trend: the rise of a new kind of utility or service model, the so-called IoT specific service provider model, or IoT SP for short.
I had a recent conversation with a team of physicists at the Large Hadron Collider at CERN. I told them they would be surprised to hear how the new computer scientists talk these days, about Data Gravity. Programmers are notorious for overloading common words, adding connotations galore, messing with meanings entrenched in our natural language.
We all laughed and then the conversation grew deeper:
- Big data is very difficult to move around; it takes energy, time, and bandwidth, and is hence expensive. And it is growing exponentially larger at the outer edge, with tens of billions of devices producing it at an ever-faster rate, from an ever-increasing set of places on our planet and beyond.
- As a consequence of the laws of physics, we know we have an impedance mismatch between the core and the edge. I coined this the Moore-Nielsen paradigm (described in my talk as well): data gets accumulated at the edges faster than the network can push it into the core.
- Therefore, big data accumulated at the edge will attract applications (little data, or procedural code), so apps will move to data, not the other way around, behaving as if data has “gravity.”
Therefore, the notion of a very large centralized cloud controlling the massive rise of data spewing from tens of billions of connected devices is pitched against both the laws of physics and Open Source, not to mention the thirst for freedom (no vendor lock-in) and privacy (no data lock-in). The paradigm has shifted; we have entered the third big wave (after the mainframe's decentralization to client-server, which in turn centralized to cloud): the move to a highly decentralized compute model, where the intelligence shifts to the edge as apps come to the data, at much larger scale, machine to machine, with little or no human interface or intervention.
The age-old dilemma pops up again: do we go vertical (domain specific) or horizontal (application development or management platform)? The answer has to be based on necessity, not fashion; we have to do this well, hence vertical domain knowledge is overriding. With the declining cost of computing, we finally have the technology to move to a much more scalable and empowering model: the new opportunity in our industry, the mega trend.
Very reminiscent of the early ’90s and the beginning of the ISP era, isn’t it? This time it is much more vertical, with deep domain knowledge: connected energy, connected manufacturing, connected cities, connected cars, connected home, safety and security. These innovation hubs all share something in common: an open and interconnected model, made easy by the dramatically lower compute cost and the ubiquity of open source, overcoming all barriers of adoption, including the previously weak security and privacy models predicated on a central core. We can divide and conquer, dealing with data in motion differently than we deal with data at rest.
The so-called “wheel of computer science” has completed one revolution, just as its socio-economic observation predicted: the next generation has arrived, ready to help evolve or replace its aging predecessor. Which one, or which vertical, will it be first…?
Tags: Big Data, big data analytics, CERN, cloud, Data Gravity, Fog computing, gravity, IoT, IoTSP, ISP, keynote, LHC, Linux, LinuxCon, M2M, Moore’s law, Nielsen's Law, open source, SP