Paradigm Shift with Edge Intelligence

In my Internet of Things keynote at LinuxCon 2014 in Chicago last week, I touched upon a new trend: the rise of a new kind of utility or service model, the so-called IoT-specific service provider model, or IoT SP for short.

I had a recent conversation with a team of physicists at the Large Hadron Collider at CERN. I told them they would be surprised to hear how computer scientists talk these days about Data Gravity. Programmers are notorious for overloading common words, adding connotations galore, and messing with meanings entrenched in our natural language.

We all laughed and then the conversation grew deeper:

  • Big data is very difficult to move around: it takes energy, time, and bandwidth, and is therefore expensive. And it is growing exponentially larger at the outer edge, with tens of billions of devices producing it at an ever-faster rate, from an ever-increasing set of places on our planet and beyond.
  • As a consequence of the laws of physics, we have an impedance mismatch between the core and the edge. I coined this the Moore-Nielsen paradigm (described in my talk as well): data gets accumulated at the edges faster than the network can push it into the core (the back-of-the-envelope sketch after this list shows why).
  • Therefore big data accumulated at the edge will attract applications (little data or procedural code), so apps will move to the data, not the other way around, behaving as if data has “gravity”.
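
To make the Moore-Nielsen mismatch concrete, here is a minimal back-of-the-envelope sketch (mine, not from the talk; the starting values and exact growth constants are illustrative assumptions): data production compounding at roughly Moore's Law rates outruns bandwidth compounding at roughly Nielsen's Law rates, and the gap widens every year.

```python
# Illustrative only: Moore's Law ~2x every 18 months vs. Nielsen's Law
# ~1.5x per year. Absolute units are arbitrary; the point is the ratio.
MOORE_PER_YEAR = 2 ** (12 / 18)   # ~1.59x annual growth in compute/data
NIELSEN_PER_YEAR = 1.5            # ~1.50x annual growth in bandwidth

compute, bandwidth = 1.0, 1.0
for year in range(11):
    gap = compute / bandwidth
    print(f"year {year:2d}: compute {compute:6.1f}x  "
          f"bandwidth {bandwidth:6.1f}x  gap {gap:4.2f}x")
    compute *= MOORE_PER_YEAR
    bandwidth *= NIELSEN_PER_YEAR
```

Small as the annual difference looks, it compounds: whatever the edge produces beyond what the network can drain stays at the edge, which is exactly why the data accumulates there.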

Therefore, the notion of a very large centralized cloud that would control the massive rise of data spewing from tens of billions of connected devices is pitted against both the laws of physics and Open Source, not to mention the thirst for freedom (no vendor lock-in) and privacy (no data lock-in). The paradigm has shifted; we have entered the third big wave (after the mainframe's decentralization to client-server, which in turn centralized into cloud): the move to a highly decentralized compute model, where the intelligence shifts to the edge as apps come to the data, at much larger scale, machine to machine, with little or no human interface or intervention.

The age-old dilemma pops up again: do we go vertical (domain-specific) or horizontal (application development or management platform)? The answer has to be based on necessity, not fashion, and we have to do this well; hence vertical domain knowledge is overriding. With the declining cost of computing, we finally have the technology to move to a much more scalable and empowering model: the new opportunity in our industry, the mega trend.

Very reminiscent of the early ’90s and the beginning of the ISP era, isn’t it? This time it is much more vertical, with deep domain knowledge: connected energy, connected manufacturing, connected cities, connected cars, connected home, safety and security. These innovation hubs all share something in common: an open and interconnected model, made easy by dramatically lower compute costs and the ubiquity of open source, to overcome all barriers to adoption, including the previously weak security and privacy models predicated on a central core. We can divide and conquer, dealing with data in motion differently than we deal with data at rest.

The so-called “wheel of computer science” has completed one revolution, just as that socio-economic observation predicted; the next generation has arrived, ready to help evolve or replace its aging predecessor. Which one, or which vertical, will be first…?

Power of Open Choice in Hypervisor Virtual Switching

July 28, 2014 at 5:00 am PST

Customers gain great value from server virtualization in the form of virtual machines (VMs) and, more recently, Linux containers and Docker in data centers, clouds, and branches. By some estimates (IDC), more than 60% of workloads are virtualized, even though fewer than 16% of physical servers run a hypervisor. From a networking perspective, the hypervisor virtual switch on these virtualized servers is a critical component in all current and future data center, cloud, and branch designs and solutions.

As we count down to the annual VMworld conference and reflect on the introduction of the Cisco Nexus 1000V in vSphere 4.0 six years ago, we can feel proud of what we have achieved. We have to congratulate VMware on the partnership and on their success in opening vSphere networking to third-party vendors. It was beneficial for our joint customers and for both companies; VMware and Cisco could be considered visionaries in this sense. Recognizing this success, the industry has followed.

We praise Microsoft as well for having provided an open environment for third-party virtual switches within Hyper-V, which has continued gaining market share recently. Cisco and Microsoft (along with other industry players) are leading the industry with the latest collaboration on submitting the OpFlex control protocol to the IETF. Microsoft’s intention to enable OpFlex support in its native Hyper-V virtual switch enables standards-based interaction with virtual switches. Another win for customers and the industry.

In KVM and Xen environments, many organizations have looked at Open vSwitch (OVS) as an open source alternative. There is interest in richer networking than the standard Linux Bridge provides, and in using OVS as a component for implementing SDN-based solutions like network virtualization (a short sketch of what that looks like in practice follows below). We think there is an appetite for OVS on other hypervisors as well. Cisco is committed to contributing to and improving these open source efforts. We are active contributors to the Open vSwitch project and are diligently working to open source our OpFlex control protocol implementation for OVS in the OpenDaylight consortium.
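
For readers who have not touched OVS, here is a minimal sketch of the kind of bridge plumbing it replaces the Linux Bridge for. It assumes a host with Open vSwitch installed and drives the standard ovs-vsctl CLI from Python; the interface names (eth1, vnet0) and the VLAN tag are hypothetical.

```python
# Minimal OVS plumbing sketch: create a bridge, attach a physical uplink,
# and attach a (hypothetical) VM tap interface on VLAN 100.
import subprocess

def ovs(*args: str) -> None:
    """Invoke ovs-vsctl, raising if the command fails."""
    subprocess.run(["ovs-vsctl", *args], check=True)

ovs("add-br", "br0")                        # the virtual switch itself
ovs("add-port", "br0", "eth1")              # uplink to the physical network
ovs("add-port", "br0", "vnet0", "tag=100")  # VM tap port, tagged on VLAN 100
```

Unlike the Linux Bridge, the same OVS bridge can then be managed remotely over OpenFlow/OVSDB, which is what makes it a building block for SDN controllers such as OpenDaylight.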

To recap the thoughts above, Table 1 provides a quick glance at the options for virtual networking from multiple vendors as of today:

Table 1: Hypervisors and Choices in Virtual Switches

| Hypervisor | Native vSwitch | 3rd-party or Open Source vSwitch |
|------------|----------------|----------------------------------|
| vSphere | Standard vSwitch; Distributed Virtual Switch | Cisco Nexus 1000V; Cisco Application Virtual Switch; IBM DVS 5000V; HP Virtual Switch 5900V |
| Hyper-V | Native Hyper-V switching | Cisco Nexus 1000V; NEC; Broadcom |
| KVM | Linux Bridge (some distributions include OVS natively) | Cisco Nexus 1000V; OVS |
| Xen | OVS | OVS |

OVS is an open source project with multiple contributions from different vendors and individuals.

As an IT professional, whether you are running workloads on Red Hat KVM, Microsoft Hyper-V, or VMware vSphere, it is difficult to imagine not having a choice of virtual networking. For many customers, this choice still means using the hypervisor’s native vSwitch. For others, it is about having an open source alternative, like OVS. And in many other cases, having the option of selecting an enterprise-grade virtual switch has been key to increasing deployments of virtualization, since it enables consistent policies and network operations between virtual machines and bare-metal workloads.

As can be seen in the table above, the Cisco Nexus 1000V continues to be the industry’s only multi-hypervisor virtual switching solution that delivers enterprise-class functionality and features across vSphere, Hyper-V, and KVM. Currently, over 10,000 customers have selected the Cisco Nexus 1000V on vSphere, Hyper-V, or KVM (or a combination of them).

Cisco is fully committed to the Nexus 1000V for vSphere, Hyper-V, and KVM, and to the Application Virtual Switch (AVS) for Application Centric Infrastructure (ACI), in addition to our open source contributions to OVS. Cisco has a large R&D investment in virtual switching, with many talented engineers dedicated to this area, including those working on open source contributions.

The Nexus 1000V 3.0 release for vSphere is slated for general availability in August 2014. This release addresses the scale requirements of our growing customer base and adds an easy installation tool, the Cisco Virtual Switch Update Manager. The Cisco AVS for vSphere will bring the ACI policy framework to virtual servers. With ACI, customers will for the first time benefit from a true end-to-end virtual + physical infrastructure managed holistically to provide visibility and optimal performance for heterogeneous hypervisors and workloads (virtual or physical). These innovations and choices are enabled by the availability of open choices in virtual switching within hypervisors.

As we look forward to VMworld next month, we are excited to continue the collaborative work with platform vendors VMware, Microsoft, Red Hat, and Canonical, and with the open source community, to maintain and continue developing openness and choice for our customers. We are fully committed to this vision at Cisco.

Acknowledgement:  Juan Lage (@juanlage) contributed to this blog.

Open Source at The Large Hadron Collider and Data Gravity

I am delighted to announce a new Open Source cybergrant awarded to the Caltech team developing the ANSE project at the Large Hadron Collider. The project team, led by Caltech Professor Harvey Newman, will be further developing the world’s fastest data forwarding network with OpenDaylight. The LHC experiment is a collaboration of the world’s top universities and research institutions, and the network is designed and developed by the California Institute of Technology High Energy Physics department in partnership with CERN and the scientists in search of the Higgs boson, adding new dimensions to the meaning of “big data analytics”. This is the same project team that has set most if not all world records in data forwarding speeds over the last decade and is quickly approaching the remarkable 1 Tbps milestone.

Unique in its nature and remarkable in its discovery, the LHC experiment and its search for the elusive particle, the very thing that imparts mass to observable matter, is not only stretching the bleeding edge of physics; it also makes the observation that data behaves as if it has gravity too. With the exponential rise in data (2 billion billion bytes per day and growing!), services and applications are drawn to “it”. Moving data around is neither cheap nor trivial, as the back-of-the-envelope arithmetic below shows. Though advances in network bandwidth are in fact observed to be exponential (Nielsen’s Law), advances in compute are even faster (Moore’s Law), and advances in storage faster still. Thus the impedance mismatch between them forces us to feel and deal with the rising force of data gravity, a natural consequence of the laws of physics. Since not all data can be moved to the applications, nor moved to the core, nor captured in the cloud, the applications will be drawn to the data: a great opportunity for Fog computing, the natural evolution from cloud into the Internet of Things.
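
To see how strong that gravity is, here is the back-of-the-envelope arithmetic (my own illustration, using the two figures quoted above): even over a record-setting 1 Tbps link, a single day of data at that scale takes roughly half a year to move.

```python
# One day of LHC-scale data over a 1 Tbps link: transfer time.
DATA_PER_DAY_BYTES = 2e18   # "2 billion billion bytes per day"
LINK_BITS_PER_SEC = 1e12    # the ~1 Tbps milestone

seconds = DATA_PER_DAY_BYTES * 8 / LINK_BITS_PER_SEC
print(f"{seconds:,.0f} seconds = {seconds / 86400:.0f} days")
# -> 16,000,000 seconds, about 185 days, per day of data produced.
```

Moving the applications to the data, rather than the data to the applications, is not a preference; it is the only arithmetic that closes.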

Congratulations to the Caltech physicists, mathematicians, and computer scientists working on this exciting project. We look forward to learning from them and to the remarkable contributions flowing into Open Source, made possible by this cybergrant, so that everyone can benefit, not just the elusive search for gravity and dark matter. After all, there was a method to the madness of picking such OpenDaylight release names as Hydrogen and Helium. I wonder what comes next…

Thoughts on #OpenStack and Software-Defined Storage

May 14, 2014 at 6:18 am PST

This week has been the semi-annual OpenStack Summit in Atlanta, GA. In a rare occurrence, I’ve been able to attend, which has given me broad insight into a world of Open Source development I rarely get to see outside of interpersonal conversations with DevOps people. (If you’re not sure what OpenStack is, or what the difference is between it and OpenFlow, OpenDaylight, etc., you may want to read an earlier blog I wrote that explains it in plain English.)

On the first day of the conference there was an “Ask the Experts” session focused on storage. Since I’ve been trying to work my way into this world of programmability via my experience with storage and storage networking, I figured it would be an excellent place to start. Also, it was the first session of the conference.

During the course of the Q&A, John Griffith, the Project Technical Lead (PTL) of the Cinder project (Cinder is the core project within OpenStack that deals with block storage; a minimal sketch of what it does follows below), happened to mention that he believed Cinder represented software-defined storage as a practical application of the concept.
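
For context, here is a minimal sketch of what Cinder actually does: it exposes block storage (volumes) behind an API, independent of the backend that serves them. The endpoint, credentials, and names below are hypothetical, and the python-cinderclient bindings shown are just one common way to drive it.

```python
# Hypothetical example: create a 10 GB volume through Cinder's v2 API.
from cinderclient.v2 import client

# Username, password, project, and Keystone endpoint are placeholders.
cinder = client.Client("demo-user", "demo-password",
                       "demo-project", "http://controller:5000/v2.0")

volume = cinder.volumes.create(size=10, name="demo-volume")
print(volume.id, volume.status)  # once 'available', attach it to an instance
```

Whether an API like this on its own amounts to "software-defined storage" is exactly the question at issue below.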

I’m afraid I have to respectfully disagree. At least, I would hesitate to give it that kind of association yet.

Cisco, Linux Foundation, and OpenSSL

The recent OpenSSL Heartbleed vulnerability has shown that technology leaders must work together to secure the Internet’s critical infrastructure. That’s why Cisco is proud to be a founding supporter of the Linux Foundation initiative announced yesterday (April 24th).

The initiative will fund open source projects that are critical to core computing and Internet functions, and Cisco sees security technologies as a fundamental infrastructure component. The first project being considered for funding is OpenSSL. As a longtime contributor to and user of open source, we’ve offered code and intellectual property to enhance OpenSSL. We’ve also provided patches and testing results to help address vulnerabilities. Today’s announcement takes that commitment a step further.

We are pleased to help form a critical mass of governance, funding, and focus that will support the output of open source communities like OpenSSL. By working together as an industry, we can expect greater security, stability, and robustness for components that are critical to the Internet.

For more Cisco-specific information on the Heartbleed vulnerability, please visit our event response page and Security Advisory. You may also be interested in our April 23 webinar, “Heartbleed: Assessing and Mitigating Your Risk.”
