While change is a hallmark of the IT industry, the actual levers for change have remained fairly stable. Vendors were the initial agents of change, largely because they were the only ones with the critical mass of smart people, R&D, manufacturing, and service delivery to seed and then sustain a fledgling industry. Barriers to entry were a bit higher 30 years ago than they are today because the innovation was happening at the physical layer; we were still fighting over layer 1 and layer 2. The best thing that happened to this industry was the rapid emergence of standards developing organizations (SDOs) as the next arbiter of change. The action moved up the stack and networking exploded because protocols like Ethernet, TCP/IP, and BGP were standardized, creating a stable, level playing field that benefited everyone. Over the last few years, the open source movement has emerged as the latest lever for change in the industry. By democratizing the whole process of innovation, open hardware and software are giving rise to an astounding rate of change.
Now, there is many a VC pitch that hinges on painting Cisco as the ossified incumbent (trust me, I have seen a few), but the inconvenient reality is that we have been active contributors to the open networking initiatives that have emerged in the last few years, including ONF, OpenStack, OpenDaylight, and OPNFV. To that list, I am pleased to announce that we recently joined the Open Compute Project as a Gold member. The motivation behind our membership is similar to our involvement in the aforementioned open networking projects: we see the OCP community as an excellent forum to work with our customers to co-develop solutions to the challenges they face.
As you may know, OCP is structured into a number of projects (networking, server design, storage, etc.). While there are a number of areas where we could (and likely will) engage, our first project will be Networking (shocking, I know), where we feel we can make some useful contributions to the work already underway.
Beyond this, I do not have a whole lot more to share. To borrow a phrase from a friend of mine, the coin of the realm is code and specs. The work is just getting started for us, but expect to see some cool stuff in the near future.
Tags: network, OCP, open source
Voice and video communications over IP have become ubiquitous over the last decade, pervasive across desktop apps, mobile apps, IP phones, video conferencing endpoints, and more. One big barrier remains: users cannot collaborate directly from their web browsers without downloading cumbersome plugins for different applications. WebRTC – a set of extensions to HTML5 – can change that and enable collaboration from any browser. However, one of the major stumbling blocks to adoption of this technology is agreeing on a common codec for real-time video.
The Internet Engineering Task Force (IETF) and World Wide Web Consortium (W3C) have been working jointly to standardize on the right video codec for WebRTC. Cisco and many others have been strong proponents of the H.264 industry standard codec. In support of this, almost a year ago Cisco announced that we would open source our H.264 codec, providing both the source code and a binary module that can be downloaded for free from the Internet. Perhaps most importantly, we announced that we would not pass on our MPEG-LA licensing costs for this binary module, making it effectively free for applications to download the module and communicate with the millions of other H.264 devices. At that time, Mozilla announced its plans to add H.264 support to Firefox using OpenH264.
Since then, we’ve made enormous progress in delivering on that promise. We open sourced the code, set up a community and website to maintain it, delivered improvements and fixes, published the binary module, and made it available to all. This code has attracted a community of developers that has helped improve …
Tags: ericsson, firefox, H.264, html5, ietf, Mozilla, open source, OpenH264, video, W3C, WebRTC
Save Money Here and Now
When was the last time you won the lottery? If you are like me, it’s a pretty rare occasion indeed. The same probability applies to increasing the budget allocation for any business, and especially for service providers. What can service providers do to save money now, enabling them to invest in new services and boost revenues? Network functions virtualization (NFV) comes to the rescue, with help, of course, from software defined networking (SDN) and open source innovations.
SDN and NFV represent a significant change in networking as we currently know it. Together and separately, they target cost savings, reduced operational complexity, and network optimization – and both hold much promise for the operator. As with all things offering great potential rewards, one must balance these benefits against the associated risks when deploying them.
For service providers, the data center is a leading target for SDN and NFV deployments. Given all the activity focused on cloud computing, content delivery, and anything-as-a-service (XaaS) offerings, service provider data centers must advance across many fronts (security, automation, mobility, reliability, analytics, and provisioning) to be successful.
Interestingly, all operators …
Tags: business transformation, Cisco, data center, epn, esp, evolved programmable network, evolved services platform, network function virtualization, NFV, open source, SDN, Service Provider, software defined network
Service provider customers expect more. The pace of change around us is not just constant but continuing to accelerate. To stay competitive with the nimble new players in the market, service providers need to change how they engage all of their end customers. Not exactly an easy challenge to overcome, but rapid and successful business transformation will put operators right in the middle of a world of new opportunities to capture customer mindshare. Exciting times are ahead!
So, what will it take for service providers to save money on their current service offerings, enabling them to invest in and expand their businesses? Positive outcomes are made possible by an open, agile, and application-centric approach, applying emerging Software-Defined Networking (SDN), Network Functions Virtualization (NFV), and open API technologies … not just to the network, but to all of their business processes.
Faster creation of personalized services that are easy to consume is enabled by the Cisco Evolved Services Platform (ESP), which automates and provisions new services in real time at web speed. End customers can …
Tags: business transformation, Cisco, data center, epn, esp, evolved programmable network, evolved services platform, network function virtualization, network functions virtualization, NFV, open source, SDN, Service Provider, software defined network
In my Internet of Things keynote at LinuxCon 2014 in Chicago last week, I touched upon a new trend: the rise of a new kind of utility or service model, the so-called IoT-specific service provider model, or IoT SP for short.
I recently had a conversation with a team of physicists at the Large Hadron Collider at CERN. I told them they would be surprised to hear how computer scientists talk these days about Data Gravity. Programmers are notorious for overloading common words, adding connotations galore, and messing with meanings entrenched in our natural language.
We all laughed and then the conversation grew deeper:
- Big data is very difficult to move around: it takes energy, time, and bandwidth, and is hence expensive. And it is growing exponentially larger at the outer edge, with tens of billions of devices producing it at an ever faster rate, from an ever-increasing set of places on our planet and beyond.
- As a consequence of the laws of physics, we know we have an impedance mismatch between the core and the edge. I have coined this the Moore-Nielsen paradigm (described in my talk as well): data accumulates at the edge faster than the network can push it into the core.
- Therefore, big data accumulated at the edge will attract applications (little data, or procedural code); apps will move to the data, not the other way around, behaving as if the data has “gravity.”
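The mismatch described above can be made concrete with a little compound-growth arithmetic. The sketch below is illustrative only; the growth rates are assumptions, not measurements from the post: edge data generation doubling roughly every 18 months (Moore-like), and network bandwidth growing about 50% per year (Nielsen’s Law).

```python
# Illustrative sketch of the Moore-Nielsen mismatch.
# Assumed rates (hypothetical, for illustration): edge data doubles every
# 18 months; end-user bandwidth grows ~50%/year (Nielsen's Law).

def growth(initial, annual_factor, years):
    """Compound growth: value after `years` at `annual_factor` per year."""
    return initial * (annual_factor ** years)

data_factor = 2 ** (12 / 18)   # doubling every 18 months ~= 1.59x per year
bandwidth_factor = 1.5         # ~50% per year

# Start from 1.0 nominal unit/day of edge data and 1.0 unit/day of
# network capacity into the core; watch the backlog ratio widen.
for year in (0, 5, 10):
    data = growth(1.0, data_factor, year)
    bw = growth(1.0, bandwidth_factor, year)
    print(f"year {year:2d}: data {data:7.1f}  bandwidth {bw:7.1f}  "
          f"backlog ratio {data / bw:.2f}")
```

Because the data curve compounds faster than the bandwidth curve, the ratio of data produced to data that can be shipped to the core only grows over time, which is the argument for moving apps to the data rather than the reverse.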
Therefore, the notion of a very large centralized cloud controlling the massive rise of data spewing from tens of billions of connected devices runs against both the laws of physics and open source, not to mention the thirst for freedom (no vendor lock-in) and privacy (no data lock-in). The paradigm has shifted; we have entered the third big wave (after the mainframe’s decentralization to client-server, which in turn centralized to cloud): a move to a highly decentralized compute model, where intelligence shifts to the edge as apps come to the data, at much larger scale, machine to machine, with little or no human interface or intervention.
The age-old dilemma pops up again: do we go vertical (domain specific) or horizontal (application development or management platform)? The answer has to be based on necessity, not fashion; we have to do this well, so vertical domain knowledge is overriding. With the declining cost of computing, we finally have the technology to move to a much more scalable and empowering model: the new opportunity in our industry, the mega trend.
Very reminiscent of the early ’90s and the beginning of the ISP era, isn’t it? This time it is much more vertical, with deep domain knowledge: connected energy, connected manufacturing, connected cities, connected cars, connected home, safety and security. These innovation hubs all share something in common: an open and interconnected model, made easy by dramatically lower compute costs and the ubiquity of open source, which overcome the barriers to adoption, including the previously weak security and privacy models predicated on a central core. We can divide and conquer, dealing with data in motion differently than we deal with data at rest.
The so-called “wheel of computer science” has completed one revolution, just as its socio-economic observation predicted: the next generation has arrived, ready to help evolve or replace its aging predecessor. Which one, or which vertical, will it be first?
Tags: Big Data, big data analytics, CERN, cloud, Data Gravity, Fog computing, gravity, IoT, IoTSP, ISP, keynote, LHC, Linux, LinuxCon, M2M, Moore’s law, Nielsen's Law, open source, SP