
Last November, I had the honor of speaking on the auspicious occasion of the IETF’s 100th meeting (in Singapore).  Because the IETF is a standards development organization (SDO) with which I have both a deep history and a deep interest, I wanted to provoke, catalyze and hopefully accelerate discussion and thinking about the role of that organization in what can only be called a rapidly changing standards and open source landscape.  Though I was speaking specifically to the IETF community and the IESG, and in that context, I believe those thoughts apply broadly throughout our industry, whether in reference to any other standards body, open source community or foundation.

About three years ago, at IETF 91, I gave a presentation on the state of SDOs like the IETF and of open source networking communities, and on the industry trend of innovators (vendors, operators, entrepreneurs, developers), regardless of affiliation, coming together to form developer communities in the open.   At that time, I reflected on whether an SDO like the IETF would remain relevant in a rapidly expanding environment of Open Source Software (OSS) projects.  I made outrageous claims about the relationship between Open Standards and Open Source that were proven only with emphatic assertion. Summarized: developers in the open communities are setting the pace and trajectory of the industry, not the publication of paper standards.  Honestly, it was the beginning of the era. Networking-related open source communities had just formed and established themselves. The early days of software defined networking (SDN) and network function virtualization (NFV) had finally moved to a post-hype phase, and protocol and product work were well underway.

As a way to carry forward the decisions from IETF 91, Jari Arkko (previous IETF chair) and Cisco started some experiments and serious work that we hoped would accelerate necessary change.  These were funded by Cisco and supported by the IESG and Secretariat at the IETF. Cisco then spread these experiments to other SDOs: MEF, BBF, ITU, etc. Basically, we supported hackathons, or in IETF speak “running code”, that represented what was being developed as new standards. We focused on the forwarding plane, telemetry, SDN controllers, orchestration platforms, YANG models and tooling (more later), which really catalyzed the model-driven networking industry tsunami.  The IETF partially adopted the cultural shift, while the MEF made the transformation to fully include standards and open source as a mission.

Three years on, the “turn” toward OSS partnership and development that was instigated then has had enough time for us to reflect and assess its impact.  We need to review any progress that has been made, evaluate successes and failures, and chart the path forward.  Also, it’s always good to test one’s hypothesis that Open Standards and Open Source should close the loop of their dependencies.

I had set some personal goals in the interim that I hoped could inform this assessment. I wanted to try to answer “(How) can we move the industry through the infrastructure phase faster by producing developer communities, code and standards more efficiently?”, because the networking and infrastructure industry and operators were stalling on both adoption and deployment of new technology.  Plus, as we all know, all the cool stuff comes after the necessary, but not sufficient, infrastructure phase.

To do this, I felt we had to run a number of different structures, funding models, and community development projects. Many of these communities were formed at the Linux Foundation (ODL, SNAS, PNDA, FD.io), along with many, many meetups and hackathons at the SDOs.

My own assessment from these experiences (some of which I will detail later) is that there is still high potential for an SDO like the IETF to engage with OSS communities, particularly those formed around networking, and to standardize many items to guarantee interoperability and correctness.  On the flip side, as an SDO, the IETF needs to do more to keep up and to change its open-door policy to one that actively seeks out developer communities in order to standardize and document industry de facto technologies.

My real bottom line here is that innovators can’t go faster than their customers, and customers can’t go faster than their own understanding of the technology and its integration, deployment and operational considerations. And we need to reduce the fracturing of the industry because, in this interim period, a technology landscape has evolved that is littered with “Stacks”, “Controllers”, and “Virtual Fubars”.

On the “foundation” side, it has turned into a situation where every newly forming community felt it needed its own foundation, misunderstanding the function of a board and the application of money raised. Many of the new communities we formed were cost-free, with no (mega) boards; some had 501(c) foundations; some focused not only on code but on guarantees of code performance and scale (spending foundation dues on infrastructure and test platforms). Back in 2012, when I started in open source developer community building with a focus on contribution of code, I was completely and utterly green. Several years later, I can claim a lot of scars and experience, having met and learned from a lot of very smart people, and I can claim that code is the answer and that a healthy community is built, not launched. The exercise reified an understanding earned over many years as a developer and standards person: the best standards come from running code.

I certainly expect some community aggregation (e.g. the Linux Foundation Networking umbrella) and larger groups of industry players (vendors, operators, entrepreneurs, etc.) forming aggregate projects from the start. And I see old and new SDOs now working toward trajectories that include OSS and direct contributions. The industry has moved as well, to a position where OSS == SDO, stacks have a new DIY cycle, and career paths forged in OSS are now part of job satisfaction.

In looking at our current state, I am convinced that APIs, platforms and frameworks WILL be the future standards front for software driven network architectures.  The same standardization reasoning applies to these higher-level concepts as did to our original protocol specification efforts – consistent system design, interoperability, and choice.  And the “Collaborative Loop” I described at IETF 91 (Figure 1) requires (minimally) tooling built to keep standards and their dependencies in a revision control system (RCS) that is open and accepts contributions.

Figure 1 The Collaborative Loop proposal from IETF91

But these convictions come with some caveats: we still need to discuss and agree on IF and WHICH SDO could or should take on different pieces of our industry landscape and attempt to standardize technology while not being involved in the production of the code, or whether standardization processes and open source development are unfortunately mutually exclusive.   Further, the standards consensus model and the OSS consensus model may not be mutually exclusive, but we haven’t found a way to make them even tangential (yet). Thankfully the discussion is beginning, although slow going.

Let’s catch up on what happened that leads me to these new and potentially outrageous claims.

In the interim between the Collaborative Loop talks, new trajectories for standards production and open source communities have formed, primarily on the Open Source (OSS) side of the coin. The Linux Foundation published “Harmonizing Open Source and Standards in a Telecom World,” including a number of high-level recommendations on communications, cooperation and joint activities, driven in large part by a software-defined operator concept.   Nothing too detailed was in the document, just quite a bit on communication, which is of course necessary but not sufficient. More importantly, the LF evolved and recently began to consolidate the many developer communities contained within the larger Foundation, to reduce the time, effort, number of summits and costs associated with multiple project-level foundations as they pertain to networking.   This movement underscores one of the potential problems in further partnering between SDOs and OSS projects – projects were and still are “begetting” biblically. Communities and projects are also aggregating, changing, morphing, fading away and everything in between. In fact, OSS foundational structure is proving itself to be self-organizing, optimizing, replicating; IT’S ALIVE!

In the same timeframe, the IETF made some changes, generating both a “living standards process” through Benoit Claise’s YANGCATALOG project and discussion around referencing OSS in its drafts and RFCs.

The “normative reference” discussion (in sum, whether a standard should be able to refer to an open source development effort) remains unresolved at this point, primarily because of fundamental differences in the parties’ orientation around documentation.  For an OSS community, the code quite often is the documentation.  The quality and type of what an SDO may consider documentation is community dependent.  It is also unclear how projects will be picked as viable for reference in a standard, how to measure the health and longevity of prospective communities, and what aspects of the projects are to be referenced (entire project, subproject, schemas, specific implementations, code versions, libraries, …, ???).

A continued speed mismatch between the organizations persists. Code moves as fast as a developer community can produce it, with the mainline of a project frequently led by the project’s committers. The IETF works as fast as someone writes text, and that text is debated at three meetings a year or on email lists. In the end, only a document is produced. Often only after the specification is published do implementations (potentially) follow, and products get created, deployed and operationalized. Then the cycle returns to the top with an update of the spec. This can be a very, very long feedback cycle. Rarely can a standards body set the trajectory for industry uptake of a technology by publishing a spec and then hoping the industry will follow. The SDO (the IETF in this case) runs the risk of either becoming a “scribe” for existing code or missing an industry shift if it cannot answer these questions and move closer in operation and process to OSS.

Figure 2 YANGCATALOG.org cross SDO/OSS/Vendor dependency map

The YANGCATALOG (Figure 2) project has to be considered one of the IETF’s most successful experiments with a live, open organization and OSS tooling.  What has been accomplished is the creation of a live tree of dependent modules that together (though worked on independently) create the new way to operate a software driven network.   Notably, a new “customer” focus has developed in which other SDOs (e.g. MEF, IEEE, BBF) and OSS projects (e.g. OpenConfig) use tooling that simplifies both the education and the development process. The dependency map can be used not only to show the relationships among the SDOs and developer communities, but also the use of the technology in different architectural stacks (Figure 3), which takes the LF Networking umbrella view of the architecture with red outlines superimposed (by me) on the projects that use YANG data modeling.
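To make the dependency idea concrete, here is a minimal, hypothetical sketch of how a catalog might derive a cross-module dependency graph from YANG `import` statements. The module names, the regex-based parsing and the `dependency_graph` helper are all my own illustration, not YANGCATALOG’s actual implementation:

```python
import re

# Toy YANG fragments for illustration only (not real IETF modules).
SAMPLE_MODULES = {
    "example-interfaces": """
        module example-interfaces {
          namespace "urn:example:interfaces";
          prefix exif;
        }
    """,
    "example-telemetry": """
        module example-telemetry {
          namespace "urn:example:telemetry";
          prefix extel;
          import example-interfaces { prefix exif; }
        }
    """,
}

# Match "import <module-name> {" in YANG source.
IMPORT_RE = re.compile(r"\bimport\s+([\w.-]+)\s*\{")

def dependency_graph(modules):
    """Map each module name to the set of modules it imports."""
    return {name: set(IMPORT_RE.findall(text)) for name, text in modules.items()}

graph = dependency_graph(SAMPLE_MODULES)
print(graph["example-telemetry"])  # {'example-interfaces'}
```

A real catalog would of course use a proper YANG parser (e.g. pyang) and track module revisions, but even this toy graph shows how independently maintained modules knit together into the kind of live dependency tree pictured in Figure 2.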

Figure 3 Dependency map across SDOs applied to an architecture stack

At this point, I’ve gotten completely drunk on my own Kool-Aid and believe that the way to understand the impact of a technology (specific to the IETF) is not through the RFC ID but through the product of the RFC.  And the product of that RFC is represented by a multi-SDO, multi-open-source-community dependency graph.

But the success of YANGCATALOG and the potential contribution it might make to the path forward in bridging SDO to OSS makes me wonder how we carry the experiment forward.  Is the IETF able to maintain the tools or is this the catalyst to build an open source community and/or non-profit foundation?  Will the potential process changes YANGCATALOG represents toward live models across communities change the funding model?

Beyond YANGCATALOG, the IETF can look to its Hackathons as another successful experiment, for different reasons.  These events, organized by Charles Eckel, have been driving renewed interest in the organization.  As I pointed out in my talk, while the attendance trend line for IETF meetings is generally slightly decreasing, Hackathon attendance is noticeably “up and to the right”, and the share of hack attendees who attend only for the Hackathon (and not the working group meetings or plenary) is also noticeably increasing.  For many Hackathon attendees, this is their first IETF (Figure 4).

Figure 4 IETF General Attendance (left) and Hackathon Attendance (right).

As with the YANGCATALOG experiment, IETF Hackathon success is tempered by questions about funding going forward, as the Hackathons transition from an unsustainable, privately funded experiment to an organizational norm. Can the Hackathon, or more importantly open source communities (e.g. FD.io, SNAS, PNDA, ODL, ONAP, OPNFV, K8s), become the experimentation and development “running codebases” of the next generation of standards? Do other standards professionals remember or realize that the best standards are produced in conjunction with code?

It’s important to look at changes in expectations in the IETF’s customer base as well.  There we see a growing movement toward operational simplification and a willingness to experiment.  Operators are attracted to OSS because of a desire to learn, to become more directly involved in the processes affecting their businesses, and for the attractive, fast, iterative model of consensus it provides. They can also develop desired functionality, fix bugs and, in the end, produce and deploy products. Most importantly, they can modify the code as necessary to fit their operational environments and integrate multiple technologies together as their own intellectual property. Thankfully Alissa Cooper (Chair of the IETF) appears to fully understand the new context of SDOs, and of course running code, and is attempting to make several cultural, organizational and process changes at the IETF.

Outside of the IETF, other SDO organizations have struggled with accommodating the desires of their target customers and making their own turn toward new models of SDO/OSS partnership.  This has led to a noticeable fracturing of the industry as SDOs fight for relevancy and (in some cases) “defend their territory”.  In general, there have been more failures than successes with numerous lessons we can apply to the IETF as it considers IASA 2.0.

Notably, the MEF has successfully reinvented itself over the last few years, moving up the stack from its Carrier Ethernet service definitions to L3-L7 services via the LSO architecture and APIs, delivered through its own OSS projects (OpenLSO and OpenCS), and by formalizing lateral partnerships with the ONF, TMForum and IETF (via YANGCATALOG and the live data model mechanism).  It also has partnerships with the external ONAP OSS project.  This movement is leadership driven, through the MEF Office of the CTO.  Like the IETF, the MEF has run successful Hackathons (with Charles’ help), most often using tools from the same set of complementary OSS projects that we see at the IETF hacks. It has also invested in the infrastructure to support these activities (MEFnet). This latter piece is critical to creating “live standards”: it provides not only the compute, networking infrastructure and storage for the projects, but also a home for the tools and repos of funded code and developer communities that are critical to the MEF.

There is some potential for specification/certification partnerships between an SDO and OSS communities, but the recipe is complex.  What makes the 3GPP/GCF certification partnership work may not apply to other pairings and may not extend to the world of 5G. Is relegating an SDO to being a testing and certification body “enough” for the body to survive? In an organization like the IETF without membership or dues, can the body afford the infrastructure necessary?

The trend toward reference design driven open hardware projects may offer yet another potential model that moves beyond that of loosely coupled certification and test.  The success here is coupled to the reference designs that spark the growth of ecosystems and promote interoperability.  These fast-moving projects can be either consolidated (e.g. TIP) or scattered and independent (e.g. OpenROADM).  Outside of the model itself, these groups form another layer of dependency and interrelationship that need to be tracked.

Other potentially positive models can be seen evolving from the OSS side, perhaps embodied best by the Open Connectivity Foundation, which combines focus with a variety of techniques and processes that address some of the questions we’ve tripped over in our IETF experiments and echo some of the successes: it has clear standards partnerships, publishes reference designs, offers certification, has an OSS community, sponsors a repository of live data models with dependency tracking (like YANGCATALOG), and is membership funded for these activities. The “pulling out all the stops” approach may or may not work, but the point is that these are the recognizable activities of some of the most successful communities. As an industry, it’s a good thing to see the experimentation.

Failure scenarios in the SDO/OSS partnership model are numerous. Many of the pitfalls that the IETF and others need to avoid in order to survive relate back to my earlier observation that API specification without involvement in code generation (what I would label “blind” specification) is the road to irrelevance.  The outcomes here are generally that either an OSS partner becomes the recognized authority on the standard, or the resulting APIs are untestable, non-certifiable and unusable.  A potentially worse outcome awaits SDOs claiming standardization responsibility for technology that has already been “de facto” standardized by an OSS community. This makes the SDO (in the best case) a scribe. I cited two variants in my talk:

  • The historic example of the JCP and Apache, where the reputation of JCP specs became so bad that the community would ignore them until Apache (open source implementations of JCP specifications) had fixed the problems.
  • A current example in the ETSI/OPNFV relationship, where “blind” specification led to the inability to certify or test (OPNFV doesn’t certify against the ETSI NFV framework per se, as the specification is too weak, and has opted largely for its own set of tests).

The OSS world has progressed not only in volume but also in purpose.  While the original ethos of “open source” is embodied by the numerous communities of at least eight major foundations (relevant to the work of SDOs like the IETF) and thousands of unaffiliated projects on GitHub, massive organizations have begun using open source as a market-moving force and strategic tool.  Successful integration into this new environment requires SDOs to develop the competencies, culture, community and outreach suitable for interacting with the entire spectrum of the OSS environment.  Outreach is one of the most important roles in OSS, and any SDO trying to make the transition into an OSS-relevant role (like partnership) will need to re-evaluate its liaison mechanisms, which are generally inadequate for this task.

Overhanging all of these scenarios are IPR considerations.  In most OSS projects, the only IPR terms are the OSS license.  For some major projects, IP is governed with additional terms like a Contributor License Agreement (CLA) or formal IP Policy. These are foreign to the RAND (Reasonable And Non-Discriminatory) policies currently used in SDOs.

These RAND policies, combined with the SDO concentration on producing the best technical solutions, may ultimately produce the worst business solutions.  This is particularly true when the SDO effort(s) become disjoint from a rapidly changing legal and business environment, as witnessed in the MPEG standards’ evolution and their replacement by AOM and open source solutions.

The OSS community expects that “open source” means software usable by anyone, for free, without additional license.  The standards world is beginning to see value in OSS methods, and some projects are now pushing for copyright-only “open source” projects that will require a RAND patent royalty to use. This is highly controversial within the OSS community and most SDOs; it fundamentally redefines open source and induces users to incur liability. The recent problems surrounding Facebook’s React.js patent rider highlight why alignment is key.  While this is a scene for later in our movie, undoubtedly populated with lawyers, coming to terms with these divergent IPR models CAN be a major stumbling block to progress.

Figure 5 Potential outcome of inaction – SDO aggregation

While all of this is a lot to consider, not moving forward is the worst possible choice.  To NOT move might not lead to irrelevance, but it can relegate SDOs to niche roles or lead to an SDO aggregation similar to the one we saw in the early 2000s, when many X-Forums came together (Figure 5).  Aggregation addresses the same problems that plague the “begetting” of OSS projects in our earlier example, leading to the same sort of consolidation we see within the Linux Foundation networking project umbrella – small communities of folks have to come together in a bigger meeting because their own individual efforts can’t be afforded. The industry got high on building a foundation around every open source effort and learned that the weight wasn’t necessary or helpful.

Figure 6 Moving Forward

I’m convinced that an industry leader of the future is one who understands that there are multiple communities that need to be influenced and that there are interdependencies between these pieces, and who is able to work at the code level while understanding the structure and the need to standardize pieces. That leader understands that the output of their technology work is setting the trajectory of the industry, and sees the picture of how the relationships among all of these organizations work.

We are now at a very critical turn in our navigation forward.  In the IETF (and in some other SDOs) we have evolved the SDO role past “paper pusher” to “tinkerer/builder”, but we haven’t fully figured out how to get to the new roles: community-based coder, industry and ad hoc standard creator, industry catalyst and multi-community influencer (Figure 6). Benoit has shown what one individual can do with a clear goal of unifying the industry (SDOs and dev communities) on YANG schemas. The next chapter of that effort will be around the schemas for telemetry.

To get there we first have to choose “vigorous relevance”, which will entail changing our processes and thinking.   We need to identify emerging problems, start open source projects, and work with other related SDOs and applicable communities to write code that solves and standardizes solutions.  The SPIFFE/SPIRE project is a very recent example of the industry (yet again) missing the opportunity to lead in an area in which an SDO like the IETF arguably has some expertise, but where the development community doesn’t want to be slowed down.

At the same time, we need to remain balanced and accept our roles as technologists – that what we are creating has a very specific role in a larger ecosystem.  What gets me excited as an engineer is getting my technology out there and being used in multiple new and different ways.

YANGCATALOG and M2M are models for how we may organize the fracturing of the industry and stop the Brownian motion between multiple SDOs and OSS projects.  We need metadata to enrich those models. If there is going to be a relationship between SDO and OSS, what makes a good standard and what makes a good OSS project both need to be understood … perhaps independently.

Eventing, telemetry, APIs and protocols (in working code) need to be our products.  The IETF (and every SDO) needs to change its process to center on these products of the specifications (RFCs) and NOT on the specifications themselves (RFC IDs).

As someone who has championed the privately funded phase of our evolution (these tools and other contributions, organizational and cost structures), I’m personally looking for a way to partner with the IETF, ISOC and OSS communities to bring this tooling forward. An experiment isn’t a sustainable strategy; it’s just someone sticking their neck out, willing to see if something might succeed or be of value. Hopefully, it catalyzes many more.



Authors

David Ward

Senior Vice President

Chief Architect & CTO of Engineering