
As the IETF (Internet Engineering Task Force) meets in Hawaii (IETF 91), the unavoidable question for both participants and observers is whether a Standards Development Organization (SDO) like the IETF is relevant in a rapidly expanding environment of Open Source Software (OSS) projects.

For those new to the conversation, the open question is NOT whether SDOs should exist.  They are a political reality inexorably tied to trade policies and international relationships.  The fundamental reason behind their existence is to avoid a communications Tower of Babel (and the resulting economic consequences) and to establish governance over the use of global commercial and information infrastructure (not just acceptable behavior, but also the management of resources like addressing).  Rather, the question is about their role going forward in enabling innovation.

SDO Challenges

SDOs (like the IETF) have to evolve their processes to keep pace with the technological landscape and its development models if they are to remain relevant.

Software has come to dominate what we perceive as “the Internet”, and the “agile” development model has created a sharp knee in the rate of innovation over the past couple of years … innovation that NEEDS standardization.  While code is the “coin of the realm” in OSS projects, code is not “normative”.

While it’s important to have SDOs and consensus-based standards, SDOs need to realize that OSS cycle times can create a market-based consensus that fills a standards void (and this realization may be the key to our collective futures).

The impedance mismatch between SDOs and OSS is at least 2:1 (two years to a paper standard versus one year to a product that creates a de facto standard).

Globally, the multiple extant SDOs appear incapable of defining and maintaining their boundaries, and new-technology study groups are exploding across them.  Every organization is potentially (and dangerously) self-perpetuating, and few SDOs have a life-cycle plan that bounds their authority and scope as it applies to new technologies.  Choose any new area of technical endeavor – cloud, SDN (Software Defined Networks), NFV (Network Functions Virtualization), IoT, “APIs” – and you will find a ready example.

The maneuvering by the Metro Ethernet Forum (MEF) – best known for its narrow focus on Carrier Ethernet service definition – to reposition itself around “Lifecycle Service Orchestration” (through the endpoint/interface/API definition methodology), and the recently rebuffed move by the ITU-T to expand its public-policy power, are very recent signs of the universal SDO struggle for relevance and the potential for expansion of scope.

Tellingly, both proposals reach the same conclusion: transparency and collaboration are currently lacking.  Real coordination between SDOs is not readily detectable (it may exist in name – liaison – but that does not stop massive redundancy).  This behavior dilutes the efforts and resources of participating companies and individuals and creates confusion for consumers of these technologies.

Locally, within the IETF, we face numerous issues around our own life cycle.   How much of our time are we spending on further standardization of established technology at the expense of more pertinent and relevant working groups?  How do we handle issues, technologies and new architectures that span our existing structure when they arise (e.g. the recent YANG model explosion across working groups)?  What does the subject matter of popular, network-centric OSS projects imply might be missing at the IETF?

Most importantly, how do we give startup companies (new vendors) and new, invested consumers a feeling that they have a voice (and avoid the appearance of an aristocracy versus a meritocracy)?

Conway’s Law – “organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations”

To an outsider (and even some insiders), the recent reorganization of the working groups has the appearance of “shuffling the deck chairs.”  It doesn’t change our process.

Certainly, Conway’s Law applies here.

Without more fundamental structural change, we can only expect more of the same process.  The world shouldn’t wait two years for a Service Function Chaining standard, or even longer for Network Virtualization Overlays (or NFV in general – which is more of an ETSI problem, since the IETF did not take it up at all).

Open Source

While there is a lot to say about the challenges both the global SDO community and the IETF face, there are also real risks in the prospect of OSS communities running away with the standardization mantle.

In short, the danger to us all is the co-opting of open source through a lack of governance.  Open source software projects with poor governance risk multiple, equally bad fates.

Like the confusion stemming from the uncontrolled overlap of standards from multiple SDOs, OSS projects that overlap can also create confusion.  Competition can be both unintentional (differences in technical opinion) and purposeful (vendor “freeware” offered as open source with no real community diversity to offer support alternatives, complementary products or hooks to other projects).  The result can be multiple small communities that are underfunded or understaffed monocultures, dominated by a single party.

  • Good and impartial third-party governance helps avoid the creation of overlapping, non-diverse and confusing projects.

OSS projects that don’t connect to form larger architectures can also create fragmentation.  Fragmentation results when multiple projects each deliver 20% of an overall solution but cannot be used together – frustrating any progress and interfering with higher-level innovation.

  • Good governance creates a community that considers both the upstream and downstream connectivity of a project.

Security flaws can result when a project has a weak security focus – often the result of critical technology with too few reviewers and maintainers.  This manifested recently in OpenSSL (Heartbleed) and is now being addressed through the Linux Foundation’s Core Infrastructure Initiative (for OpenSSL, OpenSSH and NTPd).

  • Good governance establishes an effective development process – not only for new contributions but also for maintenance.

Proper governance also provides the essential business, legal, management and strategic processes that ensure proper ownership and licensing of contributions, release management and open community involvement.   Excellent examples exist in the foundations: the Linux Foundation, the Apache Software Foundation and the OpenStack Foundation.

Alternative SDO Model

While there have been several SDO proposals to subsume and standardize network-centric architectures developed in OSS (via the endpoint/interface/API definition exercise), the Open Networking Foundation (ONF) is an early example of a different, hybrid model.  The ONF started by attempting to bridge both worlds – using the word “standard” to describe its wire protocol and “open” to describe its architecture.

While the protocol has evolved through numerous (sometimes backward-incompatible) specifications, the organization moved very quickly into advocacy and market development for the protocol, the architecture and the OpenFlow controller – the latter activity not normally associated with a traditional SDO.

While open source controllers and switches were available, most were developed outside the ONF by individual interest groups, and the ONF provided no “reference implementation” of its own.

For some time the ONF didn’t express a broader vision (beyond “OpenFlow is SDN”), a controller architecture or a northbound API (the latter a necessary component of an “open” OpenFlow).  The organization has changed over time and is now working to broaden its scope.

This gap was eventually filled by the Linux Foundation OpenDaylight Project (ODL), which envisioned the SDN architecture as not only polyglot in protocol but also one whose efficacy was bound to open APIs and a modular framework.

Additional opportunities for cooperation and collaboration around NETCONF and YANG have been missed by the ONF community.  NETCONF was the original switch configuration protocol for OpenFlow, but expansions of the protocol to configure local features brought into question the ownership of, and authority over, the models used – which many participants thought resided in the IETF.  The result has been the publication of overlapping specifications.
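
To make the NETCONF piece concrete, here is a minimal sketch (using the open source ncclient Python library) of pushing configuration to a device over NETCONF.  The host, credentials and configuration payload are illustrative placeholders, not drawn from any ONF or IETF specification; in practice the XML would be generated from a YANG model.

    # A minimal sketch: pushing configuration over NETCONF with ncclient.
    # The host, credentials and payload below are illustrative placeholders.
    from ncclient import manager

    # An XML config fragment; in practice this is derived from a YANG model.
    # The "urn:example:interfaces" namespace is hypothetical.
    CONFIG = """
    <config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
      <interfaces xmlns="urn:example:interfaces">
        <interface>
          <name>eth0</name>
          <mtu>1500</mtu>
        </interface>
      </interfaces>
    </config>
    """

    with manager.connect(host="198.51.100.1", port=830,
                         username="admin", password="admin",
                         hostkey_verify=False) as m:
        m.edit_config(target="running", config=CONFIG)  # push the change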

The ONF may also have missed the opportunity to adopt the common YANG models developed for the ODL controller, which included OpenFlow functionality.  Though the ODL models (along with API generation tooling) were publicly available, the ONF NBI workgroup chose UML as its modeling language, from which it will generate its own controller northbound API.
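
To illustrate what that API generation tooling means in practice, here is a minimal sketch of consuming ODL’s YANG-derived northbound API via RESTCONF, using the Python requests library.  The port, credentials and JSON layout reflect Helium-era defaults and are assumptions that may differ per deployment.

    # A minimal sketch: reading the ODL inventory through RESTCONF.
    # Port 8181 and admin/admin are Helium-era defaults (assumptions).
    import requests

    ODL = "http://localhost:8181/restconf"

    # The URL path is derived mechanically from the opendaylight-inventory
    # YANG model - the model itself defines the API.
    resp = requests.get(ODL + "/operational/opendaylight-inventory:nodes",
                        auth=("admin", "admin"),
                        headers={"Accept": "application/json"})
    resp.raise_for_status()

    for node in resp.json().get("nodes", {}).get("node", []):
        print(node["id"])  # e.g. "openflow:1" for a connected OpenFlow switch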

The important lessons from the ONF experience are fundamental to both SDOs and OSS: a marketplace is made through the openness of a solution framework, and collaboration (rather than ownership) avoids fragmentation and confusion.   Unlike in an SDO, marketing does have a place in OSS projects, but it should be focused primarily on community building and engagement.

Why Open API and Framework Standards Are Important

Many of the emerging OSS projects provide broadly scoped (and connected) solution architectures.  It’s also important that we discuss the role of SDOs like the IETF in making these new architectures’ connective tissue “normative”, so that we ensure the “functional interoperability” that some fear may diminish in this environment (see https://tools.ietf.org/html/draft-opsawg-operators-ietf-00, posted by members of The Internet Society).

The future “standards” in a software-driven network WILL be in the form of APIs and application/service frameworks.  All the same reasons why the underlying IP protocols are standardized – interoperability, choice and system design – apply to these higher-level constructs.

Standardization is necessary to vanquish the myth that a future integrating a large amount of OSS is a future in which all software and solutions are “free” and the only viable economic model for a vendor is solely to support OSS.

On the contrary, properly designed, open and standardized frameworks, protocols, state machines, eventing, etc., allow vendors to provide intellectual property in a modular (and, if need be, replaceable) manner.  There will certainly be community-supported OSS components within developing solutions, but through standardization the incentives for innovation remain for both established and startup vendors.

In this way, vendor support of OSS becomes rational and credible – as does its consumption in the operator community.
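
To illustrate the modularity argument with a deliberately simple (and entirely hypothetical) sketch: if the service interface is the standardized, normative artifact, then community and vendor implementations become interchangeable modules behind it.  None of the names below come from any real project.

    # A hypothetical sketch: the interface is the standard; implementations
    # are replaceable modules. All names here are illustrative.
    from abc import ABC, abstractmethod

    class PathComputationService(ABC):
        """The standardized (normative) contract consumers code against."""

        @abstractmethod
        def compute_path(self, src: str, dst: str) -> list:
            ...

    class CommunityImplementation(PathComputationService):
        def compute_path(self, src, dst):
            return [src, dst]  # a naive, community-supported baseline

    class VendorImplementation(PathComputationService):
        def compute_path(self, src, dst):
            # Proprietary optimization hidden behind the standard interface.
            return [src, "vendor-optimized-hop", dst]

    # Either implementation satisfies the same contract, so operators can
    # swap one for the other without touching the rest of the solution.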

The OpenStack project is an excellent example of an overall architecture that not only interacts with other OSS projects (SDN and NFV) and encourages vendors to collaborate on its root software architecture, but also allows them to develop plug-ins that address their own (proprietary) hardware and software components as part of an overall solution (ostensibly, cloud orchestration).  The project does have issues with fragmentation and a lack of strong standardization – though the latter was never the organization’s goal – and the outcome of that combination may be predictable.
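
As a concrete (and hedged) illustration of that plug-in model, here is a minimal sketch of a Neutron ML2 mechanism driver stub of the sort a vendor might write to wire proprietary hardware into OpenStack networking.  It assumes the Juno-era ML2 MechanismDriver interface; the vendor class and backend are hypothetical.

    # A minimal sketch of an OpenStack Neutron ML2 mechanism driver.
    # Assumes the Juno-era ML2 interface; the vendor backend is hypothetical.
    from neutron.plugins.ml2 import driver_api

    class ExampleVendorBackend(object):
        """Stand-in for a proprietary switch management plane."""
        def provision_port(self, port_id, network_id):
            print("provisioning port %s on network %s" % (port_id, network_id))

    class ExampleVendorMechanismDriver(driver_api.MechanismDriver):
        """Programs the (hypothetical) vendor switch as Neutron ports change."""

        def initialize(self):
            self.backend = ExampleVendorBackend()

        def create_port_postcommit(self, context):
            # Called after Neutron commits the port to its database; mirror
            # the same state down to the proprietary device.
            port = context.current
            self.backend.provision_port(port["id"], port["network_id"])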

Open Loop

In spite of political or economic mandates for existence, the right of any SDO to be an “authority” has to be earned.

The IETF is arguably the most appropriately focused of the SDOs to engage in standardizing the software-driven network.  The IETF is neither too broad (e.g. it is not involved with health and safety or environment and climate change) nor too narrow (e.g. a single service or network domain), and the IETF’s experience with architecture definition, protocol development and information/data modeling (YANG) overlaps well with the interests and outputs of network-centric OSS projects.

So, how do we make the IETF relevant in this environment (how does the IETF become the SDO authority for the “new” things that IT professionals and operators need)?

To start, I propose that the IETF consider reform and restructuring for a more agile process.  Kill off what should be dead and make room for new work. Fail fast – more BOFs leading to more successful, relevant working groups that have shorter lifespans, less paper to wade through and more tangible outputs.  Enable new working groups to proceed with technical work in parallel with the Framework, Architecture, Requirements and Use-Case drafts that have bogged down so many people for so long. Cut the cycle time for everything (“rough consensus” shouldn’t take two or more years – that sounds more like the timeframe for “parliamentary procedure”).

Emphasize software development more in the IETF structure.  Encourage interoperability and function demonstrations all the time.  “Running code” used to be part of the IETF mantra, but “running code LATER” is not “agile”.  We seem to be suffering from premature standardization. There was a well-received demo of multi-vendor interoperability of Segment Routing for IPv6 at IETF 90.  Why is there no beer-and-gear demonstration session at every IETF?

Engage in even more research (already a strength) to attract a broader range of participants.

And (as other SDOs have concluded) fix the liaison process – because it will be critical to collaboration with OSS projects.  In fact, don’t even use the liaison process as a model.

Most importantly, I propose that we embrace Open Source projects.   The heart of this effort will require the establishment of an “open loop” engagement between the IESG and reputable OSS foundations on productive and compatible projects.  A good example of such a compatible and properly governed project is the OpenDaylight Project (Linux Foundation), which is driving the use of YANG models into the IETF as well as other open source projects.

Through research and the monitoring of OSS projects, we can invest our energy actively in emerging technologies instead of waiting for them to show up on our doorstep.

A feedback loop between the parties will identify which areas of new and existing projects need to be standardized, which should use existing standards, and which are out of compliance with standards. Experience shows that writing code before standardizing produces the most complete and simple protocol definitions.

Part of this process will require the IETF to resist the need to OWN or copy everything into IETF working groups (surely some of us have heard of a “pointer”).  We need to understand that SDO geeks and OSS geeks are not the same.  Paradoxically (for the work we have to do going forward), code is not “normative”, but it’s also hard to define and standardize APIs if you’re not writing code.  Forcing the integration of skills (and purpose) changes a community, with potentially bad results.  Collaboration, interaction and the exchange of ideas are a better model for all of us.

Note that collaboration will not work if the SDO cycle creates unnecessary drag on the OSS partner.

Finally, we must adapt and adopt new laws (and avoid Conway):

Law of Open Source: the quality and strength of a project is 100% dependent on the interests, energy and capability of its developer community

Law of Open Standards: the importance, validity and timeliness of a specification is 100% dependent on the interests, energy and compromises of the individuals who have been empowered to manage, organize and complete the work effort of the SDO

Realizing that these “Laws” exist, we have to further understand that the roles of OSS and SDOs need to change.  We need to set a new trajectory, move faster and focus on building a bigger and better Internet.

If you missed Friday’s Host Speaker Series talk on the relationship between Open Standards and Open Source, presented by Dave Ward, Senior Vice President, Chief Architect & CTO – Engineering at Cisco Systems, the session materials and a video recording are now online.

Slides: http://www.ietf.org/meeting/91/2014.11.13_DWard-IETF-91.pdf
Meetecho recording: http://recordings.conf.meetecho.com/Playout/watch.jsp?recording=IETF91_SPEAKER_SERIES&chapter=LUNCH&t=1024

Tweet us @CiscoSP360 if you have any questions or comments.



Author

David Ward

Senior Vice President

Chief Architect & CTO of Engineering