
Daren Fulwell is a Cisco Champion, an elite group of technical experts who are passionate about IT and enjoy sharing their knowledge, expertise, and thoughts across the social web and with Cisco. The program has been running for over four years and has earned two industry awards as an industry best practice. Learn more about the program at http://cs.co/ciscochampion.

==========================================

Recently I was faced with a question: new network technologies like SDN are supposed to make networking simpler, but do they really?  In reality there’s only one answer to that – “it depends”!  Let me expand …

As with most things, good network design principles can be summed up as “KISS” (Keep It Simple, Stupid) – only introduce complexity into the design where it is necessary.  We build modular, repeatable designs, we standardise where possible, and within our implementation we template and use naming conventions so we can quickly recognise what we are doing at any point.  But the network, out of necessity, is a reflection of the complexity of the business it serves: it acts as a map of potential data flows around an organisation, and as we become more and more reliant on data to transact business, so those flows become many and varied.
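
By way of a small, concrete example of what templating and naming conventions look like in practice, here is a sketch using Python and Jinja2 to render a standard interface stanza – the naming convention and variables are my own illustration, not from any particular design:

    from jinja2 import Template  # pip install jinja2

    # One standard, repeatable stanza; only the variables change per site.
    INTERFACE_TMPL = Template(
        "interface {{ port }}\n"
        " description {{ site }}-{{ role }}-{{ index }}\n"
        " switchport access vlan {{ vlan }}\n"
    )

    print(INTERFACE_TMPL.render(
        port="GigabitEthernet1/0/1", site="LON01", role="ACC", index="01", vlan=110
    ))

The description encodes site, role and port index, so anyone reading the configuration can recognise what the port does at a glance.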

The underlying network topology then needs to be built to facilitate those flows efficiently, with sufficient resilience to keep the data flowing during failure or attack.  The level to which we protect the infrastructure against these events obviously depends on the criticality of those data flows: conceivably, the parts of the network that carry traffic for a particularly critical application may need to be more resilient than others.  In these days of IoT, application flows don’t simply mean PC-to-PC either.  Building infrastructure such as physical security or environmental controls, manufacturing equipment such as machinery or control stations, and other capabilities such as inventory tracking can all require connectivity – and depending on the business they support, some or all of these things may be fundamental to the ability to transact.  So we can see that as requirements for a ubiquitous network proliferate, the complexity of the underlying infrastructure inevitably increases.

SDN techniques are helping us battle this complexity.  While people struggle to agree on a definition of SDN, most would agree that centralising the control plane in a controller of some sort and introducing programmability through APIs are fundamental.  These features enable automation and orchestration, and give us a means to abstract complexity away.  Alternative, simpler configuration constructs are defined to represent the technical capabilities of the network as a whole, and the controller applies configuration box-by-box across the infrastructure as necessary to implement those capabilities.  These constructs can then be built into workflows and custom scripts, consumed through simple dialogs with the network operator.
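
To make that concrete, here is a minimal sketch of the kind of API call such a workflow boils down to – a single request describing a network-wide construct, which the controller turns into box-by-box configuration.  The controller URL, endpoint and payload schema are assumptions for illustration, not any specific vendor’s API:

    import requests

    # Hypothetical controller API - URL, path and payload schema are
    # illustrative assumptions, not a real product's interface.
    CONTROLLER = "https://sdn-controller.example.net/api/v1"

    def create_segment(name, vlan_id, subnet, token):
        """Ask the controller to realise one network-wide construct.

        The controller, not the operator, works out the device-by-device
        configuration needed to implement it.
        """
        payload = {"name": name, "vlanId": vlan_id, "subnet": subnet}
        resp = requests.post(f"{CONTROLLER}/segments", json=payload,
                             headers={"X-Auth-Token": token}, timeout=30)
        resp.raise_for_status()

    # One call replaces logging in to dozens of devices:
    # create_segment("guest-wifi", 120, "10.12.0.0/22", token="<api-token>")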

At this point, I would add another necessary feature to my SDN definition – the ability to define a centralised set of policies expressing the desired behaviour of the network (“intent”).  Networks exist fundamentally to connect endpoints with shared data stores, and all the business really needs to care about is which endpoints are allowed to connect to the network, which endpoints are allowed to converse, and how the network prioritises and treats those flows of conversation.  This can be boiled down into “user policy” – authentication, authorisation and access rights for endpoints – and “application policy” – desired treatment of the traffic flows associated with a particular application through the network, such as prioritisation, performance guarantees, preferred traffic path and so on.  Once those policies are defined, the network devices themselves are given a standardised configuration, and their specific behaviour is modified by updating the central policy engine.  This, in turn, can pull data through from the customer’s Active Directory, making policy changes (and thus configuration of network behaviour) a very simple administrative task.
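
As a sketch of what such intent might look like before a policy engine renders it into device configuration, here are the two policy types modelled as simple Python structures – the class and field names are my own assumptions for illustration:

    from dataclasses import dataclass

    @dataclass
    class UserPolicy:
        """Authentication, authorisation and access rights for endpoints."""
        group: str               # e.g. an Active Directory group
        allowed_segments: list   # which parts of the network the group may reach

    @dataclass
    class AppPolicy:
        """Desired treatment of an application's traffic flows."""
        app: str
        dscp: int                # prioritisation marking
        max_latency_ms: int      # performance target
        preferred_path: str      # e.g. "mpls" or "internet"

    # The business states intent; the policy engine derives configuration.
    policies = [
        UserPolicy(group="Finance", allowed_segments=["corp", "erp"]),
        AppPolicy(app="voip", dscp=46, max_latency_ms=150, preferred_path="mpls"),
    ]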

Does that make the network simpler?  Hmm, I’ve got a couple of thoughts here.

A network is the sum of a number of parts – there is always a need to connect users and endpoints (an access network); there is a need to connect those users to services off site (private or public WAN); and the services they consume have to be hosted somewhere (traditional DC or cloud environment).  There is no viable single SDN solution that solves connectivity across all of those networks, as each has different configuration requirements and characteristics.  Each part of the network may have its own controller, and these must then be orchestrated, either through a standard tool built for the purpose or through custom scripts and workflows – adding another layer of abstraction in order to define the end-to-end behaviour at a single point of control.
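
In practice that extra layer is often a thin orchestration script: one end-to-end request fanned out to the per-domain controllers.  The sketch below assumes hypothetical controller endpoints and a common intent payload; a real orchestrator would also have to sequence the changes, verify each step and roll back on failure:

    import requests

    # Hypothetical per-domain controllers - each exposes its own API.
    CONTROLLERS = {
        "access": "https://access-ctrl.example.net/api",
        "wan":    "https://wan-ctrl.example.net/api",
        "dc":     "https://dc-ctrl.example.net/api",
    }

    def apply_end_to_end(intent):
        """Translate one end-to-end intent into a call per network domain."""
        for domain, base_url in CONTROLLERS.items():
            resp = requests.post(f"{base_url}/intent", json=intent, timeout=30)
            resp.raise_for_status()
            print(f"{domain}: intent accepted")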

And what happens when things go wrong?  Another feature usually expected of an SDN solution is visibility: the controller gives the operator an end-to-end view of how the environment is functioning and behaving, good or bad.  In theory, the controller can spot when things are not working as expected and take corrective action itself, based on the parameters set in the policies it has been configured to implement.  The assumption here is that the network has been built correctly and is operating under optimal conditions.
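
A minimal sketch of that closed loop, under the assumption that the controller exposes something like observed_state() and remediate() (illustrative names, not a real product’s API), might look like this:

    import time

    def closed_loop(controller, policy, interval_s=60):
        """Naive self-healing loop: observe, compare with intent, correct."""
        while True:
            observed = controller.observed_state()
            drift = [c for c in policy.checks(observed) if not c.passed]
            for check in drift:
                # Corrective action is bounded by the policies the controller
                # has been given - it cannot fix what it was never told about.
                controller.remediate(check)
            time.sleep(interval_s)

The loop is only as good as the telemetry and the software running it – which leads to the next point.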

A typical complaint in recent years is that the large networking vendors use customers as beta testers for their software – that bugs are often found not in pre-release testing but in the operation of production networks.  An SDN environment introduces a new layer of software: not only the network devices themselves but now the controller too must run flawless software that always functions as designed if the network is to provide the availability businesses require to transact.  In the real world, as we know, this is not likely, so we need to understand the complexity this introduces to our operational state.  While the controller may take away the need to carry out complex configuration activities across the network, it doesn’t remove the need to understand how the network achieves its capabilities, as we will always need to be able to troubleshoot the infrastructure.

With all that said and done, is network complexity necessarily a bad thing?  We’ve already seen that achieving the end-to-end view requires doing different things in different parts of the network – so it could be argued that in order to create a network that provides the foundation for a complex business, some level of complexity is necessary, even desirable.

So, to answer the question: in my view, simplicity is a matter of perspective.  In order to build a foundation for the IT systems of a complex business, we need to create complex connectivity patterns to allow devices to talk, with complex features at the edges to protect the systems from malicious intent and failure.  The operators and maintainers of that network need to be exposed to the full complexity in order to support and troubleshoot it end-to-end should issues arise.  However – with a not-insignificant development effort, and more than half an eye on managing questionable software quality – the interface to the wider business can be drastically simplified through orchestration of SDN controllers, enabling a single set of policies to determine network configuration end-to-end.  Automated change, self-healing and configuration that reacts to network events are all completely feasible in today’s networks.

So, is the network simpler?  No.  And yes.  It depends!



Authors

Daren Fulwell

Technical Architect

Cisco Champion