
Complexity is just like Chocolate…

Confession time: If someone puts a nice chocolate within my reach, I find it very hard to resist. In fact, there are few days when I don’t get my dose of dark chocolate. Chocolate is one of the pleasures in life. Of course we all know that eating too much chocolate will get you into big trouble: with your blood sugar levels, your shirt size, your partner, and with your kids (albeit for different reasons).

Complexity is really the same thing! Let’s be honest: Can we ever resist the latest features and functionalities on our networks? And with each new feature, don’t we ask for even more visibility, control, and sub-features? But just like with chocolate, we all know that too much isn’t good for us.

Yes, we all love complexity!

Hang on – that’s not what we hear in the press, is it? “IT Complexity considered most important inhibitor to innovation and effectiveness” (1), or “What causes enterprise data breaches? The terrible complexity and fragility of our IT systems” (2). You can find many more quotes along these lines. So – we hate complexity?

The truth is: it’s just like with chocolate – small improvements can add lots of value, but eat too much and you become sluggish and might even get seriously sick.

The funny thing is, there is actually no agreed definition of the term “network complexity”. Researchers have dug deep into certain areas, such as software complexity, graph complexity, routing complexity, and many others. However, networks have a bit of all of that, and there is no global view on network complexity. The Internet Research Task Force tried to get to the bottom of the topic and created the “Network Complexity Research Group” (3) in 2011, but the group concluded in 2014 without tangible results. Researchers and industry specialists collaborated at http://networkcomplexity.org, with lots of material posted and discussed, but again without clear results.
[Figure: branch office set-ups of increasing redundancy, from a single device/line to fully redundant designs]

What we do see is that too much complexity tends to slow us down. It becomes hard to make changes to the network, because we don’t really know what will happen. Introducing new systems requires serious up-front testing, because there are just too many interactions on the network, and we can’t really judge the impact of a change.

But complexity can also deliver value that we want. If you want a branch office that is robust against failures, you have to build a redundant set-up (middle of the graph). But that’s more complex than the single device/line solution (on the left of the graph), isn’t it? You need failover protocols and mechanisms that the simple layout does without. Yes, you *do* want a certain level of complexity, because it provides value, in this case robustness against outages. Other values can be security, agility, programmability, and so on (there are lots!). There is no right or wrong here: in my home network, the left model is perfectly ok; in a branch office of a bank it might not be. It all depends on the use case and its requirements.

Obviously, there are limits: adding a third row to this layout (right) will intuitively not increase availability, but will probably cause more failures, because it relies on more complex protocol interactions that are not widely deployed.
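To make this trade-off concrete, here is a minimal back-of-the-envelope sketch. The 99.5% per-path availability and the assumption that paths fail independently are purely illustrative, not figures from this post; the point is simply how quickly the benefit of extra redundancy flattens out.

```python
# Back-of-the-envelope availability for the branch office example.
# Assumptions (illustrative only): each path is 99.5% available and
# paths fail independently -- real failover protocols add failure
# modes of their own, which this simple model ignores.

def parallel_availability(per_path: float, paths: int) -> float:
    """The site is only down if *all* redundant paths are down at once."""
    return 1.0 - (1.0 - per_path) ** paths

for label, paths in [("single device/line", 1),
                     ("redundant set-up", 2),
                     ("third row", 3)]:
    a = parallel_availability(0.995, paths)
    downtime_min = (1.0 - a) * 365 * 24 * 60
    print(f"{label:>18}: {a:.5%} available, ~{downtime_min:8.2f} min downtime/year")
```

In this idealised model the second path cuts the theoretical downtime from roughly two days to a few minutes per year; the third row only shaves off those last few minutes, while the extra protocol machinery it needs is exactly the kind of complexity that tends to cause outages of its own.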

When eating chocolates, your waistline tends to grow over time. The same is true for network and IT complexity: most networks start out reasonably simple, but then new requirements need to be supported, new device and OS types appear in parts of the network, ever-growing security concerns add layer upon layer of protection, and so on. And nobody ever removes components that aren’t used any longer. When did *you* last remove a feature from a network?
[Figure: the Cynefin framework]

The Cynefin framework (4) (left) illustrates this process and helps us understand the progression of complexity over time: most networks start out as “obvious” – a simple and consistent architecture, uniform devices and OSs, and initially simple requirements. As we add new components, make changes, and allow exceptions, the network becomes more complicated. But “complicated” still means that an expert can, albeit with some effort, model the effect of a proposed change. Complex systems show emergent behaviour: even with massive analysis, the outcome of a change is not fully predictable. Finally, a system becomes chaotic if there is no longer any relationship between a change and the result – this is pure unpredictability. Most networks today operate in the “complex” area, which is why analysis on paper is usually not enough. The only way to really know whether something works is to test it. No paper analysis can safely predict the behaviour of a complex system.

In summary:
• Complexity isn’t all bad. It’s important to balance the right level of complexity with the features you require.
• The level of complexity typically changes over the lifetime of a network.
• Complexity and Chocolates are best enjoyed in moderation.

In the next blog we’ll suggest some high-level measures to control complexity. And, hopefully not surprisingly, there is actually a lot of innovation happening at Cisco that reduces complexity. Stay tuned!


This blog was prepared as part of a team project on complexity, actively supported by: Bruno Klauser, Patrick Charretour, Mehrdad Ghane, Dave Berry and Rebecca Holm Breinholdt.


(1) A.T. Kearney study, 2009
(2) http://www.zdnet.com/article/what-causes-enterprise-data-breaches-the-terrible-complexity-and-fragility-of-our-it-systems/
(3) https://irtf.org/ncrg/
(4) https://en.wikipedia.org/wiki/Cynefin
Photo credit: Dominic Lockyer, shared under the Creative Commons licence 2.0.

OPNFV: Systems integration for NFV as a community effort

Can OPNFV (Open Platform for Network Function Virtualization) become the base infrastructure layer for running virtual network functions, much like Linux is the base operating system for a large number of network devices?

The first step has been taken: “Arno” – the first release of the OPNFV project came out today. What does it provide – and, more importantly, what’s in it for you?



Verify my service chain!

How do you prove that all traffic that is supposed to go through the service chain you specified actually made it through the service chain?

This blog was written by Frank Brockners, Sashank Dara, and Shwetha Bhandari.

Service function chaining is used in many networks today. The evolution towards NFV, combined with new technologies such as Segment Routing (SR) or the Network Service Header (NSH), makes service chaining easier to deploy and operate – and thus even more popular. Unfortunately, there is still one hard question left that management or security departments tend to ask: can you please prove to me that all traffic that was meant to traverse a specific service chain really followed that path?

Service chain verification is here to help: by adding some meta-data to our traffic, we can now provide a packet-by-packet proof of the actual path followed. The meta-data can either be carried independently from the service chaining technology used (as part of in-band OAM information) or included in an NSH or SR header.
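As a rough illustration of the idea – and emphatically not the actual verification scheme referenced here – the toy Python sketch below lets every service function fold a secret key into a small piece of per-packet meta-data; a verifier that knows all the keys can then check that the expected chain was really traversed. The service names and keys are made up.

```python
import hmac
import hashlib

# Hypothetical per-service keys and the chain the verifier expects.
SERVICE_KEYS = {"firewall": b"fw-secret", "dpi": b"dpi-secret", "nat": b"nat-secret"}
EXPECTED_CHAIN = ["firewall", "dpi", "nat"]

def hop_update(meta: bytes, service: str) -> bytes:
    """Executed by each service function: mix its key into the packet meta-data."""
    return hmac.new(SERVICE_KEYS[service], meta, hashlib.sha256).digest()

def verify(packet_id: bytes, meta: bytes) -> bool:
    """Executed by the verifier: recompute the meta-data for the expected chain."""
    expected = packet_id
    for service in EXPECTED_CHAIN:
        expected = hop_update(expected, service)
    return hmac.compare_digest(expected, meta)

# Simulate one packet traversing the full chain ...
meta = b"packet-42"
for service in EXPECTED_CHAIN:
    meta = hop_update(meta, service)
print(verify(b"packet-42", meta))   # True: all hops visited in order

# ... and one that skipped the DPI function.
meta = b"packet-43"
for service in ["firewall", "nat"]:
    meta = hop_update(meta, service)
print(verify(b"packet-43", meta))   # False: chain not followed
```

A packet that skips a hop, or visits the hops in the wrong order, produces meta-data that no longer matches – which is exactly the packet-by-packet proof the security department is asking for.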


What if you had a trip-recorder for all your traffic at line rate performance?

The case for “In-band OAM for IPv6”: Operating and validating your network just got easier

How many times have you wanted to gain a full insight into the precise paths packets take within your network whilst troubleshooting a problem or planning a change? Did you ever need to categorically prove that all packets that were meant to traverse a specific service chain or path really made it through the specified service chain or path? “In-band OAM for IPv6 (iOAM6)” is now here to help, adding forwarding path or service path information as well as other information/statistics to all your traffic. It is “always on” OAM – and a new source of data for your SDN analytics and control tools.
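Conceptually, the trip-recorder works roughly like the sketch below: every node the packet passes appends its identity and a timestamp to meta-data travelling with the packet (in iOAM6 this rides in an IPv6 extension header; here it is simply a Python list, and the node names are hypothetical).

```python
import time

class Packet:
    def __init__(self, payload: bytes):
        self.payload = payload
        self.ioam_trace = []  # (node_id, timestamp) entries added hop by hop

def forward(packet: Packet, node_id: str) -> Packet:
    """Each forwarding node records itself before sending the packet on."""
    packet.ioam_trace.append((node_id, time.time()))
    return packet

pkt = Packet(b"hello")
for hop in ["leaf-1", "spine-2", "border-1"]:   # hypothetical path
    pkt = forward(pkt, hop)

# The collector at the network edge now has a per-packet "trip record".
for node_id, ts in pkt.ioam_trace:
    print(node_id, ts)
```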


Fosdem 2015: a status update

As is our tradition by now, a team of volunteers helped out with the network setup and operation of the Free and Open-source Software Developers’ European Meeting (FOSDEM). The network was very similar to the one used last year, and we wanted to report on the evolution of the traffic we measured.

First the bad news: due to the increased use of IPv6, we have less accurate client data. While with IPv4 we can use the unique MAC address to count the number of clients, IPv6 clients use ephemeral addresses, and one physical device can use multiple global IPv6 addresses. In fact, we noticed one client using more than 100 global IPv6 addresses over a period of 240 seconds. Why this client is doing this is a mystery.

The unique link-local IPv6 addresses were only kept in the neighbour cache of the router for a limited time, so we have no good numbers for the number of clients. The good news is we can still use traffic counters to compare with the previous year.
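The counting problem is easy to illustrate with a small sketch over made-up neighbour-cache entries: counting distinct global IPv6 addresses overstates the number of clients, while grouping by MAC address (where the cache still holds the entry) gives the real figure.

```python
# Illustrative (made-up) neighbour-cache observations: (MAC, global IPv6 address).
neighbour_cache = [
    ("aa:bb:cc:00:00:01", "2001:db8::11"),
    ("aa:bb:cc:00:00:01", "2001:db8::a3f2"),   # same laptop, new privacy address
    ("aa:bb:cc:00:00:01", "2001:db8::9c01"),
    ("aa:bb:cc:00:00:02", "2001:db8::22"),
]

clients_by_address = len({addr for _, addr in neighbour_cache})
clients_by_mac = len({mac for mac, _ in neighbour_cache})

print(f"distinct IPv6 addresses: {clients_by_address}")  # 4 -> overcounts
print(f"distinct MAC addresses:  {clients_by_mac}")      # 2 -> actual clients
```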

Internet traffic evolution

Compared to 2014, we saw a 20% increase in traffic, to more than 2 terabytes exchanged with the internet.

Fosdem 2015 wireless traffic distribution

More interestingly, the IPv4 traffic on the wireless network decreased by almost 20%, with the net result that IPv6 now makes up 60% of the wireless traffic and IPv4 only 40%. So there is 1.5 times as much IPv6 traffic as IPv4 traffic. This is a good indicator that most clients can now use NAT64 and can live on an IPv6-only network.

Internet IPv4 versus IPv6 for Fosdem 2014-2015

On the internet side, the IPv4 traffic increased by 5% while the IPv6 traffic almost doubled. As we use NAT64 to give the IPv6-only internal network access to IPv4-only hosts, this measurement is a clear indication that more content is now available via IPv6.

For next year we plan to set up some more tracking systems in advance, so we can investigate the number of clients on the wireless network and why some clients use hundreds of global IPv6 addresses.
