Twice a year, I get to experience the peculiar mix of stress and satisfaction that comes from helping design and operate the internal networks for the US and European CiscoLive events – networks these events depend upon.
My role is vital to the efficiency of the whole system, but certainly not glamorous. If I were to compare the CiscoLive network to a dental office, I’d be the technician who only fixes the upper-right tooth – and makes sure you keep breathing well. More specifically, I’d be working on “the number 6 molar”, because I deal with the client-facing setup of IPv6 and the monitoring of all the related dependencies and results.
Managing IPv6 at a large conference like CiscoLive is a challenge, simply because of the sheer number of clients and the many considerations an event of that scale brings. With the vast diversity of mobile clients and the very short lifetime of the network, there is essentially zero chance for any fine-tuning. And since the client landscape keeps evolving with trends like BYOD (Bring Your Own Device), every setup offers limitless opportunities to learn. IPv6 is just different enough from IPv4 to cause unfamiliar failure modes – fun and exciting challenges for me – so I look forward to every event.
To give you an example, Stateless Address Autoconfiguration (SLAAC) relies on routers periodically sending Router Advertisements to all clients over IPv6 multicast (there is no broadcast in IPv6). On a wireless network, this means the packet needs to be replicated to every access point before it can be sent over the radio. Needless to say, in a large-scale network with several hundred access points, this creates several hundred copies sent at once – not very friendly to the network, because it creates microbursts. That’s why there is a second mode, the so-called “multicast-multicast” mode, where the inner packet is first encapsulated into the CAPWAP tunnel and the result is sent to a multicast destination address. This way, there is only a single copy of the packet on each segment of the network. It also creates a new dependency on multicast that did not exist with IPv4: if multicast is broken on the CAPWAP transport network, IPv6 stops working for the wireless clients while IPv4 keeps working. This is just one of the subtle differences between the operation of IPv4 and IPv6.
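The microburst problem can be sketched with back-of-envelope arithmetic. All the numbers below are illustrative assumptions (typical frame sizes, a round count of access points), not CiscoLive measurements:

```python
# Rough sketch of why per-AP unicast replication of a Router Advertisement
# creates microbursts, while "multicast-multicast" mode does not.
# All numbers are illustrative assumptions, not measured values.

RA_SIZE_BYTES = 120       # typical Router Advertisement size (assumed)
CAPWAP_OVERHEAD = 100     # rough CAPWAP + outer IP/UDP encapsulation (assumed)
ACCESS_POINTS = 500       # "several hundred access points"

# Unicast mode: the controller sends one encapsulated copy per AP, all at once.
unicast_burst = ACCESS_POINTS * (RA_SIZE_BYTES + CAPWAP_OVERHEAD)

# Multicast-multicast mode: one copy to a multicast group; the transport
# network replicates it only where needed.
multicast_copy = RA_SIZE_BYTES + CAPWAP_OVERHEAD

print(f"unicast burst at the controller: {unicast_burst} bytes")
print(f"multicast-multicast: {multicast_copy} bytes per segment")
print(f"amplification factor: {unicast_burst // multicast_copy}x")
```

Even with these modest assumptions, every single Router Advertisement turns into a 100+ KB burst at the controller in unicast mode – which is exactly why the multicast-multicast dependency is worth taking on.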
This year’s CiscoLive events were more interesting than ever from an IPv6 standpoint – particularly CiscoLive San Diego. Since we were running the 7.2 release of the wireless LAN controller (WLC) code, we had an opportunity to offer the experimental “IPv6-only” SSID. It was hidden, so only the “IPv6 session goers” got a chance to try it. (This was in addition to the main conference network being dual-stacked, something we had already been doing for multiple years.) Whenever we provide IPv6-only connectivity, we bundle it with a NAT64+DNS64 combination for reaching “legacy” IPv4-only services, which lets applications work through the NAT64 translator.
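The DNS64 half of that combination works by synthesizing AAAA records for IPv4-only destinations: the IPv4 address is embedded in the low 32 bits of a NAT64 prefix, per RFC 6052. Here is a minimal sketch of that synthesis; the well-known prefix 64:ff9b::/96 is an assumption for illustration – the post does not say which prefix the event actually used:

```python
import ipaddress

# Sketch of DNS64 AAAA synthesis (RFC 6052 embedding).
# The well-known prefix 64:ff9b::/96 is assumed here; a deployment may
# use its own network-specific prefix instead.
NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize_aaaa(ipv4_literal: str) -> str:
    """Embed an IPv4 address in the low 32 bits of the NAT64 prefix."""
    v4 = ipaddress.IPv4Address(ipv4_literal)
    v6 = ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) + int(v4))
    return str(v6)

# An IPv6-only client asking for an IPv4-only server gets a synthesized
# AAAA record; packets to it are routed to the NAT64 translator.
print(synthesize_aaaa("192.0.2.33"))  # -> 64:ff9b::c000:221
```

Because the translation is purely algorithmic, the NAT64 box can recover the original IPv4 destination from the last 32 bits of the synthesized address.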
You might be wondering: “But there is IPv4 on your IPv6-only SSID!”
Indeed, our “IPv6-only” SSID handed out addresses from the 100.64.0.0/16 subnet via DHCP. There’s a good reason for that, so let me tell you why. The vast majority of devices today, unless explicitly instructed to turn off IPv4, will self-assign a link-local address (from the 169.254.0.0/16 subnet) if they do not receive an address from another source (DHCP or static assignment). This behavior is governed by RFC 3927, and section 2.6.2 of that RFC says, in a nutshell:
“In the case of a device with a single interface and only a Link-Local IPv4 address, this requirement can be paraphrased as ‘ARP for everything’.”
So, if we have a device which correctly implements RFC 3927 in the absence of DHCPv4 *and* runs an IPv6-unaware application (and I bet you have at least one on your phone or tablet – it’s a very popular VoIP program), the client will pollute the segment with useless ARP requests, at a rate of several messages per second. This is why we decided to give out IPv4 addresses. Never mind that the default gateway had a “deny ip any any” access list applied – so, for all practical purposes, it was really an IPv6-only network!
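The condition that triggers the “ARP for everything” behavior is easy to express: the host’s only IPv4 address falls inside 169.254.0.0/16. A minimal sketch of that check, with the helper name being my own:

```python
import ipaddress

# RFC 3927: a host whose only IPv4 address is self-assigned link-local
# (169.254.0.0/16) treats every destination as on-link and ARPs for it.
LINK_LOCAL_V4 = ipaddress.IPv4Network("169.254.0.0/16")

def will_arp_for_everything(addr: str) -> bool:
    """True if the host's only IPv4 address is link-local (RFC 3927)."""
    return ipaddress.IPv4Address(addr) in LINK_LOCAL_V4

# Self-assigned address: the host ARPs directly for any destination.
print(will_arp_for_everything("169.254.12.34"))  # True
# DHCP-assigned address from the subnet handed out on our SSID.
print(will_arp_for_everything("100.64.3.7"))     # False
```

Handing out any routable-looking address via DHCP keeps the client out of the link-local state, and the ARP storm never starts – even if the gateway then drops every IPv4 packet.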
Perhaps you’re now wondering: did anyone even use the unadvertised IPv6-only SSID? Indeed they did – the number of Layer 2 neighbors on the first-hop router (the ballpark measure we historically use to compare IPv4 and IPv6 usage) peaked at 93 entries in the IPv4 ARP cache. Of those 93 clients, 78 formed an IPv6 link-local address, and 49 were able to successfully communicate with the outside world.
Why only half? At large-scale events like this, it is unfortunately impossible to debug the behavior of every single device, but this distribution looks about right if we interpret it as the percentage of devices able to connect over IPv6 (83%) and the percentage of devices that could obtain DNS server information via stateless DHCPv6 (53%). (It is worth mentioning that some widely used operating systems, like Mac OS X, have only recently picked up DHCPv6 support, and some still lack it – so accommodating multiple address-assignment methods across all clients is a hard problem!)
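Those percentages fall straight out of the peak neighbor counts; the figures quoted above are rounded. A quick sketch of the division:

```python
# Reproduce the ratios above from the peak counts observed on the
# first-hop router of the IPv6-only SSID.

ipv4_arp_entries = 93   # peak size of the IPv4 ARP cache
ipv6_link_local = 78    # clients that formed an IPv6 link-local address
ipv6_reachable = 49     # clients that reached the outside world over IPv6

pct_ipv6_capable = 100 * ipv6_link_local / ipv4_arp_entries
pct_working = 100 * ipv6_reachable / ipv4_arp_entries

print(f"could connect over IPv6:   {pct_ipv6_capable:.1f}%")
print(f"working end to end:        {pct_working:.1f}%")
```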
The aggregate traffic on this SSID peaked at 11.2 Mbps (1.4 Mbytes/sec), just under half of the peak IPv6 traffic on the “main” conference SSID.
Speaking of the main SSID, CiscoLive2012: a respectable 90% of the hosts were dual-stack capable, and 68% of the clients (5.92K out of 8.66K at peak) were actually using IPv6. How did we collect this data? It was an approximation based on counting unique MAC addresses and correlating them with the IPv4/IPv6 addresses in the neighbor cache on the first-hop router.
We also collected statistics from the Network Analysis Module for CiscoLive 2012 in San Diego, from Saturday 5pm (show start: Registration) to Thursday 1:30pm. The total amount of data transferred over IPv6 came to 576 GB, representing 2.37% of the total show traffic. This was four times the corresponding percentage at CiscoLive 2011 in Las Vegas, and several times higher than the world average. Not too surprising for an event filled with networking geeks.
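For scale, the numbers above imply a total show volume in the tens of terabytes. These derived figures are back-of-envelope estimates from the percentages quoted, not separate measurements:

```python
# Back-of-envelope estimates implied by the figures above.
# These are derived numbers, not measured ones.

ipv6_gb = 576.0       # IPv6 data transferred over the measured window
ipv6_share = 0.0237   # 2.37% of total show traffic

total_show_gb = ipv6_gb / ipv6_share   # implied total show traffic
share_2011 = ipv6_share / 4            # implied 2011 share ("four times more")

print(f"implied total show traffic: {total_show_gb / 1024:.1f} TB")
print(f"implied 2011 IPv6 share:    {share_2011 * 100:.2f}%")
```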
Join me with your IPv6-enabled device at the next CiscoLive event. In the meantime, feel free to exchange knowledge at the IPv6 Transition Support Forums.