Doug and I were having an interesting conversation the other day, which I thought was worth sharing.

In 1965 Gordon Moore postulated in a paper that transistor density would double approximately every two years. We've heard people question why networking does not follow Moore's Law, presuming that it is behind the curve. It is easy for those without domain expertise in a particular technology or IT area to force-fit Moore's Law as a catch-all measuring stick for technology evolution. So let's take a look at the evolution of networking, contrasted with the predictable transistor densities of Moore's Law.

We have to pick a starting point, so we'll start with 1994: it's fifteen years ago and gives us enough iterations of Moore's Law to see whether there is a noticeable trend. In 1994 Cisco started shipping the Catalyst 5000 series of modular LAN switches. It had a 1.2Gb/s backplane based on a shared bus, with modules supporting 12-port 100Mb Ethernet and 24-port 10Mb Ethernet. We will baseline all assumptions on that 1994 starting point with a 1.2Gb/s backplane, double the performance every two years on the Moore's Law row, and track the historical performance of Cisco's networking products on the Cisco Switching row.
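The arithmetic behind that comparison is simple enough to sketch in a few lines of Python (the baseline, years, and capacities are the figures from the text; the doubling cadence is Moore's two-year period):

```python
# Project the 1994 baseline forward under a Moore's-Law-style doubling
# every two years, then compare with the backplane capacity actually
# shipping fifteen years later.

BASELINE_GBPS = 1.2       # Catalyst 5000 backplane, 1994
YEARS = 15                # 1994 -> today
DOUBLING_PERIOD = 2       # years per Moore's-Law doubling

doublings = YEARS // DOUBLING_PERIOD            # 7 full doublings
projected = BASELINE_GBPS * 2 ** doublings      # ~153.6 Gb/s

actual_gbps = 7200        # Nexus 7000 backplane, 7.2 Terabit
ratio = actual_gbps / projected

print(f"Moore's-Law projection: {projected:.1f} Gb/s")
print(f"Shipping today:         {actual_gbps} Gb/s")
print(f"Networking ahead by:    {ratio:.0f}x")
```

Seven full doublings of 1.2Gb/s lands at roughly 153.6Gb/s, which is where the "around 150Gb" and "47x" figures below come from.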
If networking followed Moore's Law, backplane capacities would be around 150Gb/s today, as opposed to the 7.2 Terabit that is shipping on the Nexus 7000. Networking outperformed Moore's Law by a factor of 47x. Where did Moore go wrong? Simply put, he didn't. The issue is not Moore's Law; it's that Moore's Law applies to transistor densities, not to I/O speed. I/O speed is gated on a subtly different set of variables, somewhat linked to transistor density (which improves processing capacity on chip), but more importantly linked to I/O pin density on the package and the ability to generate a clean signal over the wires on the circuit board.

Generating a clean signal on the wire depends on the signal-to-noise ratio of the medium. So let's look at two media: circuit boards and external cables.

Circuit boards: On circuit boards we can hardwire the traces, and we manage the crosstalk and noise. Today we run a variety of speeds on copper traces depending on the length of the hardwired trace, but it is usually around 3.125Gb/s per transmission lane (used in switch fabric design and XAUI/XGMII interfaces). The shorter the wire, the stronger the signal that can be received for less power input, and we tune wattage to stay power efficient.

Cables: In networking we are always trying to preserve our customers' investment in the deployed standards that support our infrastructure; i.e., if we can reasonably support a new transport speed on a pre-existing, commonly deployed medium, it is great for everyone. It tends to be that the faster we want to transmit data, the more 'clean up' we need to do to compensate for high noise on older cabling media. We compensate by adding buffers and digital signal processors to the PHY interfaces; this takes more power and adds latency, so we have to balance the power and latency costs against the benefit of supporting installed-base cabling.
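The link between SNR and achievable speed can be made concrete with the Shannon–Hartley theorem, which bounds capacity at C = B · log2(1 + SNR). This sketch uses purely illustrative bandwidth and SNR numbers, not the specs of any particular cable, to show why a noisier medium lowers the ceiling and forces the DSP clean-up described above:

```python
import math

def shannon_capacity_gbps(bandwidth_ghz: float, snr_db: float) -> float:
    """Shannon-Hartley capacity limit: C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)          # convert dB to linear ratio
    return bandwidth_ghz * math.log2(1 + snr_linear)

# Same channel bandwidth, progressively noisier media (illustrative values):
for snr_db in (30, 20, 10):
    ceiling = shannon_capacity_gbps(1.0, snr_db)
    print(f"SNR {snr_db:2d} dB -> capacity ceiling ~{ceiling:.2f} Gb/s")
```

At a fixed bandwidth, dropping the SNR from 30 dB to 10 dB cuts the theoretical ceiling from roughly 10 Gb/s to under 4 Gb/s, which is why older, noisier cabling demands more signal processing to hit a given link speed.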
Often this results in a variety of media types being supported, with variable latency and power draw between media types, and this can feel confusing.

Net-net: Link speeds will not directly follow Moore's Law, but will more or less align to it. Networking backplane capacity will continue to track well ahead of Moore's Law on a linear extrapolation. Transistor density has little to do with the signal-to-noise ratio of different physical cabling types; it has a little to do with DSP efficiency, and it has nothing to do with preserving a customer's investment in structured cabling.

In summary, network performance has generally been significantly super-linear to the rates predicted if it tracked Moore's Law. However, singular link speed has been roughly in line with Moore's Law's predicted performance, and future 40Gb and 100Gb Ethernet interfaces will prove this out once again.
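That last claim about link speed can be spot-checked the same way as the backplane numbers. Below is a rough sketch comparing Ethernet generations against a two-year doubling from the original 10Mb baseline; the standardization years are approximate IEEE 802.3 milestones, used here only for illustration:

```python
# Rough check: do Ethernet link speeds track a two-year doubling?
# Years are approximate IEEE 802.3 standardization milestones.
ethernet = [
    (1983, 0.01),    # 10 Mb/s
    (1995, 0.1),     # 100 Mb/s
    (1998, 1.0),     # 1 Gb/s
    (2002, 10.0),    # 10 Gb/s
    (2010, 100.0),   # 100 Gb/s
]

base_year, base_gbps = ethernet[0]
for year, gbps in ethernet[1:]:
    # Moore's-Law-style projection from the 1983 baseline
    projected = base_gbps * 2 ** ((year - base_year) / 2)
    print(f"{year}: actual {gbps:6.1f} Gb/s, "
          f"two-year-doubling projection {projected:6.1f} Gb/s")
```

A two-year doubling from 10Mb/s in 1983 projects roughly 116Gb/s by 2010, against the 100Gb Ethernet standardized around then: individual link speed really does sit roughly in line with a Moore's-Law cadence, even while aggregate backplane capacity runs far ahead of it.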