So, I am an auto enthusiast--I like me some fast cars with big engines--and regular readers of my blogs (thank you, by the way) know that an auto analogy sneaks in every now and then--largely because cars are usually something everyone can relate to.
The funny thing about the reviews in car magazines is that they tend to highlight statistics that are interesting, but also largely irrelevant. I mean, 0-60 times and top speed make for interesting reading, but they aren’t particularly germane to understanding what a car is really like to live with, unless you happen to live on a race track. Track testing certainly has its place: I am interested in a car’s 60-0 time because braking is something I will do often, and it’s good to understand relative braking performance between cars. But in the end, I am more likely to spend time on a site like Edmunds.com, where I can learn about real-world experiences from actual car owners.
Which brings us to the brave world of testing network equipment….
Pity the intrepid soul who endeavors to test networking equipment. Why? Well, it’s a tough job. The fundamental challenge with testing is coming up with a viable and relevant testing scenario--I can tell you from experience that we have as many traffic scenarios as we have customers. The related challenge is then having the time/money/resources to re-create that scenario for testing purposes. Ironically, our customers have an advantage here--they already have the test environment: their own data centers.
One approach is to do synthetic testing, where you simply hook equipment up to a traffic generator, crank it up, and see what happens. As with the 0-60 times in the car reviews, this approach can give you interesting data, but I am not convinced it gives you useful information. First of all (taking nothing away from the fine folks out there who make these testing tools), real-world traffic is generally quite complex, so this approach gives you an incomplete picture at best. Second, testing at the extremes (i.e. “we tested 24 ports in a full mesh topology with 100% link utilization”) is kinda pointless. If you have links consistently running at 100% utilization, you have bigger problems. Short of ensuring a switch doesn’t burst into flames under heavy load, I am not sure this kind of testing matters.
So, this week, one of our customers called me about a test of the Nexus 5000 he had recently read about, where the report did not match his experiences with the switch. After chatting for a bit, I fired up a WebEx (gotta love that collaboration thing) and we went through the test together to look at the areas that concerned him. We looked at the testing scenarios, and I asked him if the test plan resembled his typical data center traffic or measured the performance characteristics he cared about--his answer was no. We then looked at the configuration tested, and I asked him if he was looking at a similar configuration--again, his answer was no. He wanted to use our CX1 cables instead of the full optics version, which made his config about 60% cheaper than the config in the test. At that point, I said I did not think this test was particularly relevant to him. While he said he felt better, he did not seem completely at ease. So, I was up front and shared my views on synthetic testing vs. real-world testing, then said, “Look at it this way: this test we just looked at is one test--we have over 1,200 customers who have bought and deployed Nexus 5000s. Like you, each of those customers tested the switch in their environment and found it to be the best, most cost-effective solution for their needs. So from my perspective, the score is something like 1,200 to 1.” That made some sense to him.
That final thought goes back to my initial car analogy. You can read all the reviews and browse all the websites you want, but ultimately, you should take the car for a test drive. Same goes for switches--take one for a test drive in your data center. In fact, we offer special lab bundles for that exact reason.
See you in the fast lane.