So, I was lucky enough to go to the launch event for the new Intel Xeon processors yesterday. There is plenty of better, more informed coverage out there on the processor itself, so I am not going to try to re-create it here, but there are a couple of thoughts I did want to share.
The Xeon 7500 is certainly a significant accomplishment. A couple of the soundbites that stuck with me were the 20:1 consolidation capability for existing Xeon servers, with a predicted payback of 8 months including significantly reduced energy costs. One of the interesting points Kirk Skaugen made was that, if you are so inclined, you can now deliver 20X more compute capability within the existing thermal/power envelope of your data center. Beyond the raw performance, there were some equally important enhancements around scalability, support for virtualization, and system reliability. Intel's point was that the Xeon has the wherewithal to be your primary processor for all your workloads, and I have to admit they make a pretty good case.
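To make the consolidation math a bit more concrete, here is a rough back-of-the-envelope sketch. The 20:1 ratio is Intel's; every cost and power figure below is my own illustrative assumption, not anything quoted at the briefing, so plug in your own numbers before drawing conclusions.

```python
# Back-of-the-envelope consolidation payback estimate.
# The 20:1 consolidation ratio comes from Intel's briefing;
# all dollar and wattage figures are hypothetical assumptions.

old_servers = 20                 # legacy Xeon boxes being consolidated
new_servers = 1                  # one Xeon 7500-class system replacing them
new_server_cost = 40_000.0       # assumed purchase price of the new system

watts_old = 400                  # assumed draw per legacy server
watts_new = 1_000                # assumed draw of the new system
kwh_cost = 0.10                  # assumed electricity cost, $/kWh

hours_per_month = 24 * 30
energy_saved_per_month = (
    (old_servers * watts_old - new_servers * watts_new)
    / 1000 * hours_per_month * kwh_cost
)

maint_saved_per_month = 250.0 * old_servers  # assumed per-server opex saved

monthly_savings = energy_saved_per_month + maint_saved_per_month
payback_months = new_server_cost / monthly_savings
print(f"Estimated payback: {payback_months:.1f} months")
```

With these made-up inputs the payback lands in the same general ballpark as the eight months Intel predicted, but the real answer obviously depends on the opex and energy numbers in your own environment.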
But, this is IT and there is no free lunch, so what is the impact of plopping these big honking processors down in the middle of your data center?
The first thing that occurred to me during the presentation (don't take it personally Kirk, my mind wanders) is the impact on traffic flows. As many of you know, one of the challenges in the modern data center is effectively supporting the rapidly growing volume of "east-west" (i.e. server-to-server) data flows, which put different demands on the data center network than the traditional north-south (i.e. user-to-host) traffic. So I am thinking: if you can deploy systems with the kind of massive consolidation Intel is making possible, why not collapse much of that east-west traffic into a single host? For those HPC/HFT folks forever on the hunt for lower network latency, I am thinking no one is going to manage to ship a switch that can beat core-to-core latency numbers.
But back to the "no free lunch" concept: I think the longer-term challenge you face in taking advantage of this kind of compute capability is maintaining a balanced system design. You might be able to physically shoehorn a Corvette engine into a Chevy Cobalt, but unless you upgrade the transmission, brakes, etc., you are not going to get the desired results, and you may well get some seriously undesired ones. Without a balanced design, either you cannot take advantage of your new compute capacity (i.e. you end up memory-bound or I/O-bound) or you simply move the stress to some other part of your data center. The problem is, if you cannot create a balanced design, you can never really translate cool new technology into meaningful improvements in application performance and business results.
On a final note, a couple of folks pinged me about the lack of Cisco announcements yesterday. All I will say at this point is stay tuned: those of you paying attention to Intel's briefing will have noticed that Intel did announce a couple of new benchmark world records set by our UCS, so let that be your hint as to what we have been up to.