Did you hear about the Graph 500 at SC’10? You might not have. It got some fanfare, but other press releases probably drowned it out.
Even though it’s a brand-new “yet another list”, it’s worth discussing because it’s officially a Good Idea. Here’s what Rich Murphy, Official Chief Graph 500 Cat Herder (ok, I might have made up that title), tells me about it:
Basically, what we’re trying to do is create a complementary measure to Linpack for data intensive problems. A lot of us on the steering committee believe that these kinds of problems will dominate high performance computing over the next decade. We’ve given some “business areas” as examples of these kinds of applications: cybersecurity, medical informatics, data enrichment, social networks, and symbolic networks. These basically exist to support the assertion that this could be huge someday.
+1 on what he says.
If you look at the Graph 500 steering committee, it’s a Who’s Who list of HPC. These are smart people. They spent a lot of time coming up with the metrics used in the benchmark code. They also have a good amount of clout in the HPC community to actually make this a popular, well-populated list.
That would be a Good Thing, IMNSHO.
Here’s one reason why: assuming the Graph 500 becomes a popular list, organizations will run both Linpack (for the Top 500 list) and Graph. And running two computationally expensive benchmarks may give a better approximation of real application performance than just one.
Specifically, many organizations highly tune their systems just for Linpack runs. After running Linpack, they have effectively obtained one set of metrics of how their system operates and performs. These metrics may or may not reflect how real applications will perform on their machine.
But note that the Graph benchmark stresses the processors and network in a different way than Linpack. Consequently, if organizations run both Linpack and Graph — both of which are large, complex benchmarks that require tuning in multiple dimensions — analyzing the union of the resulting metrics may actually provide a better overall picture of the system’s performance.
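To make that difference concrete: the Graph 500 kernel is a breadth-first search over a huge, irregular graph, and its score is reported in traversed edges per second (TEPS) rather than FLOPS. That means performance is dominated by scattered memory accesses and communication, not dense floating-point math. Here’s a minimal single-node sketch of that access pattern (the real benchmark generates a Kronecker graph and runs a distributed BFS; the function and variable names here are purely illustrative):

```python
from collections import deque
import time

def bfs_teps(adj, source):
    """Breadth-first search over an adjacency list.

    Returns (parent map, TEPS). The Graph 500 metric is
    traversed edges per second -- not floating-point ops.
    """
    parent = {source: source}
    frontier = deque([source])
    edges_traversed = 0
    start = time.perf_counter()
    while frontier:
        u = frontier.popleft()
        for v in adj.get(u, ()):
            edges_traversed += 1  # every edge inspection counts
            if v not in parent:   # irregular, data-dependent access
                parent[v] = u
                frontier.append(v)
    elapsed = time.perf_counter() - start
    return parent, edges_traversed / elapsed

# Tiny example graph. Note there's no arithmetic to speak of:
# the work is pointer chasing through memory, which is exactly
# why this kernel stresses a machine differently than Linpack.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
parent, teps = bfs_teps(adj, 0)
print(sorted(parent))
```

On a cluster, the frontier expansion turns into all-to-all communication of vertex IDs, so the network gets exercised with many small, latency-sensitive messages instead of Linpack’s large, regular block transfers.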
Benchmarks are certainly never a replacement for running real application codes on your system. But complementing Linpack with another well-thought-out benchmark is definitely a step in the right direction. More specifically: if we’re all going to spend time running benchmarks to get bragging rights on the (XYZ) 500 lists, we might as well get useful data as a result — data that helps system and network administrators understand (and therefore run) their machines better.
That’s a Good Thing.