X petaflops, where X>1
Lotsa news coming out in the ramp-up to SC. Probably the biggest is the news about China being the proud owner of the 2.5-petaflop computing monster named “Tianhe-1A”.
Congratulations to all involved! 2.5 petaflops is an enormous achievement.
Just to put this in perspective, there are only three other (publicly disclosed) machines in the world right now that have reached a petaflop: the Oak Ridge US Department of Energy (DoE) “Jaguar” machine hit 1.7 petaflops, China’s “Nebulae” hit 1.3 petaflops, and the Los Alamos US DoE “Roadrunner” machine hit 1.0 petaflops.
While petaflop-and-beyond may stay firmly in the bleeding-edge research domain for quite some time, I’m sure we’ll see more machines of this class over the next few years.
But consider: the stuff that’s powering that 2.5-petaflop behemoth is pretty much the same stuff that powers your desktop, and pretty much the same stuff that powers your teenager’s passion for playing “Halo” and “Call of Duty”. Specifically, although the individual servers in these petaflop machines use high-end, server-class CPUs and GPUs, they’re not that much different from what is used by billions of people around the world every day.
Of course, there’s a ginormous amount of engineering and know-how that is required to make a petaflop-class machine — you can’t just slap a bunch of servers together on a LAN and treat it like a supercomputer. Even though high-performance computing is becoming more and more mainstream (heck, there’s a growing trend of renting computing cycles on demand), running a data center of any flavor — not just HPC — is still a bit of an art form. This particular blog ramble is more about hardware than software, but many of my other entries talk about the complexity of parallel computing software.
It’s important to remember that the race for floppage isn’t just for fun; vendors wouldn’t build this stuff if it wasn’t needed.
I heard a US DoE supercomputer architect talk a few years ago; he had asked many of his users what their computing requirements were for 5 years in the future. “5 years? Today’s technology is still far too slow!”, they replied. Their calculations showed that they needed 10 exaflops (1 exaflop = 1,000 petaflops) to start to get useful results. 1 exaflop might be enough to scrape by and get barely-usable results. They scoffed at 1 petaflop. “Call us when you hit 500 petaflops; then we’ll talk. We’ll even buy the coffee.”
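Just for fun, here’s a quick back-of-the-envelope sketch in Python of the gap those users were describing. The machine numbers come from the figures above; the script itself is purely illustrative:

```python
# Flops prefixes: 1 petaflop = 10^15 flops; 1 exaflop = 10^18 flops
# (so 1 exaflop = 1,000 petaflops).
PETA = 10**15
EXA = 10**18

tianhe_1a = 2.5 * PETA  # Tianhe-1A's reported 2.5 petaflops
target = 10 * EXA       # the users' stated 10-exaflop requirement

# 10 exaflops expressed in petaflops...
print(target / PETA)       # → 10000.0

# ...meaning you'd need 4,000 Tianhe-1A-class machines to get there.
print(target / tianhe_1a)  # → 4000.0
```

In other words, the fastest machine on the planet covers 1/4000th of what those users said they actually need.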
We have a lot of work ahead of us.