Two Different Top500 Supercomputing Benchmarks Show Two Different Top Supercomputers

In the new TOP500 Supercomputer Rankings, who’s number one depends on which benchmark you use

[Illustration of supercomputing machines as a bar graph. Illustration: iStockphoto]

The 50th TOP500 semi-annual ranking of the world’s supercomputers was announced earlier today. The topmost positions are largely unchanged from those announced last June, with China’s Sunway TaihuLight and Tianhe-2 supercomputers still taking the #1 and #2 positions, and the Swiss Piz Daint supercomputer still at #3. Really, the only change among the handful of computers at the very top of the list is that the one U.S. computer to make the top-five cut, Oak Ridge National Laboratory’s Titan, slipped from #4 to #5, edged out by a Japanese supercomputer called Gyoukou.

The top 10 in TOP500’s November 2017 ranking now look like this:
Position Name Country Teraflops Power (kW)
1 Sunway TaihuLight China 93,015 15,371
2 Tianhe-2 China 33,863 17,808
3 Piz Daint Switzerland 19,590 2,272
4 Gyoukou Japan 19,136 1,350
5 Titan United States 17,590 8,209
6 Sequoia United States 17,173 7,890
7 Trinity United States 14,137 3,844
8 Cori United States 14,015 3,939
9 Oakforest-PACS Japan 13,555 2,719
10 K Computer Japan 10,510 12,660

What’s more interesting to me is not this usual “TOP500” ranking but a second ranking the TOP500 organization has tracked recently using a different software benchmark, called High Performance Conjugate Gradients, or HPCG. This relatively new benchmark is the brainchild of Jack Dongarra, one of the founders of the TOP500 ranking, and Piotr Luszczek (both of the University of Tennessee), along with Michael Heroux of Sandia National Laboratories.

Why was there a need for a new benchmark? The normal ranking is determined by how fast various supercomputers can run something called a LINPACK (or HPL) benchmark. The LINPACK benchmarks originated in the late 1970s and started being applied to supercomputers in the early 1990s. The first TOP500 list, which used a LINPACK benchmark, came out in 1993. Initially, the LINPACK benchmarks charted how fast computers could run certain FORTRAN code. The newer (HPL) benchmarks measure execution time of code written in C.
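To make the distinction concrete, HPL at heart times the solution of one large dense linear system Ax = b and converts the elapsed time into a flop rate. Here is a toy sketch of that measurement; the matrix size n and the use of NumPy’s LAPACK-backed solver are my illustrative assumptions, not the actual benchmark code, which is a tuned, distributed-memory program run on matrices sized to fill most of the machine’s memory.

```python
# Toy HPL-style measurement (illustrative assumptions, not the real benchmark):
# time a dense solve of Ax = b, then convert elapsed time into a flop rate.
import time
import numpy as np

n = 2000  # real HPL runs use matrices vastly larger than this
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)                 # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # HPL's standard operation count
print(f"{flops / elapsed / 1e9:.1f} gigaflops")
```

Because the dense factorization reuses each matrix entry many times, the arithmetic units stay busy, which is why machines get close to their theoretical peak on this test.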

Experts have long understood that the LINPACK benchmark is biased toward peak processor speed and number, missing important constraints like the bandwidth of the computer’s internal data network. And it tests the computer’s ability to solve so-called dense-matrix calculations, which aren’t representative of many “sparse” real-world problems. HPCG was devised to remedy these shortcomings.
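By contrast, HPCG spends its time in a conjugate-gradient iteration over a sparse system, where the dominant cost is streaming data through a sparse matrix-vector product rather than raw arithmetic. A minimal sketch of such a loop follows; the 1-D Laplacian stand-in matrix is my illustrative assumption, whereas HPCG actually uses a preconditioned 3-D structured-grid problem.

```python
# Minimal conjugate-gradient loop of the kind HPCG exercises. The dominant
# cost is the sparse matrix-vector product, which streams data instead of
# reusing it, so memory bandwidth, not peak flops, sets the pace.
# Assumption: a 1-D Laplacian stands in for HPCG's 3-D grid problem.
import numpy as np

n = 200

def laplacian_matvec(p):
    """y = A p for A = tridiag(-1, 2, -1), without ever forming A."""
    y = 2.0 * p
    y[:-1] -= p[1:]    # superdiagonal contribution
    y[1:] -= p[:-1]    # subdiagonal contribution
    return y

b = np.ones(n)
x = np.zeros(n)
r = b - laplacian_matvec(x)    # initial residual
p = r.copy()
rs = r @ r

for _ in range(10 * n):
    Ap = laplacian_matvec(p)
    alpha = rs / (p @ Ap)      # step length along search direction
    x += alpha * p
    r -= alpha * Ap
    rs_new = r @ r
    if np.sqrt(rs_new) < 1e-8:
        break
    p = r + (rs_new / rs) * p  # new conjugate search direction
    rs = rs_new

print("residual:", np.linalg.norm(laplacian_matvec(x) - b))
```

Each pass touches every nonzero of the matrix exactly once, so the processor mostly waits on memory; that is the workload on which the machines below post a tiny fraction of their peak.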

And when you rank the current crop of supercomputers according to the newer HPCG benchmark, the picture looks very different. Here is the November 2017 ranking using the HPCG benchmark:
Position Name Country Teraflops
1 K Computer Japan 603
2 Tianhe-2 China 580
3 Trinity United States 546
4 Piz Daint Switzerland 486
5 Sunway TaihuLight China 481
6 Oakforest-PACS Japan 385
7 Cori United States 355
8 Sequoia United States 330
9 Titan United States 322
10 Mira United States 167

The 10th-ranking computer on the TOP500 list, Fujitsu’s K computer, floats all the way up to #1. And the computer that had been at the top, the Sunway TaihuLight, sinks to the #5 position. Perhaps more important is the drastic difference in performance all of these computers show when you compare results from the two benchmarks.

Take, for example, the Sunway TaihuLight. Its theoretical top speed, known as Rpeak, is 125 petaflops (that’s 125 × 10¹⁵ floating point operations per second). Judged using the LINPACK benchmark, that computer can manage 93 petaflops, about three-quarters of theoretical performance. But with the HPCG benchmark, it achieves a mere 481 teraflops. That’s just 0.4 percent of the computer’s theoretical performance. So running many problems on the Sunway TaihuLight is like getting into a Dodge Viper, which can in theory go 200 miles per hour [322 kilometers per hour], and never driving it any faster than a Galapagos tortoise.
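Those percentages are easy to verify from the Sunway TaihuLight numbers in the two tables:

```python
# Back-of-the-envelope check of the efficiency figures quoted above,
# using the Sunway TaihuLight numbers from the two tables (in teraflops).
rpeak = 125_000   # theoretical peak (Rpeak): 125 petaflops
hpl = 93_015      # LINPACK (TOP500) result
hpcg = 481        # HPCG result

print(f"HPL efficiency:  {hpl / rpeak:.1%}")   # prints "HPL efficiency:  74.4%"
print(f"HPCG efficiency: {hpcg / rpeak:.2%}")  # prints "HPCG efficiency: 0.38%"
```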

So are the LINPACK (HPL) results or the HPCG results more representative of real-world operations? Experts regard them as “bookends,” bracketing the range users of these supercomputers can expect to experience. I don’t have statistics to back me up, but I suspect the distribution is skewed closer to the HPCG side of the shelf. If that’s true, maybe the TOP500 organization should be using HPCG for its main ranking. That would be more logical, I suppose, but I expect the organizers would be reluctant to do that, given people’s hunger for big numbers, now squarely in the petaflop range for supercomputers and soon to flirt with exaflops.

Perhaps supercomputers should just be required to have written in small letters at the bottom on their shiny cabinets: “Object manipulations in this supercomputer run slower than they appear.”
