In May an IBM-built supercomputer called Roadrunner crunched through a quadrillion floating-point operations per second, officially becoming the first supercomputer to break the petaflop barrier. But unofficially, that barrier had fallen two years before, when MDGRAPE-3, a machine at Japan's Riken Institute, in Wako, powered up. Accepted benchmarking methods ruled out that performance because MDGRAPE-3 is a purpose-built computer, able to model molecular interactions and little else. Yet the machine cost Riken just one-tenth of Roadrunner's price--more than US $100 million--and consumes just one-tenth the power.
That power-saving potential is convincing many people who have belittled special-purpose machines to give them a second look. Electricity already accounts for more than half the lifetime cost of owning and operating a supercomputer--or any large server farm, for that matter--and power's share is expected to increase.
"We think scientific computing is ripe for a change," says Michael Wehner, a climatologist at Lawrence Berkeley National Laboratory. "Instead of getting a big computer and saying, 'What can we do?' we want to do what particle physicists do and say, 'We want to do this kind of science--what kind of machine do we need to do it?'"
Wehner and two engineers, Lenny Oliker and John Shalf, also of Lawrence Berkeley, have proposed perhaps the most powerful special-purpose computer yet. It is intended to model changes in climatic patterns over periods as long as a century. Specifically, it should be able to remedy today's inability to model clouds well enough to tell whether their net effect is to warm the world or cool it. To solve the problem, climatologists figure they need to deal in chunks of the atmosphere measuring 1 kilometer on a side--a job for an exaflop machine, one with 1000 times more computing power than even Roadrunner can provide.
Wehner, Oliker, and Shalf estimate that a general-purpose machine built with today's technology would cost $1 billion and require 200 megawatts of power--enough for a small city. By comparison, they estimate, a specialized machine would cost just $75 million and consume just 4 MW.
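The efficiency gap implied by those estimates is easy to check with back-of-the-envelope arithmetic. The sketch below assumes both hypothetical machines target the same exaflop (10^18 operations per second) workload, using only the power figures quoted above:

```python
# Back-of-the-envelope efficiency comparison, using the article's figures.
EXAFLOP = 1e18  # 10^18 floating-point operations per second

general_purpose_watts = 200e6  # 200 MW for the general-purpose design
specialized_watts = 4e6        # 4 MW for the specialized design

gp_flops_per_watt = EXAFLOP / general_purpose_watts  # 5 gigaflops per watt
sp_flops_per_watt = EXAFLOP / specialized_watts      # 250 gigaflops per watt

print(f"General-purpose: {gp_flops_per_watt / 1e9:.0f} GFLOPS/W")
print(f"Specialized:     {sp_flops_per_watt / 1e9:.0f} GFLOPS/W")
print(f"Efficiency ratio: {sp_flops_per_watt / gp_flops_per_watt:.0f}x")
```

At the same target performance, the specialized design would be 50 times as power efficient, which is simply the 200 MW to 4 MW ratio restated per watt.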
The researchers are now trying to validate their claims with a hardware mock-up, which they are building in collaboration with Tensilica, a custom-chip supplier in Santa Clara, Calif. The plan is to bench-test a single processor by November and a parallel array of processors by the middle of 2009. If the claims are vindicated, the researchers hope to get government funding for a full-size machine.
Critics of special-purpose machines say they've heard it all before. "The problem is that when we devise a new way to solve a problem, the machine designed for the old way will no longer be as good," says Jack Dongarra, a professor of electrical engineering and computer science at the University of Tennessee.
But according to Horst Simon, who heads the Lawrence Berkeley lab's research computing center, the proposed machine would not be so specialized that a new algorithm would render it instantly obsolete.
"We are building hardware that runs not just one algorithm but a large class of related algorithms," he says. "We are trying to eliminate unessential features of the architecture, much of it developed for desktop applications, and to optimize it for a class of applications that is scientifically focused."
Not that there wouldn't still be room for superspecialized machines. As IEEE Spectrum went to press, D. E. Shaw Research of New York City said that by the end of the year it will have a specialized machine, called Anton, that can simulate molecular interactions hundreds of times as fast as anything now available.
Efficiency of World's Top 10 Supercomputers:
Average power consumption
» 1.32 megawatts
Average power efficiency
» 248 million floating-point operations per second per watt
Yearly electricity cost*
» US $1,029,124
*Assumes constant operation at $0.089 per kilowatt-hour.
Source: Consumption and efficiency from Top500.org.
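The sidebar's yearly cost figure follows directly from its own inputs--the average consumption, the stated electricity rate, and round-the-clock operation--as a short script confirms (a non-leap 8760-hour year is assumed):

```python
# Reproduce the sidebar's yearly electricity cost from its stated inputs.
avg_power_kw = 1320        # 1.32 megawatts average consumption
rate_per_kwh = 0.089       # US dollars per kilowatt-hour, per the footnote
hours_per_year = 365 * 24  # constant operation, non-leap year

yearly_cost = avg_power_kw * hours_per_year * rate_per_kwh
print(f"US ${yearly_cost:,.0f}")
```

This works out to US $1,029,124.80, matching the sidebar's figure to within rounding.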
This story was corrected on 11 August 2008