Supercomputer efficiency jumps, but nowhere near exascale needs

It is not precisely the kind of leap that the supercomputer industry needs to reach exascale performance by the end of the decade, but more powerful GPU and x86 coprocessors are enabling more energy-efficient machines, at least according to the latest Green500 rankings. The Green500 list comes out two or three times a year, …

COMMENTS

This topic is closed for new posts.
Anonymous Coward

"you will still need a 42 megawatt nuke plan to power an exascale machine."

?

Do you mean ~4% of the output of an average nuclear plant?
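A rough sanity check, taking a typical large reactor as roughly 1 GW of electrical output (that figure is my assumption, not from the article):

\[ \frac{42\ \text{MW}}{1000\ \text{MW}} \approx 4.2\% \]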


Re: "you will still need a 42 megawatt nuke plan to power an exascale machine."

Toshiba 4S (Super Safe, Small and Simple)

"The actual reactor would be located in a sealed, cylindrical vault 30 m (98 ft) underground, while the building above ground would be 22×16×11 m (72×52.5×36 ft) in size. This power plant is designed to provide 10 megawatts of electrical power with a 50 MW version available in the future."


> 2,351 megaflops per watt

I've never understood this unit. It's in effect just "megaflop per joule". Why not use that?
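For what it's worth, the dimensional analysis confirms the equivalence: a watt is a joule per second, so the seconds cancel:

\[
\frac{\text{Mflop/s}}{\text{W}} = \frac{10^{6}\ \text{flop/s}}{\text{J/s}} = 10^{6}\ \frac{\text{flop}}{\text{J}}
\]

So 2,351 Mflops/W is exactly 2,351 Mflop/J; presumably the per-watt form stuck because vendors quote sustained flop/s and power draw in watts.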

An interesting point from "Energy Oddities, Part 2: Why Green Computing Is Odd" by Kirk W. Cameron (IEEE Computer, March 2013):

Consider the power profile of an average system with two 4-core Intel Xeon processors under various loads. (...), the power consumption varies according to the system’s use of memory, disks, and processors, fluctuating from about 120 W at idle to more than 200 W under an intense CPU workload. This constitutes about a 90-W or 42% up-swing in energy use, which typically occurs in less than 50 ms.

Now, consider a datacenter with 1,000 or 10,000 of these types of systems. The largest possible increase in power consumption in less than 50 ms in the case of 1,000 systems is about 90 kW, and in the case of 10,000 systems, it is about 900 kW (...) power companies prefer steady power consumption to the impractical nature of spinning generators up and down and the high cost of overprovisioning. So, for a single datacenter with less than 10,000 systems using a typical Xeon server, these energy swings are real but not troubling because power providers operate in the range of tens to hundreds of MW.

General-purpose graphics processing units (...) engines consume significant amounts of power, ranging from 30 to 200 W per GPGPU. (...) An example system’s baseline power increases about 40 watts with the GPGPU card installed. Furthermore, under a matrix multiply workload running on the Nvidia card, the power can fluctuate as much as 62 percent, or 270 watts, in less than 50 ms.

What are the implications for the datacenter? For 1,000 systems, swings of about 270 kW would occur, whereas for 10,000 systems, swings could exceed 2.7 MW. These are big numbers, and we have no idea whether they will wreak havoc on the power grid. Certainly a 2.7-MW swing is less than ideal from a power company’s perspective. (...) If we extrapolate the data from our system to 10,000 nodes with three GPGPUs each, swings could exceed 8 MW.

(...) individual datacenters and colocation facilities, such as those Google and Facebook are building, are getting larger. For various reasons, companies are clustering datacenters in certain areas—for example, Salt Lake City, UT, and Herndon, VA. With these types of observed fluctuations across multiple datacenters, potentially dramatic power swings in the tens of MW can place additional strains on the power grid.
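A quick back-of-the-envelope script (Python; the per-node swing figures come straight from the excerpt, and the node counts are the article's own examples) reproduces the numbers above:

# Power-swing arithmetic from the IEEE Computer excerpt quoted above.
CPU_SWING_W = 90    # ~90 W swing per dual-Xeon node
GPU_SWING_W = 270   # ~270 W swing per node with one GPGPU card

for nodes in (1_000, 10_000):
    print(f"{nodes:6d} CPU-only nodes: {nodes * CPU_SWING_W / 1e3:6.0f} kW swing")
    print(f"{nodes:6d} 1-GPGPU nodes: {nodes * GPU_SWING_W / 1e6:6.2f} MW swing")

# The article's extrapolation: 10,000 nodes with three GPGPUs each.
print(f"10,000 nodes x 3 GPGPUs: {10_000 * 3 * GPU_SWING_W / 1e6:.1f} MW swing")

This gives 90 kW and 0.27 MW for 1,000 nodes, 900 kW and 2.70 MW for 10,000 nodes, and 8.1 MW for the three-GPGPU extrapolation, matching the quoted figures.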


As I have understood it,

Blue Gene uses PowerPC CPUs, not POWER CPUs, because PowerPC is tailored to the low end, whereas POWER is tailored to the high end, where power consumption is not a big deal. For instance, the IBM Blue Gene supercomputer uses 750 MHz PowerPC CPUs, which was not impressive even when it was new. But the difficulty lies in using all the cores effectively, and no one has really succeeded at that. Linear scaling does not exist. When or if we ever achieve linear scaling, it will make sense to upgrade to better CPUs. But today, CPU performance is not the problem. The interconnect is the problem, as it always has been.
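One standard way to see why linear scaling doesn't exist is Amdahl's law: if a fraction s of the work is serial (or spent waiting on the interconnect), the speed-up on N cores is bounded by

\[
S(N) = \frac{1}{s + \frac{1 - s}{N}} \;\longrightarrow\; \frac{1}{s} \quad \text{as } N \to \infty
\]

so with even 1 percent serial work, no number of cores gets you past a 100x speed-up.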

Blue Gene uses a tailor-made Linux to distribute the workload out to each node in the cluster. Then a special OS takes over and does the actual number crunching. So I would not say that Blue Gene runs Linux, even if it is listed as a Linux server.
