Boffins in Switzerland have warned that increasingly powerful computer processors are set to guzzle the entire world electricity supply by the year 2100. They say that only 3D myria-core chips can save the day.

[Image: The 3D multicore concept. Credit: EPFL]

Getting on top of Moore's law

"Industry's data centres already consume as …
Why not use R410A or some other non-conductive, non-toxic, non-corrosive industrial refrigerant and rid themselves of the pipes? Just pump the coolant in one side of the stack, let it flow while evaporating, and collect the resultant hot gas on the other side. No pipes, just small expanding holes etched in the surface of the bottom chip and one quick-connect pipe socket on either side of the module, and you're ready to go.
Use one compressor for all the parallel chips, and get the efficiencies in moving heat associated with heat-pump air-con systems.
The problem with that...
...would be directing the flow. As the refrigerant approached the hottest parts of the chip (or should we call it a "block" for 3D?) it would boil off and fail to cool it, which would lead to a runaway heating problem. As for why water as opposed to Freon, probably for environmental reasons. Leakage would be inevitable, and I for one would rather have the odd water molecule leak out instead of the odd Freon molecule. The conductivity of the water wouldn't be a problem; you'd need absolutely pure distilled water to avoid gumming up the coolant paths, and pure water is nonconductive.
<pedant>I object to the use of "brain density computation" for this: you state there is one transistor in the same volume as a neuron in a human brain. Neurons exhibit much more complex behaviour than FETs!</pedant>
Beer, because that's the liquid that keeps my brain running nice and cool.
Flaw in logic.
".....increasingly powerful computer processors are set to guzzle the entire world electricity supply by the year 2100....."
Always assuming that come 2100 we're still using basically the same sort of processor technology.
Assumption. The mother of all fuckups.
Flaw in the spin
Absolutely. Also assuming that we aren't generating any more electricity in 90 years time. Back in 1920, we didn't even have electronic computers and certainly weren't chucking terawatts across the face of the globe. Fast forward to 2100 and we probably won't still be using electronics for computing and we will either have abandoned the global grid or become utterly dependent on it. (About 50:50, I'd say.)
Even in the very short term we have alternatives. If anyone has any idea how to program a myriacore chip, then you can cut power consumption by at least an order of magnitude simply by having many cores clocking more slowly. That's doable today. (Well, the hardware is, anyhow.)
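The order-of-magnitude claim can be roughed out with the standard first-order CMOS power model. This is a sketch under an assumed ideal voltage-frequency scaling, not measured figures for any real part:

```python
# First-order CMOS power sketch (assumptions: dynamic power P ~ C*V^2*f,
# and supply voltage V can be scaled roughly in proportion to clock
# frequency f, so per-core power goes roughly as f^3).

def relative_power(n_cores: int) -> float:
    """Power of n_cores each clocked at 1/n_cores of the base frequency,
    relative to one core at the base frequency (same total throughput)."""
    f = 1.0 / n_cores          # each core runs proportionally slower
    per_core_power = f ** 3    # P ~ V^2 * f with V ~ f
    return n_cores * per_core_power

print(relative_power(1))   # 1.0    -> single fast core, baseline
print(relative_power(4))   # 0.0625 -> 16x less power for the same work
```

The cube law is why "many slow cores" wins on paper; the catch, as noted, is programming the thing.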
2D vs 3D
So running multiple cores in a 3D rather than a 2D layout saves energy exactly how? I can understand that you can pack more cores into the same space, but that's not the same thing at all (although 3D structures do allow for lower between-core latency).
Liquid coolants simply won't flow through 50-micron tubing in any meaningful way (well, liquid helium would, as it's a superfluid, but it's hardly practicable). For pipelines that small, it's going to require a gas.
As it happens, R410A, as a refrigerant, will be a gas during part of the refrigerating cycle. Refrigerants work by exploiting the latent heat of evaporation when pressure is reduced, but that's not very compatible with narrow-gauge tubing - in a typical refrigerator, the narrow-gauge capillary tubing carries high-pressure liquid to the point where it expands and evaporates, while the gas is turned back into a liquid in the condenser, where the heat is dissipated through a heat exchanger. In this case, condensation is precisely what you wouldn't want anywhere near the ultra-narrow-bore tubing, as it would heat up the chips.
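The "won't flow in any meaningful way" point can be put on a napkin with the Hagen-Poiseuille law. The channel dimensions and pressure drop below are illustrative assumptions, not figures from the article:

```python
import math

# Hagen-Poiseuille law for laminar flow in a round tube:
#   Q = pi * r^4 * dP / (8 * mu * L)
# Dimensions and pressure below are illustrative assumptions.

def poiseuille_flow(radius_m, dp_pa, viscosity_pa_s, length_m):
    """Volumetric flow rate (m^3/s) for laminar flow in a round tube."""
    return math.pi * radius_m ** 4 * dp_pa / (8 * viscosity_pa_s * length_m)

# Water (mu ~ 1e-3 Pa.s) through a 50-micron-bore, 1 cm long channel
# with a full bar of pressure across it:
q = poiseuille_flow(radius_m=25e-6, dp_pa=1e5,
                    viscosity_pa_s=1e-3, length_m=0.01)
print(f"{q * 1e9:.2f} mm^3/s per channel")  # ~1.53 mm^3/s
```

Note the r^4 term: halve the bore and the flow drops sixteen-fold, which is why microchannel coolers rely on thousands of channels in parallel rather than on meaningful flow through any single one.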
How 3D saves energy
Taking a signal off-chip uses loads of energy because of the capacitance and inductance of the wiring. That's why you save energy by putting everything together on the same chip. But communication within a giant 2D chip isn't cheap either, which is why people resort to a Network-on-Chip: http://en.wikipedia.org/wiki/Network_On_Chip
You might be able to solve these problems with some kind of cunning 3D interconnect ...
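The off-chip energy penalty is easy to sketch with the switching-energy formula E = C·V²/2. The capacitance values below are textbook orders of magnitude, and the through-silicon-via figure is my assumption:

```python
# Switching-energy sketch: driving a wire costs E = C * V^2 / 2 per
# transition. Capacitances are textbook orders of magnitude; the
# through-silicon-via (TSV) figure is an assumption.

def switching_energy(capacitance_f: float, voltage_v: float) -> float:
    """Energy in joules to charge a capacitance to a given voltage."""
    return 0.5 * capacitance_f * voltage_v ** 2

V = 1.0                                  # assumed supply voltage
on_chip  = switching_energy(200e-15, V)  # ~1 mm on-chip wire, ~200 fF
off_chip = switching_energy(10e-12, V)   # pad + PCB trace + load, ~10 pF
tsv      = switching_energy(50e-15, V)   # assumed vertical via, ~50 fF

print(f"off-chip vs on-chip: {off_chip / on_chip:.0f}x")  # 50x
print(f"off-chip vs TSV:     {off_chip / tsv:.0f}x")      # 200x
```

Replacing a board-level hop with a short vertical via is exactly where a 3D stack claws back energy.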
I would love to see you pump any appreciable volume of a high-surface tension fluid down a pipe with an ID significantly less than a human hair. That goes for gases too, which in these volumes would be just as viscous. I wonder how loud the pump detonation would be...?
As for the pure water to avoid gumming up the coolant paths, it's not going to make a difference whether the water is purely atomic or swimming with fish. Give it two minutes flowing through an arrangement of semiconductor this porous and it will very rapidly become a contained circuit of the same cutting fluid used for water-jet machining of metal blocks.
Although a high-pressure jet of silica in water lancing under the skin of the first techie to stick their fingers in this diesel-injector-alike would serve as a suitable warning to others...
Just my two penn'orth.
I see the 80s are back again.
Slots in the back face of microchips for cooling: see Petersen's review article ("Silicon as a Mechanical Material", Proc. IEEE, 1982). Stacked processing chips would be the Hughes airborne computer efforts. IIRC the ultimate goal was a sensor layer (UV, IR, vis, whatever) with multiple layers underneath for memory and processing. The plan was to use spots of tin on the chip surface and a front-to-back temperature gradient to drive in the tin and create a front-to-back conductive channel.
They might also look at what happened when Gene Amdahl set up "Trilogy" to do wafer-scale integration. The key problem (which broke them) was the inability to find a way, at a reasonable cost, to make wafers which could cut the failed sections out of the signal path and leave the rest running.
Note that current liquid cooling methods mostly seem to use water in a heat-pipe arrangement, which is controlled boiling.
I would suggest that all of this is a bit of a red herring. The ultimate source of *most* of that heat is the clock driver transistors that distribute the umpteen-GHz system clock, *regardless* of whether or not that particular section is actually even operating.
Putting more chips together in a closer space merely means they will waste even more energy in a confined space.
If you want lower power you implement asynchronous (clockless) systems like the Manchester ARM developments (the Amulet project) or the design libraries of Philips.
People will only start looking at this when someone works out a way to sell clockless processors while differentiating different grades (i.e. cost) based on some sort of parameter people can compare. Some kind of agreed "throughput" measure would be reasonable, but the time from reading some values from off-chip RAM to writing them back (to off-chip RAM) is likely to be rather longer than the sub-ns duration of a clock cycle.
AFAIK the limitations on speed-up bought by parallel processing have not gone away. Somewhere in the 10-16 processor area is where the shared-memory approach hits the skids. There were very good reasons why the Transputer was conceived as it was. People who ignore them are asking for trouble.
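That shared-memory ceiling is Amdahl's law territory. A minimal sketch, assuming a 90% parallel fraction for illustration:

```python
# Amdahl's law: speedup on n processors for a program whose parallel
# fraction is p is S = 1 / ((1 - p) + p / n). With p = 0.9 (assumed),
# the curve flattens right around the 10-16 processor mark.

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 4, 8, 16, 64, 1024):
    print(f"{n:5d} processors -> speedup {amdahl_speedup(0.9, n):.2f}")
# Speedup can never exceed 1 / (1 - p) = 10, however many processors
# you throw at the problem.
```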
As for "neuronal density", individual transistors are long past that in the 2D sense. The *huge* relative thickness of dead silicon substrate is the problem here. All the real action is within roughly 5 microns of the surface.
And of course making a unit mimic the actual action of a true net of neurons and axons is another matter.
Mine's the one with Carver Mead's Analog VLSI and Neural Systems in it.
sharp enough to cut yourself
You raise some good points, but some of your points seem to be based on slightly outdated information.
1) Problems re wafer scale integration: We're now at the point where very few large dies are defect free. Most chip houses already have technology that allows them to blow fuses and disable whichever cores/caches have defects. Then they bin the parts and set prices accordingly. Potentially 3-d technology would allow the use of smaller dies, helping improve the relative proportion of perfect dies, which could possibly even simplify matters.
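The yield argument in point 1 can be sketched with the simple Poisson defect model Y = exp(-A·D); the defect density used here is an illustrative assumption:

```python
import math

# Poisson defect-yield sketch: the fraction of defect-free dies is
# Y = exp(-A * D) for die area A (cm^2) and defect density D
# (defects/cm^2). The defect density is an illustrative assumption.

def die_yield(area_cm2: float, defects_per_cm2: float) -> float:
    return math.exp(-area_cm2 * defects_per_cm2)

D = 0.5  # assumed defects per cm^2
print(f"4 cm^2 die: {die_yield(4.0, D):.1%} good")   # ~13.5%
print(f"1 cm^2 die: {die_yield(1.0, D):.1%} good")   # ~60.7%
```

The caveat: stacking only pays off if the small dies are tested and binned before bonding (known-good-die testing). Stack four untested 1 cm² dies and the chance all four are good is exp(-0.5)⁴, which is exactly the big-die yield again.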
2) Energy consumption and clocks: Clock gating is now fairly established technology. Switching power drain is actually becoming less important compared to leakage power, and there are strides being made there too (e.g. through silicon on insulator technology and ground/power gating). Again, 3-d technology might help here. If the same sort of die is stacked vertically, circuit designers might be able to take advantage of vertically stacked units to reduce the size of their clock tree relative to a chip with units side-by-side (e.g. use a single layer clock grid for the entire chip, then have vertical taps down for every register-- I realize it's probably not exactly that simple). If the total wire length for the clock tree can be reduced, the capacitance should drop and with it, the energy consumption.
3) Asynchronous processors and lack of adoption: Use of clock rate as a metric for selling parts has been steadily phasing out over the last several years. The real reason we're not seeing asynchronous processors is that they are very difficult to design and hard to test. Given the complexity of a billion-plus-transistor system and the number of engineers required, this is a killer. I guarantee that if someone is able to develop methodologies that simplify asynchronous design and test to the same levels as synchronous design, the advantages in power consumption and performance will win the day.
4) Parallel processing and limitations on scaling: It all depends on what you want to do. GPUs are massively parallel, and because of the problem domain, can take advantage of it. I would suggest that as we increasingly store and record multimedia content and expect our devices to deal with it in an intelligent manner (e.g. "Computer, find me the picture of me 'n' Ted down the bar last friday."), our problem domain becomes more parallel. This sort of behavior will require a wide variety of very intense but computationally different and largely independent tasks. In that scenario, more processors -> more processing per unit time (possibly human reaction time for a hand-held, or maybe a few weeks for scientific applications), rather than more processors -> same processing, less time.
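The "more processors, more processing per unit time" scenario in point 4 is essentially Gustafson's law (scaled speedup). A minimal sketch, with an assumed 95% parallel fraction:

```python
# Gustafson's law (scaled speedup): if the workload grows to fill the
# machine - more photos to index, more streams to tag - then
# S = (1 - p) + p * n, and throughput keeps climbing with core count.
# The 95% parallel fraction is an assumption for illustration.

def gustafson_speedup(p: float, n: int) -> float:
    return (1.0 - p) + p * n

for n in (4, 64, 1024):
    print(f"{n:5d} cores -> scaled speedup {gustafson_speedup(0.95, n):.1f}")
```

Contrast this with Amdahl's fixed-workload view: grow the problem with the machine and the serial fraction stops being a hard ceiling.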
Aren't computers cool?
What's The Problem?
What's the problem? It's clear that by 2100 we will be using light interconnects inside the chips as well as between the layers of crystal cubes.
Or what's all that money been spent on, if not to make things better and faster? We can rebuild it from the R&D that's already been done around the world...
I'm getting really bored with this "90% of the innovative new product is just around the corner, coming in the next few years" PR-salesman tech. Just fucking make something, get it working, stick it on the production lines and THEN tell us its specs... and you can buy it this Christmas.
Doesn't the IT world ever think about producing something really new that you might actually want to buy today, and actually have something sellable, ready and waiting for the mass Christmas-quarter SALES, any more?...
Future potential is all well and good, but this Christmas I'd just be happy to be able to buy a consumer SOHO 10Gbit/s Ethernet router, cables and a four-box set of 10Gbit Ethernet cards for a ton.
Or hell, how about a firm or other outlet that actually makes and SELLS a £50 ARM Cortex-A9 dual- or quad-core motherboard, say just under the size of a generic DVD case? Then we can talk...
We are told these two products are potentially available, but not really. So why even care about potential 2100 products until 2100? Don't even think about PRing us in 2099 about them if you can't actually buy them by then.
Roland Emmerich knows better
"Boffins in Switzerland have warned that increasingly powerful computer processors are set to guzzle the entire world electricity supply by the year 2100. They say that only 3D myria-core chips can save the day."
The world will destroy itself by 2012. Roland knows better than the Swiss.
I'm surprised that the computers necessary to render the CG in that movie didn't use up the world's entire power supply on their own...
Fluids in tiny pipes? What about blood?
While I understand that it should be very difficult to pump fluid through very narrow pipes, could someone possibly explain how we get it done for us through our capillaries, which go down to about ten microns or so in diameter and are full of blood, which I imagine is pretty viscous on those scales?
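A rough answer comes from applying the same Poiseuille physics to a single capillary; the numbers below are textbook-order physiological values, not precise measurements:

```python
# Poiseuille-style estimate of the pressure needed to push fluid at
# speed v through a tube of radius r and length L:
#   dP = 8 * mu * L * v / r^2
# The values below are textbook-order physiological assumptions.

def pressure_drop_pa(mu_pa_s, length_m, velocity_m_s, radius_m):
    return 8 * mu_pa_s * length_m * velocity_m_s / radius_m ** 2

dp = pressure_drop_pa(mu_pa_s=3e-3,        # blood's apparent viscosity
                      length_m=1e-3,       # ~1 mm capillary
                      velocity_m_s=0.5e-3, # ~0.5 mm/s flow speed
                      radius_m=4e-6)       # ~8 micron bore
print(f"{dp:.0f} Pa (~{dp / 133:.1f} mmHg) per capillary")
```

So the trick is that each capillary is short, the flow in it is glacially slow, and there are billions of them in parallel; the body never tries to push an appreciable volume down any single ten-micron pipe.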