Silicon daddy: Moore's Law about to be repealed, but don't blame physics

Moore's Law, which promises exponentially increasing transistor counts thanks to chip-manufacturing process shrinkage, is about to hit the wall. As Intel Fellow Mark Bohr once told The Reg, "We just plain ran out of atoms." But one industry veteran looks at the reason for the repeal of the semiconductor …

COMMENTS

This topic is closed for new posts.

Silver badge
Holmes

Been hearing this...

... for almost 15 years.

Wake me when it really happens.

5
4
Anonymous Coward

Re: Been hearing this...

Agreed. Making predictions like this about the speed or capacity of technology just sets you up to be wrong (640KB is enough for everyone, etc.).

With our current tech, sure maybe he's right. But new techniques will surely arrive.

5
3
Silver badge

Re: Been hearing this...

End of capacity predictions are also a great way to make your buddies look better in the future: "Engineers at Intel have developed an affordable 1nm die process that falsifies previous predictions regarding the demise of Moore's Law".

4
0
Silver badge

Re: Been hearing this...

There are such things as atoms, you know - making a silicon transistor smaller than an atom is tricky.

There are also a few other limits, like the speed of light, the uncertainty principle and the second law of thermodynamics. It might be very cool and out-of-the-box thinking to ignore them - but the universe isn't that easily fooled.

Arguably Moore's law has already ended. The original phrasing was that the most economic number of gates to put on a single chip increases exponentially. That is no longer true: the fab needed to make 14nm parts is so mind-bogglingly expensive that it is cheaper to make 22nm wafers with fewer gates.

22
2
Silver badge
Devil

Re: Been hearing this...

Imma have commenters not knowing about them atoms?

This is a good way to start off with the fascinating World of Real World Knowledge:

Model Rail

There are also a few other limits

P =/= NP for one...

2
0

Re: Been hearing this...

Yeah it'll all be back on track once they make the jump from silicon to carbon nanotubes is what I hear...

1
0
Silver badge

Re: Been hearing this... re atoms

I don't know if you are aware, but computing doesn't have to be binary. The universe will not be fooled - but it doesn't have to be binary either; the results just have to be consistent. I remember in the early '80s people were playing with multi-level logic, which gave considerably higher computing densities but was a bit error-prone. We now have the technology to fix the errors, and we may finally have the impetus to look further into non-binary computing - it may or may not lead to significant improvements, but if there is a demand for it then there may just be the research that finds them.

Unless we just settle into another bloody patent war.

When I was designing 5um chips in the early '80s, 2um processes were 'mind-bogglingly expensive', but they happened. Gaining more functionality from a slab of silicon (or whatever) will happen - though using it to process data, rather than to write paper-shaped documents on how you would like to colour that data blue for the next release, might help.
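
To put a rough number on the density gain from multi-level logic (a toy sketch only, not a claim about any real process): each n-level signal or cell carries log2(n) bits, so ternary or quaternary levels pack more information per device, at the cost of the noise margin and error-correction overhead mentioned above.

import math

# Toy illustration: information carried per signal/cell at different numbers
# of logic levels. Real multi-level designs trade this gain against noise
# margins and error correction, as noted above.
for levels in (2, 3, 4, 8):
    bits = math.log2(levels)
    print(f"{levels} levels -> {bits:.3f} bits per digit ({bits:.2f}x binary density)")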

2
0
JDX
Gold badge

Re: Been hearing this...

>>Been hearing this...... for almost 15 years. Wake me when it really happens.

True, but "it hasn't happened yet" is not a good argument for "it will never happen". The boy who cried wolf did get eaten by a wolf.

4
0
Anonymous Coward

Re: Been hearing this...

Wake me when it really happens.

Yup, agreed. Actually, only wake me if such improvements result in a computer that is actually faster, instead of being absorbed by UI gimmicks and software frameworks that don't reduce my workload. I still have to wait for the computer, despite the fact that the machine now has a clock speed measured in GIGAhertz instead of the 4.77MHz we started with..

</grumble>

8
0
Coat

Re: Been hearing this...

@JDX

Crying wolf, very true, but... I know it's off topic, but it's just like IPv4 allocation running out: it has run out, yet it still keeps giving with no obvious end in sight?

What do they say about necessity?

Just sayin'. I'll get me coat - off to look for 8 groups of 4 hex digits and some colons (that's not meant to be rude either).

0
0
Bronze badge
Facepalm

Re: Been hearing this...

"IPv4" is not "Still giving". It's hacked together with NAT to fix it. Which is poor and break some required specs for communications (but is ok for most).

For example:

http://youtu.be/01ajHxPLxAw

Same goes for the comment on "does not need to be binary". You could use ternary signalling (which has no errors) etc., but I'm not sure it reduces the density problem.

0
0

Re: Been hearing this... re atoms

Ya beat me to it. There are many ways to compute faster and more energy-efficiently. Higher-order logic is one; DNA, quantum, phase transitions and so on.

0
0
Thumb Down

Human Brain 1000000x more powerful than a computer

The computational power of the human brain - at least a million times greater than any desktop computer's, running on a fraction of the energy per operation - shows that computers can be improved by at least six orders of magnitude without even trying.

This decade will see a serious rampup of efforts to reverse engineer the brain. Such efforts should produce a spectacular array of computing improvements, more than enough to keep Moore's Law alive and kicking.

As to people who think that only a brain can produce brain like computation - if all else fails, that is exactly what will happen. The next generation of computers may need nutrient solution as well as electricity.

1
2
Silver badge

Re: Human Brain 1000000x more powerful than a computer

A six-orders-of-magnitude increase in anything requires quite a bit of trying. Just increasing assholish behavior by six orders of magnitude is difficult.

Maybe someday computers will act more like human brains, but not only are they not directly comparable, medical science still has a long way to go even to understand how the brain works. Right now they are still poking it with a sharpened stick, and that's nearly the state of the art in neuroscience.

Until science actually figures out the secrets of the brain, it will be impossible to build an accurate calculation model around one. It would just be a guess and likely not a good one at that.

9
0
Silver badge

Re: Human Brain 1000000x more powerful than a computer

Really? What's 123.456 / 32.334?

My desktop computer can allegedly do 1,000,000,000,000 of these in a second - why does it take you half an hour with paper and pencil?

8
0
Silver badge
Happy

Re: Human Brain 1000000x more powerful than a computer

Not that I don't get your point; but ask your computer why an orchid is pretty to look at or why any strange food always tastes like chicken.

Brains and computers are not directly comparable.

9
0

Re: Human Brain 1000000x more powerful than a computer

Yes, but it took me just a few seconds to squint at your math problem and come up with 3.8, which is within 1/2 of 1% of the correct answer. The human brain does well at comparing the sizes of things.
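
For what it's worth, the claim checks out - a quick check of the arithmetic above:

exact = 123.456 / 32.334          # ~3.8182
estimate = 3.8                    # the squint-and-guess answer
rel_error = abs(estimate - exact) / exact
print(f"exact {exact:.4f}, estimate {estimate}, relative error {rel_error:.2%}")
# prints: exact 3.8182, estimate 3.8, relative error 0.48% - within 1/2 of 1% as claimed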

8
0
Bronze badge

Re: Human Brain 1000000x more powerful than a computer

@ewozza

More powerful? Well, that all depends on the measurement.

'Speed' and 'power' are well-defined terms. The problem is that to use such terms in such a way that they are useful, you have to be very careful to define exactly what you are measuring.

If you said that a Ferrari was faster than a bus you'd think you stood on pretty solid ground. And you'd be correct if you were measuring how quickly each vehicle could carry the driver to a given destination. But, you'd almost certainly be INCORRECT if you were measuring how quickly you can get 50 people to a given destination.

That ambiguity is why Moore didn't talk about power or speed; he talked about transistors. So, saying the human brain is "1000000x more powerful than a computer" has nothing to do with Moore's law.

Of course, I don't believe that you were implying it _did_ but, again, the question of what is more 'powerful' - the brain or a CPU - cannot be answered unless you define exactly what qualities you are assessing and what measurements you are using to come to a conclusion.

At any rate, the brain is not comparable to a CPU (which is the subject of Moore's law and this article) but to an entire computer; hardware and software. The brain has specialised components for processing different senses, short term and long term memory, speech, etc... It also has very sophisticated 'firmware' to pull it all together.

The brain does use a fraction of the power of a modern HIGH POWER processor, but then 1/1.1 is a fraction, so that's not really helpful! Average figures are about 20W for the brain. That sure is less than modern Intel desktop and server CPUs, but it is more than the latest Atoms.

You might argue the brain is 'more powerful' than an Atom CPU but again, it really depends on what you are measuring.

As for questions about orchids etc..., that's not a CPU problem - that's a software problem. Anyway, you have to feed all the data points into the CPU. Saying a CPU can't understand an orchid is irrelevant because a brain can't either if it's removed from sensory input!!!!

14
0
Anonymous Coward

Brain vs mind

Once boffins have worked out how brains normally work, how much longer to work out (a) how minds work, (b) how mine works?

Or will it just be the usual "we've sorted the hardware, now all you lot need to do is program it".

2
0
Anonymous Coward

Re: Human Brain 1000000x more powerful than a computer

I don't know where you get your "at least a million times more powerful" from. It very much depends on what you want to do. If, for instance, it is 80 bit floating point division, a microprocessor is more than a million times faster than the human brain and uses an awful lot less energy to do the calculation.

The human brain does very different things from computers and they cannot directly be compared. For instance, it is a very good pseudo real time controller that integrates inputs from a range of rather sophisticated sensors and controls a complex range of outputs. But our main problem there is that we have not yet designed an architecture for electronic sensors and actuators, along with the necessary computation. For instance, the eye is effectively a system on a chip that carries out an awful lot of signal processing and sends a very low bandwidth signal (tens of kilobits per second) to the brain. There are, however, a number of optimisations that it carries out that don't always work. (Stare at a blue sky for a while and until you change your point of view you won't notice any clouds moving over, because the rate of change of signal is too low to register). Currently the main use for optical sensing is to create images that work for the human eye. If there was a compelling use case that requires eye-like behaviour, I suspect it would get developed, and that the computing power needed would be fairly low.

"Reverse engineering" the brain has been attempted by the AI community for a long time with little success. You have a system which is optimised for controlling a primate. It can carry out a wide range of signal processing, storage and retrieval processes in a low-bandwidth way. So far computers have been designed to compensate for the weaknesses of this system. It isn't clear to me what benefits you would get from a synthetic human brain. A financial system that could make its own screwups through a desire to impress other financial systems, or through having its software screwed up by a desire to have sex with another financial system? An automated car that gets drunk and drives into road signs and other cars?

Plucking numbers out of the air with lots of zeroes should be left to politicians and Ponzi scheme operators.

8
0
Pint

Re: Human Brain 1000000x more powerful than a computer

The human brain is a million or a billion times faster than any computer at reasoning as a human, thanks to 4 billion years of evolution as a living being and a million years of evolution as Homo.

But I'll not bet on the human for factoring large numbers into primes, or indexing the www.

That is the downside of benchmarks: how do you define "power"?

How do you define intelligence?

A dog is far more "powerful" than me at processing odours and at path finding, a bee is a far better citizen than me, a tree may be far more intelligent than me in ways I cannot imagine (think of chemical message passing...), and a computer definitely computes faster.

How do you compare apples and oranges?

3
0

Re: Human Brain 1000000x more powerful than a computer

I don't buy the 1-million-times-smarter argument at all.

Almost every time a claim about a good measure of intelligence has been made computers have eventually done a better job (with a few notable exceptions).

Computers are now better than the top humans at chess, jeopardy, chip layout, optimization and path planning, mechanical assembly, specialized vision applications (spot the tanks), library science (index the web), weather prediction, stock market prediction, and probably more I'm not thinking of.

Where they aren't yet even are things like natural language parsing, artistic endeavors, and general vision applications.

Also worth noting that the human brain is about 14,000 times the volume of a single die, so a fairer comparison would be an average human against Oak Ridge's Titan. Transistors are pretty frickin' good already; it's the power, cooling, and interconnect on the large scale that need work.

1
0
Silver badge

Re: Human Brain 1000000x more powerful than a computer

it's the power, cooling, and interconnect on the large scale that needs work.

Very well said. When Moore's law finally runs out, the next major breakthrough will have to be in parallel coding.

A (human) brain has ~10^11 processing elements clocked at maybe 10Hz. A CPU has ~10^9 transistors clocked at ~10^9 Hz. By that measure a humble desktop CPU should be ahead of us by a few orders of magnitude. So what gives?
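
Running those rough figures (order-of-magnitude assumptions only, not measured data), the raw element-switches-per-second comparison comes out about six orders of magnitude in the CPU's favour, which is the 10^6 gap discussed below:

brain_elements  = 1e11   # ~10^11 processing elements
brain_rate_hz   = 10     # "clocked" at maybe 10 Hz
cpu_transistors = 1e9    # ~10^9 transistors
cpu_clock_hz    = 1e9    # ~10^9 Hz (1 GHz)

brain_switches = brain_elements * brain_rate_hz    # ~1e12 per second
cpu_switches   = cpu_transistors * cpu_clock_hz    # ~1e18 per second
print(f"CPU ahead by ~{cpu_switches / brain_switches:.0e}x on this crude measure")
# prints: CPU ahead by ~1e+06x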

Well OK, a neuron is a much more complex device than a transistor, but a million times more complex? Unless it's in some sense a quantum computational element rather than a classical one (which cannot be ruled out), I don't think there's a difference of complexity of order 10^6. Surely not 10^9 (comparing against a 1000-CPU cluster).

Which leaves the dense-packed 3D interconnect in a brain, and even more so the way it is able to utilize its hardware in parallel for a new problem without any (or much) conscious organisation of what that hardware will do or which algorithms it will deploy.

The next huge breakthrough will have to be in problem definition technology. Something that fulfils the same role as a compiler, but working across a much greater range between the statement of the problem and the hardware on which the code will run. There are some scary possibilities. Success on this front may reveal that a decent scientific computing cluster is actually many orders of magnitude superior to ourselves as a general-purpose problem solver, and might also reveal intelligence and consciousness as emergent properties.

Jill (*) or Skynet?

(*) Jill is the most benign first AI I know of in SF. Greg Bear, "Slant".

3
0
Silver badge

Re: Human Brain 1000000x more powerful than a computer

Subjectivity, or the lack thereof, is where computers suck, and always will. A non-faulty computer or calculation design will always return the same answer, assuming other variables remain constant. A human brain is completely different in that it is likely to return a different answer each time, even if the variables remain constant.

In a Human brain each different answer to a subjective challenge is equally as valid as the previous or the next. Unlike a computer where different answers mean it is broken. Those equally valid answers are what allow Humans to assess and act wholly illogically, but still correctly.

0
0
Silver badge
Thumb Down

Re: Human Brain 1000000x more powerful than a computer

"A non-faulty computer or calculation design will always return the same answer, assuming other variables remain constant. A Human brain is completely different in that it is likely to return a different answer each time, even if the variables remain constant."

Straw man! "Assuming other variables remain constant" - if it's a realtime, event-driven system with unpredictable and unrepeatable inputs, that is never the case. A brain is clearly such a system. One may speculate that this is a large part of its superiority over a computer (though of course, an operating system is also of that nature).

0
0

Re: Human Brain 1000000x more powerful than a computer

A decently educated human brain carries software that was built by accumulating knowledge in written form over well over 6,000 years (in one way or another). On top of that you're improving it yourself to fit your needs every day, and you get daily sensory input from some highly capable devices.

1
0
Anonymous Coward

Re: Human Brain 1000000x more powerful than a computer

"Ich will" ("I want").

That is the difference.

0
0

This post has been deleted by its author

Anonymous Coward

Re: Human Brain 1000000x more powerful than a computer

Let me give you a pencil, paper, and a printout of Windows's source code and see how long it takes you to work through the boot code.

BTW, my computer boots Windows in a few seconds.

0
0
Roo
Bronze badge

I'm with Bob.

"I'm happy for your optimism." pretty much sums up my view on this too.

That said : Don't let that stop you trying out new stuff, you might discover something genuinely useful. :)

1
0
Silver badge

Dimension Z

Since I first wrote about Dimension Z in these forums, Samsung has demonstrated 24-layer integrated circuits and says the concept should scale to thousands of layers. That takes us to at least 2035 from what we already know, and more will certainly be discovered by then.

I promised you more Moore. There it is.

1
3
Silver badge

Re: Dimension Z

"Manhattan" chip building has been the holy grail for a long time (it was being talked about in the late 1970s) but it's always encountered fundamental problems which make it unsuitable for computation devices - such as "how do you get rid of the heat?"

The resulting answer always tends to be low power equipment which is more easily replicated in 2D.

Layers are great for flash, but I'm not so sure how applicable they are to anything involving significant amounts of energy dissipation.

0
0
Anonymous Coward

Re: Dimension Z

I think dimension Z is indeed the answer.

Power is increasingly less transistor-dominated. Leakage is getting under control [tri-gate, FinFET etc].

Interconnect - RC wire delay and energy - is the issue.

I say this having spent many painful months walking gate to gate closing timing at 45nm and 1.6 GHz, and even there wire delay was half the problem. Going finer it will be worse, because wire delay depends only on the aspect ratio of the wires, while transistor delay shrinks with process.

Why would 3D help?

Even with the same number of gates, the average distance between gates shrinks if you arrange them in a ball instead of spreading them out in an urban sprawl. You take less energy to wire the chip, and this is also becoming the bulk of the energy.

So... chips will consume less silicon area for the same number of latches, and the interconnect will consume less energy. A 100x increase in layer count is thinkable as a long-term goal.

Suppose we have 2D and a 1mm^2 core. Reducing the maximum wire length from O(1mm) to O(0.1mm) is possible with 100 layers.

Good description of the physics:

http://www.cmoset.com/uploads/Joshi.pdf

This would naively give another 10x - 100x in clock frequency, assuming wire-load dominance. In power, this is only a 10x, as the 1/2 CV^2 charge energy is proportional to wire length. Given the increase in frequency, the energy per chip still goes up, but of course not as much as if we had taken the same gain in frequency in 2D.

Engineering challenges will involve tolerating and alleviating the increase in energy density:

i) heat removal

ii) dynamic clock gating

After the capital cost of process development has gone, cost per chip should reduce as the silicon wafer area is 100x less.
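
As a back-of-envelope illustration of that scaling (idealised, assumed numbers, not a process model): fold the 1mm^2 planar core into N layers, take the characteristic wire length as the side of one layer's footprint, and note that the 1/2 CV^2 charge energy per wire scales with wire length because C grows with length.

import math

# Idealised sketch of the wire-length argument above.
def wire_length_mm(area_mm2, layers):
    return math.sqrt(area_mm2 / layers)   # side of one layer's footprint

planar  = wire_length_mm(1.0, 1)     # ~1.0 mm  (the O(1mm) case)
stacked = wire_length_mm(1.0, 100)   # ~0.1 mm  (100 layers)
ratio = planar / stacked
print(f"wire length {planar:.1f} mm -> {stacked:.1f} mm: "
      f"~{ratio:.0f}x shorter, so ~{ratio:.0f}x less charge energy per wire")
# prints: wire length 1.0 mm -> 0.1 mm: ~10x shorter, so ~10x less charge energy per wire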

2
0
Silver badge

Re: Dimension Z is hard

Fine. Let Samsung run away with it, I don't care. I will still get the amazing new stuff.

0
0
Anonymous Coward

So what happens

if you replace electrons with photons? Difficult, I know, but sure to be solved one day. What is the theoretical limit there?

0
0
Silver badge
Coat

Re: So what happens

At the current level we are down to counting individual atoms. Obviously any sort of optical circuit requires a physical conduit made of matter for the photons to channel through, which will be made of atoms. So it's not relevant to Moore's law, which is about transistor density. Going optical can increase operational frequency to terahertz and petahertz, where transistors can't go, but that is a different topic. The upper limit of a photon switch is high enough to buy us a fourth dimension to this problem if they figure it out. It is also either impossible or at least very hard, as they have been working on it since the 1960s and haven't had a big product win yet.

4
0
Anonymous Coward

Re: So what happens

I/O.

How do you get output from a photonic calculation? A photon in transit doesn't really "exist" - in fact, because it travels at c, from the point of view of the photon its life is precisely zero. It is only detected when it hits something and ceases to exist, thus having a detectable effect like bumping an electron into a conducting energy level. Your input and output, in fact, is still going to be electronic unless you can beam the photons straight into a human eye, in which case the bandwidth of your system is going to be that of the eye/brain system, which is quite small.

0
0
Silver badge

No improvements in chip architecture for economic reasons?

I beg to disagree. What I believe we'll be seeing is the development of modular and standardized chip designs and design tools, and a better process from the design table to the fab. That would lower the cost of new designs and also allow lots of customization for special tasks. I think that's what is happening right now in the ARM ecosystem.

If Intel - or AMD or whoever - follow this route, they might be able to double their performance every, say, 48 months instead of the current 18 months, and the new design and fabrication processes would allow these new designs to be made, relatively speaking, on the cheap. And the market will always demand more processing power, so instead of taking newer and faster chips to market every year it will make sense, from an economic standpoint, to do it every 4 years.
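
To put rough numbers on that cadence (illustrative arithmetic only, assuming a clean doubling every period): over a decade, an 18-month doubling compounds to roughly 100x, while a 48-month doubling gives under 6x - that's the performance gap the cheaper design process would have to justify.

# Compounded performance growth over a decade for two doubling cadences.
def growth(years, doubling_period_years):
    return 2 ** (years / doubling_period_years)

for months in (18, 48):
    print(f"doubling every {months} months -> ~{growth(10, months / 12):.1f}x in 10 years")
# prints: ~101.6x for 18 months, ~5.7x for 48 months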

Of course this status quo will also end one day. And then, at last, coders and IT companies will have economic reasons for optimizing their code. :0)

4
0
Bronze badge

Re: No improvements in chip architecture for economic reasons?

Even if it proceeds exactly as you say, that still breaks Moore's law, which essentially says that the cost per transistor will effectively halve every two years.

The reason Moore's law has been so unerring is at least partly due to Intel baking it into their road-maps and business plans. And the reason they have done that is because process shrinkage leads to bigger profits. This is important as increasing the 'speed' or 'power' of their chips does not necessarily make good business sense whereas shrinking the die very much does.

That's why Moore's law has continued - economics. Once the economics of reducing die size looks unfavourable, Moore's law fails.

That's not to say that CPUs won't get more powerful, just that they won't double the "complexity for minimum component costs" every 18-24 months as is currently the case.

1
0
Silver badge

Re: No improvements in chip architecture for economic reasons?

"Even if it proceeds exactly as you say, that still breaks Moore's law..."

I'm well aware of that fact. I was only addressing the parts of the article that put a cap - due to economics - on performance improvements through design optimization. My point is that if a chip maker takes 4 years to double their design's performance, it will still be profitable to design and market the new chip, as long as they can keep the design costs reasonable. Creating a new chip foundry for a new shrinkage level is terribly expensive, and is responsible for most of a new chip's cost. In my opinion, there are lots of potential improvements in that area that, due precisely to economic reasons, haven't been tried yet, as any improvement made at the current shrinkage level has a big chance of not being usable at the next shrinkage level and provides a smaller return in terms of processing power.

0
0
Silver badge

Re: No improvements in chip architecture for economic reasons?

There's a second economic force at work: the demand for more power is all but over in significant parts of the market. Not only is it getting more expensive to ramp up performance, the value of that extra performance to buyers is falling - once your PC is fast enough 95% of the time, that next 5% isn't worth paying a lot for.

It's the same effect that's driving the move to lower-power tablet and mobile devices, laptops before that, and the massive slowdown in PC replacement. Concentrating computing resources in data centres can keep the quest for more performance alive a while, but we're heading for a world without a driving force behind extremely costly measures to keep Moore's law going.

4
0
Thumb Down

He is effectively predicting 3D won't work out

At, say, ~15nm per layer, a 1mm-high chip can have around 2^16 layers. If we double every 2 years, that's another 3 decades after 2020.

A few weeks ago, I would have said a bet against 3D was maybe reasonable. Then Samsung released their V-NAND with 24 layers.

1
1

Re: He is effectively predicting 3D won't work out

There is a big difference between flash memory and logic gates though (density, manufacturing process, clock speeds, etc). A flash die puts out a lot less heat than a CPU die.

Thermals provide the end constraint of these systems - how do you get heat out of the middle layers? Sure, you could drop the frequency to reduce the heat output, but that kind of defeats the point of adding more transistors in the first place, as there are limits to how well you can parallelise, so slow-and-wide isn't always a win.

0
0
Silver badge

Re: He is effectively predicting 3D won't work out

"Thermals provide the end constraint of these systems - how do you get heat out of the middle layers?"

Well, there is lots of room for improvement there. Adding a Peltier cooler layer every few 'processing layers' seems doable, and could effectively raise the number of layers to really big numbers, and there are other technologies that could also be of help.

0
1
Anonymous Coward

Re: He is effectively predicting 3D won't work out

I want to see how you do that. A Peltier cooler has a temperature difference between faces. Adding one every few layers would result in removing exactly the same amount of heat, except that the dies at the bottom of the stack would be cooler and the ones at the top would be much hotter. Unless, of course, you added a thermal block at each Peltier layer with a heat pipe to transfer the heat sideways out of the stack. This adds at least a few millimetres at each layer, and requires big heat sinks and fans to carry away the heat, as the hotter the heat sink gets the more power is needed to drive the Peltier device - and so eventually you run out of overhead. Meanwhile, you can't get signals through your Peltier device because it's a big semiconductor with conducting sheets top and bottom.

Do you see how ridiculous this is getting?

The whole point of 3D is to improve speed by reducing path lengths, and the Peltier solution would actually increase path lengths as the signals have to route around it. It would take up more space than just using a planar circuit board with conventional cooling.

Being downvoted has ceased to worry me, so I'll just add snarkily that as well as all the armchair CEOs on the Ballmer thread who have obviously never even tried to run a market stall, there's an awful lot of armchair engineers on this one who have never actually tried to understand and use some of the technologies they are espousing. Just because technologies A,B and C exists does not mean that they will work in combination to provide synergy. They may well, in fact, be mutually exclusive.

7
0
Bronze badge

Re: Adding a Peltier cooler layer every few 'processing layers' seems doable

Lol.

Peltier devices are heaters, not coolers. Only a small proportion of the heat pumped out of one side came in from the other.

1
0
Silver badge

Re: Adding a Peltier cooler layer every few 'processing layers' seems doable (@ Ribosome & JP19)

Both of you seem to be making too many assumptions on the structure of such cooling layers. Just to clarify: No, I'm not proposing to just stick a standard, contiguous Peltier device between two layers, that would be daft. There are other options that should work, e.g. using said Peltier devices to transfer heat to internal cooling channels. I've read several discussions on such channels, and several solutions were proposed, e.g. thermal conducting channels made of copper or even graphene.

As for Ribosome's objection on path lengths... if you need to add a cooling layer every -say- four processing layers, path length only increases by 25%, independently of the number of processing layers in the chip. Which is still a big improvement over single layer designs.

Disclaimer: Circa 2005 I used a Peltier plate in an overclocked system I built, so yes, I'm perfectly aware of the way said devices work.

1
0
Silver badge

Re: He is effectively predicting 3D won't work out

> as well as all the armchair CEOs on the Ballmer thread who have obviously never even tried to run a market stall

According to Leonard Mlodinow (and almost all other statisticians) most of the good and bad performance attributed to CEO's is actually more due to chance. People greatly overestimate how much the CEO affects the success of a company. Especially when it comes to pay.

2
0
Anonymous Coward

Re: Adding a Peltier cooler layer every few 'processing layers' seems doable (@ Ribosome & JP19)

You still have the problem that you have to route round those cooling channels. They are going to take up a lot of space.

I'm surprised that a Peltier device made a lot of sense in 2005.

Looking at the thermal curves for a typical "high performance" Peltier with a 40W throughput and at a temperature difference of 20C, the waste heat is around 120W. That means that with, say, an AMD64 laptop chip of that era with a 35W TDP, you would be getting maybe a 22C temperature reduction in exchange for needing a heatsink which was capable of removing nearly 160W. Three quarters of your heatsink is just going to remove waste heat from the Peltier. If you were just able to plonk that stonking great heatsink down on the CPU, with efficient heat transfer, it would now need to remove only 35W, and so obviously would be running rather colder.
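
Restating that heat budget as arithmetic (using the approximate figures quoted above; exact numbers depend on the particular Peltier's curves):

# Approximate figures from the paragraph above.
cpu_tdp_w       = 35    # AMD64 laptop chip of that era
peltier_pump_w  = 40    # heat the Peltier can pump at a 20 C difference
peltier_waste_w = 120   # electrical power the Peltier dissipates doing it

assert peltier_pump_w >= cpu_tdp_w   # the device can just about keep up with the chip
heatsink_load_w = cpu_tdp_w + peltier_waste_w   # everything ends up on the hot side
print(f"heatsink load ~{heatsink_load_w} W, "
      f"~{peltier_waste_w / heatsink_load_w:.0%} of it the Peltier's own waste heat")
# prints: heatsink load ~155 W, ~77% of it the Peltier's own waste heat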

It made a lot of sense when I was cooling CCD devices in the 1980s, though the reason for that was mainly noise reduction, since the actual power involved wasn't that high. Even so, the cooling load on the system increases considerably as a result of the power needed to drive the Peltier, see above. Our first attempt was a miserable failure because the technician entrusted with the thermal design put the Peltier heatsink INSIDE the box with the CCD, thinking that stirring the air around it would be enough. The box got hotter and largely negated the cooling effect. Only when there was a copper block from the back of the Peltier through the box to the heatsink did we see the expected benefits.

1
0
Anonymous Coward

Re: He is effectively predicting 3D won't work out

@ribosome - "Being downvoted has ceased to worry me..."

Saying this almost always helps you not to attract downvotes, which is why I suspect you said it. If you really were not worried about downvotes, you wouldn't have mentioned it. As for the rest of your 'snarky' paragraph, it makes me think you are trying to persuade us you are an expert and/or experienced as a CEO and/or a CPU maker. Maybe you are, maybe you're not. Just being 'right' should be enough without berating others.

2
0
