Silicon daddy: Moore's Law about to be repealed, but don't blame physics

Moore's Law, which promises exponentially increasing transistor counts due to chip-manufacturing process shrinkage, is about to hit the wall. As Intel Fellow Mark Bohr once told The Reg, "We just plain ran out of atoms." One industry veteran, however, looks at the reason for the repeal of the semiconductor …

COMMENTS

This topic is closed for new posts.

            1. Anonymous Coward

              Re: Adding a Peltier cooler layer every few 'processing layers' seems doable (@ Ribosome & JP19)

              You still have the problem that you have to route round those cooling channels. They are going to take up a lot of space.

              I'd be surprised if a Peltier device made a lot of sense even in 2005.

              Looking at the thermal curves for a typical "high performance" Peltier with a 40W throughput and at a temperature difference of 20C, the waste heat is around 120W. That means that with, say, an AMD64 laptop chip of that era with a 35W TDP, you would be getting maybe a 22C temperature reduction in exchange for needing a heatsink which was capable of removing nearly 160W. Three quarters of your heatsink is just going to remove waste heat from the Peltier. If you were just able to plonk that stonking great heatsink down on the CPU, with efficient heat transfer, it would now need to remove only 35W, and so obviously would be running rather colder.
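
              To make that budget explicit, here's a back-of-the-envelope sketch in C. The figures are the ones quoted above, read off the datasheet curves, so treat them as ballpark:

                #include <stdio.h>

                int main(void)
                {
                    /* Figures read off the Peltier thermal curves quoted above */
                    double q_cpu   = 35.0;   /* W: heat pumped away from the CPU (its TDP)      */
                    double p_drive = 120.0;  /* W: electrical power the Peltier needs at dT=20C */

                    /* The hot side must reject the pumped heat PLUS the drive power */
                    double heatsink_load = q_cpu + p_drive;

                    printf("Heatsink load with Peltier:  %.0f W\n", heatsink_load); /* 155 W */
                    printf("Heatsink load without:       %.0f W\n", q_cpu);         /* 35 W  */
                    printf("Share that is Peltier waste: %.0f%%\n",
                           100.0 * p_drive / heatsink_load);                        /* ~77%  */
                    return 0;
                }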

              It made a lot of sense when I was cooling CCD devices in the 1980s, though the reason for that was mainly noise reduction, since the actual power involved wasn't that high. Even so, the cooling load on the system increases considerably as a result of the power needed to drive the Peltier, see above. Our first attempt was a miserable failure because the technician entrusted with the thermal design put the Peltier heatsink INSIDE the box with the CCD, thinking that stirring the air round it would be enough. The box got hotter and largely negated the cooling effect. Only when there was a copper block from the back of the Peltier through the box to the heatsink did we see the expected benefits.

              1. HelpfulJohn

                Re: Adding a Peltier cooler layer every few 'processing layers' seems doable (@ Ribosome & JP19)

                Sponge. 3D wiring, like Menger sponges. Fill the "gaps" with fluid and shell the CPU in heatsink. In short, build it with the architecture of a mechanical brain, only better.

                "Platinum-iridium"?

  1. TheElder

    Next step is better heat transmission, lower heat generation and 3D. Layers turn x,y into x,y,z. A single added layer is one more step in Moore's Law; 4 layers is another, 8 another, and so on. How closely can those layers be spaced? 16? 64? 128? 1024?

    We are nowhere near the end of Moore's Law. We still have one full dimension to grow in.
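
    A minimal sketch of that arithmetic: if a Moore step means doubling the transistor count, then every doubling of the layer count buys exactly one more step (assuming, generously, that heat and yield don't intervene):

      #include <stdio.h>
      #include <math.h>

      int main(void)
      {
          /* Each DOUBLING of the layer count doubles the transistor count, */
          /* i.e. buys one Moore's-Law step (~2 years per step).            */
          int layers[] = { 2, 4, 8, 16, 64, 128, 1024 };

          for (int i = 0; i < 7; i++)
              printf("%4d layers = %4.1f extra doublings = ~%4.1f extra years\n",
                     layers[i], log2(layers[i]), 2.0 * log2(layers[i]));
          return 0;
      }

    Even a wildly optimistic 1024 layers only buys ten doublings - about twenty more years at the historical cadence.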

    1. dan1980

      Adding another layer onto a CPU will almost certainly INCREASE the cost per-transistor and, therefore, regardless of performance, will instantly break Moore's law.

      The crucial 'at minimum cost' part of Moore's observation really relies on process shrinkage. That's not to say some other mechanism won't take up an exponential growth again, but it will be after Moore's law has been broken.

      Such an exponential growth would be far more reasonably measured by performance (e.g. FLOPS) than 'complexity' or, more simply: "the number of components per integrated circuit".

      Remember - Moore's law is not about performance.
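
      A toy model makes the cost point concrete. The per-layer cost and yield below are invented round numbers, not industry figures:

        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            /* Invented round numbers, purely for illustration */
            double cost_per_layer  = 1.0;    /* cost to manufacture one layer (a.u.) */
            double tr_per_layer    = 1.0;    /* transistors per layer (normalised)   */
            double yield_per_layer = 0.90;   /* chance any single layer is good      */

            for (int n = 1; n <= 8; n *= 2) {
                /* A stack only works if EVERY layer in it is good */
                double stack_yield = pow(yield_per_layer, n);
                double cost_per_tr = (n * cost_per_layer) / (n * tr_per_layer * stack_yield);
                printf("%d layer(s): relative cost per transistor = %.2f\n", n, cost_per_tr);
            }
            return 0;
        }

      Yield compounds against you: at 90% per layer, an 8-layer stack is good only ~43% of the time, so the cost per transistor roughly doubles rather than falling.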

  2. nk

    Layering alone is not a solution. Without further miniaturization, power consumption will rise too much.

    1. Paul Floyd

      This of course assumes that shrinkage leads to continued reduction in power consumption.

  3. Mike Wilson

    So long and thanks for all the chips

    I'm grateful to have lived through the Moore's Law years; it has been fascinating. It has also been expensive, buying a new PC every year or two. I started out with a Commodore PET which had, if memory serves, a 1MHz clock and 8k of RAM. It cost about £800 in 1978. I haven't seen significant performance improvements for a decade. I recently used a five-year-old laptop and it was fine. Apart from having Windows Vista, obviously.

    1. Fletchulence

      Re: So long and thanks for all the chips

      Agreed - as per the article, Intel are really interested in volume, not the bleeding edge. Until there's a mass-market piece of software that requires more processing power (I can't think of anything), today's speeds remain more than adequate. Improvements in code seem to have reduced the need for speed (again, in the mass market).

    2. MacGyver

      Re: So long and thanks for all the chips

      I completely agree. I thought the whole "Moore's Law" business had been put to bed years ago too.

      If we had been keeping up with Moore's Law we'd have 14GHz chips by now. We don't; instead we have the same basic speed as 7 years ago, just spread across 8 cores. And 8 cores - who cares? Unless the program I'm running is designed to use those cores, it will run the same as on a single-core 3GHz CPU, and that is 8 years old. Seriously, we stopped increasing speed in 2006 when CPUs hit 3GHz. Did no one else notice?

      The new trend has been to go up in speed by 100MHz every 2 years, and double the cores every 5.
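
      Taking that trend at face value, a throwaway projection (the 2006 baseline of 3GHz and 2 cores is an assumption):

        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            /* Claimed trend: +100MHz every 2 years, cores doubling every 5 */
            for (int year = 2006; year <= 2026; year += 5) {
                double ghz   = 3.0 + 0.05 * (year - 2006);
                double cores = 2.0 * pow(2.0, (year - 2006) / 5.0);
                printf("%d: ~%.2f GHz x %.0f cores\n", year, ghz, cores);
            }
            /* Clocks crawl to ~4GHz by 2026 while cores balloon to 32 */
            return 0;
        }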

  4. John Smith 19 Gold badge

    A few points on dimensions. A current transistor is about 140 *atoms* wide

    And the gate oxide is about 1/10 that.

    So roughly seven width halvings (2^7 = 128) get you to 1-atom-wide transistors.

    At that point you've just about run out of atoms
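
    A quick sanity check on that halving count:

      #include <stdio.h>

      int main(void)
      {
          double width = 140.0;   /* transistor width in atoms, as above */
          int halvings = 0;

          while (width > 1.0) {   /* halve until we're down to one atom */
              width /= 2.0;
              halvings++;
          }
          printf("Halvings to reach 1 atom: %d\n", halvings);
          /* Prints 8, since 2^7 = 128 is just under 140 - call it */
          /* roughly seven generations before the atoms run out.   */
          return 0;
      }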

    But what about multiple layers? Well, now you've n x 130W per chip to get rid of.

    You are probably looking at chip packages with internal fluid heat pipe cooling to do this.

    Or you could go with the very low power neural simulation architecture started by Carver Mead over two decades ago.

    1. Ken Hagan Gold badge

      Re: A few points on dimensions. A current transistor is about 140 *atoms* wide

      "Well now you've n x 130W per chip to get rid of."

      Only if you are over-clocking the thing and squeezing those transistors as close as they'll go. The first is not something we do so much these days (post Pentium 4) and that trend will surely continue. The second is something that 2D layout encourages you to do.

      Drop the clock rate, increase insulator spacing (to eliminate leakage current) and you might find that you can now put so many more transistors onto the chip that you get the raw performance back.
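
      That trade falls straight out of the usual dynamic-power rule of thumb, P ~ C x V^2 x f. A normalised sketch (the 30% voltage drop is an assumption for illustration):

        #include <stdio.h>

        int main(void)
        {
            /* Dynamic power per transistor: P ~ C * V^2 * f (normalised) */
            double c = 1.0, v = 1.0, f = 1.0;
            double p_before = c * v * v * f;

            f *= 0.5;   /* halve the clock             */
            v *= 0.7;   /* drop the supply voltage 30% */
            double p_after = c * v * v * f;

            printf("Per-transistor power: %.0f%% of before\n",
                   100.0 * p_after / p_before);                /* ~25% */
            printf("Transistors per power budget: ~%.0fx\n",
                   p_before / p_after);                        /* ~4x  */
            return 0;
        }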

    2. Anonymous Coward

      Re: A few points on dimensions. A current transistor is about 140 *atoms* wide

      I think you're off a bit there...

      The technology node size is commonly understood to be the size of a DRAM cell, not the gate width of a transistor. The gate width is smaller, but the exact details are highly proprietary. An educated guess is that at 16nm, the gate width is ~7nm. Now, what's interesting: the crystal lattice spacing of untreated silicon is 0.543nm, plus or minus a bit for temperature, doping, etc. At 7nm wide, that's ~13-15 atoms. Which all by itself is already tricky to manufacture.
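
      The atom count is just the guessed gate width divided by the lattice constant; a one-liner to check it:

        #include <stdio.h>

        int main(void)
        {
            double gate_width_nm = 7.0;     /* the educated guess above          */
            double lattice_nm    = 0.543;   /* silicon lattice constant, approx. */

            printf("Atoms across the gate: ~%.0f\n", gate_width_nm / lattice_nm);
            /* ~13 - before worrying about doping, strain or temperature */
            return 0;
        }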

  5. Stephen Booth

    End of the law but not the end of the line.

    Yes, the exponential increase in the cost of fabs means that Moore's law is close to the end, if not already ended. At some point we will still be able to build smaller transistors, but there just won't be any point.

    We will have to get used to a minimum cost per transistor just as we have got used to a maximum practical clock speed. However there are plenty of worthwhile ways of improving computers to explore other than just blindly throwing more transistors at the problem. None of these are going to give us decades of exponential improvement but they are worth pursuing. The good news is that once the transistor process (with its huge fab costs) stops taking centre stage then it becomes possible for smaller companies to innovate and compete.

    The GPGPU market is an example of this. Floating point performance is increased over conventional multi-core by using smaller compute units and devoting a greater fraction of the transistors to floating point units.

    Chip stacking won't reduce the cost per transistor. Each layer needs to be manufactured and you may ruin some good layers by bonding them to flawed ones. However it may reduce energy consumption and drastically improve the communications between different components.

    The future is going to be interesting.

  6. John G Imrie

    "There cannot be an exponential that doesn't end," he said. "You can't have it."

    Could someone please hit the current bunch of Economists over the head with this.

    1. Don Jefe

      Re: There cannot be an exponential that doesn't end," he said. "You can't have it."

      But Capitalism falls down if you don't assume infinite exponential growth. It can't be (can it?) that economists are that bad at math, but they build the impossible right into their predictions and policies anyway.

      Woe unto you if you point out the obvious failings with the way things 'work' now. You'll be accused of either hating freedom or of being a dirty communist.

      1. Alan Brown Silver badge

        Re: There cannot be an exponential that doesn't end," he said. "You can't have it."

        "But Capitalism falls down if you don't assume infinite exponential growth."

        Even Adam Smith said that growth cannot continue forever. The current crop of economists suffer from a bad case of short-termism.

        Single digit economic growth is unsustainable for more than about a century, which is why there are horrific crashes at regular intervals. No economists plan for a level market (even in Japan, where it's been flat for 20 years) because there's a herd mentality that growth always happens.
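
        For the arithmetic behind that: even "modest" single-digit growth compounds brutally over a century (the 3% rate is just an example):

          #include <stdio.h>
          #include <math.h>

          int main(void)
          {
              double rate = 0.03;   /* "modest" 3% annual growth - an assumption */

              for (int years = 25; years <= 100; years += 25)
                  printf("%3d years at %.0f%%: economy x%.1f\n",
                         years, rate * 100.0, pow(1.0 + rate, years));
              /* A century of 3% growth is a ~19x larger economy - every
                 century, forever, if the exponential never ends. */
              return 0;
          }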

        1. Anonymous Coward

          Re: There cannot be an exponential that doesn't end," he said. "You can't have it."

          Not so much a herd mentality, but the day a bank economist admits that real economic growth is pretty much at an end due to energy and food constraints, and that the only way to improve living standards is to waste less and start to reduce the population - where are the next generation of bank bonuses coming from? Why, in fact, should bankers be paid so much at all? All that money should be going to engineers and scientists to improve the efficiency of what we already have.

          (I know there are flaws in this argument, but not as big as the flaws in the "eternal economic growth" argument.)

      2. Nigel 11

        Re: There cannot be an exponential that doesn't end," he said. "You can't have it."

        Isn't the growth capitalism depends on measured in money? Which is subject to inflation? I've always assumed that capitalism works just fine on somewhat illusory growth. In boom times growth is ahead of inflation, in slumps inflation is ahead of growth, and if there's a fundamental reason that this cannot continue for the foreseeable future I don't know of it.

        1. Anonymous Coward

          Re: There cannot be an exponential that doesn't end," he said. "You can't have it."

          "capitalism works just fine on somewhat illusory growth. In boom times growth is ahead of inflation, in slumps inflation is ahead of growth, and if there's a fundamental reason that this cannot continue for the forseeable future I don't know of it."

          You need a little bit more history. But in the meantime, try Rory Bremner's new series on BBC Radio 4, the episode about "Where did all the money go?"

          Readers of a sensitive disposition be warned: includes Max Keiser.

          Money isn't real (you said that yourself above).

          But its effects are. Monstrosities like zombie banks, and their inevitable counterparty, austerity.

          From time to time you need to reboot the system. Historically, that's either been "wiping the slate clean" (drop the debt, permanently), or revolution.

          Historically, stuff hasn't been as global as the Too Big To Fail banks, insurers, etc are today. So it's likely to be more interesting than previously, this time round.

          http://www.bbc.co.uk/iplayer/episode/b038jkx6/Bremners_One_Question_Quiz_Where_Did_All_the_Money_Go/

    2. CCCP

      Re: There cannot be an exponential that doesn't end," he said. "You can't have it."

      @John G Imrie Off topic but I have to bite. Where on earth did you read that monetary and fiscal policy is based on infinite growth? Stop reading that publication pronto.

      You seem to prefer the previous crop of economists, like the ones who advised monetary restraint in the Great Depression? That worked out well.

      Or maybe you'd prefer that no one studied the subject at all, so we could have some "real", but truly clueless, people giving advice?

  7. Paul Floyd

    Many issues

    There are many issues involved in continued die shrinkage; to list just a few:

    There's the problem of making masks. Currently there are large sets of design rules needed to create masks whose dimensions are much smaller than the wavelength of the light used. People have talked about moving to shorter wavelengths, but again there is a big economic barrier.

    Next there is the issue of what exactly scales. Back in the old days you had 5V, and you could just shrink the dimensions and nothing else. But then the electric field (voltage/distance) started getting too high, so the voltage had to start dropping. It couldn't drop as fast as dimensions shrank, though: there's a speed/power trade-off, and fundamentally silicon transistors don't work much below about 0.6 to 0.7 volts (the threshold voltage at which a transistor switches between off and on). High-k dielectrics were introduced to help with the electric-field breakdown issues.

    Then there is the problem of variability. One of the important aspects of IC design is that, while it isn't easy to control transistor parameters exactly (e.g., to have precise resistances and gains), it used to be the case that transistors physically close together on the die would be very closely matched in characteristics. When you scale down to small numbers of atoms, each transistor shows much more statistical variation. This makes design much harder.
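
    To see why the voltage floor bites, hold Vdd near the ~0.7V threshold and shrink the insulator; the field (V/t) climbs even though nothing else changed. A rough sketch - the oxide thicknesses are illustrative, not real node data:

      #include <stdio.h>

      int main(void)
      {
          /* Illustrative oxide thicknesses only - not real node data */
          double t_ox_nm[] = { 4.0, 2.0, 1.0 };
          double vdd       = 0.7;   /* can't go much below the threshold voltage */

          for (int i = 0; i < 3; i++) {
              double field = vdd / (t_ox_nm[i] * 1e-7);   /* V/cm */
              printf("t_ox = %.1f nm -> E = %.1e V/cm\n", t_ox_nm[i], field);
          }
          /* The field doubles with every halving of the oxide, marching
             toward breakdown - hence the move to high-k dielectrics. */
          return 0;
      }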

    1. Alan Brown Silver badge

      Re: Many issues

      "it used to be the case that transistors physically close on the die would be very closely matched in characteristics. "

      It used to be the case that batches of transistors were cooked up and then assigned part numbers based on their characteristics - and those characteristics would vary widely across the batch.

      This was back in the days of 97% reject rates on TTL/CMOS chip manufacture - and most failed for the same reason (widely varying characteristics).

      If you can get hold of a copy of Baum's "A little less witchery, a little more craft", it goes to great length about the bucket-chemistry approach to semiconductor manufacture in the 1970s/early 1980s.

      The holy grail for masks is X-ray lithography, but even that has had issues with finding a stable monochromatic source dating back to the 1980s. You have to start wondering if we're going to see "pick'n'place" atom shifters used instead at some future date (that's the logical end of IBM's atom-placing experiments).

  8. mark l 2 Silver badge

    I think he is correct about economics being the factor at play. With fewer desktop PCs and laptops being sold, and most phones and tablets having ARM chips, Intel is reliant on selling its top-of-the-range chips to datacentres and the hardcore gamers who want the latest and greatest, which is a much smaller market. Sure, big enterprises will still be buying desktop PCs and laptops, but the current generation of low-end Intel chips (Pentium and Celeron) are more than capable of running Windows 7, Office, email and the internet, so Intel will be making less money while still needing to invest huge amounts in R&D to reduce die sizes. Maybe the only way Intel can continue to invest as much as it does is to begin to fab chips for other companies.

  9. itzman

    It's already here...

    as far as I am concerned.

    Went to the local PC shop and enquired about an upgrade.

    The latest boards would give almost no performance increase over a 3-year-old board, for similar money.

    I think what we need to do is redesign software and get back to the days of 1000 lines of hand crafted assembler to replace 10,000 lines of C++ :-)

    1. Alan Brown Silver badge

      Re: its already here...

      "I think what we need to do is redesign software and get back to the days of 1000 lines of hand crafted assembler to replace 10,000 lines of C++ :-)"

      and/or better compilers - there are amazing variations in what's produced for the same input.

      Those of us with long memories may recall the wee experiment with recompiling the Atari ST/STe ROMs using a more modern compiler instead of Lattice C - and finding that the new code was 1/3-1/2 the original size.

      Good luck with the handcrafted assembler. You'd get better results from exposing the native RISC internals of Intel/AMD x86 chips and programming for that, rather than having to pass through the x86 micro-op translation layer inside the bloody things.

    2. Tom 7

      Re: its already here...

      "I think what we need to do is redesign software and get back to the days of 1000 lines of hand crafted assembler to replace 10,000 lines of C++ :-)"

      There was someone who managed to tidy up Win95 so it fitted in 1MB. It's not that C++ is bloated, it's that software engineers write nearly the same bits again and again and again. There is a good case for training software engineers properly - i.e. not letting them write code until they are 35 or so and have learned Knuth and Boost by heart and can break down almost any task they are given into efficient blocks of it without thinking.

      And I'd try a different OS or PC shop - my machines are still getting a lot faster. Well not my Pi obviously...

    3. Henry Wertz 1 Gold badge

      Re: its already here...

      "I think what we need to do is redesign software and get back to the days of 1000 lines of hand crafted assembler to replace 10,000 lines of C++ :-)"

      Go look at some open source projects. It's not like it's a magic bullet (if you open source a project and almost no-one looks at the code, there won't magically be improvements made to it.) I've seen a few that are, well, not good, but in general the code quality is a lot higher than you might expect.

      1) Speed-critical sections *are* in hand-optimized assembly (see every video player, I think the font library, crypto, even some bits of glibc that don't need to be in assembly but are for speed).

      2) Certain projects are blooooooated, but in general on these projects people get called out for bloat, and the worst programming practices get kicked right out of the code.

      3) The compilers can now do tricky stuff that wouldn't occur to a human trying to write optimized code.*

      *Amusing bug report for gcc-4.8: when building glibc, some optimization flag has to be turned off, otherwise the compiler recognizes that the code for memcpy is trying to copy a block of memory, and optimizes it into a call to the memcpy function 8-).
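
      For anyone who hasn't seen it, the pattern looks something like this (if memory serves, the relevant GCC pass is -ftree-loop-distribute-patterns; the function name here is made up):

        #include <stddef.h>

        /* A textbook byte-copy loop. A compiler with idiom recognition
           enabled may replace the whole loop with a call to memcpy() -
           which is infinite recursion if this IS your memcpy. */
        void *my_memcpy(void *dst, const void *src, size_t n)
        {
            char *d = dst;
            const char *s = src;
            for (size_t i = 0; i < n; i++)
                d[i] = s[i];
            return dst;
        }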

      1. asdf

        Re: its already here...

        >"I think what we need to do is redesign software and get back to the days of 1000 lines of hand crafted assembler to replace 10,000 lines of C++ :-)"

        Heck, I would be happy if we could get people to quit believing managed code is the only way to go. Microsoft figured this out, which is why they are hardly betting the company on .Net any more (not that they ever really did - they certainly didn't use it to develop their own products). A Hello World program should not need multiple megabytes of memory. As for the hand-crafted assembler comment: it's all fun and games until you have to maintain and extend hand-crafted code somebody else wrote, especially when they were into esoteric ways of doing things. It has its place, though, and luckily, as you imply, the people who do it usually know where that is.

  10. theblackhand

    Maybe I misunderstood, but...

    Moore's law is the observation that transistor counts will double every two years - breaking this law means that over a two year period, the counts don't double.

    It doesn't mean the end of the road for increases in chip performance, but does mean a dramatic change in the economics of faster chips.

    If you look at the released costs of the 14nm Intel fabs (>US$5b) and the investments in ASML (>US$4b), the costs for this look to be in the region of US$15-20b once chips start to be produced. This appears to be around double the investment of producing 22nm chips. Note these costs are my estimates - if anyone has better numbers, feel free to add them.

    Assuming the costs are accurate (or even vaguely accurate to the nearest US$5b, and that costs are roughly doubling with each new process node/half-node), and that the traditional volume markets (desktops/laptops) where Intel makes the majority of its profit are shrinking, at some point it won't be worth rushing to the next process step.
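
    Compounding that estimate forward shows why "at some point" arrives quickly. The starting figure and the doubling-per-node rate are my guesses above, nothing more:

      #include <stdio.h>

      int main(void)
      {
          /* Midpoint of the ~US$15-20b estimate above, doubling per node */
          double cost_bn = 17.5;
          const char *node[] = { "14nm", "10nm", "7nm", "5nm" };

          for (int i = 0; i < 4; i++) {
              printf("%-4s: ~US$%.0fbn\n", node[i], cost_bn);
              cost_bn *= 2.0;   /* "costs roughly doubling with each node" */
          }
          return 0;
      }

    Three nodes on from 14nm, you're staring at a US$140bn bill - which a shrinking desktop/laptop market plainly can't fund.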

  11. Buzzword

    Time to short Intel?

    I can't see Intel's high profit margins surviving with this pattern. As pointed out in other stories today, buyers are shifting from quality (faster chips) to quantity (cheaper chips). CPUs are becoming a commodity. We see this both in the server market, where Google/Amazon/Facebook's vast data centres rely on huge numbers of cheap commodity chips, and in the consumer market, where buyers are snapping up cheap new ARM tablets and clinging on to their old Intel laptops for as long as possible.

    Developers are complicit in this: they are coding for low-spec computers, rather than following the old habit of coding for tomorrow's desktops and forcing the user to upgrade. A website designed for iPad users will run very smoothly on even a four-year-old PC. The need to upgrade is weaker than it has been at any time in the past three decades.

    Faced with cheap ARM chips entering the server market, and ARM-powered Chromebooks in the consumer market, how can Intel's high margins possibly survive?

  12. smartypants

    Art to the rescue.

    For years, boffins have been beavering away making ever faster processors, and now they've run out of atoms or whatever. (Please don't bore me with the details!)

    Perhaps it's time for someone from an art college to help out.

    Can we build an effective processor which, though not so quick, is more amusing to watch doing its calculations, and therefore a more touchy-feely experience than boring old a+b=c?

    I'm thinking of rats with red paintbrushes tied to their tails being let loose in a big perspex box.

    Could that someday replace the brains in computers? Surely it's worth investigating the possibilities of art-informed technology, and it would fit neatly into the new 'thinking' of the 21st century, where anyone (e.g. me) can spout utter bollocks and be taken seriously.

    1. Anonymous Coward

      Re: Art to the rescue.

      Nah, we did that back in the day. Old PDP-11s had LEDs all over the front and you could watch the program running. You could see when (say) the wages program was in the individual employee loop, and when it was printing out the summary, or when the floating point library was being accessed, just by familiarity with the light patterns. Training systems for electronic engineers had LEDs all over the place and we had one that could be clocked down so slow you could watch the memory being accessed, the address latched, the data read, and be latched into the accumulator.

      They were also groaning slow.

  13. darklordsid

    That reminds me of the statement about Britain not needing the telegraph because it had plenty of postmen, or the other statement about all the important scientific discoveries having been made by the end of the 19th century.

    1. Mage Silver badge

      Messenger Boys

      William Preece, head of the Post Office, said it. He changed his mind and not only introduced telephones but gave an unknown Irish-Italian a big break. A guy called Marconi.

      But I don't see how it relates to Moore's law, which really ended already.

  14. Anonymous Coward

    Brain Related Comments...

    ...Remind me of a phrase in an old Neural Networks book from Uni:

    "If the brain was simple enough to understand, we'd be too simple to understand it"

  15. RobHib

    Never been physics.

    Moore's Law has never been physics. It's only ever been a measure of technological development.

    Surely, no one's ever thought otherwise.

  16. Stevie

    Bah!

    Hmm.

    Did you know that IBM salesdrones now actually give presentations in which they quote performance increases as 3 ECKS, 5 ECKS etc?

    By scrupulously avoiding saying 3 TIMES, 5 TIMES etc, they have a weasel clause for when the promised "performance increase" (performance is not defined either) fails to show up in real life.

    I wouldn't latch on to stupid stuff like this, but IBM have a dictionary of terms you and I throw about freely but which have contractual impact if you buy their kit. The use of the nicely woolly and undefined (but with an expectation built in that need not be fulfilled) ECKS has me suspecting it can be found in said corporate dictionary.

  17. Henry Wertz 1 Gold badge

    "Colwell postulated a future chip designer who accepted the fact that Moore's Law had run its course, but who used a variety of clever architectural innovations to push the envelope. "

    ARM? They've tended to ignore Moore's law to some extent, in favor of having *much* lower cost chips that are still lower power. Not the exciting answer Intel's looking for (since they are chasing max speed, I assume).

    Anyway, I don't know if he's right, but he has a point -- these foundries cost billions of dollars these days. Occasionally the next shrink is cheap (some tweak like changing the wavelength used to etch wafers), but then the one after *that* involves basically starting from scratch. Each costly shrink has cost more than the last one. I can see a point where the next die shrink is physically possible but not even close to economically viable -- at which point it just won't happen.

    1. asdf

      >ARM? They've tended to ignore Moore's law to some extent

      No they haven't. They have just used it to reduce cost and power usage as opposed to focusing entirely on performance.

  18. chris lively

    First off, Moore's Law was as much a self-fulfilling prophecy as any real "law". Moore essentially told his company's sales and marketing department what to expect and plan for, while simultaneously firing a shot at competitors.

    Quite frankly, transistors *could* have doubled faster. But there wasn't any real reason to do that. It was fast enough to outpace most competitors but slow enough to ensure a high rate of return on their investment. Once Intel set out their product update cycle the engineers took on the mantle to make sure it happened that way; and they did.

    Sure, at some point we won't be able to fit a transistor into those tiny spaces. However, if humanity is anything it is certainly ingenious. Instead of on/off maybe we'll move into tri-state or beyond such that a single transistor (or the equivalent) will be able to hold and pass several different values. Heck, maybe we do finally figure out how to build that computer in "hyperspace".

    Point is, someone will have the right idea to keep moving forward. For all I know that idea is already percolating in the back of an engineer's head, just waiting for the right moment to be revealed in exchange for an appropriately ludicrous amount of venture capital.

  19. TWB

    New architecture and improved coding

    Improved coding has already been mentioned, but I don't understand why so many computer operations insist on moving vast amounts of data from place to place, e.g. loading a programme from HD to RAM. Why not have only non-volatile RAM, so the programme or data is always ready and just needs to be pointed at? This is only one tiny example of how the architecture of computers could be improved - and I'm not talking about today's magnetic HD or Flash or DRAM, but something which does it all. I've just got to design it.....

    1. Charles 9

      Re: New architecture and improved coding

      You're asking for something with the performance of DRAM but nonvolatile.

      They've been working on that stuff for about three decades at least. Technologies up to now, like bubble memory and Flash, have always had strings attached: bubble memory was slow and had to be heated up to work, while Flash is known to be slow to write and prone to lifecycle issues.

      There are several candidates for the position: MRAM, RRAM, Racetrack memory (inspired by bubble memory), PCM, and so on. Thing is, none of them have reached wide-scale commercial release at this point. And while some are getting close, achieving the same size and scale as current DRAM tech is still going to take time, plus the tech has to survive the transition process AND be economical. Then the memory has to undergo a paradigm shift as it becomes more affordable, first replacing the RAM and THEN replacing the mass storage (which has its own level of economy of scale and will be more difficult to reach).

  20. TheElder

    How fast can computers go? Not well known is that there are computational architectures using other materials that can operate in the several-hundred-gigahertz range. They exist as functional, available parts used in very specialised technologies such as signal processing. There is no technical reason these cannot be used for ordinary computing systems, only cost. We are a very long way from the physical limits when looking at the hardware sitting on your desk, even if it is the very latest you can buy. Even the CPU I have has been clocked to over 8GHz with extreme liquid cooling. With more attention paid to heat generation and intra-chip-scale cooling, the potential exists for next-generation chips to run at speeds far higher than they do now.

    1. Charles 9

      I think it's more than cost that blocks their use. IIRC those high-frequency devices are very simple in nature compared to, say, a CPU. Plus note you used the word "extreme". That implies a bit of risk-taking that may not be desirable in a mass-market setting.

  21. PaulR79

    Moore's law...

    I hate this, and it bugs me every time I see it mentioned. It isn't a law; it's a prediction, or an observation at best.

  22. Zot

    I never saw this as a serious 'law'

    I always thought it was just a casual saying that stuck around because of its fairly amusing accuracy.
