Big Blue bigwig: Tiny processor knobs can't shrink forever

While at IBM’s Smarter Computing Summit last week, I had the great pleasure of hearing Big Blue's Bernie Meyerson talk about limits to today’s tech, and the associated implications. Bernie is IBM’s VP of Innovation and one of the rare technologist-scientist types who can clearly and directly explain highly technical concepts …

COMMENTS

This topic is closed for new posts.
  1. Chemist

    "even long enough for someone to figure out how to shrink atoms down to a more convenient size"

    Sorry, this universe's parameters are so inconvenient - but it's the only one we've got at the moment

    1. Fred Flintstone Gold badge

      Re: "even long enough for someone to figure out how to shrink atoms down to a more convenient size"

      Use smaller atoms to start with.. You may accidentally end up with a black hole when the lot collapses, but it's an idea.. :)

      1. Anonymous Coward
        Anonymous Coward

        Re: "even long enough for someone to figure out how to shrink atoms down to a more convenient size"

        "Use smaller atoms to start with"

        WHAT !!

        I'll look in the catalog and see if I can find some

        1. Badvok

          Re: "even long enough for someone to figure out how to shrink atoms down to a more convenient size"

          @AC 10:36 "WHAT !! I'll look in the catalog and see if I can find some"

          Here you go: http://www.webelements.com/ - as you'll see, there is a whole selection and they come in a very large range of sizes.

          1. Anonymous Coward
            Anonymous Coward

            Re: "a very large range of sizes."

            But... Do they have all of the colours in all of the sizes?

            1. chairman_of_the_bored

              Re: "a very large range of sizes."

              No. After the last particle sales, they only have quarks in red, green and blue.

      2. Dave 126 Silver badge

        Re: "even long enough for someone to figure out how to shrink atoms down to a more convenient size"

        Put them through the washing machine without reading the label? No harm in trying!

        [you can take my flippant comment as a sign that I can only vaguely grasp what the grown-ups are talking about. In a recent Reg article, the Intel 'roadmap' slide did look a bit fuzzy after 14nm: "Er, maybe we can do something with graphene or nano-tubes"]

    2. Christoph
      Boffin

      Re: "even long enough for someone to figure out how to shrink atoms down to a more convenient size"

      No problem, we just need to work out how to stabilise muons, then use those instead of electrons. That shrinks the atom down considerably.

      1. tony2heads

        Re: "even long enough for someone to figure out how to shrink atoms down to a more convenient size"

        Keep the muons relativistic - use heavy elements?

      2. harmjschoonhoven
        FAIL

        Re: Christoph

        Muons are unstable leptons. Laws of physics ....

        Will computer users be happy if their CPUs disintegrate within 2.2 µs?

        Delivering them from China at 0.999....c would help due to relativistic effects, but would create I/O problems.
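
        (For anyone who wants to check the arithmetic, a back-of-the-envelope time-dilation sketch, taking v = 0.999c as an illustrative value and the 2.2 µs proper lifetime as given; every extra nine in the velocity pushes the factor higher:)

        ```latex
        % Sketch: special-relativistic dilation of the muon lifetime,
        % with v = 0.999c chosen purely as an illustrative value.
        \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
               = \frac{1}{\sqrt{1 - 0.999^{2}}} \approx 22.4
        \quad\Rightarrow\quad
        \tau_{\mathrm{lab}} = \gamma\,\tau
               \approx 22.4 \times 2.2\,\mu\mathrm{s} \approx 49\,\mu\mathrm{s}
        ```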

      3. Al 18

        Re: "even long enough for someone to figure out how to shrink atoms down to a more convenient size"

        That would be a muon that is ~200x the electron's mass!

  2. This post has been deleted by its author

  3. Anonymous Coward
    Anonymous Coward

    Computer says "No"

    That is all

  4. Badvok
    FAIL

    Meyerson predicted Intel's move away from speed to cores?

    Yeah, right. I think Intel's move was more to do with commercial viability than any weird 'electron clumpiness'.

    Intel processors have been pushed beyond 8GHz but that is not commercially viable because of the amount of cooling required to achieve that speed.

    1. This post has been deleted by its author

    2. f1rest0rm
      WTF?

      Re: Meyerson predicted Intel's move away from speed to cores?

      Fail?

      I love it when the armchair warriors think they know better than the Boffins ....

    3. I think so I am?
      Boffin

      Re: Meyerson predicted Intel's move away from speed to cores?

      And that it's not physically, practically or probably legally viable to continually pour Liquid nitrogen onto the chip so it doesn't melt had nothing to do with it?

      Being commercially viable was probably way way way down the list.

      1. Badvok
        FAIL

        Re: Meyerson predicted Intel's move away from speed to cores?

        You don't need to "continually pour Liquid nitrogen onto the chip" to get past the absolute hard physical limit Meyerson predicted.

        Contrary to Meyerson's prediction, a >5GHz processor can work; it is just not commercially viable to produce one for the mass market.

        1. pig

          Re: Meyerson predicted Intel's move away from speed to cores?

          Hang on, where does it say that he said processors could not get to 4 or 5GHz?

          I read this in the article:

          "Back in 2003 he predicted that Intel would never deliver on its promises of 4 to 5GHz CPUs and would, in fact, be forced to shift to multi-core processors."

          And from memory I note that Intel did not release 4 to 5GHz processors and indeed shifted to multi-core processors.

          He sounds pretty on the money to me.

    4. TeeCee Gold badge
      Facepalm

      Re: Meyerson predicted Intel's move away from speed to cores?

      ...because of the amount of cooling required to achieve that speed.

      And the reason you need that phenomenal level of cooling is to deal with heat from the current leakage of a processor running at a honking overvoltage and ludicrous clocks. Or "exactly what he said" in other words.

  5. Lee Dowling Silver badge

    Is this such a bad thing?

    I have, in front of me, an 8-core, 8Gb, 1Tb laptop with stupendous graphics ability. It was the cheapest that fit my criteria (which focused on things like having a numpad, having enough USB ports, etc.). And what am I doing with it? I'm browsing the web, sending email, and some mundane network admin tasks etc. Where's all my processor power actually being used most? Games. Outside of that, I'm just drawing pretty boxes in (apparently) extremely inefficient ways. I'm using 3Gb of memory with hardly anything running and although some of that is file cache, that's something that will be unnecessary soon if SSDs make their final leap to affordability.

    With the limit on processor speed, people started to take advantage of multi-core. With a limit on that, people jumped onto GPU assistance. With a limit on what a device of a given size can do overall, hopefully we'll go back to some good old-fashioned efficient code. Like not requiring 3Gb, dozens of "services" and lots of "frameworks" to draw a couple of 2D apps on the 2D screen (and I don't even have flashy stuff like Aero etc. enabled!).

    I do some programming myself, and I actually feel intimidated by the sheer amount of power available to me when I need it. And, yes, I get lazy and think "Ah, it'll be fine on a modern machine", but I think we'll have to go back to some decent programming again.

    Of course, what will happen is instruction sets will grow (apparently the AES instructions in my processor allow me to do 2Gbit/s of encryption compared to 200Mb/s in software), chips will increase in size, cooling will take precedence, and we'll end up with huge monstrosities that still take 30 seconds to load whatever-version-of-Word-is-around.
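
    (If anyone wants to sanity-check that hardware-AES gap on their own box, here's a minimal sketch in Python. It assumes the third-party 'cryptography' package is installed; its OpenSSL backend picks up the CPU's AES-NI instructions where they exist, so the printed figure gives a rough feel for hardware-assisted throughput. The buffer sizes and the choice of AES-GCM are arbitrary illustration choices.)

    ```python
    # Rough AES-GCM throughput check (a sketch, not a proper benchmark).
    # Assumes the third-party 'cryptography' package; its OpenSSL backend
    # uses AES-NI where the CPU provides it.
    import os
    import time

    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def aes_throughput_mb_s(total_mb: int = 256, chunk_mb: int = 16) -> float:
        """Encrypt roughly total_mb of random data and return MB/s achieved."""
        aead = AESGCM(AESGCM.generate_key(bit_length=128))
        chunk = os.urandom(chunk_mb * 1024 * 1024)
        start = time.perf_counter()
        done = 0
        while done < total_mb:
            nonce = os.urandom(12)            # fresh 96-bit nonce per message
            aead.encrypt(nonce, chunk, None)  # AES-GCM over one chunk
            done += chunk_mb
        return total_mb / (time.perf_counter() - start)

    if __name__ == "__main__":
        print(f"~{aes_throughput_mb_s():.0f} MB/s AES-GCM on this machine")
    ```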

    It's both hilarious and sad that first-boot startup times and program-first-run times haven't changed (or, on the average person's PC, have significantly lengthened) since the DOS days. Hell, I can emulate Windows 3.1 booting quicker than I can boot Windows 7 - and although they do a lot more now, not a lot of it actually ends up as end-user-visible change.

    1. Bad Beaver
      Pint

      Word.

      Being one of the geezers that grew up in the Commodore/Atari age, I vividly remember programmers using sheer prowess to squeeze the impossible out of very limited hardware on a daily basis. Video game consoles are the last refuge of this practice. Everyone else just stopped bothering. Efficiency does not sell new hardware.

      "Sorry, we were too lazy for efficient code. Just buy a new device, will you? It will come with all sorts of sustainability PR to make you feel good about it."

      1. Dave 126 Silver badge

        Re: Word.

        @Lee Dowling.

        I agree. I remember being able to do basic office tasks (word processing, DTP) quite happily on a 25MHz Mac at school, and similar on my PC at home, both with around 4MB of RAM... because we only occasionally saw a beach-ball-like thing, we assumed that next year's model would be close to perfect for the job in hand. The only 'must have' feature I saw in years of new versions of Office was the automatic saving of your work to a temp file... otherwise, new versions of Office were a PITA, especially when I found my faculty had a new version but the university's reprographics department didn't. The new version was supposed to make organisations more efficient how?

        My modern laptop's power is rarely used, except for when generating CAD renders (and even then, the CPU is at 100% but the GPU is doing nothing). For occasional tasks like this, renting compute power from Amazon as and when you need it seems a sensible idea to explore.

      2. MacroRodent
        Thumb Up

        Re: Word.

        I have often thought that it will be a great time for those who care about software quality when hardware finally stops getting faster. People will start paying attention when the bloat introduced by every new version can no longer be swept under the rug by ever-faster processors and more memory.

      3. Anonymous Coward
        Anonymous Coward

        Re: Word.

        Too true! —A while back some commentard on here wrote that [insert name of one of the teams in the tedious fanboi wars] deliberately made each new version of their OS run worse and worse on older hardware, to force their users into continually upgrading. At the time I dismissed this as tinfoil hat wearing nuttiness but, of late, I'm starting to think s/he may have had a point.

        My MacBook Pro has a dual-core 2.34GHz processor and 4GB RAM. Under OS X 10.4 Tiger, when I first got it, it ran like stink [as they say]. Now, under OS X 10.8 Mountain Lion, it feels sluggish and, especially after startup, seems to take an age to reach the "All background processes loaded. We're ready to roll!" stage, compared to the old Tiger days —and this is just to get the OS and a couple of basic everyday apps like my browser and email up and running.

        Ironically, the "heavy lifting" apps I use, such as Photoshop and After Effects, seem to run as quickly as they ever did [and Adobe ain't exactly famed for their bloat-free, efficient coding!], which suggests my old MBP is still capable of chucking bits about as fast as it ever could.

        Likewise my iPhone 3GS. I never noticed any delays when using it with its original iOS 3, but now on iOS 5 it too feels quite sluggish a lot of the time, even though I'm quite anal about force-quitting apps and keeping enough RAM free [jailbroken, so can see free RAM in menubar]. I'm not even going to bother 'upgrading' my iPhone to iOS 6, as I predict it would be barely usable.

        So where has all this processing power gone? In neither case have there been such obviously huge differences in functionality between the original OS and the current OS as to cause the hardware to feel woefully inadequate [perhaps with iOS you could make a case about the 'retina GUI', but shouldn't my 3GS still render its lo-res version of the GUI as quickly as it ever did?]

        I'm left with the conclusion that either the conspiracy theorists are right and OS writers deliberately waste cycles doing pointless "shiny shiny" to "persuade" us to upgrade, or somehow, some way computer hardware "wears out" like an old car engine, as it ages.

        [Apple gear mentioned above, as that's what I happen to use. No tedious fanboi points scoring please. I'm sure users of newest versions of Ubuntu desktop and Windows on older hardware see the same symptoms]

    2. Luke McCarthy

      8 core laptop?

      Where do you get one of those? Or do you really mean a 4 core hyperthreaded laptop...

      1. Piro Silver badge
        Pint

        Re: 8 core laptop?

        Also, 8Gb RAM? Having 1GB is not exactly that special these days; try upgrading to 8GB (or, more pedantically, 8GiB).

  6. Brandon 2

    limits are made to be broken...

    if man were meant to fly, god would have given him wings... 4GHz... that'll never happen... supersonic... that'll never happen. Sure, the transistor as we know it may have a 12 or 7nm limit, but that does not mean computing power is going to hit a giant wall. It will just require a different innovation path... like near-threshold voltage processors...

    1. Destroy All Monsters Silver badge
      Holmes

      Re: limits are made to be broken...

      Of course, of course. But that means a whole new production and engineering chain, so it might well take several decades. Contrary to what politicians believe, you cannot just fart whole new approaches out of nothing.

      I still remember my amazement at this from February 1993 in Communications of the ACM. 4 kW for 250 MHz. My dad was laughing at me and thought I was a retard for believing the power consumption numbers:

      "The CPU module contains one microprocessor chip, its external cache, and an interface to the bus. A storage module contains two 32-megabyte (MB) interleaved banks of dynamic random access memory (DRAM). The I/O channels that are connected to one or two DECstation 5000 workstations, which provide disk and network I/O as well as a high-performance debugging environment. Most of the logic, with the exception of the CPU chip, is emitter-coupled logic (ECL), which we selected for its high speed and predictable electrical characteristics. Modules plug into a 14-slot card cage. The card cage and power supplies are housed in a 0.5- by 1.1-meter (m) cabinet. A fully loaded cabinet dissipates approximately 4,000 watts, and is cooled by forced air. Figures 1 and 2 are photographs of the system and the modules.

      ....

      We designed the bus to provide high bandwidth, which is suitable for a multiprocessor system, and to offer minimal latency. As the CPU cycle time becomes very small, 5 nanoseconds (ns) [250 MHz] for the DECchip 21064, the main memory latency becomes an important component of system performance. The ADU bus can supply 320MB of user data, but still is able to satisfy a cache read miss in just 200ns."

  7. Anonymous Coward
    Anonymous Coward

    Programming

    There has been a slow swing back to efficient programming. I've started to see it with some game designers - mostly in graphics storage. Maxis' Spore was the first but there have been others. When they realize that they can make a massive amount of content and still store it on a single DVD, the cost savings make it worth it. (Or, in the modern, impatient age, less download time.) I'm sure in time this will expand to other software companies.

    1. Anonymous Coward
      Anonymous Coward

      Spore?

      That the one where everyone made penis creatures?

  8. Crisp

    Growth, year on year

    It's got to come to an end at some point.

  9. Anonymous Coward 15
    Paris Hilton

    300TB?

    Store ALL the porn!

  10. chris lively
    Thumb Up

    Shrinking processors at this point is a waste of time.

    Processors are already much faster than the disk I/O interface. It's pretty rare to have an application truly suck up 100% of the processor time, unless the app is extremely inefficient.

    3 months ago we decided to upgrade our developer workstations. At the time we bought the 4th fastest Intel processor (due to cost/benefit analysis) and hooked it up to 16GB of RAM and a nice SSD drive. A cold boot of Win7 takes less than 10 seconds. Word starts instantly (barely has time to show the splash screen); Visual Studio 2012 starts in about a second. Cost per box, excluding monitors, was right at $1k USD. This is the first machine I've worked with that spends most of its time waiting on me instead of the other way around.

    By comparison, I have an older machine at home with a 4-year-old quad-core processor (a Q9650) and 4GB of RAM. Last month I pulled the 10K RPM Raptor and replaced it with an SSD. Win7 cold boot time went from 50 seconds to about 15 seconds. Word and Visual Studio had similar boosts.

    tl;dr: The current generation of processors is data-starved. The inefficient-code part is only going to be solved when we finally hit hardware limits and developers are forced to actually consider what they are doing. The I/O issue will be solved when spinning disks go the way of the dodo.
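
    (A crude way to check for yourself whether a job is compute-bound or data-starved: compare CPU time against wall-clock time. If the wall clock races ahead of the CPU clock, you're mostly waiting on storage. A minimal Python sketch, where 'some_large_file.bin' is a hypothetical stand-in for any big file you have lying around:)

    ```python
    # Sketch: is a task CPU-bound or I/O-bound?
    # If cpu/wall is near 100%, the processor is the bottleneck;
    # if it is low, the task is mostly waiting on storage (data-starved).
    import os
    import time

    def profile(task, *args):
        """Run task(*args) and report wall time, CPU time and their ratio."""
        wall0, cpu0 = time.perf_counter(), time.process_time()
        task(*args)
        wall = time.perf_counter() - wall0
        cpu = time.process_time() - cpu0
        print(f"{task.__name__}: wall {wall:.2f}s, cpu {cpu:.2f}s, "
              f"cpu/wall {cpu / wall:.0%}")

    def crunch(n: int) -> int:
        """Pure computation, no I/O."""
        total = 0
        for i in range(n):
            total += i * i
        return total

    def read_file(path: str) -> None:
        """Stream a file in 1MB chunks; mostly waiting on the disk."""
        with open(path, "rb") as f:
            while f.read(1 << 20):
                pass

    if __name__ == "__main__":
        profile(crunch, 10_000_000)
        big_file = "some_large_file.bin"   # hypothetical placeholder path
        if os.path.exists(big_file):
            profile(read_file, big_file)
    ```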

    1. Thomas Allen
      Meh

      Re: Shrinking processors at this point is a waste of time.

      You said:

      It's pretty rare to have an application truly suck 100% of the processor time

      Reply:

      Try any modern chess program. Houdini, Stockfish and many others will pin the needle for long periods.
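
      (If you'd rather not install a chess engine just to watch the needle pin, a throwaway Python sketch along these lines will keep every logical core busy; the five-second burn time is arbitrary:)

      ```python
      # Sketch: keep every logical core at ~100% for a few seconds,
      # roughly what a parallel game-tree search does for much longer.
      import multiprocessing as mp
      import time

      def burn(seconds: float) -> None:
          """Busy-loop on integer arithmetic until the deadline passes."""
          deadline = time.perf_counter() + seconds
          x = 0
          while time.perf_counter() < deadline:
              x = (x * 31 + 7) % 1_000_003

      if __name__ == "__main__":
          cores = mp.cpu_count()
          with mp.Pool(cores) as pool:
              pool.map(burn, [5.0] * cores)  # one busy worker per logical core
          print(f"Kept {cores} cores busy for ~5 seconds")
      ```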

  11. John Savard

    Some Comments

    It's possible to surf the web, do spreadsheets and word processing on an 80386 system running Windows 3.1. Thus, the reason that being unable to make further improvements in computers is a problem in the general marketplace is that it prevents Microsoft from coming out with an even more bloated new version of Windows that forces people to upgrade their computers, creating employment over at Intel.

    It is true that improvements in cooling, and using exotic substances instead of silicon, will allow higher clock rates even with a lower bound on feature sizes - but that just gets us a little further, it doesn't admit of being continued year after year. Someday, quantum computing might blow the roof off of computer power again.

    But one reason that continuing on with Moore's Law would be nice is to have new applications - AI robots to serve us, and maybe even the option of uploading ourselves.

  12. Infernoz Bronze badge
    Boffin

    That's 2D thinking, what about 3D

    If heat-pipe cooling channels were built in, the *PU or memory could become a thick-film stack of wafers or even a deep wafer, removing a lot of the need to shrink components.

    From what I read, graphene is apparently going to replace silicon as an IC substrate, and because it is so strong and thermally conductive, it should be ideal for a heat-pipe-cooled 3D *PU or memory.

    1. John Smith 19 Gold badge
      Meh

      Re: That's 2D thinking, what about 3D

      What a brilliant idea.

      Perhaps you should look up Gene Amdahl & Trilogy to see how well that idea worked out.

    2. Mikel

      dimension z

      They're working on it. Modern SoC packages include CPU, GPU and RAM layers in one package. Just one more leap to stacked multicore CPUs for a cluster in a package.

    3. Schultz
      Flame

      'graphene is apparently going to replace silicon as an IC substrate'

      Graphene is not going to do anything useful anytime soon.

      It's only a hot research field because there are thousands of researchers who got started on fullerenes (remember the Nobel Prize in 1996?). They had to quickly find something new when that field went nowhere, and we got ... carbon nanotubes! They had to quickly find something new when that field went nowhere, and we got ... graphene.

      It's a monolayer of graphite; what do you really expect? It'll burn all right.

  13. stucs201

    Diamonds are forever

    There is still at least one direction left to go: diamond microchips. The idea has been around a long time, but hasn't yet caught on (presumably due to cost and volume considerations - but that just needs the right synthetic diamond manufacturing process to solve). Diamond chips should be able to handle much higher temperatures - which means we can go back to the GHz race. Probably more useful for desktops than mobile though, since power consumption is likely to be an issue if not plugged into the wall.

  14. MacroRodent
    WTF?

    Platter size

    "Right now, a 1TB per platter is the highest density available."

    Is the diameter of a platter somehow irrevocably fixed to today's standard size?

    1. NukEvil
      Trollface

      Re: Platter size

      Yes, if you want your hard drives to sell in this economy, without having to buy a computer (case) with a larger drive bay...

      1. Nuno trancoso
        Happy

        Re: Platter size

        You don't have 5 1/4 bays in your desktop? It's not even "news" as Quantum was doing it way back when.

        http://en.wikipedia.org/wiki/Quantum_Bigfoot_%28hard_drive%29

        They did it for the opposite reason - to provide the same storage space at lower density - but if you kept the density the same, the larger area would instantly translate to more storage space. Given that the things were quite slim (at least those I had were just one platter at half height), I'm quite willing to bet they could use at least a two-platter, full 5 1/4 height and squeeze a bit more storage space out of the extra room.

        Make it a 5.4k RPM one with 2/4TB and you have a willing buyer here, as I need more medium-term storage without the cost of SSD and/or the power consumption and heating of the 7.2k+ jobbies.
