Honey I shrunk the chip ... now what?

Bigger is better in pastries, paychecks and bank accounts, but not in electronics. A recent story in HPCwire caught my interest and got me thinking about what the end of the shrink road might portend – and the potential alternatives. The ability to steadily shrink the size of the processor brains that drive computers – and …

COMMENTS

This topic is closed for new posts.
  1. Chemist

    There seems little that can be done conventionally.

    Reach a certain size and quantum effects will take over. Already quantum tunneling is a problem - get much smaller and it will be dominant. Time to rethink the paradigm.

  2. Real Ale is Best
    Boffin

    The solution is simple!

    Start coding properly in parallel. OCCAM anyone?

    1. Tim Parker

      Re: OCCAM

      "Start coding properly in parallel."

      .. well that's only going to really help where the workload is itself parallelizable - and even when it is, it's often far from easy to actually implement sensibly... it's awfully easy (and, come on, a tad glib) to say 'do it properly', but that's not always possible or practical.
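
      To make that concrete, here's a minimal C++ sketch (the functions are invented for illustration): the first loop splits across threads trivially; the second has a loop-carried dependency and simply doesn't.

        #include <cstddef>
        #include <vector>

        // Embarrassingly parallel: no iteration depends on any other, so
        // the work divides across threads with no cleverness required.
        void scale(std::vector<double>& v, double k) {
            for (std::size_t i = 0; i < v.size(); ++i)
                v[i] *= k;
        }

        // Loop-carried dependency: step i needs the result of step i-1,
        // so naively chopping this loop across threads gives wrong answers.
        void prefix_sum(std::vector<double>& v) {
            for (std::size_t i = 1; i < v.size(); ++i)
                v[i] += v[i - 1];
        }

      (Parallel prefix sums do exist, but they need the algorithm restructured, which is rather the point.)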

      1. Captain TickTock
        Boffin

        Occam...

        been there, done that. Mr Real Ale, I suspect you haven't. There's a reason why Occam hasn't taken off, and it's not just Inmos' failure to keep up with Intel in improving chip performance.

        Just because a language has parallel constructs built in doesn't make parallel programming easy.

        And I remember people thinking that multi-threaded programming would be trivial in Java because it had sync primitives.

        It's just possible that functional programming might be the key that unlocks massively parallel programming. It may reduce the sync problems, but you still have to move the data faster.
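
        For instance (a minimal C++ sketch; the counter is illustrative): the sync primitive is right there in the library, but nothing forces you to hold it, and the unlocked version races exactly as it would have in Java.

          #include <mutex>
          #include <thread>

          int counter = 0;
          std::mutex m;  // available, but never locked below

          // Data race: two threads increment the same int with no lock,
          // even though the primitive is sitting right there.
          void racy() {
              for (int i = 0; i < 100000; ++i)
                  ++counter;
          }

          int main() {
              std::thread a(racy), b(racy);
              a.join();
              b.join();
              // counter will very likely be < 200000: having std::mutex
              // in the language didn't help, because nobody used it.
          }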

        1. Michael H.F. Wilkinson Silver badge

          @ Captain TickTock

          Quite right, but as a parallel programming guy (C++ with libpthread and some MPI and OpenMP), I find there are often problems when processing order is data-driven. Functional programming is not necessarily a boon then. One might argue that you are trying to shift the burden to the compiler designers. In effect we have the same problems when coding for GPUs. Having said that, even if we improve our parallel programming skills, memory-bandwidth bottlenecks are a key problem to be solved.
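
          To illustrate the contrast (an OpenMP/C++ sketch; the operation and names are invented for illustration): an order-independent pixel operation parallelises with a single pragma, while a pass that must visit pixels in data-driven order gains nothing from the same pragma.

            #include <vector>

            // Order-independent: every pixel is handled in isolation,
            // so OpenMP may hand out the iterations in any order.
            // (Compile with -fopenmp; the pragma is ignored otherwise.)
            void threshold(std::vector<int>& img, int t) {
                #pragma omp parallel for
                for (long i = 0; i < static_cast<long>(img.size()); ++i)
                    img[i] = (img[i] >= t) ? 255 : 0;
            }

            // A priority-flood or region-growing pass, by contrast, pops
            // pixels from a queue in an order the data itself determines;
            // the pragma above cannot be bolted onto such a loop.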

  3. Anonymous Coward
    Anonymous Coward

    And there I was

    thinking that HyperTransport and whatever Intel is calling their alternative these days had plenty of headroom left.

  4. Josco

    Turbo? Been there, done that.

    One of my early PCs had a 'turbo' button, but like various Amstrad hi-fis it didn't make any discernible difference.

  5. Si 1
    Unhappy

    I miss the good old days of speed....

    Thinking back to 10 years ago, I would have expected CPUs to be hitting about 1.5THz by now (based on my incorrect interpretation of Moore's Law), but instead we're still floating around the 2.something GHz range with a few chips hitting 3GHz+ (or 4GHz+ if you're an overclocker).

    As much as I accept that the days of extreme speed increases are over, I do really miss those good old days when every new chip release brought a huge speed increase and suddenly all your games ran twice as fast and apps loaded in half the time they used to. Hardware is so boring these days; everything is just a small increment in power now, and we never see anything that really blows away everything that came before.

  6. Naughtyhorse

    BUT.... it's the law

    it's the LAW

    moores law that is.

    .. .but ye canne break the laws of physics...

    it's the law.

    how can it be broken

    (oh no it's a load of old bollocks)

    carry on

    (nb to self - stick to the dried frog pills in future, keep away from the amanfrommars ones)

  7. JBardey
    Flame

    Moore's Law

    Common Misconception Alert!

    Moore's Law applies to transistor count, not clock speeds. So I think we are actually keeping up.
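
    A quick back-of-envelope supports this (a sketch assuming the oft-quoted figures of 2,300 transistors for the 1971 Intel 4004 and a two-year doubling period):

      #include <cmath>
      #include <cstdio>

      int main() {
          const double n0 = 2300.0;  // Intel 4004, 1971
          for (int year = 1971; year <= 2011; year += 10) {
              double n = n0 * std::pow(2.0, (year - 1971) / 2.0);
              std::printf("%d: ~%.2e transistors\n", year, n);
          }
          // Prints ~2.4e9 for 2011 - the right ballpark for chips of
          // the time, while clock speeds stalled around 3 GHz long ago.
      }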

    Mine's the one with twice as many fibres per square inch per year.

    1. BristolBachelor Gold badge
      Boffin

      Thank you

      Yes, Moore's "Law" was about the number of transistors on a chip. We now manufacture using larger wafers, and yields are doing very well, so larger dies are possible with most of them still working. Also there are technologies with redundancy on the die, so if there is a fault in the manufacturing somewhere, the whole die is not written off; it just has fewer working cores so runs a bit slower.

      Also most processors are now actually multi-chip modules, so the number of transistors in your processing unit are still rising.

      I think that the biggest problem is that simple tasks like writing a letter are now done using Office 2010 which needs a minimum of 23TB of Ram and 500THz clock speeds to actually keep up with a 1-finger typist.

      1. Michael H.F. Wilkinson Silver badge

        @ BristolBachelor: Wirth's law

        "Software is getting slower faster than hardware is getting faster"

        The fact that the minimum specs for Office 2007 equaled those of a Cray Y-MP, performance- and memory-wise, is telling.

    2. Naughtyhorse

      the reason it's bollocks is...

      sometimes it's transistor count, sometimes it's density, sometimes 'processing power' (whatever the fuck that means), and sometimes it's clock speed - then you get to be super selective about when the 18-month period starts.

      by definition there is an asymptotic component to tech advances - there used to be all sorts of talk about the die process hitting a wall and chips sat at around 1/2 a GHz for ages, then that hurdle was crossed and a huge step forward was achieved...

      now it's lumpy atoms.

      this will pass, maybe

      then it will be.... the magic blue smoke gets stuck at the corners... or some such.

      at some point when (desperately) trying to fit data to an exponential curve you just have to say "you know what, this is not an exponential phenomenon". Moore's worked OK for a bit in the 60s and 70s (as even the great man himself reckoned); calling it a _law_ always pissed me off. And I can't even begin to imagine how much it must have pissed off the clever techies squeezing the process, getting more and more performance (however you measure it) and having their efforts made to look inevitable - like they could have sat round drinking tea all day and the chips would go faster all on their own.

      </bah humbug>

  8. Anonymous Coward
    Pint

    It's back to the 80s: Inmos, Occam, Transputers :)

    OK, I think those of us from the UK can be a bit smug and say 'been there, figured it out, you only just got here!'

    Another point: did this announcement from Intel seem like déjà vu to anyone else?

    http://www.intel.com/pressroom/archive/releases/2009/20091202comp_sm.htm

    :)

    1. This post has been deleted by its author

      1. Ken Hagan Gold badge

        Re: Paradigm shift

        Intelligent memory won't help. The reason? All the *interesting* computations involve two pieces of data, and the second of those pieces is always in the wrong place, so having a full CPU right next to the first one doesn't help much.

        Think about it. Even having a full x86 core attached to every byte of memory would only reduce the data traffic on *really* hard problems by a factor of two compared to having one really fast CPU that was remote from *both* pieces of data. (The latter situation is pretty much where we are today.)
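
        Putting rough numbers on that: to compute C = A op B where A and B live in different places, a central CPU moves |A| + |B| bytes across the bus, while a CPU sat next to A still has to fetch |B|. With |A| = |B| the saving is (|A| + |B|) / |B| = 2, which is where the factor of two comes from.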

        Some problems are just Hard.

  9. Architect CAD monkey

    Why do they have to get smaller?

    Why do processors have to get smaller to get faster? What's wrong with making them bigger and faster instead? Plenty of room in my tower for a shoebox-sized CPU if need be.

    1. This post has been deleted by its author

  10. Mage Silver badge

    1 core

    but no L1, L2; just RAM at CPU speed on the chip.

    1. Ken Hagan Gold badge

      Re: 1 core

      Yes please. You'd probably only have about 64MB of such "full-speed RAM", so you couldn't run any recent mainstream OS release on it, but for a fair number of workloads it would probably knock current hardware into a cocked hat.

      It would probably be ideal for the sort of branch-every-sixth-instruction code that makes up the average GUI app. The sad thing is that, since you can't buy such parts, no-one is experimenting with them, so we'll never know.

      1. Danny 14
        Go

        mmm

        on-CPU RAM. I shudder at the yields for those. Would be fast, though.

        1. Ken Hagan Gold badge

          Re: mmm

          Yields would be fine. Hardly any of the chip area would be CPU. You'd make 65MB of RAM and just blow the fuses on any blocks that had a defect.
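
          A toy C++ sketch of that sparing scheme (the block count and remap table are illustrative, not any real part's design): build one block more than you expose, then route logical accesses around whichever block failed test.

            #include <array>
            #include <cstdint>

            constexpr int kMade   = 65;  // blocks manufactured (1MB each, say)
            constexpr int kUsable = 64;  // blocks exposed after fusing

            // Set at test time by blowing fuses: logical block i is
            // backed by physical block remap[i], skipping the dud.
            std::array<std::uint8_t, kUsable> remap;

            void fuse_out(int bad_block) {
                int phys = 0;
                for (int log = 0; log < kUsable; ++log, ++phys) {
                    if (phys == bad_block)
                        ++phys;  // step over the defective block
                    remap[log] = static_cast<std::uint8_t>(phys);
                }
            }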

      2. Frank_M

        FPGAs

        You can experiment with whatever hardware config you want with an FPGA card.

  11. Christian Berger

    We actually use Transputers

    One of our old spectrum analysers uses, according to its boot log, Transputers.

  12. K. Adams
    Boffin

    Optical Interconnects

    Well, I've often thought that in-core optics would be taking off by now, but that also seems to be a technology that the Big Fabs (Intel, Global Foundries, TSMC, etc.) have sidelined for the time being.

    With recent advances in avalanche photodiodes, and since beams of light can cross each other at right angles with hardly any interference, one would think that building chips where circuit "traces" can cross each other at right angles ** and on the same layer ** would do wonders for latency and transistor density.

    I guess the problem is deciding on what one could use as a photon carrier. Silicon dioxide (SiO2), which is the primary component of common glass, doesn't exactly have a small molecular cross-section when compared against the junction sizes of the nanoscale devices in today's processors. Phosphate, borosilicate, and ZBLAN glasses have optical properties that in some ways make them much more desirable than plain-old SiO2, but the cross-sections of their component molecules are even larger than those of common silica. I suppose one could try to leave "empty" space as a light lane, and then try to seal the lane off in a vacuum, but again, quantum nanoscale effects would probably put the kibosh on that idea, too...

    1. Michael H.F. Wilkinson Silver badge
      Boffin

      One problem with light is the wavelength

      Even blue light is at 450 nm (in vacuum, about 300 nm in glass), MUCH larger than the components used today. Therefore, within a chip, you have to use near-field calculations, and interference is more complicated. This gets messy quite quickly. Besides, if both transmitter and receiver have dimensions much smaller than the wavelength, it is difficult, if not impossible, to get any directional sensitivity. Optical interconnects between chips seem more feasible.
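
      That 300 nm figure is just the vacuum wavelength divided by the refractive index of the glass: λ_glass = λ_vacuum / n ≈ 450 nm / 1.5 = 300 nm, still roughly an order of magnitude larger than the few-tens-of-nanometre features on current dies.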

  13. E 2
    Happy

    ...now what?

    Shrink it more.

    An "APU" with 16 CPU cores and 3200 shaders clocked at 4GHz dissipating 1 watt capable of delivering 1000+ Crysis FPS sounds about right.

    1. Simon Neill

      Woah!

      Slow down, dude! Let's build a PC that can deliver 60FPS of Crysis first.

  14. John Savard

    Mousetrap

    A chip with 64 MB of RAM and one CPU core would indeed speed matters up considerably.

    And if Microsoft doesn't feel like re-releasing Windows 98 for it, one can always use a trimmed-down version of Linux.

    But it would still make sense to have a terabyte or so of external RAM instead of just the hard drive, even with a cache.

    Having multiple hard drives, so that the head doesn't have to move as much, is nice too. Three drives for a problem involving read file A, read file B, write file C, all sequentially - and suddenly some of the slow things about the hard disks become irrelevant.

  15. Big_Boomer Silver badge
    Alert

    Moore-ons "Law"...

    was always doomed. It completely failed to take into account the laws of physics.

    The future will not be faster and faster, much as in the car and aircraft arenas.

    With very few exceptions we are now travelling slower than we were 10 years ago.

    Commercial planes now use fuel-economical flight plans where 10 years ago the company made more money flying time-economical plans.

    Cars are faster in terms of capability but there is nowhere you can use that capability, even if you do choose to break the law.

    My personal belief is that more capable processing will come from quantum-effect processing, but in the meantime we need programmers to tighten up their code and cut out the bloat.

    For the foreseeable future we are in a holding pattern..... until the next big leap.
