iSuppli: Moore's Law to take a breather

Is Moore's Law, the driving force behind the technology and economics of the chip business, going to take a holiday? The analysts at iSuppli think so. And sooner than you think, and maybe not for the reasons you are thinking. The old trick of cranking up clock speeds on shrinking chips to boost performance has been dead for …

COMMENTS

This topic is closed for new posts.
  1. Matthew 4
    Thumb Up

    lol

    that graph is ftw

  2. A B 3
    Thumb Down

    Hogwash

    I'd call Moore's Law a loophole. If you shove enough figures into your equation you can get it to prove anything, within reason. If we had had people like this in Ancient Greece defining the limits of art and science, we would 'still' be in the Dark Ages. Upper and lower limits are determined by physicists, not mathematical theorists.

  3. Anonymous Coward
    Anonymous Coward

    Don't blame the programmers...

    Multi-core processors with a shared memory architecture can only be taken so far anyway, because of the problem of maintaining cache coherency. As the number of cores increases, it gets harder to ensure that they all see a consistent view of memory. Even ignoring that, you are still sharing the memory bandwidth of a single processor between many cores.

    Maybe the solution is to have multiple cores with independent memory spaces on a single chip, but that (1) again requires a different programming approach - distributed rather than shared-memory concurrency - and (2) still leaves the memory bandwidth issue. If you have multiple memory busses and the cores don't share the same memory, why put all the cores on the same chip in the first place?
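    As a very rough sketch of the difference (my own toy example, nothing from the article, and the data sizes are made up), here is the same job done both ways in Python - threads sharing one address space versus processes that each get their own copy of the data and communicate by message passing:

        from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

        def partial_sum(chunk):
            # A pure function: it works in both models because it only touches its input.
            return sum(chunk)

        def split(data, parts):
            # Divide the data into roughly equal chunks, one per worker.
            step = (len(data) + parts - 1) // parts
            return [data[i:i + step] for i in range(0, len(data), step)]

        if __name__ == "__main__":
            data = list(range(1000000))
            chunks = split(data, 4)

            # Shared memory: the threads all see the same lists; keeping their caches
            # coherent is the hardware's problem, and memory bandwidth is shared.
            with ThreadPoolExecutor(max_workers=4) as pool:
                shared_total = sum(pool.map(partial_sum, chunks))

            # Distributed style: each worker process receives its own copy of a chunk
            # (message passing), so nothing is shared, but the data has to be moved.
            with ProcessPoolExecutor(max_workers=4) as pool:
                distributed_total = sum(pool.map(partial_sum, chunks))

            assert shared_total == distributed_total == sum(data)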

    More cores per processor also means heat dissipation issues, so maybe the better solution is simply to have multiple processors, each physically separate (less heat to handle) and each with its own bus to its own memory. Each processor would have multiple cores, true, but only up to the limit that shared-memory multi-cores can scale up to - and that could be as low as 16, IIRC, primarily due to the cache issues.

    That said, my existing single-core desktop Pentium 4 is working out fine for me. I don't need to encode video or render 3D on a regular basis, so what's the point in anything more? The same applies to my mobile P4 laptop. I might upgrade anyway, but since my OEM Windows XP licenses are tied to the motherboards, it's just not a reasonable option. Has anyone else noticed how that policy damages Intel's and AMD's sales?

  4. Anonymous Coward
    Anonymous Coward

    One game

    Just one amazing game that could use many cores, and there surely will be people to buy it.

  5. John Savard

    Just One Minor Point

    The article by iSuppli does provide a new insight.

    Given the usual reason why Moore's Law will eventually come to an end - that working with far ultraviolet light or X-rays is very difficult, and so chip geometries have been shrunk further than originally anticipated by clever tricks with the existing wavelengths of light used for photolithography, tricks which have to run out sometime - it's not hard to believe that Moore's Law will run out of steam.

    The financial reason they give, though, would seem to have one weakness. Any individual chip supplier would certainly like to recoup its process investments for as long as it could. Generally speaking, though, suppliers don't really have a choice in the matter, since their competitors will move to smaller processes, letting them make cheaper and better chips. Now, if the enormous start-up costs of a new process get rid of the "cheaper", and the difficulty in exploiting parallelism gets rid of the "better", Moore's Law might indeed take a long breather.

    Ultimately, though, this will just give technologies like Indium Phosphide a chance to catch up. Technology will advance, just not at the dizzying pace we became accustomed to while silicon was approaching its limits.

  6. Charles Manning

    re: Don't blame the programmers

    In the long term, don't blame the programmers. Blame the architecture. No matter how many cores we have, it is still really only sequential processing. That architecture is never going to bring us "brain scale" computation speeds with millions of parallel computations.

    But in the shorter term, do blame the programmers. Why does my dual-core, multi-gigahertz XP PC take almost a minute to become usable? My 100MHz 486 could boot Win3.11 in half that.

    Programmers have been depending on Moore's Law to compensate for the increasing bloat in their code. Unless programmers change their evil ways, the software will get slower and slower when Moore's Law tails off.

  7. Flybert

    At what point are features good enough?

    Even in PC video games, where it seems the rendering will soon be about as realistic as it can get, at better frame rates than cinema... do we really need better than a two-core CPU and what are currently high-end single-card GPUs?

    How about video editing and creation on a home workstation? It seems a four-core CPU and a pair of high-end graphics cards should suffice(?)

    Software indeed needs to catch up to hardware and get optimised for four cores and eight threads max, not chase 16 threads that will never be efficient.

  8. Watashi

    Two cores are better than one

    Up until dual-core processors, the mantra was "buy as fast a CPU as you can afford". When dual-core processors came along, this changed to "make sure you get a dual-core CPU". We now have enough processing power to do most of what we want very quickly, and the rest at a pretty good speed. A current bottom-end dual-core processor will encode good-quality video in real time with no problems, and that's pretty much the most demanding thing most computer owners will ever do.

    The things that make a computer feel slow at the moment are memory, HDD speed, and network/internet speed. The bottleneck is getting software started, and that means loading data from the hard drive / network / internet into RAM. With 4GB of RAM now becoming standard on most computers, RAM isn't really a bottleneck either. With gigabit network cards and eight-core servers coming online, the hardware that gets data between server and computer is pretty fast too. The real place where change is needed is the hard drive, and this is exactly why so much research is being done on solid state drives.

    The perceived increase in speed achieved by doubling the clock rate of a CPU will seem negligible compared to doubling the speed of hard drive data access.
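    To put rough numbers on that (my own figures, purely illustrative): if starting an application spends, say, 20% of its time on the CPU and 80% waiting on the hard drive, doubling one or the other gives very different results.

        # Illustrative only: assume an application launch spends 20% of its time
        # on the CPU and 80% waiting on the hard drive (made-up fractions).
        cpu_fraction, disk_fraction = 0.2, 0.8

        def overall_speedup(cpu_boost, disk_boost):
            # Amdahl-style reasoning: total time is the sum of the two parts,
            # each divided by how much faster that part has become.
            new_time = cpu_fraction / cpu_boost + disk_fraction / disk_boost
            return 1.0 / new_time

        print(overall_speedup(2.0, 1.0))  # double the CPU clock:  ~1.11x overall
        print(overall_speedup(1.0, 2.0))  # double the disk speed: ~1.67x overall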

    Of course, the next big thing will be quantum processing - but that's a good few decades away from being inside our home computers.

  9. John Angelico

    Sounds like the "laws of engineering" asserting themselves

    As in most mature industries (think automotive engines), there comes a time when practical concerns (the financial realities of survival) overtake theoretical possibilities.

    There is very little demand for Formula 1 engines (V12s running at 18000rpm) to power our shopping runabouts or commuter vehicles, and nobody has asked for a Cat C-15 600hp truck engine for their SUV/4WD.

  10. Matt Bradley

    Interesting

    Isn't that interesting. The graph shows a great deal of turbulent activity right up to the present day, then flatlines into the future. It's almost as if those composing the graph were speculating and didn't really know what was going to happen in the future, so they assumed that the status quo would basically remain the same.

    I realise that there is some very heavy science behind all of this, but predicting future progress from current demands and current technology is complete nonsense. ESPECIALLY for four years of progress.

  11. Anonymous Coward
    Alien

    @John Angelico

    <quote>

    There is very little demand for Formula 1 engines (V12s running at 18000rpm) to power our shopping runabouts or commuter vehicles, and nobody has asked for a Cat C-15 600hp truck engine for their SUV/4WD.

    </quote>

    Ah, but you have not driven in Texas - there was a local company (Dallas area) doing something along those lines with a HUGE pickup truck. The weirdest thing I have ever seen.

  12. Dimitri

    Time for new materials

    It was bound to happen sooner or later - we have stretched the engineering limits of silicon about as far as they will go. And parallelism is clearly reaching a limit on the current codebases. Recent tests (I think by THG) showed that you get a massive benefit when you add one core, another smaller but still decent benefit when you add two, and only tiny benefits when you go to four cores.
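    That pattern is roughly what Amdahl's Law predicts. A quick sketch, assuming (purely for illustration) that 30% of the workload is stuck being serial:

        # Amdahl's Law with an assumed 30% serial fraction (a made-up figure,
        # just to show why each extra core buys less than the one before).
        serial_fraction = 0.3

        def speedup(cores):
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

        for n in (1, 2, 4, 8):
            print(n, round(speedup(n), 2))
        # 1 -> 1.0, 2 -> 1.54, 4 -> 2.11, 8 -> 2.58: a big gain from the second
        # core, then rapidly diminishing returns.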

    One thing that sounds promising, and probably easier than quantum computing or rewriting all software with a new approach, is using something other than silicon that would allow you to run more current through it at higher frequencies. If the material could take it without liquefying, you could take an existing quad-core architecture and run it at 10 or 20GHz.

    For years, small "out-there" startups have been looking at manufacturing artificial diamonds in custom configurations, with chips being the main hoped-for application. No idea if this will ever pan out, but certainly new materials might be the way to go.

    In the meantime, programmers could go a long way by killing the bloat and writing decent code! How come the Windows 7 beta, which is essentially an incremental version of Vista, runs up to 40% faster than that hog?

    Let's not get into the speed of Linux - even the latest popular "bloated" distros beat most of the junk code out there for performance. This might be good for us after all!

  13. John Chadwick

    Gosh.

    I would never have guessed. Still, Occam is still out there... oh no, wait, sorry, there isn't a VisualOccam(TM).

    But perhaps the problem is that most software producers don't understand how a computer really works.

    There is, of course, the issue that an awful lot of processes are actually linear by nature; if they weren't, Oracle wouldn't have had to invent PL/SQL. It doesn't matter how many cores you have, a linear process can only use one of them. For TP, however, it's great: you just set up loads of listeners to pick up transactions as they arrive. But to do that you need a reasonably simple administrative interface to set up the listeners and tune the software, or the software has to do it automatically, and creating and destroying processes takes time.
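    As a toy illustration of that listener idea (my own sketch, not how any particular TP monitor actually does it): independent transactions can be farmed out to a pool of workers, while a chain of dependent steps cannot.

        from concurrent.futures import ThreadPoolExecutor

        def handle(txn):
            # Each transaction is independent, so any free listener can take it.
            return "processed " + txn

        transactions = ["txn-%d" % i for i in range(8)]

        # "Listeners": a fixed pool picks up transactions as they arrive.
        with ThreadPoolExecutor(max_workers=4) as listeners:
            results = list(listeners.map(handle, transactions))

        # A linear process, by contrast, has to run step by step on one core,
        # because each step needs the result of the one before it.
        state = 0
        for step in range(8):
            state = state + step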

    Oh well.

  14. Anonymous Coward
    Stop

    Deja-Vu

    Haven't we been hearing similar stories to this one for the last decade or more?

    Personally, I think it's hogwash. Someone WILL come up with a revolutionary new idea and things will carry on much as before...

    To the commenter who wrote that programmers seem to be relying on Moore's Law to cover the fact that their code is more and more bloated and non-optimal: I totally agree! There are programmers today who don't even know how to optimise loops...
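    To take a trivial example of the sort of loop optimisation meant (my own illustration, not anyone's real code): work that never changes being redone on every pass.

        def expensive_limit():
            # Stand-in for anything costly and loop-invariant: a config lookup,
            # a regex compile, a database round trip...
            return sum(range(1000))

        values = list(range(10000))

        # Naive: the invariant is recomputed ten thousand times.
        kept_slow = []
        for v in values:
            if v < expensive_limit():
                kept_slow.append(v)

        # Better: compute it once, outside the loop.
        limit = expensive_limit()
        kept_fast = [v for v in values if v < limit]

        assert kept_slow == kept_fast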

  15. Lionel Baden 1

    Yeah, I'm guessing

    "their costs will be so high, that the value of their lifetime productivity can never justify it,"

    Somebody obviously doesn't game on their computer!!

    And for the people stating computers are fast enough, I have a large gaming community that would be more than happy to argue it out with you.

    Actually, let me do this for them:

    Take your two-year-old computer with a decent CPU and graphics card, install the latest bling game on it, and see what happens.

    Now go buy a shiny new fast computer with a top-range CPU and graphics card and play that same game!

    People will pay; they always do. Even if they just want to show they have something better than somebody else.

  16. Simon B
    Thumb Up

    A great read and an interesting one

    An interesting story, which explains a lot about why the P4 was around for so long, even after multiple cores appeared, and why speed increases have all but halted. An interesting read and a great story :)

  17. No, I will not fix your computer

    re: Don't blame the programmers

    You are kidding aren't you?

    Yes, the current multi-core architecture has limits, but programming skills (and, to be fair, development tools) are nowhere near touching the capabilities of multi-proc/multi-core/multi-thread hardware. The non-uniform memory architecture (NUMA) that you mention exists and has been in use for years, which is why correctly written software like Oracle runs so well on kit like the E25k/Superdome/P595 (yes, I/O is critical, but so is how you handle the I/O).

    Does anyone remember when the dual-proc version of Doom came out? It was soooo much smoother than a single core with faster clock speeds, because a lot of effort went into using that second proc. Developers (I don't think they have earnt the title of programmer) don't think about this, and if something is slow they cry "faster procs", "more memory", "better I/O" - not "how can I optimise this?"

    Developers are lazy, IMNSHO; it's all bells and whistles.

    Is this just a rant? Well, think about it. Does anybody remember the ZX81? How about the fact that you could play chess in 1K of memory (yes, 1K, 1,024 bytes, and that included memory for the display)? You even had hi-res graphics of a sort (if you had 16K). Given that it took years to reach the limits of a ZX81 (a very simple Z80 chip and a tiny amount of memory), think about our current kit. No, the reason programs run slowly is that they are written badly, not because we are hitting any hardware limits.

    Maybe there should be a Moore's Law for developers: every 12 months they will need twice as much power to perform the same task. Moor(on)e's Law?

  18. Stuart 17
    Linux

    Sorry, Lionel Baden, I disagree - the article is right...

    You ask us to take a two-year-old PC and run the latest game on it. Well, I have my Q6600 with a 9800 GX2 and it plays everything at top spec, even Crysis at 1920x1080 on my TV!

    I have been saying this for ages. For years and years I didn't go six months without needing to upgrade something, be it GPU, RAM or HDDs; I have now gone 22 months, and the only time I have opened my case was to add a 1TB drive.

    The HDD issue is the next step. In the same system I have the WD Raptor 10K drives, and for gaming they have been the best you can get for a realistic price. That is only starting to change now as SSDs become more affordable, which they are just about doing, though the price per GB is still astronomically higher; in most cases home users would only ever want their OS and a few apps or games installed on one.

    Networking and the internet need to change sooner in my eyes. Our government say they are going to guarantee 2Mb to bring us to the front of the technology world; however, they fail to realise Korea has had 100Mb for years!

    When I have an SSD and a gigabit WAN connection I will be ALMOST happy, but then I am a tech junkie like that!

  19. martinp
    Happy

    Lazy developers

    Couple of observations:

    - On the "lazy developer" issue, for sure developers are less careful to eke performance out of systems than before. I actually started on a ZX81 many years ago and wrote a graphical data analysis app that fit in 16k (and evaporated every time I nudged the RAM pack.) At that time a lot of time was spent on micro-optimizations just because hardware resources were so limited. Now there is no need to tune to that level. Faster hardware is not just for faster performance, it also allows developers to work more quickly by using higher level abstractions and spending less effort on tuning. If the applications do not need ultimate performance, this is a Good Thing. Hardware is cheap, developers are not.

    If/when Moore's Law does grind to a halt, then you may see the balance shift again, with more resources spent on tuning.

    - On the article itself, I think it is a very insightful analysis and I'm disappointed that some here dismiss it with a wave of the hand and a "they'll think of something" comment. Clearly the current course is not sustainable, and it is not at all clear that something will come along just in time to save the day. Although it is possible, I prefer not to live by faith, and it is worth seriously considering the possibility that performance improvements will fall off dramatically in the future. That will lead to a very significant realignment of priorities in the industry, and may actually be a good thing.

  20. Anonymous Coward
    Joke

    The next logical step

    FTL fields.

This topic is closed for new posts.
