RISC daddy conjures Moore's Lawless parallel universe

The oft-cited Moore's Law is the fulcrum of the IT industry in that it has provided the means of giving us ever-faster and more sophisticated computing technology over the decades. This in turn allowed the IT industry to convince us that every one, two, or three years, we need new operating systems, better performance, and new …

COMMENTS

This topic is closed for new posts.
  1. Robert Ramsay

    Thank goodness

    It's nice to read an article about something that doesn't make me feel stupider at the end than I felt in the beginning.

  2. nobby

    wouldn't it be nice

    to be in an industry where things don't become useless two years down the line?

    Why should I have to change my computer every two years just to be able to run the new point-release of bloatware?

    Looking into the car park here there are some cars that are ten years old, and they seem to work perfectly well. Looking in the server room there isn't anything anywhere near as old as that. Looking on people's desks... nope, most computers are newer than a director's car - have to be to keep running the software. I don't change my washing machine every two years 'cos the old one's too slow, I don't change my TV every two years(*), I don't have to buy a new chair just because Cushion 2.0 doesn't fit....

    (*) although, the way TV technology is going at the moment maybe I should be..

    I'd welcome an engineering ceiling that meant that the next version of software would have to be made More Efficient instead of just Prettier. I'd welcome the ability to run a new game on full-res on my PC rather than having to upgrade something (memory/video card/mouse mat).

    Surely technologies follow a curve of improvements, with the occasional bump here or there? We've been lucky to live in the exponential part of the Chip Improvement Curve; I assume we're entering a gentler slope and so maybe it's just time to chill, relax and write some EFFICIENT AND FAST SOFTWARE for a change.

  3. Anonymous Coward
    Flame

    "Going forward..."

    Anyone who uses this meaningless cliché should be shot. Otherwise, some good points there...

  4. Anonymous Coward
    Coat

    Just shows a lack of focus in the industry, business as usual?

    Sounds more like people got caught up in the clock speeds of the chips rather than actual data throughput, and in how far PC tech has been outpacing software development. Software pretends to keep up by becoming full of useless functions and features in a race to look like it's "keeping pace", when in reality average people just don't need the extra speed (there are no useful pieces of software that need it) and don't need bulky new software when the "older", streamlined versions work just fine for their purpose. Most of the applications and games that have been coming out recently seem to be made just to show off the newer hardware (like most newer console games), without actually showing off its usefulness or giving a good reason to constantly dish out tons of cash to upgrade for one or two applications.

    Things like WOW will never be worth $4K to me, and though the few that dish that kind of money out are loud enough to appear like major consumers, they are in fact extremely few. I'd be willing to bet money on there being at least 100x more budget PCs sold than high-end ones, maybe 1000x. Having worked on the end-user side of the IT industry, this has been the trend since budget PCs were thought up.

    I'm all for people being able to buy supercomputers if they want, but they shouldn't be the only focus, as most of those advancements are useless to the average consumer for what seems like decades. When a new version of Windows demands the extra power, it's a strained attempt at making the extra power seem necessary, while at the same time taking up that extra power with things the user doesn't even want running. Like Vista: the bells and whistles have gotten too big for bigness' sake. I don't want to "need" multiple cores when I do the exact same things on a single one just fine.

    I'd like to think Patterson was just being kind to the industry, being a part of it and all, and if he was really honest he could have summed it all up to the execs of the software/hardware companies by just asking, "WHY? Do you guys even know what your companies do? Because no one seems to have any focus on goals other than to keep making money faster and faster, so what do YOU want to use a computer for anyway? Let's not skip the middle steps: if you build it they won't just come, and you can't just market to the top 5% of PC spenders, because by the time your computer can actually run the "top of the line" games/programs, you're telling them to upgrade again."

    Eventually they're going to make more and more products people just don't need or want, because useful software is not demanding the extra power - and should it always have to? Most software makers, especially the smaller ones that work for contract companies writing custom software for businesses, have only gotten worse at coding because the computers are so fast they are oblivious to how bad their code is, so their code does not get fixed often enough. "If it compiles, it's perfect" is the philosophy of too many of them. Look at MS Windows! BLOATWARE

    Being in IT, this topic of Moore's Law has always annoyed me: talkie-point power-point execs and their lackeys are making the industry a pain in my ass with all their BS, and with beliefs about how things work based on absolutely no facts, because there are just too many factors in this industry for their minds to wrap around, and the programs they have had made to do the thinking for them are crap as well.

    But what do I know? I only use the stuff constantly, and have supported other users for 12 years now. Perhaps I just need to kiss the hardware/software manufacturers' asses and just screw the people; that seems to be the only way to get a job nowadays.

    Anyway, ever since multiple CPUs were first introduced to average consumers, it has been apparent that if the hardware can't juggle the processing between chips on its own, it will NEVER go mainstream. The introduction of multiple cores without this ability to juggle the load between cores as though they were all one big processor, while at the same time peddling it to the common user, was like a kick to my nuts. It's like MS saying Windows is faster because secretly, every time you hover your mouse over an app, it starts loading it, even if you don't want to run it, just so that when you DO double-click it pretends to load faster than the competition. The difference being, in the multiple-core case, programmers generally are paid a fortune to be able to program for multiple CPUs, or shit themselves if they are told they need to learn.

    But I'm just insane, don't mind me, obviously I hate money and capitalism and the rich and authority figures, and I LOVE terrorism and pagan gods that need blood sacrifice... obviously ;-P Or I might just be fed up with the BS. Nah, couldn't be that simple.

  5. Anonymous Coward
    Anonymous Coward

    (untitled)

    How can Moore's Law provide the means of giving us ever-faster and more sophisticated computing technology? Surely it is merely a trend observation that has held, more or less, so far?

  6. Andy

    But...

    Fascinating. And I'd love to see these new applications. But...

    People will always want to write letters and add up columns of figures. So Word Processors and Spreadsheets, which Patterson dismisses, will still be needed in some form.

    In fact I would guess that we will need all the types of applications that we currently use -- however limited and annoying we find them -- for the next ten years or so.

    And for the computers running the majority* of these applications, processor speed is no longer relevant; it hasn't been for years. It's IO speed that makes things faster.

    It seems to me that Moore's Law has become an excuse for an industry and so much marketing hype. The vast majority of people are today being forced to buy computers which are much more powerful than they will ever need -- except for the fact that they are running operating systems which have bloated to require that power without any real payback for the user.

    But of course, I would say that. Every computer in my house is an old recycled one running Linux.

    (* not video editing or animation rendering, but pretty much everything else.)

  7. Eddie Edwards
    Thumb Up

    Good stuff

    Good summary of what we've known in the games industry for about five years (you wouldn't know it to listen to our "luminaries" like Carmack and Sweeney, though; they're still praying for salvation based on their old faith - which is hardly surprising given their legacy codebases, but still).

    Effective large-scale parallel programs (e.g. games) seem to split in a recursive way. The top layer may be serial, then there may be a parallel layer underneath, then each part of the parallel layer may be serial again, with each serial part implemented by a parallel algorithm. Thus can we divvy up hundreds of cores within a single program. An example might be:

    Serial: Calculate each output frame of the game in turn

    Parallel frame: Run simulation and rendering in parallel

    Serial simulation: First calculate physics, then run AI

    Parallel physics: Break the sparse matrix into chunks across many processors

    Serial chunks: Run a standard single-threaded algorithm to evaluate a chunk
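
    In code, a rough OpenMP-flavoured sketch of that layering might look like the following (every name and size here is invented for illustration, not lifted from any real engine):

    /* Hedged sketch of the serial/parallel layering described above.
       All names (simulate_chunk, run_ai, NUM_CHUNKS, ...) are made up.
       Build with something like: cc -fopenmp sketch.c */
    #include <omp.h>
    #include <stdio.h>

    #define NUM_FRAMES 3
    #define NUM_CHUNKS 8

    /* Serial leaf: plain single-threaded code evaluates one chunk. */
    static void simulate_chunk(int chunk) {
        printf("physics chunk %d on thread %d\n", chunk, omp_get_thread_num());
    }

    /* Parallel physics: break the work into chunks across many cores. */
    static void run_physics(void) {
        #pragma omp parallel for
        for (int c = 0; c < NUM_CHUNKS; c++)
            simulate_chunk(c);
    }

    static void run_ai(void) { printf("AI pass\n"); }
    static void render(void) { printf("render pass\n"); }

    /* Serial simulation: physics first, then AI. */
    static void simulate(void) {
        run_physics();
        run_ai();
    }

    int main(void) {
        omp_set_nested(1);  /* allow the parallel-inside-parallel layering */
        /* Serial top layer: one output frame at a time. */
        for (int frame = 0; frame < NUM_FRAMES; frame++) {
            /* Parallel frame: simulation and rendering side by side. */
            #pragma omp parallel sections
            {
                #pragma omp section
                simulate();
                #pragma omp section
                render();
            }
        }
        return 0;
    }

    The pragmas aren't the point; the point is that the serial and parallel layers nest, and each layer can be reasoned about on its own.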

    Unfortunately, the "productivity programmers" don't have a clear layer that belongs to them. Domain-specific knowledge is inserted at all levels.

    We want to avoid having the domain guys think too hard. Much customization (insertion of domain-specific knowledge) can be to do with data structures and "shaders" - small pieces of functional code which do raw data processing. But they will sometimes need to understand the parallel context this stuff runs in. The aim is to make this "sometimes" be as infrequent as possible, but this in itself requires the domain people to change their paradigm e.g. they can no longer just call out to other objects at will, because that doesn't scale. Their job still gets harder. I don't see how to avoid this.

    Ironically, the lowest-level performance programmers still need to write the best single-threaded code for a given task. They are the only ones to escape the paradigm shift, despite being the best people to take it on board.

  8. Paul Murphy

    So hopefully it won't be too long until ...

    We don't 'need' to upgrade our computers every 5 years and our OS every 3 years,

    Programs written now aren't out of date, let alone un-runnable, 10 years later,

    We can stop wasting resources by constantly running after the moving goalposts of performance.

    ttfn

  9. Anton Ivanov
    Coat

    Interesting

    Efficiency is out of the complete software development process equation at present. It started with the Unified Model, which "threw the baby out with the bathwater" and removed performance metrics from the standard requirement-gathering techniques. The agilistas took this even further, and we are now having to deal with a whole generation of professional programmers who have very little understanding of efficiency. In fact the so-called "professionals" do not want to understand efficiency; it irks them quite badly. So the only people who still strive for it are hackers or the people with grey ponytails and sandals. So I do not quite see where he is going to find his "efficiency programmers". There are fewer than 1% of those left around, and their numbers are decreasing as the universities teach "Java" and "Agile skills" instead of data representation and search algorithms: http://www.joelonsoftware.com/articles/ThePerilsofJavaSchools.html

    So, with all due respect, Mr Patterson thinks of the software industry in terms of what people could do and did when he was young. That software industry is dead. "Improvements to processes" sponsored by Ericsson and Chrysler killed it. R.I.P.

    On a side note - using multithreaded brute-force search to optimise code is like shooting a mouse with a Grad multiple missile launcher. Yeah, fine, in some cases you will kill the mouse... And everything in a mile radius... However, you are not even guaranteed to hit the mouse on most occasions. Or, if we drop the analogies, engineering rococo is no replacement for mathematics. Never was, never will be.

    Me coat...

  10. Joe Carter
    Thumb Up

    Software developers will be in more demand

    ...to wring out every ounce of performance from the hardware that is no longer getting faster.

    Or to write the *new* multi-threaded applications required to get the performance out of the new multi-core architectures.

  11. Watashi

    No sh!t, Sherlock.

    I've been writing this stuff in posts on this and other sites for a couple of years now. The key is economics - as the computing market has matured, the advantage gained by investing in faster CPUs has considerably diminished. Beyond the technological limit of the current CPU structure of semiconductor transistor switches, there is the limit of consumer demand.

    The biggest growth sector in the computer world has been the laptop, and that means efficiency and lower temperature are the areas to invest in. Do you want a 3.6GHz quad-core processor in your Eee PC? Nope. Also, the biggest established sector of PCs is the computer as business / home workstation. How much processing power do you need to run a few office apps at work or surf for pr0n at home? A 2.6GHz Intel dual-core processor is more than enough. If you want a faster general-use computer, buy a solid-state hard drive and a Gbit network setup, because booting up your PC, accessing data and moving it around networks are by far the slowest parts of modern computing.

    So it is that my prediction of a couple of years ago has come to pass - the demand for top end CPUs has collapsed. Parallel computing is obviously the future as far as Intel and AMD are concerned... but the demand will be in specialist computers such as gaming PCs and consoles, number crunchers and system modelling machines in academic research and in servers that have to deal with big databases and the like. For the majority of the market, the most value for money research will be in reduced power consumption.

    What this means for the industry is hard to predict, but we could draw a comparison with car engines. For the first part of its development, improving the power output of the engine was a significant part of advancement in car technology. Providing greater horsepower was more important than most other factors... until we all got engines that were about as powerful as we needed. Since the 90s, most R&D on petrol engines has been on greater reliability, fuel efficiency and tune-ability. Most research has gone into getting the more efficient diesel engine (equivalent of the laptop CPU) to be more like a petrol engine in its performance. The basic acceleration of the typical car has not significantly improved - what has changed is that car types once low on acceleration have caught up with the faux-sporty saloon cars at the top end of the mainstream family car sector. Have car companies stopped investing in car R&D? Quite the opposite, and as car engines became less important, other factors like safety, aerodynamics, in-car entertainment etc have more than compensated. There will always be investment in CPU research, but it will no longer be targeted at the mainstream PC, and instead what we get in our home computers will be a side-effect of what is developed for other uses.

    So will processor researchers lose their jobs? Well, that comes down to the growth of the technology outside of the stand-alone computer. Parallel computing offers significant advances, but will they be ones that society wants? If on-line applications take off, then I'd say 'yes'. If they don't, I'd say 'no'. After all, current engine technology would allow me to put 1000 horsepower in my car, but the 1000 horsepower car engine will never become standard, even if cars are still being used in a thousand years' time, simply because there is no use for all that power. Parallel processors have to demonstrate that they offer the consumer something they need.

  12. Lol Whibley
    Flame

    well duh!

    people already just buy new computers when the old one dies.. they're 300 quid to replace. and most of that sort of system gets a significant upgrade as the purchase crosses the dual/quad core spec change of the last 2-3 years, most 'old' pcs being around the P4s with hyper-threading era.. purchaser gets a medium spec system from today's retail and watches the performance jump massively and yet still just does the same old stuff.. shopping and pr0n.

    as for the software writing layer, it's been getting lazy for years as a way of driving the upgrade requirement quotient. just look at the amount of bloat that ships with what was once a lean code-base.. it's not value-add when you stick features that no-one cares about into a program that should do the one job really well, and was designed to do exactly that on far less capaciously capable systems.

    And this guy gets paid for this? people listen to him because he's done cool stuff previously...fine enough.

    chefs are only as good as their last meal. this guy openly admits he (and a massive chunk of industry.. Herd Science in action) did get it wildly wrong a few years ago and whole financial/corporate systems based their predictions on that path.

    And they're still employing him?

    \rant <pants>

  13. Thomas

    Aren't most of us already living in a replacement market?

    You know, out here in the real world? Caring about regular upgrades is best left to the GPU adolescents - I replace my computer only when it breaks.

    I think most people replace products only when a new one is tangibly better; shouting numbers at them isn't persuasive. If anything, the market is moving towards one where netbooks are free with mobile phone contracts and 99% of people don't care about computers beyond that. The IT industry has a lot of shrinking to do.

  14. Ged Perryman
    Jobs Horns

    Perhaps Moore was wrong?

    Moore’s law has always puzzled me, in that the natural conclusion would imply that at some point in the future the bottleneck of processing would be the speed of light (I am sure some mathematician out there could predict when this should occur according to Moore’s law?).

    A more reasonable explanation would be that the increase in processing power is an exponential curve and although Moore’s law can be observed over a number of years, this is only a small part of the picture.

    (Because I have to work with them!!)

  15. Che Gannarelli
    Thumb Up

    Excellent article

    I'm in nanotech, in an area where all research justifies itself by vague aspirations to have impacts in quantum computing. Every talk to every audience seems to begin with the Moore's Law plots, as a justification for a new paradigm, to continue the trends beyond the small-transistor, high heat concentration limit. This is in itself a bit of smoke and mirrors, as QC won't necessarily be faster, or indeed practicable, for all problems. The thing that interests me most is the absolute faith that Moore's Law can and must continue, with the almost faith-based (I really liked that observation) assertion that we must be able to nail QC, because otherwise Moore's Law will collapse, and that that is somehow unimaginable.

  16. OrsonX

    MS Word 2015 Sentient Edition

    128 core Work PC: would you like me to write your annual report for you?

    User: yeah!

    PC: anything else?

    User: please boot up my favourite three photorealistic virtual Facebook girlfriends....

    Until this is achieved the IT industry has no worries!

  17. Eddie Edwards
    Boffin

    @ people

    @ Ged

    "A more reasonable explanation would be that the increase in processing power is an exponential curve and although Moore’s law can be observed over a number of years, this is only a small part of the picture."

    In fact Moore's Law predicts an exponential curve. The correct explanation is almost certainly that the curve is ogive in shape - that is, S shaped. Every other growth curve is. (This is why all the predictions of "the spike" or "transcendental human experience in our lifetimes" are such utter bollocks.)

    Also, Moore's Law is related to feature density, not processing power, performance, or anything else. There are fixed limits to feature density (for instance, the diameter of a silicon nucleus). It obviously won't last forever.

    @ You ain't gonna need it

    I'm running radiosity precomputes, massive builds, raytracing, pathtracing, etc. just to make a single game level look nice. Others do this stuff just to generate a single image. Iteration time is king in the creative industries. We need to do stuff faster. Computers aren't only for running Word. If that's all you care about, the £300 computer will always suffice. As would a piece of paper and a pencil.

    Problems grow to fill the available processing power. Just look at Vista :p

  18. Anonymous Coward
    Coat

    The answer..

    "What happens to the IT industry if the performance improvements stop?"

    Why, you get a No-Moore law instead.....

  19. Marco van de Voort

    Summary: not only is your approach wrong, your problem is also wrong

    What I get from this:

    Not only is the single-threaded way wrong, but the problem the programmer has is also wrong. He should switch or embellish problems so that they are parallelizable.

  20. Anonymous Coward
    Coat

    deja vu is not just a CSNY album

    Parallelisation might be fine for niche workloads, eg supercomputing, but for the vast majority of stuff that happens on the vast majority of computers (PCs? PDAs? Phones?) the vast majority of the time, it adds very little value. Webservers, probably. But they're a special case too. As a general rule, the problem domain where parallelisation adds value is nearly negligible (but not quite, otherwise there wouldn't be SC08 reports dribbling through El Reg for weeks...)

    I know this, as does anyone else who remembers when parallelisation used to be called symmetric multiprocessing. I remember Parallel C, and Parallel Fortran. Heck, I even remember Transputers and Occam, and ICL's Content Addressable File Store. Of course there wasn't an Internerd or a Wikipeedhere in those days, so none of this actually happened, did it.

    Parallelisation does make software and system development more complex, and most current single-threaded software barely works as it is. Parallelisation will make that worse, as although it isn't *really* that difficult a concept, finding someone who actually remembers (let alone understands) concepts like "communicating sequential processes" (which is what a set of parallel tasks boil down to) is increasingly difficult.
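
    For anyone who has never seen the idea written down, a communicating-sequential-processes toy fits in a couple of dozen lines of bog-standard C: two ordinary sequential processes whose only interaction is a channel - here a POSIX pipe standing in for an Occam channel (a sketch for illustration, nothing more):

    /* Two sequential processes communicating over one channel (a pipe). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int chan[2];                        /* chan[0] = read end, chan[1] = write end */
        if (pipe(chan) == -1) { perror("pipe"); return 1; }

        if (fork() == 0) {                  /* child: the producer process */
            close(chan[0]);
            for (int i = 1; i <= 10; i++)
                write(chan[1], &i, sizeof i);   /* send i down the channel */
            close(chan[1]);
            _exit(0);
        }

        close(chan[1]);                     /* parent: the consumer process */
        int value, sum = 0;
        while (read(chan[0], &value, sizeof value) == (ssize_t)sizeof value)
            sum += value;                   /* receive until the channel is closed */
        close(chan[0]);
        wait(NULL);

        printf("sum received over the channel: %d\n", sum);
        return 0;
    }

    Each process is trivially sequential; all the "parallel thinking" lives in the channel discipline, which was rather the point the Occam people were making.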

    geriatrically yours

    A. Engineer

    (retd)

    (I wish)

  21. Dave

    @Ged Perryman

    Like so many, you seem to misunderstand the purpose of Moore's Law. It is not intended to be a predictive device: rather it is intended to keep the development teams synchronised.

    Think of it like this: you have one team in Intel that is working on obscure chemistry, one on process engineering, one on circuit design, etc. Each of these is basically working 'Just In Time'.

    The chemists need to produce something that the engineers can use, when it becomes the right time. The engineers need to be able to produce a new process when the circuit design is ready. Moore's Law allows them all to predict where they will be in one, three, five years' time and allows all the pieces to fit together.

    That is also why Intel has been able to stay ahead of its competitors, even though almost everyone else has some advantage, at one time or another.

  22. Matt Bryant Silver badge
    Boffin

    Moore's Law still applicable thanks to big bizz.

    Moore's Law will still happen because the main drivers for the chip companies are still corporates, and it is money spent on research there that trickles down into desktops and eventually home PCs. I have never met an FD who - when asked if they would like to be able to compute their payroll run, or end-of-year figures, or just daily financial activities, faster - answered anything other than "how much?". So, whilst big bizz keeps pushing for faster systems, chip manufacturers will design faster CPUs, hence Moore's Law will remain valid as shrinking chips to make them go faster for single-threaded apps will still be a valid design solution for a while, as it's just simpler than real parallel threading.

    I would personally like to see more work on the infrastructure supplying data to the chips - storage, memory, and interconnects. Having a rocket CPU upgrade without upgrading the others just means you have a faster CPU spending more time waiting for data than before. This failure to ramp up the infrastructure is perfectly demonstrated by Sun's Niagara chips, where they have effectively given up on the idea of keeping a core spinning and instead settled for having lots of cores idle and waiting whilst a few work. Intel's approach (and IBM's) has been to massively increase cache and bandwidth to the chips to try and keep them as busy as possible, so the increase in clock cycles is not just an increase in time spent waiting.

    The other area of growth is still scale-up, by coding for apps that may not be properly parallel but still spread threads across as many cores as possible. UNIX can do this, Windows (on Itanium at least) can too, and Linux is not far behind, so it is not beyond the pale to foresee quite soon versions of desktop Windows and Linux happily making use of sixty-four cores in a single CPU; it's just the apps that will need the most work. And to go back to my FD example, if I asked my FD today if he'd like a PC that would run Excel faster by splitting the load over sixty-four cores, I'm sure his answer would be: "if the cost is right, yes please!"

  23. Mage Silver badge
    Flame

    It has happened.

    My year 2000 High End NT4.0 laptop is horribly limited and slow.

    My April 2002 laptop outperforms most new laptops under £500, and the original XP and Office are still on it. The latest versions of Windows & Office run more slowly on more powerful laptops.

    It's a 1.8Ghz P4 desktop CPU based laptop. I could have paid a premium to get a 2.2GHz CPU. But that would eat more battery.

    Unless you are encoding, 3D rendering, decoding HD in SW etc, you don't need more CPU, and the memory, motherboard, HDD and screen are not much faster on a new laptop nearly 7 years later.

    The examples in the article aren't needed by most people. Current applications could be written better rather than more bloated and slower on each release.

  24. Anonymous Coward
    Anonymous Coward

    :-D

    As long as it still runs my holodeck adventure program why should I worry?

  25. Anonymous Coward
    Boffin

    Who needs Moore's law?

    The answer is, lots of people -- fluid dynamics modellers like myself dream of workstations with 100+ cores. But explicit, MIMD-style parallel programming is a lot harder than serial programming; take your one bug every 10 lines of code, and that will drop to one every 5.

    SIMD would be the best approach; serial code controlling the flow of the program, interrupted by blasts of lightning-speed implicitly parallel directives when needed. (This is precisely what CM-200 programs looked like, and very similar to NVidia's CUDA).

    The problem is re-expressing your problem in a way that can be easily parallelised. My job's easy here; matrix-mangling was made for this.
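
    To make the shape concrete, here is a toy of what I mean, with OpenMP standing in for the implicitly parallel directives (the grid size and the update rule are invented; it is just a 1D smoothing pass, not a real solver):

    /* Serial control flow, interrupted by data-parallel bursts.
       Build with something like: cc -fopenmp sweep.c */
    #include <stdio.h>

    #define N     1000000
    #define STEPS 100

    static double u[N], u_new[N];

    int main(void) {
        for (int i = 0; i < N; i++)
            u[i] = (i % 100) * 0.01;                 /* some initial field */

        for (int step = 0; step < STEPS; step++) {   /* serial: the time-stepping loop */
            /* parallel burst: every interior point is updated independently */
            #pragma omp parallel for
            for (int i = 1; i < N - 1; i++)
                u_new[i] = 0.5 * u[i] + 0.25 * (u[i - 1] + u[i + 1]);

            u_new[0] = u[0];                         /* hold the boundaries */
            u_new[N - 1] = u[N - 1];

            #pragma omp parallel for
            for (int i = 0; i < N; i++)
                u[i] = u_new[i];                     /* copy back, also in parallel */
        }

        printf("u[N/2] after %d steps: %f\n", STEPS, u[N / 2]);
        return 0;
    }

    The control flow reads like serial code; the parallelism is confined to the loops that blast over the grid, which is roughly the CM-200 feel on commodity kit.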

  26. Joe Harrison

    You boring lot

    How can everyone keep saying "pah, why do we need faster processors to run office and internet apps."

    Sounds suspiciously like "640KB should be enough for anyone." If people were always happy with existing technology we'd still be writing all our documents with vi or edlin. There is no real way to predict the killer apps that would be supported by faster hardware (although the automatic meeting-minutes generator sounds good to me), and honestly, twenty years ago wouldn't you have said you were happy with DOS and didn't need a graphical environment?

  27. Lewis Mettler
    Stop

    mistaken assumption

    It is an interesting article but it is based upon an assumption.

    Moore's law could be conceived as saying that performance doubles every year or so but it also could be conceived as saying that the cost halves in that same time frame.

    Which is it? And how does it alter the thinking?

    It is both or either.

    You do not have to just assume that performance doubles for the same cost. You can assume the price drops in half for the same performance.

    Of course, that is not actually the case when you look at computer systems as a whole rather than simply the processing chip.

    But, what does it mean?

    You might put off buying that Netbook simply because they will be even cheaper within a year or so. Or, you might put off buying the whole rack of servers for the same reason.

    And if you look at it from that perspective, most of what the article suggests is all but meaningless.

    Instead you can focus upon the so-called digital divide, right? If systems keep getting cheaper you do not have to worry about changing software or parallelism. You only need to be concerned with making OpenOffice available to more and more people because they can afford the access. Forget Word. It is too expensive. But, open source software is readily available when the cost of hardware becomes low enough.

  28. Neil
    Black Helicopters

    Forget Moore's Law

    Isn't Amdahl's Law more relevant to this article?

  29. Nick Ryan Silver badge

    Moore's Law?

    Moore's Law? Why don't they just create a CPU instruction set that doesn't suck balls?

    Anybody who's had the misfortune of using the x86 chipset with both(*) of its registers and an alternative (such as PPC) is likely to be fed up of the never ending swapping of values around between registers and the stack just to do the simplest of operations using the x86 instruction set (quite apart from the sheer inefficiency). Compare this to the almost simplistic task of implementing the same operations using a better instruction set and it's a wonder that anybody programming the x86 chipset has any sanity left.

    * Not exactly true, but it feels like it :P

  30. Wolf
    Stop

    Growl

    Ok, first of all there are some serious misconceptions about programming from A) those who never have programmed, or B) those who didn't live through the various eras.

    I've been programming for 32 years now. My first program was written on a 1962 IBM 1130 that was 15 years old when I made my first feeble attempts to teach a machine.

    When you have 8k of core memory, a card reader and a damn typewriter-style user input/output device, you learn efficiency. So I know how to write efficient code. Efficient for the *machine*, that is.

    Not so efficient for the *user*, of course. Or the programmers who have to follow you later on and make changes to your oh-so-efficient program.

    And I stayed in the game through the structured programming wars. And the object-oriented wars. And watched each and every fad that's come and gone since then.

    Not one of the people complaining about bloat knows what that means. They *think* it means the program takes up too much room, runs too slowly, or has "useless" features.

    Not true.

    I will admit there are some programs that truly are bloated. But not the ones most people consider. In fact bloated code tends to die quickly.

    There are always tradeoffs in programming. What is simple efficient code for the machine is complex and hard to use for the user. The user demands a GUI? Fine. The GUI code is 10 times the size of the program underneath--and that's not bloat.

    The GUI is too slow? It takes up 60% of the available resources? (Disk, RAM, CPU?) Tough. Could it be made more efficient? Perhaps, but at what cost?

    1) Money. Got gobs, next?

    2) A huge problem domain--GUIs are big because they *have* to be big. OK, got gobs of programmers because of gobs of money. Next?

    3) Maintainability. Not only is the problem domain huge, it's *complicated*. It takes real expertise to write simple, elegant code that avoids obfuscation. Ok, hire *GREAT* programmers with said gobs of money. Next?

    4) Maintainability and code efficiency more often than not are mutually exclusive. Increasing one *decreases* the other. Ok, got gobs of--wait, what?

    Like any engineering problem software engineering is a study in trade offs. The old saw about "Good, fast, cheap, pick any two." is dead right.

    So your program has to be:

    1) Machine efficient

    2) Easily maintained

    3) Affordable

    4) Fit to purpose--for a widely diverse audience that can't agree on which features are useful and which are not.

    5) Easy to use--ideally with NO training of the user.

    6) Income generating so you'll be around in 3 years to keep doing what you're doing.

    Name *ONE* program in the history of the world that meets all 6 of those criteria. Just one. Oh, and a choice everybody agrees with...

  31. Anonymous Coward
    Black Helicopters

    Faster CPUs

    As soon as new manufacturing technologies come on line, everybody will forget this "parallel" nonsense. The fact is, silicon is on the way out, as microelectronics research was already predicting at the end of the 1990s. A significant increase in clock speeds requires new materials and tens of billions of dollars in investment. Multi-core CPUs are just a way to squeeze the last drops of profit from proven, relatively cheap silicon tech.

  32. Pascal Monett Silver badge

    No end to progress in sight

    And until the virtual environment of my games is real-world realistic, there will be no end.

    As a gamer, I would expect to be able to drive through the side of a house with my tank, but there are precious few games that allow for it, and when they do, it's in a special environment.

    I would expect to be able to knock trees over with my tank, but in most games not only does the tree stop me on the spot, it also actually assigns damage to the tank if I try.

    I would expect a nice crater to mark the spot where a bomb fell, but there are hardly any games that do that. I would also fully expect the village, and possibly the entire map, to be totally devastated by the time the level ends, but there aren't ANY games that do that at the moment.

    In most games, anything that is not a movable game object is, for all practical purposes, indestructible. Walls are impenetrable, roofs never get blown in, trees are there for all eternity. They are, structurally speaking, just as permanent as the ground.

    Changing that is going to take humongous amounts of processing power, and probably lots more RAM than the average PC has today, as well as probably a totally new approach to modeling the virtual world.

    And when we do have a realistically destructible environment in games, just think of what we will be able to do as far as science and technology are concerned !

    So we really do need to get there, and if a 64-core parallel processing environment is what it takes, then bring it on !

  33. Mikel

    The laptop performance mountain is climbed

    Laptops can now do what laptops need to do. It's time now to focus on making them do it all day on a reasonable battery, and the battery should lose weight every year.

  34. Toastan Buttar
    Unhappy

    Interesting experience

    I had to extract some files off an old PC the other day. It ran NT 4.0 on a 166MHz CPU with 64MB of system RAM. The 'user experience' was not a frustrating one, either. Never mind Vista or XP - you'd be hard-pushed to find a mainstream Linux distro that would run happily on such hardware these days. Is the Operating System (you know, that thing that lets you run applications and talk to the hardware) really doing so much more work just 10 short years down the line?

  35. Mike

    Moore's Law is *still* valid

    Don't forget the article says *what if*, not *it isn't*. I suspect quite strongly that it's going to be valid for some time, but remember it's not about cost or easy availability, it's computing power; the POWER6 chip is a good example. Notwithstanding that SOI is perhaps nearing its single-core day, using doped diamond, with its much better thermal profile, could theoretically give us 100GHz chips - even with no other changes, that's another 10 years flat.

    So, to multi-core/threading: does anybody remember the days when MS Word didn't spell-check as you went along? Spreadsheets that didn't recalculate their fields as you went along? Or, back further, when DTP packages weren't WYSIWYG? They are/do now, as we have more power.

    The real question is, as Moore's law gives us more power, is there also a downturn in new added features? The differences between the early versions of MS Word were vast, but what new features have been added in the last 5 years? Almost none? Some new file formats and an arguably inferior interface.

    If Moore's law dried up (and for this we'll have to exclude threading), then obviously the programming would become more efficient, after all, you do what you can within the resources available (anyone remember 1k chess on the ZX81?)

    On a slight aside, Moore's law was originally about transistor count (stealing the thunder from the 1950s predictions of one of the UK's true heroes, Alan Turing), not processing power, so cores/threads are valid rather than raw clock speed, and Moore also changed his original prediction, otherwise the law would have been wrong several years ago.

  36. This post has been deleted by its author

  37. Matthew Ellen
    Coat

    @wolf, computer programme that fulfills your criteria

    helloworld

  38. Tom
    Alert

    Singularity

    If you haven't already, go out and buy Charles Stross' Toast, and/or Accelerando, for some well-informed and thought-provoking ideas on where all this stuff is going. Thoroughly enjoyable sci-fi written by a geek, for geeks. This message has been brought to you by the Charles Stross Marketing Board.

  39. Red Bren
    Stop

    @Pascal Monett

    It's not necessarily a hardware constraint that prevents destructible game environments, but the imagination of the game makers. Indestructible environment features are a simple method of ensuring the player follows the intended plot, with the added bonus of saving the developers from having to think of what to put behind the walls.

    But I agree, it's bloody annoying when your battle hardened, navy beret special marine seal can scale a 100m sheer cliff face, only to be thwarted by a knee high picket fence!

  40. Anonymous Coward
    Happy

    "Moore's Law is about transistor count"

    Well remembered.

    Huge on-chip caches are good at keeping the transistor count high, and doing so easily, and *sometimes* with good reward.

    Chips like IA64 with their "VLIW" heritage *need* huge caches to get any reasonable kind of performance. Today's x86 get broadly similar performance to today's IA64 on most problems, without needing so much cache as IA64, partly because the x86 code for any given problem is more compact, partly because the 32bit data for any given problem is more compact (than the 64bit eg IA64 equivalent), and partly just because bucketloads more money and time has been spent on making x86(-64) kick a** (hardware and software) than has been spent (or will ever be spent) making IA64 kick a**.

  41. Thomas

    @Joe Harrison, Wolf

    @Joe Harrison: At least I wasn't arguing that more powerful computers won't be able to do better things that I either haven't thought of or have idiotically written off, just that computer functionality has plateaued in recent years; in the meantime all the software staples have somehow grown to need at least an order of magnitude more in terms of resources. At the minute, buying a new computer for any reason other than that your old one is broken just buys you a couple of years of using the latest versions of software to do pretty much exactly what you were doing before. From that point of view, technology has come far enough. If you prefer to look at it more pragmatically, there is no 'killer app' for the processing power we have now that we did not have ten years ago. There are a whole bunch of tiny improvements that in aggregate have soaked up the current resources, but it's not enough to inspire anyone to buy new computers just for the sake of it.

    I'll wager that a large chunk of the sales in the past few years have been for form, not function (i.e. people switching to laptops), and many of the rest have been upgrades that people thought were necessary because they confused the usual fresh-install-of-Windows speed bounce with needing to buy a new computer, not because they actually did.

    There are a million uses for faster processing and lots of high-profile - often scientific - uses for it. But at the consumer level, the industry has stagnated. I'm actually a Mac user so my opinion on it may not be valid at all, but I think a large part of the reason that Vista has such a bad reputation is not that it is inherently bad, but that people are finally kicking back against the assumption that they'll upgrade their hardware for what appears to be similar functionality, combined with the lengthy gap between releases making the jump seem more acute.

    @Wolf: further to those comments, I think the charges of bloat relate to the aggregation of functionality that most people don't use, or aren't aware that they are using, causing software that doesn't appear to be much more functional requiring much greater resources. You're right that it's an optimisation problem; I think people here are annoyed because they feel that the big software companies are not playing the various factors off against each other in an anywhere near optimal way.

  42. Anonymous Coward
    Go

    re: Growl

    Sorry for not having clarified bloat; in my case it's when 50% of a function's code does absolutely nothing but the programmer in question doesn't know enough to take it out. And since it doesn't get in the way of the desired function (aside from the horribly slow load and processing time), it is seen as not being a problem. There are waaaaay too many programmers that started, and stopped, their programming education somewhere around Visual Basic 6 and got their only experience by being the ones who burst the dot-com bubble with their horrible coding methods.

    I do agree most people falsely assume bloat is what you mentioned, but I literally mean bloat: code that gets compiled and executed that actually does nothing with the data that the program was written to manipulate. My "friend" who was that programmer was eventually found out, and his boss finally looked at his code, took 75% of it out and improved its speed by over 300%, literally. A DB process that used to take overnight to load suddenly took less than an hour, and the output was exactly the same. THAT is what I mean by bloat :-D It's enough to drive anyone crazy.
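
    A contrived before-and-after of the sort of thing I mean (invented code, nothing to do with my friend's actual DB process):

    #include <stdlib.h>
    #include <string.h>

    /* Bloated: roughly half the work contributes nothing to the result. */
    double average_bloated(const double *data, size_t n) {
        double *copy = malloc(n * sizeof *copy);
        if (copy) {
            memcpy(copy, data, n * sizeof *copy);    /* copied ...            */
            double checksum = 0.0;
            for (size_t i = 0; i < n; i++)
                checksum += copy[i] * 31.0;          /* ... "checksummed" ... */
            free(copy);                              /* ... and thrown away   */
        }
        double sum = 0.0;                            /* the only part that matters */
        for (size_t i = 0; i < n; i++)
            sum += data[i];
        return n ? sum / n : 0.0;
    }

    /* Trimmed: identical output, a fraction of the work and memory. */
    double average(const double *data, size_t n) {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++)
            sum += data[i];
        return n ? sum / n : 0.0;
    }

    A compiler might optimise some of that away; the real-world versions are usually spread across enough functions and database round-trips that nothing can.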

    You definitely made some very good points though, glad there's someone with your experience commenting, lots of times comments like yours are as enlightening as the articles.

  43. Louis Savain

    The Thread Concept Is the Real Problem

    It is becoming painfully obvious to all but the most hard-core addicts of last century's technology that the multithreading approach to parallel computing is not going to work as the number of cores per processor increases. That the computer industry continues to embrace the multithreading model in the face of certain failure is a sign that computer science is still dominated by the same baby boomer generation who engineered the computer revolution in the 70s and 80s. It is time for those gentle folks to either step aside and allow a new generation of thinkers to have a turn at the helm, or read the writing on the wall.

    We must go to the root of the parallel programming problem to find its solution. Multithreading is a natural progression of the single thread, a concept that was pioneered more than 150 years ago by Babbage and Lovelace. The only way to solve the problem is to get rid of the thread concept once and for all. There is no escaping this. Fortunately for the industry, there is a way to design and program computers that does not involve threads at all. Here are a few links with more info if you're interested in the future of computing.

    How to Solve the Parallel Programming Crisis:

    http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-programming.html

    Nightmare on Core Street:

    http://rebelscience.blogspot.com/2008/03/nightmare-on-core-street.html

    Like it or not, we need a paradigm shift. The industry will be dragged kicking and screaming into the 21st century if necessary. The market always dictates the ultimate course of technology.

    PS. Patterson should listen to one of the members on his PCL team, Dr. Edward Lee who has had a lot to say on the evils of multithreading.

  44. Anonymous Coward
    Anonymous Coward

    I do not agree...

    I think Mr. Patterson's logic is the consumer logic they use to sell us ever bigger TVs, more powerful engines, larger fridges and iPods.

    Since they cannot increase the speed of the stuff we use, they develop in a way that they can develop, notably more cores, and then try to convince us to use new applications that do not improve our lives but make the new development look good.

    Do you WANT software to decide whether someone who walks up to you is worth talking to? Do I want a sound system with 32 tweeters in my living room?

    Not really. I want to write useless comments on this board, balance my wallet with Excel (and fail miserably), play a movie on my desktop and listen to a song on my MP3 player.

    Undoubtedly there are great opportunities for parallelism in all kinds of computer applications (think Pixar). But on my desktop? Not really.

  45. Luther Blissett

    MS Windoze 2015 Sentient Edition

    [With kudos to OrsonX]

    128 core Work PC: would you like me to write your annual report for you?

    User: yeah!

    PC: anything else?

    User: please boot up my favourite three photorealistic virtual Facebook girlfriends....

    PC: anything else?

    User: please shutdown when I've gone to bed

    PC: anything else?

    User: what is it about "shutdown" that you do not understand?

    Until this is achieved the IT industry has no worries!

  46. Mike

    Another Geezer checks in

    I'd love to get a look at a 1962 IBM 1130, BTW. That would have been a very early lab prototype for a machine that was introduced in (IIRC) 1966. :-)

    I do not remember DOS fondly, lacking, as it did, both the snappiness of RT-11 and the portability of CP/M.

    My "graphical environment" in those days was a display client on an Atari 800 and a layout program in Fortran on a VAX. With continuous connection to teh cloud, that sort of things could come back.

    OTOH, Javascript is moving things off the server and onto the client, mostly because of who pays for server cycles. If the carriers have their way, we'll see ever more of this as "reasonable bandwidth" (for the typical ever-growing "reasonable") will get more expensive much faster than the wires or waves get faster.

    Eventually, your CFD codes will be written in javascript, by someone who knows sod-all about floating-point anomalies. That's why you need 64 cores in your laptop. And a 5kg battery. And a pair of nomex chaps.

  47. Tom

    Don't forget Gates' Law

    "The pointless complexity of the OS will increase to nullify any progress in processor design."

  48. Anonymous Coward
    Linux

    Programming can't be more efficient...

    Until Moore's law does end. Programming labor is expensive, and as long as it is cheaper to write shit code and buy better hardware, code will be shit.

  49. Eddie Edwards
    Boffin

    @ Nick

    @ Nick

    "it's a wonder that anybody programming the x86 chipset has any sanity left."

    I retained my sanity by switching, first to C, then to C++.

    BTW the 80s called. They want to hire you.

    @ Tom

    Precisely the bullshit I was talking about. I forgot the word "singularity" though. There's nothing well-informed about someone who extrapolates an exponential curve to infinity. (Sorry, you were talking about sci-fi. Did you know that some people take this stuff seriously?)

    @ Red

    It absolutely is a hardware constraint. Game designers have wanted destructibles for years but the power isn't there to do it. Current next-gen physics engines can stack about 3,000 blocks by using the entire Cell, and the FPS will vary wildly as the simulation progresses. To just make a house out of bricks that you can knock down would take well over 10,000 blocks. To run that at a constant 60Hz ... and to have a village or town of such houses ... no, we need more processing power.

  50. Charles Manning

    Moore's Law also means the same for cheaper

    You can ride Moore's Law both ways:

    * The PC way: People are prepared to spend $1000 on a PC and will get more CPU/RAM.

    * The embedded way: Microcontrollers are getting cheaper and cheaper.

    We're already seeing sub-50c microcontrollers used in all sorts of toys and other applications. Coding for a micro which only has 512 bytes - or even zero bytes - of RAM is challenging, but is required if you want to be able to build that hair dryer within budget.

    In embedded space there is also another reason for efficient code: bloaty slow code needs more cycles, which means more power consumption. That results in more expensive batteries.

    Thus far, Moore's Law has been the "get out of jail" card for crappy (inefficient) programming. A doubling in CPU speed (adjusted for RAM etc) should give you better than a doubling in program execution speed, but nope - instead the extra speed just gets soaked up by bloat.

    Anyone remember Turbo Pascal? Now that was some fast software! It compiled at thousands of lines per second on a 386 and tens of thousands of lines per second on a 486. Why can't all software be like that? I have not fired it up on a more recent machine, but it should deliver something pretty amazing.

  51. Anonymous Coward
    Anonymous Coward

    why are PCs still so slow then?

    The simple answer to that is the bloatware we call Operating Systems, with backwards-compatible crap that hasn't been used in 10 to 15 years, and also the speed of data transfer of hard drives.

    What REAL improvements have we had in hard drives in the last 5-7 years? Average transfer speed and seek times have only modestly improved but capacity has increased incredibly (along with HD failure rates). Compare that to CPU, GPU, and memory speeds... well, there is no comparison. Hard drives can't keep up and they haven't for years!

  52. Anonymous Coward
    Anonymous Coward

    @nobby

    "I don't have to buy a new chair just because Cushion 2.0 doesn't fit...."

    Not met my wife then...

  53. Anonymous Coward
    Joke

    @ Charles Manning

    Compiling a million lines a second sounds nice, until you realize it's still Pascal...

  54. Anonymous Coward
    Thumb Down

    the multi-core PC con

    >The market always dictates the ultimate course of technology.

    BS.

    The market wants faster chips, not more chips or cores. Who uses several apps concurrently that can all saturate the CPU? Who runs multithreaded apps? Only specialised professionals (and gamers, but they have dedicated GPUs for that), regardless of the author's wishful thinking.

    The rest (99.99%) of the PC market wants fast single-threaded processing, for fast boot and snappy windows, fast spell check, etc. Give me an 8GHz, even a 4GHz, single-core CPU any day over a 2GHz quad-core. No doubt more apps will come that will require more power (accurate voice recognition or OCR, where are you?), but the bottom line is multi-core CPUs are shoved onto us because chip manufacturers take the easy way, not because they are wanted or even useful. Multi-threaded apps are difficult to write, and even more difficult to tune. Fast single-core CPUs would be the preference of users and developers. OK, maybe 2 cores are alright, in case an app is CPU-bound and the OS can't do multitasking properly (You know who you are...), but we don't need more cores. We need more speed. And we are all waiting....

  55. jon

    AC 28 Nov 03:20 is right...

    What all CPU engineers need to start working on is a multicore design, that will ALSO include some way to take the simplest program, and spread the load evenly across the cores...

    Is this possible??? this will then enable all the basic stuff to run faster, without complex expensive stuff...

    'The market' is tied up by the salesmen, who keep pushing the numbers... they just take the clock speed, multiply that by the number of cores, giving a nice big number to impress Joe Average... successfully, due to large sales pushing the price down, and single cores either disappear, or become 'uneconomical' due to low sales, and lack of boards that will take them!!

    - then he gets that home, and gets even more dissatisfied at the real speed...

    you may say different, but try walking into PC World and asking for a Pentium 1.. even DABS has none... and I would not want second-hand stuff...

  56. Marian Csontos

    It is obvious. Or is it?

    I thought the impossibility of infinite exponential growth was obvious to everyone but economists and EU politicians with their infinite GDP growth, stock exchange players expecting shares to grow by 5%+ every year, and people expecting interest on their savings in banks to be higher than (inflation + constant).

    How many cycles will adding cores last? 5, 10, maybe 16. Certainly not 32.

    Fortunately there is enough crapware out there that even 100 years is not enough to fix it, and certainly there will be more in years to come. ;-)

    Marian

  57. Singlewhip
    Alert

    At the risk of being considered a self-promoter...

    I've been blogging about this for quite a while, preparatory to writing a successor to "In Search of Clusters" about the issue. See http://perilsofparallel.blogspot.com/ .

    Servers are no problem. They'll just get smaller and more efficient. They use huge numbers of cores already, just in separate machines. Virtualization rules.

    Clients are the problem, and they're a big one because they have the combination of volume and high price that funds a good part of the industry. Most microprocessors are $5 units, like the one that runs your dishwasher. Intel gets multi-hundreds of $, sometimes $1000s, per chip, and AMD too, for first-run chips.

    And programming... see my posting about 101 parallel languages, all current, absolutely none of them in use.

  58. Singlewhip
    Unhappy

    Oh, and by the way...

    John Cocke was the person who got the ACM Turing Award for inventing RISC architecture. See http://awards.acm.org/citation.cfm?id=2083115&srt=alpha&alpha=&aw=140&ao=AMTURING

    Dave Patterson is a great guy, a really smart guy, an acquaintance of mine, and a great namer; He came up with the term RISC. But he's not the original daddy. That's John Cocke.

    This misunderstanding brought to you by (*humph*) zealous IBM security, fostered by people in IBM Research who thought keeping it secret made it seem more important. (The mainframe guys weren't buying that, but that's another tale.) But the ACM got it right.

  59. Anonymous Coward
    Boffin

    Niagara nonsense @Matt Bryant

    I've only just read this falsehood from Matt Bryant:

    "This failure to ramp up the infrastructure is perfectly demonstarted by Sun's Niagara chips, where they have effectively given up on the idea of keeping a core spinning and instead settled for having lot of cores idle and waiting whilst a few work"

    This is precisely the opposite of the truth; the Niagara chips use many thread contexts to keep the cores busy while some threads are waiting for memory. For applications like webservers, the impact is dramatic, e.g., Zeus:

    http://www.zeus.com/assets/default/Site/en/images_user/image/Zeus_Price_Perf_Grph24_11_2008.PNG

    To do this, they have more memory bandwidth (including a crossbar on chip) than typical CPUs because they effectively transform a latency problem (individual threads waiting for memory access) into one of bandwidth (lots of threads accessing memory while some are executing).

    The result is that individual thread performance isn't great, but for workloads comprising many threads or processes the throughput is much greater than anything else around right now, simply because so little hardware is idle.
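
    If it helps, here is a crude toy model of that latency-into-bandwidth trade, with OS threads standing in for hardware thread contexts and usleep() standing in for a memory stall (an illustration only; it bears no resemblance to the actual silicon):

    /* More contexts, same total number of stalls, less wall-clock time.
       Build with something like: cc toy.c -lpthread */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/time.h>

    #define WORK_ITEMS 64                 /* total "memory accesses" to get through */

    static void *worker(void *arg) {
        int items = *(int *)arg;
        for (int i = 0; i < items; i++)
            usleep(10000);                /* pretend each access stalls for 10 ms */
        return NULL;
    }

    static double run(int contexts) {
        pthread_t t[64];
        int per = WORK_ITEMS / contexts;  /* share the stalls out evenly */
        struct timeval a, b;
        gettimeofday(&a, NULL);
        for (int i = 0; i < contexts; i++)
            pthread_create(&t[i], NULL, worker, &per);
        for (int i = 0; i < contexts; i++)
            pthread_join(t[i], NULL);
        gettimeofday(&b, NULL);
        return (b.tv_sec - a.tv_sec) + (b.tv_usec - a.tv_usec) / 1e6;
    }

    int main(void) {
        int counts[] = { 1, 4, 16 };
        for (int i = 0; i < 3; i++)
            printf("%2d contexts: %d stalls covered in %.2f s\n",
                   counts[i], WORK_ITEMS, run(counts[i]));
        return 0;
    }

    The stalls overlap instead of queueing up, so throughput rises even though no individual thread gets any faster - which is exactly the trade described above.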

  60. Julian
    Coat

    Back to the mid-80s

    So, let's get this straight - Patterson, who kick-started RISC architectures in the early 80s, is talking up new paradigms of parallel processing, a hot topic from the mid-80s.

    Thing is, we solved it in the mid-80s, with the INMOS Transputer. INMOS was therefore sold off by the Tories as soon as possible. The Transputer was pure genius since it was able to easily map programs that ran internally on a simulated multi-processor to an actual multi-processor environment: so the language encouraged parallel programming and it scaled from 1 to 1000s of devices.

    Let's do a bit of Math. The early Transputers ran at 20MHz (giving 20 simple MIPS of performance) and probably had about 100K transistors in them each with at least 4K of on-chip RAM (+off chip too). In 1989 I ran my dissertation project on a 9-transputer rack giving me: 20*9=180MIPS of performance.

    Let's scale that by 2 decades. Instead of 20MHz we have 3GHz ( x 150) and instead of 100K transistors we have 2 billion transistors (x200). That's equivalent to 20*150*200 = an astonishing 600,000 MIPS of performance / Transputer (with an internal memory equivalent to 800K). My equivalent transputer rack would have 4.9TIPS of power!

    Instead we decided to base the future of computing on the (literally) back-of-an-envelope design which has set us back 20 years. I'll grab my coat.

    -cheers from julz @P

  61. TMS9900
    Coat

    Great comments

    Really enjoying this stuff.

    A few things I would like to add to the mix, in no particular order:

    1) IMHO, one of the biggest problems is the skill sets of the 'new generation' of programmers. I had a graduate who apparently was a Java guru. Yet he had no concept of ASCII. He did not understand *how* toLower (or LCase in VB) *actually* worked. To him, it was just 'magic black box' stuff.

    2) Given the above, if we gave that graduate, say, a 40 core Intellasys processor (which are available now, off the shelf, yes, *FORTY* cores), what would he do with it?

    3) All of the above does not mean that this graduate is thick/stupid/whatever. Actually, he was really bright, and has gone on to do well. However, the standard of his degree course at university was appalling. Until we can get back to 'brass tacks' in the educational side of things, we are not going to produce people with the *knowledge* (note: not talent; you are born with talent) to take the latest multi-core processors and do something truly radical, and ground breaking with them.

    4) One day, I got two graduates together. I put the following to them:

    "We need to build a computer system that can control a radio telescope. A big huge fucking radio telescope. Not only will it control the movement of the dish in real time in order to track moving objects in the sky, it must also gather the data received from the telescope and store it so that it can be reviewed in real time, online, by multiple users at the same time. Furthermore, the data should be stored historically and available for instant recall so that comparisons can be made with older data. All this, while ensuring that the telescope is moved efficiently, without burning out the motors in the drive gear. What do you suggest?"

    They came up with credible solutions, none of which were wrong particularly, and were a reflection on modern programming/system analysis thought trends...

    "Well, we'll use a few computers... One for an SQL database, one for tracking the telescope, and one for viewing data."

    "Ok, great. But that's an awful lot of processing power. How will they communicate with each other?"

    "Using XML over a LAN."

    "Yes that will work. But if you use XML, you will need an XML parser, and code to package your data into XML packets - some sort of object model..."

    "Yes, we will abstract each item of data into objects, these can remoted over the LAN using SOAP."

    "Ok, its sounding pretty cool. XML is really only useful though when you need to share your data with third parties, where it needs to travel through firewalls, and be parsable by another machine that may not necessarily be running the same platform as you. We're talking about a system that is self contained, connected via a switch. Couldn't we just use sockets and our own protocol? Wouldn't that be much more efficient?"

    "Well yeah, but, that would be difficult..."

    Then I leave them goggle eyed when I say, "actually guys, I'm pulling your chain. This problem has already been solved. In 1971. By Chuck Moore. He did the whole thing on one PDP-11 with a single disk drive and 32K of RAM."

    Sometimes, I really do think we've gone backwards.

    Mine's the one with the "Threaded Interpretive Languages" (1981) book in it. Sometimes we should go back and read the old stuff, lest it be forgotten. It might teach us something.

    Mark
