It's nice to read an article about something that doesn't make me feel stupider at the end than I felt at the beginning.
The oft-cited Moore's Law is the fulcrum of the IT industry in that it has provided the means of giving us ever-faster and more sophisticated computing technology over the decades. This in turn allowed the IT industry to convince us that every one, two, or three years, we need new operating systems, better performance, and new …
Wouldn't it be nice to be in an industry where things don't become useless two years down the line?
Why should I have to change my computer every two years just to be able to run the new point-release of bloatware?
Looking into the car park here, there are some cars that are ten years old, and they seem to work perfectly well. Looking in the server room, there isn't anything anywhere near as old as that. Looking on people's desks... nope, most computers are newer than a director's car; they have to be, to keep running the software. I don't change my washing machine every two years 'cos the old one's too slow, I don't change my TV every two years(*), and I don't have to buy a new chair just because Cushion 2.0 doesn't fit....
(*) although, the way TV technology is going at the moment maybe I should be..
I'd welcome an engineering ceiling that meant that the next version of software would have to be made More Efficient instead of just Prettier. I'd welcome the ability to run a new game on full-res on my PC rather than having to upgrade something (memory/video card/mouse mat).
Surely technologies follow a curve of improvements, with the occasional bump here or there? We've been lucky to live in the exponential part of the Chip Improvement Curve; I assume we're entering a gentler slope, so maybe it's just time to chill, relax and write some EFFICIENT AND FAST SOFTWARE for a change.
Anyone who uses this meaningless cliché should be shot. Otherwise, some good points there...
Sounds more like people were caught up in the clock speeds of the chips rather than actual data throughput, and in how far PC tech has been outpacing software development. Software pretends to keep up by filling itself with useless functions and features in its race to look like it's "keeping pace", when in reality average people just don't need the extra speed (there are no useful pieces of software that need it) and don't need bulky new software when the "older", streamlined versions work just fine for their purpose.

With all of the applications and games that have been coming out recently, it seems like they are made just to show off the newer hardware (like most newer console games), without actually demonstrating its usefulness or giving a good reason to constantly dish out tons of cash to upgrade for one or two applications. Things like WoW will never be worth $4K to me, and though the few who do dish that kind of money out are loud enough to look like major consumers, they are in fact extremely few. I'd be willing to bet money on there being at least 100x more budget PCs sold than high-end ones, maybe 1000x. Having worked on the end-user side of the IT industry, I can say this has been the trend since budget PCs were thought up.

I'm all for people being able to buy supercomputers if they want, but they shouldn't be the only focus, as most of those advancements are useless to the average consumer for what seems like decades. When a new version of Windows demands the extra power, it's a strained attempt at making that power seem necessary, while at the same time soaking it up with things the user doesn't even want running. Like Vista: the bells and whistles have gotten too big, for bigness' sake. I don't want to "need" multiple cores when I do the exact same things on a single one just fine.
I'd like to think Patterson was just being kind to the industry, being a part of it and all, and if he was really honest he could have summed it all up to the execs of the software/hardware companies by just asking, "WHY? Do you guys even know what your companies do? Because no one seems to have any focus on goals other than to keep making money faster and faster. What do YOU want to use a computer for anyway? Let's not skip the middle steps: if you build it they won't just come, and you can't just market to the top 5% of PC spenders, because by the time your computer can actually run the "top of the line" games/programs, you're telling them to upgrade again."
Eventually they're going to make more and more products people just don't need or want, because useful software is not demanding the extra power, and should it always have to? Most software makers, especially the smaller ones working for contract companies writing custom software for businesses, have only gotten worse at coding, because the computers are so fast they are oblivious to how bad their code is, so their code does not get fixed often enough. "If it compiles, it's perfect" is the philosophy of too many of them. Look at MS Windows! BLOATWARE
Being in IT, this Moore's Law topic has always annoyed me: talking-point, PowerPoint execs and their lackeys are making the industry a pain in my ass with all their BS, holding beliefs about how things work based on absolutely no facts, because there are just too many factors in this industry for their minds to wrap around, and the programs they have made to do the thinking for them are crap as well.
But what do I know? I only use the stuff constantly, and have supported other users for 12 years now. Perhaps I just need to kiss the hardware/software manufacturers' asses and just screw the people; that seems to be the only way to get a job nowadays.
Anyway, since multiple CPUs were first introduced to average consumers, it has been apparent that if the hardware can't juggle the processing between chips on its own, it will NEVER go mainstream. The introduction of multiple cores without the ability to juggle the load between cores as though they were all one big processor, while at the same time peddling it to the common user, was like a kick to my nuts. It's like MS saying Windows is faster because secretly, every time you hover your mouse over an app, it starts loading it, even if you don't want to run it, just so that when you DO double-click it pretends to load faster than the competition. The difference being, in the multiple-core case, programmers generally are paid a fortune to be able to program for multiple CPUs, or shit themselves if they are told they need to learn.
But I'm just insane, don't mind me, obviously I hate money and capitalism and the rich and authority figures, and I LOVE terrorism and pagan gods that need blood sacrifice... obviously ;-P Or I might just be fed up with the BS. Nah, couldn't be that simple.
How can Moore's Law provide the means of giving us ever-faster and more sophisticated computing technology? Surely it is merely a trend observation that has held, more or less, so far?
Fascinating. And I'd love to see these new applications. But...
People will always want to write letters and add up columns of figures. So Word Processors and Spreadsheets, which Patterson dismisses, will still be needed in some form.
In fact I would guess that we will need all the current types of applications that we currently use -- however limited and annoying we find them -- for the next ten years or so.
And for computers running the majority* of these applications, processor speed is no longer relevant; it hasn't been for years. It's IO speed that makes things faster.
It seems to me that Moore's Law has become an excuse for an industry and so much marketing hype. The vast majority of people are today being forced to buy computers which are much more powerful than they will ever need -- except for the fact that they are running operating systems which have bloated to require that power without any real payback for the user.
But of course, I would say that. Every computer in my house is an old recycled one running Linux.
(* not video editing or animation rendering, but pretty much everything else.)
Good summary of what we've known in the games industry for about five years (you wouldn't know it to listen to our "luminaries" like Carmack and Sweeney, though; they're still praying for salvation based on their old faith - which is hardly surprising given their legacy codebases, but still).
Effective large-scale parallel programs (e.g. games) seem to split in a recursive way. The top layer may be serial, then there may be a parallel layer underneath, then each part of the parallel layer may be serial again, with each serial part implemented by a parallel algorithm. Thus can we divvy up hundreds of cores within a single program. An example might be:
Serial: Calculate each output frame of the game in turn
Parallel frame: Run simulation and rendering in parallel
Serial simulation: First calculate physics, then run AI
Parallel physics: Break the sparse matrix into chunks across many processors
Serial chunks: Run a standard single-threaded algorithm to evaluate a chunk
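That recursive serial/parallel nesting can be sketched in a few lines of Python. This is purely illustrative: the function names and the sum-of-squares "physics" are invented stand-ins, and a thread pool stands in for real cores.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_chunk(chunk):
    # Serial leaf: a standard single-threaded algorithm on one chunk.
    return sum(x * x for x in chunk)

def run_physics(matrix_chunks, pool):
    # Parallel layer: fan the matrix chunks out across workers.
    return list(pool.map(evaluate_chunk, matrix_chunks))

def run_simulation(matrix_chunks, pool):
    # Serial layer: physics first, then AI, in a fixed order.
    physics = run_physics(matrix_chunks, pool)
    ai_state = max(physics)  # invented stand-in for the AI step
    return physics, ai_state

def render(frame_no):
    return f"frame {frame_no}"

def run_frame(frame_no, matrix_chunks, pool):
    # Parallel layer: simulation and rendering side by side.
    sim = pool.submit(run_simulation, matrix_chunks, pool)
    img = pool.submit(render, frame_no)
    return sim.result(), img.result()

def game_loop(n_frames, matrix_chunks):
    # Top serial layer: frames are computed strictly in turn.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return [run_frame(i, matrix_chunks, pool) for i in range(n_frames)]

results = game_loop(2, [[1, 2], [3, 4]])
print(results)
```

Note how each layer only knows whether its own children run in series or in parallel; the divvying-up of cores falls out of the nesting.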
Unfortunately, the "productivity programmers" don't have a clear layer that belongs to them. Domain-specific knowledge is inserted at all levels.
We want to avoid having the domain guys think too hard. Much customization (insertion of domain-specific knowledge) can be to do with data structures and "shaders" - small pieces of functional code which do raw data processing. But they will sometimes need to understand the parallel context this stuff runs in. The aim is to make this "sometimes" be as infrequent as possible, but this in itself requires the domain people to change their paradigm e.g. they can no longer just call out to other objects at will, because that doesn't scale. Their job still gets harder. I don't see how to avoid this.
Ironically, the lowest-level performance programmers still need to write the best single-threaded code for a given task. They are the only ones to escape the paradigm shift, despite being the best people to take it on board.
We don't 'need' to upgrade our computers every 5 years and our OS every 3 years,
Programs written now aren't just out of date, but un-runnable 10 years later,
We can stop wasting resources by constantly running after the moving goalposts of performance.
Efficiency is completely out of the software development process equation at present. It started with the Unified Model, which threw the baby out with the bath water and removed performance metrics from the standard requirement-gathering techniques. The agilistas took this even further, and we are now having to deal with a whole generation of professional programmers who have very little understanding of efficiency. In fact the so-called "professionals" do not want to understand efficiency; it irks them quite badly. So the only people who still strive for it are hackers, or the people with grey ponytails and sandals. So I do not quite see where he is going to find his "efficiency programmers". Fewer than 1% of those are left around, and their number is decreasing as the universities teach "Java" and "Agile skills" instead of data representation and search algorithms: http://www.joelonsoftware.com/articles/ThePerilsofJavaSchools.html
So, with all due respect, Mr Patterson thinks of the software industry in terms of what people could do and did when he was young. That software industry is dead. "Improvements to processes" sponsored by Ericsson and Chrysler killed it. R.I.P.
On a side note: using multithreaded brute-force search to optimise code is like shooting a mouse with a Grad multiple missile launcher. Yeah, fine, in some cases you will kill the mouse... and everything in a mile radius... However, you are not even guaranteed to hit the mouse on most occasions. Or if we drop the analogies: engineering rococo is no replacement for mathematics. Never was, never will be.
...to wring out every ounce of performance from the hardware that is no longer getting faster.
Or from writing the *new* multi-threaded applications required to get the performance out of the new multi-core architectures.
I've been writing this stuff in posts on this and other sites for a couple of years now. The key is economics - as the computing market has matured, the advantage gained by investing in faster CPUs has considerably diminished. Beyond the technological limit of the current CPU structure of semiconductor transistor switches, there is the limit of consumer demand.
The biggest growth sector in the computer world has been the laptop, and that means efficiency and lower temperatures are the areas to invest in. Do you want a 3.6GHz quad-core processor in your Eee PC? Nope. Also, the biggest established sector of PCs is the computer as business / home workstation. How much processing power do you need to run a few office apps at work or surf for pr0n at home? A 2.6GHz Intel dual-core processor is more than enough. If you want a faster general-use computer, buy a solid-state hard drive and a Gbit network setup, because booting up your PC, accessing data and moving it around networks are by far the slowest parts of modern computing.
So it is that my prediction of a couple of years ago has come to pass - the demand for top end CPUs has collapsed. Parallel computing is obviously the future as far as Intel and AMD are concerned... but the demand will be in specialist computers such as gaming PCs and consoles, number crunchers and system modelling machines in academic research and in servers that have to deal with big databases and the like. For the majority of the market, the most value for money research will be in reduced power consumption.
What this means for the industry is hard to predict, but we could draw a comparison with car engines. For the first part of its development, improving the power output of the engine was a significant part of advancement in car technology. Providing greater horsepower was more important than most other factors... until we all got engines that were about as powerful as we needed. Since the 90s, most R&D on petrol engines has been on greater reliability, fuel efficiency and tune-ability. Most research has gone into getting the more efficient diesel engine (the equivalent of the laptop CPU) to be more like a petrol engine in its performance. The basic acceleration of the typical car has not significantly improved; what has changed is that car types once low on acceleration have caught up with the faux-sporty saloon cars at the top end of the mainstream family car sector. Have car companies stopped investing in car R&D? Quite the opposite, and as car engines became less important, other factors like safety, aerodynamics, in-car entertainment etc. have more than compensated. There will always be investment in CPU research, but it will no longer be targeted at the mainstream PC; instead, what we get in our home computers will be a side-effect of what is developed for other uses.
So will processor researchers lose their jobs? Well, that comes down to the growth of the technology outside of the stand-alone computer. Parallel computing offers significant advances, but will they be ones that society wants? If on-line applications take off, then I'd say 'yes'. If they don't, I'd say 'no'. After all, current engine technology would allow me to put 1,000 horsepower in my car, but the 1,000-horsepower car engine will never become standard, even if cars are still being used in a thousand years' time, simply because there is no use for all that power. Parallel processors have to demonstrate that they offer the consumer something they need.
People already just buy new computers when the old one dies... they're 300 quid to replace. And most of that sort of system gets a significant upgrade as the purchase crosses the dual/quad-core spec change of the last 2-3 years, most 'old' PCs being from around the P4-with-Hyper-Threading era. The purchaser gets a medium-spec system from today's retail, watches the performance jump massively, and yet still just does the same old stuff... shopping and pr0n.
As for the software-writing layer, it's been getting lazy for years as a way of driving the upgrade-requirement quotient. Just look at the amount of bloat that ships with what was once a lean code-base. It's not value-add when you stick features that no-one cares about into a program that should just do the one job really well, and was designed to do it on far less capacious systems.
And this guy gets paid for this? People listen to him because he's done cool stuff previously... fine enough.
Chefs are only as good as their last meal. This guy openly admits he (and a massive chunk of the industry... Herd Science in action) got it wildly wrong a few years ago, and whole financial/corporate systems based their predictions on that path.
And they're still employing him?
You know, out here in the real world? Caring about regular upgrades is best left to the GPU adolescents - I replace my computer only when it breaks.
I think most people replace products only when a new one is tangibly better; shouting numbers at them isn't persuasive. If anything, the market is moving towards one where netbooks are free with mobile phone contracts and 99% of people don't care about computers beyond that. The IT industry has a lot of shrinking to do.
Moore's law has always puzzled me, in that the natural conclusion would imply that at some point in the future the bottleneck of processing would be the speed of light (I am sure some mathematician out there could predict when this should occur according to Moore's law?).
A more reasonable explanation would be that the increase in processing power is an exponential curve and although Moore’s law can be observed over a number of years, this is only a small part of the picture.
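For the curious, the back-of-the-envelope sum is easy: even in vacuum, a signal covers only about 10 cm in one 3GHz clock cycle, so signal propagation alone bounds how physically big a fast chip can be. A quick check:

```python
# Distance light travels in one clock cycle at various frequencies.
c = 299_792_458                    # speed of light in vacuum, m/s
for ghz in (3, 30, 300):
    hz = ghz * 1e9                 # clock frequency in Hz
    cm_per_cycle = c / hz * 100    # centimetres covered per cycle
    print(f"{ghz:>3} GHz: ~{cm_per_cycle:.2f} cm per cycle")
```

Signals in real silicon travel well below c, so the practical limit bites even sooner.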
(Because I have to work with them!!)
I'm in nanotech, in an area where all research justifies itself by vague aspirations to have impacts in quantum computing. Every talk to every audience seems to begin with the Moore's Law plots, as a justification for a new paradigm, to continue the trends beyond the small-transistor, high-heat-concentration limit. This is in itself a bit of smoke and mirrors, as QC won't necessarily be faster, or indeed practicable, for all problems. The thing that interests me most is the absolute faith that Moore's Law can and must continue, with the almost faith-based (I really liked that observation) assertion that we must be able to nail QC, because otherwise Moore's Law will collapse, and that that is somehow unimaginable.
128-core work PC: would you like me to write your annual report for you?
PC: anything else?
User: please boot up my favourite three photorealistic virtual Facebook girlfriends....
Until this is achieved the IT industry has no worries!
"A more reasonable explanation would be that the increase in processing power is an exponential curve and although Moore’s law can be observed over a number of years, this is only a small part of the picture."
In fact Moore's Law predicts an exponential curve. The correct explanation is almost certainly that the curve is ogive in shape - that is, S shaped. Every other growth curve is. (This is why all the predictions of "the spike" or "transcendental human experience in our lifetimes" are such utter bollocks.)
Also, Moore's Law is related to feature density, not processing power, performance, or anything else. There are fixed limits to feature density (for instance, the diameter of a silicon nucleus). It obviously won't last forever.
@ You ain't gonna need it
I'm running radiosity precomputes, massive builds, raytracing, pathtracing, etc. just to make a single game level look nice. Others do this stuff just to generate a single image. Iteration time is king in the creative industries. We need to do stuff faster. Computers aren't only for running Word. If that's all you care about, the £300 computer will always suffice. As would a piece of paper and a pencil.
Problems grow to fill the available processing power. Just look at Vista :p
"What happens to the IT industry if the performance improvements stop?"
Why, you get a No-Moore law instead.....
What I get from this:
Not only is the single-threaded way wrong, but the problem the programmer has is itself wrong. He should switch or embellish his problems so that they are parallelizable.
Parallelisation might be fine for niche workloads, eg supercomputing, but for the vast majority of stuff that happens on the vast majority of computers (PCs? PDAs? Phones?) the vast majority of the time, it adds very little value. Webservers, probably. But they're a special case too. As a general rule, the problem domain where parallelisation adds value is nearly negligible (but not quite, otherwise there wouldn't be SC08 reports dribbling through El Reg for weeks...)
I know this, as does anyone else who remembers when parallelisation used to be called symmetric multiple processing. I remember Parallel C, and Parallel Fortran. Heck, I even remember Transputers and Occam, and ICL's Content Addressable File Store. Of course there wasn't an Internerd or a Wikipeedhere in those days so none of this actually happened did it.
Parallelisation does make software and system development more complex, and most current single-threaded software barely works as it is. Parallelisation will make that worse, as although it isn't *really* that difficult a concept, finding someone who actually remembers (let alone understands) concepts like "communicating sequential processes" (which is what a set of parallel tasks boil down to) is increasingly difficult.
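For anyone who's forgotten what "communicating sequential processes" actually look like, here's a toy sketch in Python, with thread-safe queues standing in for Occam-style channels (illustrative only; real CSP channels are unbuffered and synchronous, which `queue.Queue` is not):

```python
import queue
import threading

def producer(out_q):
    # One sequential process: emits values, then a None sentinel.
    for i in range(5):
        out_q.put(i)
    out_q.put(None)

def doubler(in_q, out_q):
    # Another sequential process: no shared state, only channels.
    while (item := in_q.get()) is not None:
        out_q.put(item * 2)
    out_q.put(None)

a, b = queue.Queue(), queue.Queue()
threading.Thread(target=producer, args=(a,)).start()
threading.Thread(target=doubler, args=(a, b)).start()

results = []
while (item := b.get()) is not None:
    results.append(item)
print(results)  # [0, 2, 4, 6, 8]
```

Each process is trivially sequential; all the parallelism lives in the wiring between them, which is exactly the discipline Occam and the Transputer enforced.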
Like so many, you seem to misunderstand the purpose of Moore's Law. It is not intended to be a predictive device: rather it is intended to keep the development teams synchronised.
Think of it like this: you have one team in Intel that is working on obscure chemistry, one on process engineering, one on circuit design, etc. Each of these is basically working 'Just In Time'.
The chemists need to produce something that the engineers can use, when it becomes the right time. The engineers need to be able to produce a new process when the circuit design is ready. Moore's Law allows them all to predict where they will be in one, three, five years time and allow all the pieces to fit together.
That is also why Intel has been able to stay ahead of its competitors, even though almost everyone else has some advantage, at one time or another.
Moore's Law will still happen because the main drivers for the chip companies are still corporates, and it is money spent on research there that trickles down into desktops and eventually home PCs. I have never met an FD who - when asked if they would like to be able to compute their payroll run, or end-of-year figures, or just daily financial activities, faster - answered anything other than "how much?". So, whilst big bizz keeps pushing for faster systems, chip manufacturers will design faster CPUs, hence Moore's Law will remain valid as shrinking chips to make them go faster for single-threaded apps will still be a valid design solution for a while, as it's just simpler than real parallel threading.
I would personally like to see more work on the infrastructure supplying data to the chips: storage, memory, and interconnects. Having a rocket CPU upgrade without upgrading the others just means you have a faster CPU spending more time waiting for data than before. This failure to ramp up the infrastructure is perfectly demonstrated by Sun's Niagara chips, where they have effectively given up on the idea of keeping a core spinning and instead settled for having lots of cores idle and waiting whilst a few work. Intel's approach (and IBM's) has been to massively increase cache and bandwidth to the chips to try and keep them as busy as possible, so the increase in clock cycles is not just an increase in time spent waiting.
The other area of growth is still scale-up, by coding apps that may not be properly parallel but still spread threads across as many cores as possible. UNIX can do this, Windows can too (on Itanium at least), and Linux is not far behind, so it is not beyond the pale to foresee quite soon versions of desktop Windows and Linux happily making use of sixty-four cores in a single CPU; it's just the apps that will need the most work. And to go back to my FD example, if I asked my FD today if he'd like a PC that would run Excel faster by splitting the load over sixty-four cores, I'm sure his answer would be: "if the cost is right, yes please!"
My year 2000 High End NT4.0 laptop is horribly limited and slow.
My April 2002 laptop outperforms most new laptops under £500, and the original XP and Office are still on it. The latest versions of Windows & Office run more slowly on more powerful laptops.
It's a 1.8Ghz P4 desktop CPU based laptop. I could have paid a premium to get a 2.2GHz CPU. But that would eat more battery.
Unless you are encoding, 3D rendering, decoding HD in software, etc., you don't need more CPU, and the memory, motherboard, HDD and screen are not much faster on a new laptop nearly 7 years later.
The examples in the article aren't needed by most people. Current applications could be written better rather than more bloated and slower on each release.
As long as it still runs my holodeck adventure program why should I worry?
The answer is: lots of people. Fluid-dynamics modellers like myself dream of workstations with 100+ cores. But explicit, MIMD-style parallel programming is a lot harder than serial programming; take your one bug every 10 lines of code, and that will drop to one every 5.
SIMD would be the best approach: serial code controlling the flow of the program, interrupted by blasts of lightning-speed, implicitly parallel directives when needed. (This is precisely what CM-200 programs looked like, and very similar to NVIDIA's CUDA.)
The problem is re-expressing your problem in a way that can be easily parallelised. My job's easy here; matrix-mangling was made for this.
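A toy of what I mean, in Python, with a thread pool standing in for the data-parallel hardware (sketch only; the point is the shape, not the speed): a serial driver that periodically fires the same side-effect-free kernel across every row of a matrix.

```python
from concurrent.futures import ThreadPoolExecutor

def row_dot(args):
    # Data-parallel kernel: computes one output element, no shared state.
    row, vec = args
    return sum(r * v for r, v in zip(row, vec))

def matvec(matrix, vec, pool):
    # One "blast" of implicit parallelism: same kernel over every row.
    return list(pool.map(row_dot, ((row, vec) for row in matrix)))

matrix = [[1, 2], [3, 4]]
vec = [10, 20]
with ThreadPoolExecutor() as pool:
    # Serial control flow: step 1, then step 2, in strict order.
    y = matvec(matrix, vec, pool)     # [50, 110]
    z = matvec([y, y], [1, 1], pool)  # [160, 160]
print(y, z)
```

Matrix-mangling parallelises this way almost for free, because each output element is independent of every other.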
How can everyone keep saying "pah, why do we need faster processors to run office and internet apps."
Sounds suspiciously like "640KB should be enough for anyone." If people were always happy with existing technology we'd still be writing all our documents with vi or edlin. There is no real way to predict the killer apps that would be supported by faster hardware (although the automatic meeting-minutes generator sounds good to me), and honestly, twenty years ago wouldn't you have said you were happy with DOS and didn't need a graphical environment?
It is an interesting article but it is based upon an assumption.
Moore's law could be read as saying that performance doubles every year or so, but it could equally be read as saying that cost halves in that same time frame.
Which is it? And how does it alter the thinking?
It is both or either.
You do not have to just assume that performance doubles for the same cost. You can assume the price drops in half for the same performance.
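Either reading compounds the same way; a quick sketch of the arithmetic (assuming the commonly quoted doubling every 18 months):

```python
# Compounding the same assumption two ways: faster at constant cost,
# or cheaper at constant performance.
years = 10
doublings = years / 1.5   # assuming one doubling every 18 months
factor = 2 ** doublings   # roughly 100x over a decade
print(f"Over {years} years: {factor:.0f}x the performance for the money,")
print(f"or the same performance for roughly 1/{factor:.0f} of the price.")
```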
Of course, that is not actually the case when you look at computer systems as a whole rather than simply the processing chip.
But, what does it mean?
You might put off buying that Netbook simply because they will be even cheaper within a year or so. Or, you might put off buying the whole rack of servers for the same reason.
And if you look at it from that perspective, most of what the article suggests is all but meaningless.
Instead you can focus upon the so-called digital divide, right? If systems keep getting cheaper you do not have to worry about changing software or parallelism. You only need to be concerned with making OpenOffice available to more and more people because they can afford the access. Forget Word. It is too expensive. But, open source software is readily available when the cost of hardware becomes low enough.
Isn't Amdahl's Law more relevant to this article?
Moore's Law? Why don't they just create a CPU instruction set that doesn't suck balls?
Anybody who's had the misfortune of using the x86 chipset, with both(*) of its registers, and an alternative (such as PPC) is likely to be fed up with the never-ending swapping of values around between registers and the stack just to do the simplest of operations using the x86 instruction set (quite apart from the sheer inefficiency). Compare this to the almost simplistic task of implementing the same operations using a better instruction set, and it's a wonder that anybody programming the x86 chipset has any sanity left.
* Not exactly true, but it feels like it :P
Ok, first of all there are some serious misconceptions about programming from A) those who never have or B) those who didn't live through the various eras.
I've been programming for 32 years now. My first program was written on a 1962 IBM 1130 that was 15 years old when I made my first feeble attempts to teach a machine.
When you have 8k of core memory, a card reader and a damn typewriter-style user input/output device, you learn efficiency. So I know how to write efficient code. Efficient for the *machine*, that is.
Not so efficient for the *user*, of course. Or the programmers who have to follow you later on and make changes to your oh-so-efficient program.
And I stayed in the game through the structured programming wars. And the object-oriented wars. And watched each and every fad that's come and gone since then.
Not one of the people complaining about bloat knows what that means. They *think* it means the program takes up too much room, runs too slowly, or has "useless" features.
I will admit there are some programs that truly are bloated. But not the ones most people consider. In fact bloated code tends to die quickly.
There are always tradeoffs in programming. What is simple efficient code for the machine is complex and hard to use for the user. The user demands a GUI? Fine. The GUI code is 10 times the size of the program underneath--and that's not bloat.
The GUI is too slow? It takes up 60% of the available resources? (Disk, RAM, CPU?) Tough. Could it be made more efficient? Perhaps, but at what cost?
1) Money. Got gobs, next?
2) A huge problem domain--GUIs are big because they *have* to be big. OK, got gobs of programmers because of gobs of money. Next?
3) Maintainability. Not only is the problem domain huge, it's *complicated*. It takes real expertise to write simple, elegant code that avoids obfuscation. Ok, hire *GREAT* programmers with said gobs of money. Next?
4) Maintainability and code efficiency more often than not are mutually exclusive. Increasing one *decreases* the other. Ok, got gobs of--wait, what?
Like any engineering problem software engineering is a study in trade offs. The old saw about "Good, fast, cheap, pick any two." is dead right.
So your program has to be:
1) Machine efficient
2) Easily maintained
3) Fit to purpose--for a widely diverse audience that can't agree on which features are useful and which are not.
4) Easy to use--ideally with NO training of the user.
5) Income generating so you'll be around in 3 years to keep doing what you're doing.
Name *ONE* program in the history of the world that meets all five of those criteria. Just one. Oh, and a choice everybody agrees with...
As soon as new manufacturing technologies come on line, everybody will forget this "parallel" nonsense. The fact is, silicon is on the way out, as microelectronics research already predicted at the end of the 1990s. Significant increases in clock speed require new materials and tens of billions of dollars in investment. Multi-core CPUs are just a way to squeeze the last drops of profit from proven, relatively cheap silicon tech.
And until the virtual environment of my games is real-world realistic, there will be no end.
As a gamer, I would expect to be able to drive through the side of a house with my tank, but there are precious few games that allow for it, and when they do, it's in a special environment.
I would expect to be able to knock trees over with my tank, but in most games not only does the tree stop me on the spot, it also actually assigns damage to the tank if I try.
I would expect a nice crater to mark the spot where a bomb fell, but there are hardly any games that do that. I would also fully expect the village, and possibly the entire map, to be totally devastated by the time the level ends, but there aren't ANY games that do that at the moment.
In most games, anything that is not a movable game object is, for all practical purposes, indestructible. Walls are impenetrable, roofs never get blown in, trees are there for all eternity. They are, structurally speaking, just as permanent as the ground.
Changing that is going to take humongous amounts of processing power, and probably lots more RAM than the average PC has today, as well as probably a totally new approach to modeling the virtual world.
And when we do have realistically destructible environments in games, just think of what we will be able to do as far as science and technology are concerned!
So we really do need to get there, and if a 64-core parallel processing environment is what it takes, then bring it on!
Laptops can now do what laptops need to do. It's time now to focus on making them do it all day on a reasonable battery, and the battery should lose weight every year.
I had to extract some files off an old PC the other day. It ran NT 4.0 on a 166MHz CPU with 64MB of system RAM. The 'user experience' was not a frustrating one, either. Never mind Vista or XP - you'd be hard-pushed to find a mainstream Linux distro that would run happily on such hardware these days. Is the Operating System (you know, that thing that lets you run applications and talk to the hardware) really doing so much more work just 10 short years down the line?
Don't forget the article says *what if*, not *it is*. I suspect quite strongly that Moore's Law will remain valid for some time, but remember it's about computing power, not cost or easy availability - the POWER6 chip is a good example. And while SOI is perhaps nearing its single-core limit, doped diamond, with its much better thermal profile, could theoretically give us 100GHz chips; even with no other changes, that's another 10 years flat.
So, on to multi-core/multi-threading: does anybody remember the days when MS Word didn't spell-check as you went along? When spreadsheets didn't recalculate their fields as you went along? Or, further back, when DTP packages weren't WYSIWYG? They all do now, because we have more power.
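That as-you-type spell checking is a textbook use of a background worker: the UI thread queues up edited text and carries on accepting keystrokes while a checker thread chews through the queue. A minimal sketch in Python (the word list and whitespace tokeniser here are toy assumptions, not any real word processor's internals):

```python
import queue
import threading

DICTIONARY = {"the", "quick", "brown", "fox", "jumps"}  # toy word list

def find_misspellings(text):
    """Return the words in `text` not found in the dictionary."""
    return [w for w in text.lower().split() if w not in DICTIONARY]

def checker(jobs, results):
    """Worker: pull text snippets off the queue and spell-check them."""
    while True:
        text = jobs.get()
        if text is None:        # sentinel value: shut the worker down
            break
        results.append((text, find_misspellings(text)))

jobs = queue.Queue()
results = []
worker = threading.Thread(target=checker, args=(jobs, results))
worker.start()

# The "UI thread" keeps going while the worker checks in the background.
jobs.put("the quick brown fox")
jobs.put("teh quikc brown fox")
jobs.put(None)                  # tell the worker we're done
worker.join()
# results now holds (text, misspellings) pairs in submission order
```

A real editor would debounce keystrokes and post results back to the UI thread, but the shape is the same: the expensive scan never blocks typing.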
The real question is: as Moore's Law gives us more power, is there also a downturn in new features? The differences between the early versions of MS Word were vast, but what new features have been added in the last 5 years? Almost none - some new file formats and an arguably inferior interface.
If Moore's Law dried up (and for this we'll have to exclude threading), then obviously programming would become more efficient; after all, you do what you can within the resources available (anyone remember 1K chess on the ZX81?).
On a slight aside, Moore's Law was originally about transistor count (stealing the thunder from the 1950s predictions of one of the UK's true heroes, Alan Turing), not processing power, so counting cores/threads is valid, rather than just raw clock speed. Moore also revised his original prediction; otherwise the law would have been proved wrong several years ago.
If you haven't already, go out and buy Charles Stross' Toast, and/or Accelerando, for some well-informed and thought-provoking ideas on where all this stuff is going. Thoroughly enjoyable sci-fi written by a geek, for geeks. This message has been brought to you by the Charles Stross Marketing Board.
It's not necessarily a hardware constraint that prevents destructible game environments, but the imagination of the game makers. Indestructible environment features are a simple way of ensuring the player follows the intended plot, with the added bonus of saving the developers from having to think of what to put behind the walls.
But I agree, it's bloody annoying when your battle-hardened, navy-beret special marine SEAL can scale a 100m sheer cliff face, only to be thwarted by a knee-high picket fence!
Huge on-chip caches are good at keeping the transistor count high, and doing so easily, and *sometimes* with good reward.
Chips like IA64, with their "VLIW" heritage, *need* huge caches to get any reasonable kind of performance. Today's x86 gets broadly similar performance to today's IA64 on most problems without needing as much cache, partly because the x86 code for any given problem is more compact, partly because the 32bit data for any given problem is more compact (than the 64bit, e.g. IA64, equivalent), and partly just because bucketloads more money and time have been spent on making x86(-64) kick a** (hardware and software) than have been spent (or will ever be spent) making IA64 kick a**.
@Joe Harrison: At least I wasn't arguing that more powerful computers won't be able to do better things that I either haven't thought of or have idiotically written off, just that computer functionality has plateaued in recent years; in the meantime all the software staples have somehow grown to need at least an order of magnitude more in resources. At the minute, buying a new computer for any reason other than that your old one is broken just buys you a couple of years of using the latest versions of software to do pretty much exactly what you were doing before. From that point of view, technology has come far enough. If you prefer to look at it more pragmatically, there is no 'killer app' for the processing power we have now that we did not have ten years ago. There is a whole bunch of tiny improvements that, in aggregate, have soaked up the current resources, but it's not enough to inspire anyone to buy new computers just for the sake of it.
I'll wager that a large chunk of the sales in the past few years have been for form, not function (i.e., people switching to laptops), and that many of the rest have been upgrades people mistakenly thought were necessary, confusing the usual speed bounce of a fresh Windows install with a need to buy a new computer.
There are a million uses for faster processing and lots of high-profile - often scientific - uses for it. But at the consumer level, the industry has stagnated. I'm actually a Mac user, so my opinion may not be valid at all, but I think a large part of the reason Vista has such a bad reputation is not that it is inherently bad, but that people are finally kicking back against the assumption that they'll update their hardware for what appears to be similar functionality - combined with the lengthy gap between releases making this jump seem more acute.
@Wolf: further to those comments, I think the charges of bloat relate to the aggregation of functionality that most people don't use, or aren't aware that they are using, causing software that doesn't appear to be much more functional requiring much greater resources. You're right that it's an optimisation problem; I think people here are annoyed because they feel that the big software companies are not playing the various factors off against each other in an anywhere near optimal way.
Sorry for not having clarified bloat; in my case it's when 50% of a function's code does absolutely nothing, but the programmer in question doesn't know enough to take it out. And since it doesn't get in the way of the desired function (aside from the horribly slow load and processing time), it is not seen as a problem. There are waaaaay too many programmers who started, and stopped, their programming education somewhere around Visual Basic 6 and got their only experience by being the ones who burst the dot-com bubble with their horrible coding methods.
I do agree most people falsely assume bloat is what you mentioned, but I literally mean bloat: code that gets compiled and executed but does nothing with the data the program was written to manipulate. My "friend" who was that programmer was eventually found out; his boss finally looked at his code, took 75% of it out, and improved its speed by over 300%, literally. A DB process that used to take overnight suddenly took less than an hour, and the output was exactly the same. THAT is what I mean by bloat :-D It's enough to drive anyone crazy.
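That kind of bloat is easy to sketch: work that is computed, stored, and then never used, so removing it changes nothing except the running time. A contrived Python example (both functions and their inputs are invented for illustration, not the "friend's" actual DB code):

```python
def summarise_bloated(rows):
    """Bloated version: half the work contributes nothing to the result."""
    # Dead work: built, sorted, and then never used anywhere.
    decorated = sorted((len(str(r)), r) for r in rows)
    labels = [str(d) for d in decorated]   # also never used
    total = 0
    for r in rows:
        total += r
    _ = labels                             # keeps linters quiet; does nothing
    return total

def summarise_lean(rows):
    """Same observable behaviour with the dead code stripped out."""
    return sum(rows)

# Identical output; only the wasted cycles differ.
assert summarise_bloated([3, 1, 2]) == summarise_lean([3, 1, 2])
```

A profiler (or a boss reading the code, as above) spots this immediately: the time goes into `sorted` and the list build, while the return value depends only on the final loop.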
You definitely made some very good points though, glad there's someone with your experience commenting, lots of times comments like yours are as enlightening as the articles.
It is becoming painfully obvious to all but the most hard-core addicts of last century's technology that the multithreading approach to parallel computing is not going to work as the number of cores per processor increases. That the computer industry continues to embrace the multithreading model in the face of certain failure is a sign that computer science is still dominated by the same baby boomer generation who engineered the computer revolution in the 70s and 80s. It is time for those gentle folks to either step aside and allow a new generation of thinkers to have a turn at the helm, or read the writing on the wall.
We must go to the root of the parallel programming problem to find its solution. Multithreading is a natural progression of the single thread, a concept that was pioneered more than 150 years ago by Babbage and Lovelace. The only way to solve the problem is to get rid of the thread concept once and for all. There is no escaping this. Fortunately for the industry, there is a way to design and program computers that does not involve threads at all. Here are a few links with more info if you're interested in the future of computing.
How to Solve the Parallel Programming Crisis:
Nightmare on Core Street:
Like it or not, we need a paradigm shift. The industry will be dragged kicking and screaming into the 21st century if necessary. The market always dictates the ultimate course of technology.
PS. Patterson should listen to one of the members of his PCL team, Dr. Edward Lee, who has had a lot to say on the evils of multithreading.
I think Mr. Patterson's logic is the consumer logic they use to sell us ever-bigger TVs, more powerful engines, larger fridges and iPods.
Since they cannot increase the speed of the stuff we use, they develop in the direction they can develop - notably more cores - and then try to convince us to use new applications that do not improve our lives but make the new development look good.
Do you WANT software to decide whether someone who walks up to you is worth talking to? Do I want a sound system with 32 tweeters in my living room?
Not really. I want to write useless comments on this board, balance my wallet with Excel (and fail miserably), play a movie on my desktop and listen to a song on my MP3 player.
Undoubtedly there are great opportunities for parallelism in all kinds of computer applications (think Pixar). But on my desktop? Not really.
[With kudos to OrsonX]
128-core work PC: would you like me to write your annual report for you?
PC: anything else?
User: please boot up my favourite three photorealistic virtual Facebook girlfriends....
PC: anything else?
User: please shutdown when I've gone to bed
PC: anything else?
User: what is it about "shutdown" that you do not understand?
Until this is achieved the IT industry has no worries!
I'd love to get a look at a 1962 IBM 1130, BTW. That would have been a very early lab prototype for a machine that was introduced in (IIRC) 1966. :-)
I do not remember DOS fondly, lacking, as it did, both the snappiness of RT-11 and the portability of CP/M.
My "graphical environment" in those days was a display client on an Atari 800 and a layout program in Fortran on a VAX. With continuous connection to teh cloud, that sort of things could come back.
"The pointless complexity of the OS will increase to nullify any progress in processor design."
Until Moore's law does end. Programming labor is expensive, and as long as it is cheaper to write shit code and buy better hardware, code will be shit.
"it's a wonder that anybody programming the x86 chipset has any sanity left."
I retained my sanity by switching, first to C, then to C++.
BTW the 80s called. They want to hire you.
Precisely the bullshit I was talking about. I forgot the word "singularity" though. There's nothing well-informed about someone who extrapolates an exponential curve to infinity. (Sorry, you were talking about sci-fi. Did you know that some people take this stuff seriously?)
It absolutely is a hardware constraint. Game designers have wanted destructibles for years but the power isn't there to do it. Current next-gen physics engines can stack about 3,000 blocks by using the entire Cell, and the FPS will vary wildly as the simulation progresses. To just make a house out of bricks that you can knock down would take well over 10,000 blocks. To run that at a constant 60Hz ... and to have a village or town of such houses ... no, we need more processing power.
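Part of why the jump from 3,000 to 10,000 blocks is so brutal is that a naive rigid-body solver tests every pair of bodies for contact, so the work grows quadratically rather than linearly with block count. A back-of-envelope sketch (the figures come from the comment above; the pair-count formula is just n choose 2, ignoring the broad-phase pruning real engines use):

```python
def naive_pair_checks(n):
    """Pairwise collision tests with no broad-phase pruning: n choose 2."""
    return n * (n - 1) // 2

small = naive_pair_checks(3_000)   # roughly what a current engine manages
big = naive_pair_checks(10_000)    # one destructible brick house
ratio = big / small                # ~11x the work for ~3.3x the blocks
```

So tripling the block count costs an order of magnitude more work per frame, before you even multiply by a village of houses at a constant 60Hz. Spatial partitioning helps enormously, but the underlying trend is why destructibility remains a hardware problem.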
You can ride Moore's Law both ways:
* The PC way: People are prepared to spend $1000 on a PC and will get more CPU/RAM.
* The embedded way: Microcontrollers are getting cheaper and cheaper.
We're already seeing sub-50c microcontrollers used in all sorts of toys and other applications. Coding for a micro which has only 512 bytes - or even zero bytes - of RAM is challenging, but is required if you want to be able to build that hair dryer within budget.
In embedded space there is also another reason for efficient code: bloaty, slow code needs more cycles, which means more power consumption. That results in more expensive batteries.
Thus far, Moore's Law has been the "get out of jail" card for crappy (inefficient) programming. A doubling in CPU speed (adjusted for RAM etc) should give you a doubling in program execution speed, but nope - instead the extra speed just gets soaked up by bloat.
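The discipline those 512-byte micros force is worth naming: you allocate every buffer up front, at fixed size, and when it's full you drop data rather than grow. A toy sketch of that style, in Python for illustration only (on a real microcontroller this would be a static C array, with no objects and no heap at all):

```python
class RingBuffer:
    """Fixed-capacity byte buffer: all storage claimed once, up front."""

    def __init__(self, capacity):
        self.buf = bytearray(capacity)  # the only allocation, done at init
        self.capacity = capacity
        self.head = 0                   # index of the oldest byte
        self.count = 0                  # bytes currently stored

    def push(self, byte):
        """Store a byte; refuse (rather than grow) when full."""
        if self.count == self.capacity:
            return False
        self.buf[(self.head + self.count) % self.capacity] = byte
        self.count += 1
        return True

    def pop(self):
        """Remove and return the oldest byte, or None when empty."""
        if self.count == 0:
            return None
        b = self.buf[self.head]
        self.head = (self.head + 1) % self.capacity
        self.count -= 1
        return b

rb = RingBuffer(4)
for x in (1, 2, 3, 4, 5):
    rb.push(x)   # the fifth push is refused: memory use never grows
```

Because the footprint is known at compile time, there is nowhere for bloat to hide: the budget is the buffer, full stop.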
Anyone remember Turbo Pascal? Now that was some fast software! It compiled at thousands of lines per second on a 386 and tens of thousands of lines per second on a 486. Why can't all software be like that? I have not fired it up on a more recent machine, but it should deliver something pretty amazing.