Isn't this kind of stating the obvious? Who wouldn't expect a smaller device to require less power?
It looks like an accident that Moore’s Law has been shadowed by a parallel phenomenon: that over time, the amount of power required per unit of computation falls. That’s the conclusion of Stanford PhD and consulting professor Jonathan Koomey, whose work has already been dubbed “Koomey’s Law”. As Koomey writes in a paper to be …
...as a pensioner, I should expect a PC so powerful as to be semi-sentient, yet run off a couple of AA batteries for a year.
Good thing too, given the trends of energy prices and the pension farce...
Shorter battery life
> And Koomey’s Law also predicts a continuation of what we’re already seeing: more computation for less electricity means more capable smartphones and tablets consuming less power.
In reality what we see is more features == less battery life
We are still at the stage where more and more "toys" are being packed into these devices and demand for these is outstripping the power savings available.
Hence the call for chocolate powered batteries.
Kinda, I suppose
1998, early phones had a charge that lasted 4-5 days with minimal use outside of calls, teeny-weeny B&W screen, a clock and the capability to text message (stop right there, pedants, I know that's not 100% accurate but you get my drift).
2011 we have smartphones which last maybe 2 days between charges, but we get a fat and juicy full-colour screen, internet access, GPS, games, organiser, 1000's of music tracks, video capability (films and video calls), email, and apps to do a squillion and one other things. Hours per day in use has probably quadrupled at the very least.
The battery in my Galaxy S2 is no larger than the one in my Nokia 2110. I don't think that's bad going, really. Go back a few more years, and you needed an athlete's build to hold the damn thing to your ear for more than 5 minutes (think back to Gordon Gekko talking on the beach).
Indeed, my smartphone has approximately equivalent CPU, more RAM, and significantly more secondary storage than the desktop PC I had ten years ago.
Post a letter lately
While they are 'toys' in size, the cost in just 'carbon' saving might be pretty good.
I do like walking to the Post Office where I live, but found I use e-mail more.
I don't like small screen video, so it's a 'give or take' issue. However, I don't go to the movie house as much as I used to.
All in all, it does seem like we are getting better distribution with quite a savings in time/travel.
I would say 'more bang for my energy (battery) buck'.
Heat _is_ the great enemy
It was not just during the vacuum tube era that heat was the big enemy. It is quite true today, where cooling is a major cost of server farms and having to conduct away excess heat adds to the cost and weight of personal computers and laptops.
Heat is directly linked to power consumption, so if heat dissipation is decreased, by corollary power consumption is also decreased. And power consumption is the other big cost for server farms and a limiter for battery life of portable devices.
So I think we in the next decade will see a major focus on reducing power consumption while raw compute power per processor will have less focus. If more compute power is needed, you add more processors instead. Since generality costs, we will also see more specialised compute units, so a computer even more so than now will consist of a collection of specialised compute units around one or more general-purpose compute units.
That sums it up
"Now all we need to do is come up with reasonably efficient software that doesn’t waste processor cycles doing not much."
From what I've seen, most graduates have trouble writing a "Hello World" program in less than 50K. Youngsters these days never think about performance since they're used to super fast PCs with lots of RAM. Give them all ZX80s I say.
Skizz wrote: "From what I've seen, most graduates have trouble writing a "Hello World" program in less than 50K."
That is not because they are worse programmers than earlier generations, but mainly because even a cleverly written "Hello World" program in Java will be 50K+ as a binary, because the binary has to include the Java run-time system and some standard libraries. This gets worse when using so-called "frameworks" which are essentially huge libraries that allow certain types of application to be written with very little original code (or thought), but will give huge and slow binaries.
This is a consequence of the fact that the cost of memory and compute power has fallen drastically while programmer salaries have not. So it makes better economic sense for companies to make a resource-hogging program using few programmer hours than using many programmer hours to make a mean and lean program -- the cost of one more server is only about a dozen programmer hours (if that). This works only up to a limit, though. No amount of server power will help if the programmer uses an exponential-time, quadratic-space algorithm instead of a linear-time, linear-space algorithm -- unless the problem size never gets very big.
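To put that last point in numbers, here is a quick Python sketch. The step counts are purely illustrative (hypothetical cost models, not measurements of any real program):

```python
# Hypothetical cost models: how many "steps" each algorithm needs
# as the input size n grows.
def linear_steps(n):
    return n

def exponential_steps(n):
    return 2 ** n

# Doubling your hardware doubles the feasible n for the linear
# algorithm, but buys only one extra unit of n for the exponential one.
for n in (10, 30, 60):
    print(n, linear_steps(n), exponential_steps(n))
```

At n = 60 the exponential version already needs over 10^18 steps; no amount of extra servers rescues that.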
It's not quite that simple, but I think we are suffering from programmers who haven't adequately coped with the changes in the technology.
Computers used to get more powerful by running at a higher speed. That hit some practical barriers, and the response was the multiple-core processor. You get some gain just from the OS running on one core and the program running on another, and that's easy to do.
I can point to software, written in the last three years, able to use a lot of CPU power, which is still stuck at that crude level of multi-core use. Luckily, a lot of the work the program does can be handled by the graphics hardware, but the performance seems biased in favour of a particular GPU manufacturer.
Add the way that it grabs all the RAM it can, and that there is no 64-bit version (that might be an advantage, since it limits the RAM it can grab), and you have an indication of how programmers are failing to exploit the increased hardware speed.
And my fancy multi-core desktop monster doesn't really improve things for writing this comment. I don't type any faster. OK, so it has a bigger screen and a better keyboard than my netbook, but the feeble CPU isn't the limit on what I do.
We have too many programmers producing badly-implemented usage of processing power. I'd far rather see some smart exploitation of the falling energy costs. I used to run an office on a 1 GB hard drive, with plenty of storage space to spare. Now a memory storage device that small is becoming a little hard to find. But is the energy cost of storage reducing in a useful way when a similar physical device can store 2 TB? "Laptop" drives, of course. It does seem that cheap laptops are where people are spending their money these days, and they are hugely powerful computing machines. They are saving energy, but they don't do so much more on a battery charge. We don't type any faster.
I don't mind having these huge hard drives. But Parkinson's Law seems to trump all else about computing. With a touch of the Peter Principle.
No - the whole point of run-time systems and libraries is that they DO NOT get included in your code!
You have one shareable copy of the run-time system stored somewhere, then your dozens of individual little programs would have only the code that you write, and for input/output and everything else in the standard libraries they would load the run-time system, making your code nice and small.
The problem is...
... not so much the application code that programmers write, but the level of abstraction commonly used by today's computing platforms.
Every 2 years*, software will evolve to require 2x as much processing power to do the same thing.
* This is only an average; looking just at MS Word and MS Excel, the doubling occurs every year.
may your armpits have fleas
It was going to be called Peter's Law but you win.
There's already a Peter Principle and it's just about as dismal.
It's not Moores Law. It's Moores Observation.
If we're being pedantic...
...it's not Moores, it's Moore's.
Good thing we're not being pedantic, eh?
So I guess that I don't really use "Ohm's law", but "Ohm's observation"; that the voltage difference across a conductor seems to be proportional to the current through it?
<puts hand to mouth as if to cough>
Anyone remember when "laws" involved Lagrangians, or at least a citation of Maxwell and not some fuddy-duddy empirical stuff pulled out of a noisy spreadsheet ranging over a limited time like it's John Maynard Keynes pulling something pretentiously revolutionary out of his nether regions?
It can hardly be called a law when it is self-fulfilling: all of the CPU people use Moore's prediction as a target for their work, thereby reinforcing it.
If ARM for example, started to use a faster cycle time, you can bet that Intel would shorten their cycle time too. The main reason they don't is that it would increase their R&D costs.
Koomey's Law?? Come on.
Over time, the need to use less silicon per cpu increases. That's rav's Law.
Over time the cost to produce silicon increases, demanding a smaller die size. The benefit, other than cheaper chips per square millimetre, is that the Marketing suits get to claim that their chips are green.
The increasing demand for low energy use is economic. Economic motivation is not a natural law as it is anthropocentric. In fact none of this is natural law. Just some moron wants to get his name attached to somebody else's mistaken belief that "it" is a law.
Koomey's Law, Rav's Law, Moore's Law...
Can we not just accept that they're all linked, wrap them up together, and call them Brannigan's Law?
Surely you mean
Over time, silicon need per cpu decreases
My CPU turns itself off when nothing's happening (about 95% of the time, most of the time). I did download and run BOINC for a while but noticed my CPU running considerably hotter, and decided that my leccy bill was quite high enough and my office quite warm enough already.
Nerdy types running Linux on core i7/5/3 CPUs can keep an eye on their CPU states by using this tool: http://code.google.com/p/i7z/
"...reasonably efficient software that doesn’t waste processor cycles doing not much."
So... uninstalling Norton then?
<item> will become faster/slower
<item> will become larger/smaller
<item> will cost more/less
Delete as appropriate. I'll take my royalties now please
I like this, though my inner nerd has read that and feels <item> needs to have a closing tag to be well-formed. *Sigh* spending a week working with xml has done strange things to me....
its coded for IE
so sloppy closing tag omissions are passable. :D
great framework document
you should patent that...
i'd suggest changing the word item... cos thats already covered
there must be a vay to stop zis
> most computers are under-utilised. And that wastes electricity.
That means if we get our act together we can stop buying servers for a while and use the ones that are there properly.
That would be bad news for Intel & Co. Expect them to throw a spanner in those works.
May I introduce you to virtualization and cloud computing.
Vendors such as Citrix, VMware and Microsoft are in on it, companies such as Rackspace and Amazon are selling it to end users.
You have heard of virtualization before, right?
That is the idea (or, at least one of the big ones) behind it.
Of course, it doesn't change the fact that oomph per watt on newer kit gets better all the time, or that warranty periods are still pretty much the same they have always been... So Intel really doesn't mind from what I've seen.
The tendency seems to be for companies to buy fewer more dense servers once they've started to go virtual... which means more sockets, higher core counts, etc. I would imagine that means better margin for Intel.
The maths doesn't work?
Unless I'm missing something? In particular, if every 18 months we doubled the number of transistors (which is *roughly* the computational grunt) and at the same time doubled the number of computations per kWh (i.e. halved the power used per computation), then the power consumption of the current gen CPUs should be about the same as the power consumption of a 286. But it's not. Unless they are looking at low-power chips like the Atom rather than mainstream desktop chips like the i5 / i7?
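The arithmetic being questioned here, sketched in Python (idealised lock-step doubling, exactly as assumed above):

```python
# If computational grunt and computations-per-kWh both double every
# 18 months, power draw per chip should stay flat across generations.
compute = 1.0      # relative computations per second
efficiency = 1.0   # relative computations per kWh
for generation in range(15):   # roughly 22 years of 18-month steps
    compute *= 2
    efficiency *= 2
power = compute / efficiency   # the doublings cancel out
print(power)
```

The ratio stays at 1.0 at every generation, which is precisely why a 1000W PSU next to a 286-class power budget looks like a contradiction unless one of the two doubling rates is slower than claimed.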
No direct linkage
Between CPU transistor count and CPU horsepower.
Think about it: is a new and shiny Core i twice as powerful as a Core 2? I think not.
Ironically, much of the increase in transistor counts is not going directly to horsepower. Think GPU & multi-cores. 2 cores are not twice as powerful as one, or at least practically they are not.
we're digressing here, but...
I beg to differ, two cores are close to double the performance, if there are multiple independent tasks running on them, and we have a reasonably efficient scheduler. This is usually the case for the PC/server. For "general" single tasks (i.e. those that cannot be parallelised, where there is interdependence), I think the old result stands, which is diminishing returns, where the fourth processor actually slows things down by virtue of the overhead of inter-processor comms.
The GPU is a separate case, an "orthogonal load" if you like, and its tasks are highly parallelisable, as witnessed by the enormously parallel architectures of current GPUs.
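The diminishing-returns point is essentially Amdahl's law. A short sketch; the 90% parallel fraction is an assumption for illustration, not a measured figure:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup is capped by the serial fraction."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Assume 90% of the work parallelises cleanly:
for cores in (1, 2, 4, 8):
    print(cores, round(amdahl_speedup(0.9, cores), 2))
```

Two cores get you about 1.8x, but eight cores only about 4.7x, and real inter-processor communication overhead (which this formula ignores) makes the picture worse still.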
Re: No direct linkage
Yes, things aren't so clear cut, as kevin 3 also points out. But it depends on what they are measuring power use against: transistor counts or MIPS/FLOPS. Worse, if it's the latter, and I take the conservative approach you suggest and say that doubling the transistors nowadays only improves actual performance by ~70% (or whatever), then the research suggests modern CPUs (and GPUs) should be using *less* power than a 286 (because the computations are lagging behind the transistor counts, so power efficiency would be outpacing computations). But this clearly isn't the case, or I wouldn't be needing that 1000W PSU. So am I wrong, or did the researchers fudge the numbers to get a good headline? I can imagine mobile chips being competitive with a 486, but certainly not an i7 or Phenom II.
"then the research suggests modern CPUs (and GPUs) should be using *less* power than a 286"
Ah, but you are forgetting about BristolBachelor's Law (or observation) that software requires 2x as much processing power to do the same thing every 2 years (except MS Word).
If for example you are running MS Word 2010 compared to the absolutely perfectly fine Word 6 for MS-DOS that was around when the 286 was in its day, then yes, the processor now needs 1000W to display a title on the screen in bold font, whereas the 286 version would almost run on the power from a postage stamp sized solar cell. (Perhaps in that Intel demonstration they were running old MS software?)
are you suggesting...
that some gobshite researcher is fiddling the numbers somewhere to make the science work
perish the thought
It's MIPS per Watt
The number of transistors is irrelevant, the defined quantity is "the amount of power required per unit of computation" - which is surely MIPS per Watt.
It's exponential (a straight line on a log plot), identical to Moore's Law. A quick search found this graph: http://www.singularity.com/charts/page129.html (it's all about "the singularity", incidentally, which is crazy or scary, we'll soon find out)
Anyway, it looks like 10^5 improvement in 20 years, so about 17 doublings, so the "one and a half to two years" figure can be refined to 14 months.
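Checking that arithmetic in Python:

```python
import math

improvement = 1e5          # claimed efficiency gain over 20 years
months = 20 * 12
doublings = math.log2(improvement)
print(round(doublings, 1))           # ~16.6 doublings
print(round(months / doublings, 1))  # ~14.4 months per doubling
```

So closer to 16.6 doublings and about fourteen and a half months per doubling, but the ballpark stands.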
There is another overlooked law, demonstrated over and over again.
The number of compute cycles required to do anything doubles every 18 months.
The First Corollary of Blackadder's Law:
Watts per operational result never change.
Sure about Blackadder's Law?
I thought Blackadder's law was that idiots need to be flogged with sarcastic wit and the evil and pompous must be blackmailed and embarrassed.
Have you been reading about Al Gore again?
"Now all we need to do is come up with reasonably efficient software that doesn’t waste processor cycles doing not much."
wrong - we need to come up with software that doesn't waste MY time as much!
The curve seems to fit, all the way back to ENIAC
So _almost_ to the dawn of the computer age.
cos the worlds first computer was colossus as any fule (reg journos excepted) kno
Bad code on a super fast CPU = Flat battery
As with Moore’s Law, “the electrical efficiency of computing (the number of computations that can be completed per kilowatt-hour of electricity) also doubled” every year-and-a-half.
The problem which everybody keeps forgetting, and which is the crux of the matter, is that we are doing more computations - colour screens, sounds, unoptimised code, more detail, richer data than we did before. So in effect we have to keep doubling the number of computations per kWh otherwise we would screech to a halt - see the two day battery expectancy of most smartphones.
Unless the hw and battery designers can keep up, the functionality requirements of users will force the coders to once again - anybody remember mainframe - optimize their code and make better use of CPU and memory real estate. THAT is the next challenge.