Google's Simple PSU Design?
So what happened to Google's idea of changing the basic PC PSU design and getting rid of some of the superfluous rails? http://services.google.com/blog_resources/PSU_white_paper.pdf
Intel and Google have taken it upon themselves to lead yet another "green computing" project. Executives from the two companies today unveiled the Climate Savers Computing Initiative at the Gulag in Mountain View, California. At its most basic level, the program will focus on encouraging consumers and businesses to buy …
If they want to be true innovators, why not step up and be the first big company to build a datacenter floor that runs on straight -48 V DC?
The heat savings alone would offset the cost of developing the technology. Converting all this AC to DC is costing companies billions in cooling and creating tonnes of greenhouse gases.
Google's idea was to use one power supply for multiple machines. A better alternative is to make a regulator that takes 12 V or 24 V DC from multiple rails, combines them for redundancy, and then generates the required voltages. If you make such a regulator a switched-mode one, like most active PSUs, you can get by with very little wasted energy.
For home use, you can purchase tiny ATX supplies that run on DC voltages and only need a standard wall adapter (active PFC versions are available). Combining this with a PC that has low energy requirements, you can get a pretty decent system without moving parts (no fans or mechanical disks).
If you want to make your own design, you can even drive the system without a transformer, using a microcontroller-controlled active design and huge buffer capacitors. The idea is to monitor the rail voltage, and when it drops below a certain point (say 4.999 V on the 5 V rail), switch the full rectified AC voltage onto the rail for a tiny amount of time. Most of the power is soaked up by the buffer capacitor (it's like quick-charging a battery) and the system slowly drains this energy (going from 5.000 V back down to 4.999 V). The problem with this system is that while the supply is very efficient, the load it presents to the AC network is chaotic. A better idea is to have one big supply in your house delivering both AC and DC, then use the small DC supplies I mentioned. (This worked well for Google, and Siemens also had a demo house running like this.)
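The scheme above is a hysteretic ("bang-bang") regulator, and you can sketch its behaviour in a few lines. This is a toy simulation, not a real design; all the component values (capacitance, input voltage, currents, time step) are my own illustrative assumptions:

```python
# Toy simulation of the hysteretic regulator described above: a buffer
# capacitor feeds the load, and whenever the rail drops below the lower
# threshold the rectified input is switched in for one tiny time step.
# All component values are illustrative assumptions.

C = 0.1          # buffer capacitance in farads (assumed)
V_LOW = 4.999    # switch-on threshold for the 5 V rail
I_LOAD = 0.5     # constant load current in amps (assumed)
I_CHARGE = 50.0  # charge current while the switch is closed (assumed)
DT = 1e-5        # simulation time step in seconds

def step(v, switch_on):
    """Advance the rail voltage by one time step."""
    i = (I_CHARGE - I_LOAD) if switch_on else -I_LOAD
    return v + i * DT / C

v = 5.0
min_v, max_v = v, v
for _ in range(200_000):  # 2 simulated seconds
    v = step(v, switch_on=(v < V_LOW))
    min_v, max_v = min(min_v, v), max(max_v, v)

print(f"rail stayed within {min_v:.4f} V .. {max_v:.4f} V")
```

The sketch also shows the drawback the comment mentions: the input current is either zero or a large pulse, which is exactly the chaotic load the AC network would see.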
It seems we can't just do a little polite clapping and say 'Good show, old chaps'; nowadays we need to find something to bitch about.
HEADLINE: Bill Gates gives another billion to stop diseases.
REPLY: M$ Windowz taking over the desktop! LInux roxxors my werld kkthxbai.
Let's just say 'Nice job, it's a step in the right direction'.
I have just finished building a low-power 1U server. It takes 15 W, compared to the 80 - 100 W that this type of box would normally need. That equates to about £65 of electricity per year for an always-on server, before you start to consider the cost of aircon.
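As a sanity check on that £65 figure, here's the arithmetic. The unit price is my assumption (roughly 8p/kWh, plausible for the UK at the time); the wattages come from the comment above:

```python
# Rough annual electricity cost for an always-on box.
# The tariff is an assumption (~8p/kWh); wattages are from the comment.

HOURS_PER_YEAR = 24 * 365
PENCE_PER_KWH = 8.0  # assumed unit price

def annual_cost_gbp(watts):
    kwh = watts * HOURS_PER_YEAR / 1000
    return kwh * PENCE_PER_KWH / 100

print(f"90 W box: £{annual_cost_gbp(90):.0f} per year")
print(f"15 W box: £{annual_cost_gbp(15):.0f} per year")
```

At that assumed tariff a 90 W box comes out near the quoted £65, while the 15 W build costs around £10 a year.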
It's certainly true that the power supply makes a significant difference. The PSU supplied with my case was only 40% efficient at low load; I have replaced it with a picoPSU and a separate 240 V to 12 V supply, which have a combined efficiency of about 75%. That has saved me around 12 W.
Large savings can also be made elsewhere in the system. For example, I'm using a solid-state disk which takes about 0.25 W most of the time and less than 1 W peak. That's saving perhaps 10 W compared to a magnetic disk.
More savings can be made by enabling the power-saving features in the hardware and OS. For example, this board can switch its CPU clock frequency depending on load; this is described as a laptop feature and is not enabled by default, but it's just as useful in a server application and probably saves a couple of watts.
But I think that the greatest savings can be made by not using a faster processor than is actually necessary. In my server, I'm using a 1.2 GHz VIA C7-M. As the article says, Intel and AMD have largely based their businesses on making faster and faster processors, and I imagine that they make much greater margins on their bleeding-edge chips than on slower, lower-power ones. They therefore don't have much incentive to encourage people onto the lower-power chips: hence this "blame-displacement" exercise aimed at power supply manufacturers. I like VIA because they make exclusively low-power processors.
One final consideration is the role of software. We all know that "bloatware" expands to occupy the resources (CPU, memory, disk) available. With a bit of effort, it's possible to benchmark your server and track down the bloat; having identified the worst-offending applications you can tune or replace them. In my recent experience the things that need attention are databases, which work well only once they have been "tuned", and free scripts (e.g. PHP message boards) which might look nice but turn out to be very inefficiently written.
Quite frankly, to claim that modern SMPS designs are only 50% efficient is a load of old bo^H^Hprivates - it certainly WAS true of old analogue (i.e. transformer-rectifier-regulator) designs, but no one has used those for PCs in years. Outside of specialist requirements, the only place you'll find anything other than an SMPS design is in cheap 'wall wart' plug-in supplies - and even those are rapidly turning into SMPS designs.
As for the suggestions that people should use DC bulk supplies, well that's nonsense as well. Converting AC to DC is cheap, easy, and efficient - but then it needs converting to the right voltage. A DC-DC supply still has nearly all the same conversions, so it will not be any more efficient - and you still need an AC-DC conversion to get your bulk DC rails. If you use something like 48 V DC rails, then you have a conversion from AC to 48 V DC (one set of conversion losses), then a second conversion from 48 V DC to the internal supply rails (a second set of losses) - it's more efficient to simply go from AC to the required DC levels and skip the 48 V. If you were to use something in the order of 300 V to 340 V DC then that would be different (it's the internal DC voltage in an SMPS), but all you would be doing is splitting the AC-DC and DC-DC conversions that currently go on inside one box - and would you trust "Mr Average Consumer" with 340 V DC? I wouldn't!
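The point about stacked conversion losses is just multiplication of stage efficiencies. The per-stage figures below are illustrative assumptions, not measurements of any real supply:

```python
# Chained power-conversion stages multiply: overall efficiency is the
# product of each stage's efficiency. Stage figures are assumptions.

def overall(*stage_efficiencies):
    product = 1.0
    for e in stage_efficiencies:
        product *= e
    return product

direct = overall(0.90)         # one AC -> final-DC conversion (assumed 90%)
via_48v = overall(0.92, 0.92)  # AC -> 48 V DC, then 48 V -> rails (assumed)
print(f"direct AC->DC: {direct:.0%}")
print(f"via 48 V bus:  {via_48v:.0%}")
```

With these assumed numbers the two-stage 48 V path loses a few points even though each individual stage is slightly better, which is the commenter's argument in a nutshell.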
Telecoms uses 48 V DC because it uses it directly (i.e. without further conversions) for an analogue phone line, and it needs to be free of mains hum. They also have large batteries which historically would have formed part of the smoothing for the power supply, but which also provide backup for when the lights go out.
Yes there is room for increased efficiency - but the starting point isn't 50%, it's a LOT higher.
Simon wrote:
> Quite frankly, to claim that modern SMPS designs are only 50% efficient
> is a load of old ....
I wish that were true. Although it's hard to make a power supply that's that inefficient, believe it or not people do manage to do it.
The best way to get it wrong is to design a power supply with a maximum rated output of, say, 300 W and then use it to supply a load that only needs, say, 30 W (because, for example, it's on standby, or the PSU was designed to cope with a maximum configuration that you don't have). Because the switching transistors and inductors need to cope with the peak rated power, they are big, and have large drive requirements and hysteresis losses respectively. This is why I was seeing 40% efficiency in the PSU that I described in my first comment (see above). Efficiency numbers quoted by manufacturers are nearly always best-case.
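A crude way to see why oversized supplies do so badly at light load is to model the losses as a fixed overhead (gate drive, core losses, which scale with the supply's rating) plus a term proportional to output power. The constants below are illustrative assumptions for a 300 W-class supply:

```python
# Toy efficiency model: fixed overhead losses dominate at light load.
# P_FIXED and LOSS_FRAC are illustrative assumptions, not measurements.

P_FIXED = 25.0    # watts lost regardless of load (drive, core losses)
LOSS_FRAC = 0.10  # additional losses proportional to output power

def efficiency(p_out):
    p_in = p_out + P_FIXED + LOSS_FRAC * p_out
    return p_out / p_in

print(f"at  30 W out: {efficiency(30):.0%}")   # light load
print(f"at 300 W out: {efficiency(300):.0%}")  # rated load
```

With these assumed constants the same supply sits around 85% at rated load but barely over 50% at a 30 W load, which matches the kind of numbers reported above.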
It is also easy to make an inefficient supply if the output voltage is low, e.g. 3.3 V. This is because the cheapest switch-mode configuration has a diode in series with most of the current, and at 3.3 V the 0.4 V diode drop is a >10% efficiency loss. This is fixed if you use a synchronous switching topology, but that costs more.
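The arithmetic behind that >10% figure is simple if you treat the diode as the only loss and as conducting for essentially the whole cycle, which is a simplification:

```python
# Loss from the freewheeling diode in a cheap buck-style converter:
# the diode's forward drop sits in series with the output for most of
# the cycle. Treating it as conducting 100% of the time, and as the
# only loss, is a simplification for illustration.

V_OUT = 3.3    # output rail, from the comment above
V_DIODE = 0.4  # diode forward drop, from the comment above

loss = V_DIODE / (V_OUT + V_DIODE)
print(f"diode loss fraction: {loss:.1%}")
```

That comes out just under 11%, hence the ">10%" claim; a synchronous rectifier replaces the diode with a MOSFET whose drop can be a few tens of millivolts.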
For more on data centre DC power distribution, have a look at this page:
http://www.rackable.com/solutions/dcpower.htm
They have a PDF that you can download. I think their main saving is that the mains-to-48V conversion can happen away from the racks (e.g. on the roof) where the heat can be dealt with more easily, reducing aircon costs.
Crack open an Itanium. There's a reason those only come in 4U configurations. That heatsink is HUGE! Of course, my quad dual core Opterons are no better, just a little smaller.
I have a 900MHz VIA chip running under my home desk doing my firewall. I have another running Snort. Quiet. No fans.
Coming from the company who just released the most bloated and power hungry OS to date? Come on...
Making people throw out perfectly good machines to replace them with new, more powerful ones (lots of energy used and pollution created during manufacture, not to mention the landfill occupied by the disused machines)...
Then you have the extra power requirements from an OS that now uses far more memory and disk space and makes much heavier use of the video card than previous versions.
And at the end of the day, aside from a few hardcore gamers and specialist users, most people do exactly the same things on their computers that they've done for years.
We need smaller, more efficient OSes running on slower hardware, produced using modern fabrication processes and using modern power management features.
A 200 MHz processor built on a modern 65 nm process would still be more than fast enough to do what most people need, provided they went back to smaller, more efficient software. And you could always use multiple cores, but have the more power-hungry cores powered down when not required.