Figures,
are based on 10% more downtime?
Sorry I just couldn't resist :)
Microsoft spat out a white paper earlier this week in which it claimed that its Windows Server 2008 product cuts power consumption by about 10 per cent. It’s no surprise to see Redmond leaping at the opportunity to proclaim that its new software can help cut customers’ increasingly hefty 'leccy bills. But how did it draw those …
If Vista is as widely adopted as Microsoft claims it is, then any reduction in power consumption to be gained by swapping from Server 2003 to 2008 is going to be utterly dwarfed by the jump in power drained when swapping from XP to Vista.
I've been involved in consumption monitoring, and although I won't claim it was carefully controlled (we had the opportunity, we're techies, and we needed to calibrate the power monitoring tools so that they wouldn't spot us recharging the LART), we tested around 30 desktops before and after the upgrade.
Same hardware, same users doing the same things, for a month on each. The machines nearly trebled their power usage under Vista. It was a minor reason, but it did help the argument for upgrading back to XP.
Seriously, if people are worried about power costs, the only answer is dumping bloated software, and as MS make the most bloated...
"However, most significant of all is the fact that Hyper-V hasn’t landed yet. "
I would Beg to Differ.. Hyper-V is AIReality.
*CyberIntelAIgentlyDesigned IDEntity
And Yes, you may Profess that is Very Pink Floyd.
Cometh the Hour, Cometh the Dark Side of the Semantic Web. .... http://www.cs.umd.edu/~hendler/presentations/DarkSide.pdf
But if the server is capable of running multiple virtual machines that means the machines were not fully loaded individually. And that means their "smart power management" would have stepped down the processor to less than full power draw... but now you add virtual machines to the same hardware, it works harder, it draws more power. The MS lackey posited a hypothetical situation that will never exist!
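For what it's worth, the usual back-of-envelope argument for consolidation can be sketched with a toy linear power model. All the numbers below are illustrative assumptions of mine, not measurements from anyone's white paper: the point is simply that an idle server still draws a hefty base load, so the comparison is really two part-loaded boxes versus one harder-working one.

```python
# Toy linear power model: P(u) = P_idle + u * (P_max - P_idle)
# P_IDLE and P_MAX are assumed illustrative figures, not measured ones.
P_IDLE = 150.0  # watts at 0% utilisation (assumption)
P_MAX = 250.0   # watts at 100% utilisation (assumption)

def power(util):
    """Approximate power draw in watts at a given utilisation (0.0 to 1.0)."""
    return P_IDLE + util * (P_MAX - P_IDLE)

two_hosts = 2 * power(0.5)  # two half-loaded physical servers
one_host = power(1.0)       # one consolidated server, fully loaded
print(two_hosts, one_host)  # prints: 400.0 250.0
```

The consolidated box does indeed work harder and draw more than either lightly loaded box on its own, exactly as the poster says; whether that beats two base loads depends entirely on how flat the idle-to-full power curve is on the hardware in question.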
All MS's FUD can't hide the obvious, Win2008 provides NO measurable power savings, the only difference is *the default configuration*, ie power management was probably the *only* service MS didn't have running by default in 2003. Yeah, they had Infrared running despite the lack of infrared hardware, they had Wireless Zero Config running despite the lack of wireless hardware, but they didn't have power management running? Of course they did. It was running with a do-nothing configuration!
Reminder: If 2 virtual servers will run on a piece of hardware, and they are both MS products, you're better off just running one and dumping the whole virtual layer of crap that stole 10% of your throughput. Heresy, you say, not to jump on the latest buzzword bandwagon? Go ahead and connect your virtual machines to the SAN that you had to buy to support them, when a single server with attached disks would have done it all with two orders of magnitude less complexity! Remember, this whole architecture grew out of MS replacing a handful of UNIX servers with THOUSANDS of NT machines.
(And how very brave of you)
As much as I respect someone who won't even put his name to a post, I'm going to listen to a couple of independent 3rd parties and the developers about power consumption:
http://www.greenercomputing.com/resources/resource/xp-vs-vista-consider-power-savings
http://www.tomshardware.com/reviews/xp-vs-vista,1531-11.html
http://download.microsoft.com/documents/uk/business/PC%20Pro%20Labs%20White%20Paper%20Mar%202007.pdf
Whilst most readers of El Reg will of course discount Microsoft's whitepaper, TomsHardware has a pretty good name for itself, and their 100% scientific tests prove otherwise.
Having spent the last year doing research into server power efficiency I am both interested and skeptical.
Admittedly, this 10% saving is pitiful: the system I designed used roughly a third of the power of a dual-core, dual-CPU Opteron server (77W instead of 226W), leading to a significant saving in operational costs.
A 10% power saving would equate to around 30W on an average server. The question is how this is achieved, as most server processors do not really offer this power saving. Secondly, is this an averaged power rating or an instantaneous one? For example, if the OS uses less processor time when idle, then over 24 hours this may be possible; however, if the processor is under a constant full load, the power consumption of both OSes should be within 1% or so, since the CPU consumes around 80% of a server's power. If the power saving is instantaneous, then facts about the test conditions are needed.
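That 30W figure is easy to sanity-check over a year of 24x7 operation. The tariff below is my own illustrative assumption, not a figure from the poster or the white paper:

```python
# Annualise a claimed 30W per-server saving.
SAVING_W = 30.0            # claimed per-server saving in watts
HOURS_PER_YEAR = 24 * 365  # 24x7 operation, 8760 hours
TARIFF = 0.10              # assumed price per kWh (illustrative)

kwh_saved = SAVING_W * HOURS_PER_YEAR / 1000.0  # watt-hours -> kWh
cost_saved = kwh_saved * TARIFF
print(round(kwh_saved, 1), round(cost_saved, 2))  # prints: 262.8 26.28
```

So even taking the claim at face value, it's a couple of hundred kWh per server per year, which only adds up to anything interesting across a large estate.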
Certainly the power management features of an OS can reduce power consumption, so I would be very interested to see Linux vs Server 2008. I expect Linux will beat Server 2008 hands down.
This comment in the article I find highly amusing:
"Hyper-V can still throttle the amount of voltage to the CPU based on load – which is something VMware and Xen can NOT do today,"
This amuses me for two reasons: firstly, altering the core voltage of a CPU is highly likely to cause it to crash, or worse; secondly, there is no standard system for manipulating the CPU core voltage on most server motherboards, as it requires direct control of the core voltage VRMs.
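For context on why voltage gets dragged into these claims at all: CMOS dynamic power scales roughly with C·V²·f, so a voltage reduction pays off quadratically. A rough sketch, with purely illustrative voltage and frequency values of my own choosing:

```python
def dynamic_power(c, v, f):
    """Approximate CMOS dynamic power: P ~ C * V^2 * f (proportionality sketch)."""
    return c * v * v * f

# Assumed nominal operating point vs an assumed lower P-state.
full = dynamic_power(1.0, 1.3, 3.0e9)     # 1.3V at 3.0GHz (assumption)
stepped = dynamic_power(1.0, 1.1, 2.0e9)  # 1.1V at 2.0GHz (assumption)
print(stepped / full)  # roughly 0.48: about half the dynamic power
```

In practice an OS doesn't poke the VRMs itself; it requests a predefined P-state (a voltage/frequency pair validated by the manufacturer), which is precisely why "throttling the voltage" directly, as the quote suggests, is a dubious way to describe it.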
Is there anywhere I can get a copy of this paper? I'm quite interested to see the latest propaganda from Microsort.
I find it deeply ironic-- not to mention disturbing-- that at this time, Microsoft has few applications that are qualified by MS to be installed on Windows Server 2008. For example, Exchange Server 2007 and Office Communications Server 2007 are only supported on Windows Server 2003 at the moment. For some odd reason, the System Requirements page for Microsoft SQL Server 2005 lists Windows Server 2008 for SQL Enterprise but Windows Server 2003 for SQL Standard.
I work with the AC, and I'd believe him. He's not as anti-MS as that posting sounds (unlike me - I happily admit to hating most of their work, although I do like the hardware they rebadge, some Xbox games and most of Outlook) and he's happy to give credit where it's due - when they get something right, he'll defend them to the hilt.
The fact is, he has been through a real-life test and Vista chewed up more power than XP. Scientific and detailed, no, but good enough to show that in that office situation, Vista was more expensive to run. I know anecdotal evidence is not acceptable, but I tend to think his experience is likely to be accurate - on every machine I've seen Vista running on, they have all run extremely hot compared with comparable (and in some cases the same) machines, which means they're drawing more energy. Unscientific, but unfortunately, accurate and in front of me.
>I will not be voicing my opinion on this excellent OS as it will only lead to flaming by the avid el reg anti MS crew.
Translation - you realise you cannot win on the grounds of fact, logic or reason. Your next step is totally transparent - go after the reputation of your opponents to weaken the opinion of the audience instead of any real argument.
<yawn> next time, don't bother posting, or if you insist, at least put some effort in.
Paris - because she's bright enough to know that approach won't work!
I think it depends on the hardware when you compare XP/Vista.
On new hardware with a fast CPU/graphics the difference is probably small,
but on older/slower hardware the difference will be big.
I have looked at your link to TomsHardware and the test setup is very high-end, so that's not a real-world test; an average user will never use such a system. A real test would use a budget system, or a system that's about 1-2 years old.
Although, on the other hand, Bill gets his money from MS, and he's giving millions to help those who can't afford education, etc...
Although the point still stands... We do all spend obscene amounts on IT gear and self pleasure when there are people out there who would be happy just to have dinner tonight.
....and Immediate XXXXPort ....TelePortation.
"But if the server is capable of running multiple virtual machines that means the machines were not fully loaded individually. And that means their "smart power management" would have stepped down the processor to less than full power draw... but now you add virtual machines to the same hardware, it works harder, it draws more power. The MS lackey posited a hypothetical situation that will never exist!" .... By Eddie Johnson Posted Wednesday 11th June 2008 15:38 GMT
Eddie,
Virtual Machines supply Powerful Energy, they don't draw it.
Yeah, tell me about it when you have an OS that natively supports wireless connections on hardware from numerous vendors, where you don't have to resort to numerous ndis commands!! I ain't saying shit, because as I previously stated, whatever comparable facts there are on the differences, improvements or bugs between the different OSes, it's always drowned out by... MS are shit, blah blah blah, baby killers.
I doubt I will be frequenting El Reg much longer, as you can find better technical articles in other places, with comments from people who actually use the OS and have constructive criticism or warnings about it, and who have actual jobs in IT and development, not a load of ill-informed mutants who barely know how to use a home PC and like jumping on the latest hate-MS craze.
I've used pretty much every commercial OS since MS-DOS 3 and a wide variety of Linux / UNIX flavours, and have to say that by far and away, Vista is the worst operating system I have ever used. It even makes WinME look good.
Strangely, however, most of the things that make Vista so bloody awful, seem to have been removed from 2008; and aside from a few little irritations, like that bloody network and sharing centre (what's wrong with going straight to the connection management utilities I ask you!) it's a phenomenal piece of kit.
In terms of finding an OS which "natively supports wireless connections from numerous vendors", I seem to spend a large amount of time configuring Windows systems to just f@*!ing connect to my AP, which hardly strikes me as native support.
Meanwhile, Ubuntu helpfully popped up with a simple "enter the key" type box and I was away.