* Posts by Valdis Filks

23 publicly visible posts • joined 8 Mar 2007

Gov: UK biofuel probably made of starvation, rainforests

Valdis Filks

Sustainable ethanol, eat less sugar, drink less alcohol

We need a balanced policy: biodiesel, ethanol and hybrid cars. So let's not kill our options and paint ourselves into a corner. Ethanol as a fuel gets a lot of attention and scare stories; so did electricity when we started using it.

We need to change our lifestyle, Brits can eat less sugar and the Swedes can drink less alcohol.

Strangely enough, British Sugar already produces it sustainably in the UK from sugar beet. As people eat less sugar (which is not good for them anyway), the beet can go to fuel instead. From their website (http://www.britishsugar.co.uk/RVE29c095ba629149d391ce49792e8ab37b,,.aspx):

"Crops

In the UK, bioethanol can be economically produced by the fermentation of sugar beet or wheat. In our Wissington plant, we produce bioethanol from sugar beet which is supplied under contract by existing growers.

Producing up to 55,000 tonnes (70 million litres) of bioethanol every year, the plant uses around 110,000 tonnes of sugar. This is equivalent to 650,000 tonnes of sugar beet. Beet supplied to British Sugar for bioethanol manufacture is grown on existing farm land.

Production

Bioethanol is produced by the fermentation of sugars followed by distillation to produce a pure alcohol.

Fossil fuels are used in the production process but every effort is made to optimise fuel efficiency. British Sugar has embraced a system called Combined Heat & Power (CHP), recognised as one of the most fuel-efficient processes available. About 80% of the energy in the fuel is employed in the sugar manufacturing process. As a result of the close integration with the sugar factory we have been able to demonstrate savings in excess of 60% in CO2 emissions when compared to petrol."

Q & A here: http://www.britishsugar.co.uk/RVEf8cbe9389f134771bf75104662c1de49,,.aspx

Also, those Swedes produce it:

"SEKAB is one of the few companies in the world to continually produce ethanol from forestry products in a pilot plant – so-called cellulose ethanol or second-generation ethanol (read more about this under Introduction to cellulosic ethanol). The extraction method is based on leaching out the sugar in the cellulose using dilute acid before it is fermented to ethanol."

They verify the source of imported ethanol, see myths & facts http://www.sekab.com/default.asp?id=2166&refid=2167

FAQs here: http://www.sekab.com/default.asp?id=1914&refid=1992

Let's not kill a young industry with lots of promise. We are in the early days of ethanol; more research is needed, on enzymes for example, so that we can make it from more raw materials.

From the NY Times, which summed it up quite well:

Even those who regard ethanol as the holy grail of energy policy concede that there is a right way and a wrong way to produce it. Done right, ethanol could help wean the country from its dependence on foreign oil while reducing the emissions that contribute to climate change. Done wrong, ethanol could wreak havoc on the environment while increasing greenhouse gases. Full article here:

(http://www.nytimes.com/2008/02/24/opinion/24sun2.html?_r=1&oref=slogin).

I drive a Saab 9-5t BioPower (i.e. ethanol/E85), but I like diesel/biodiesel too. We need to support new technologies to solve the energy problems facing society.

Green data center threat level: Not green

Valdis Filks

Technologies already exist to reduce power usage

We should prevent energy usage rather than increase it and then cool it; prevention is always better than cure. We should design and architect systems from scratch for lower power and least usage. These technologies exist today and are mature. We should not keep adding hotter and hotter chips and more disks which require more and more cooling; that is an upward spiral in usage. Let's avoid the power-usage problem in the first place: use a low-power solution at the start and the downstream problems disappear.

Instead, we buy power-hungry devices and then cool them, compounding the problem by redesigning and upgrading additional cooling systems.

We should buy power-efficient devices that need no increase in cooling, removing the problem at source; then there is no need to spend money on extra cooling.

Examples:

Replace all desktop PCs and laptops with thin clients. These exist from Sun and Citrix (both solutions run/support Windows apps). How many workers are really mobile? If you sit at a desk all day you do not need your own PC, and most PCs and laptops run at low CPU utilisation all day. SunRay, running Unix and Windows apps, is mature and proven and has existed for years. No extra staff are required, and security problems are solved as there is no local data, disk or CD drive. Replacing a thin client (2 kg) takes 10 minutes; replacing a PC or laptop (20 kg) takes 1-2 days. Thin clients have no local software, so no re-installs are required. There are many indirect savings.

Take all inactive data and archive it on tape. Tapes require no cooling or power when inactive; disks do. This fact cannot be ignored. We can de-duplicate and then move the de-duped data blocks to tape. Use VTLs or any type of virtualisation to send data to disk, and behind the disks use tape to store the data that is not used weekly or monthly: disk buffer pools for daily and weekly data, moved to tape after a month of inactivity (a minimal sketch of such an age-based policy is shown below). All tapes are encrypted, so there are no off-site or lost-data issues; tape encryption is mature and generally available.
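As a minimal sketch of the kind of age-based policy I mean (the paths and the 30-day threshold are made-up assumptions, not the interface of any particular HSM or VTL product):

# Minimal sketch of an age-based "move inactive data to tape" policy.
# The directory names and the 30-day threshold are illustrative
# assumptions; a real setup would use an HSM/VTL product, not a script.
import os
import shutil
import time

DISK_POOL = "/data/disk_pool"        # hypothetical fast disk tier
TAPE_STAGING = "/data/tape_staging"  # hypothetical staging area for the tape library
INACTIVE_DAYS = 30

def archive_inactive_files(now=None):
    now = now or time.time()
    cutoff = now - INACTIVE_DAYS * 24 * 3600
    for root, _dirs, files in os.walk(DISK_POOL):
        for name in files:
            path = os.path.join(root, name)
            st = os.stat(path)
            # st_atime = last access; untouched for a month -> candidate for tape
            if st.st_atime < cutoff:
                dest = os.path.join(TAPE_STAGING, os.path.relpath(path, DISK_POOL))
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.move(path, dest)
                print(f"staged for tape: {path}")

if __name__ == "__main__":
    archive_inactive_files()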

Virtualisation and hypervisors take extra CPU cycles and are not required to consolidate applications from many systems onto one system. For the last 20 years we have been running many apps on one shared server; any decent OS with CPU and memory sharing algorithms can run many apps, and most enterprise OS platforms have built-in virtualisation and resource (RAM, CPU) sharing software. Large (e.g. 16-CPU, multi-core) servers can consolidate and virtualise many small servers, and they have reliability technologies far exceeding the mainframe's mythical reliability. Large Unix servers and mainframes are the same thing, but with more application choice and simpler staffing on the Unix side.

You can normally replace all your web-tier, small print and email servers with multi-core servers. For example, approximately five small servers can be replaced by one CoolThreads server, with a 5x reduction in power and cooling requirements. No need to add extra cooling/chillers in your datacenter.

Summary:

From 500W desktop PCs go to 10W thin clients.

From disk arrays drawing 900W to many kW, go to 0W tape. Immense storage savings.

From many 700W web-tier servers go to 400W single-CPU multi-core throughput servers (e.g. Sun CoolThreads servers).

Just by using existing, proven technologies that require no extra training we can solve today's power-usage problems; a rough back-of-the-envelope calculation is sketched below.
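Roughly (the wattages are the approximate figures quoted above; the device counts and electricity price are purely illustrative assumptions):

# Back-of-the-envelope power saving from the swaps above.
# Wattages are the approximate figures quoted in the post; the device
# counts and electricity price are illustrative assumptions only.

HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10  # assumed electricity price, currency units per kWh

swaps = [
    # (description, old watts, new watts, number of swaps)
    ("desktop PC -> thin client",      500,     10, 1000),
    ("disk array -> tape (idle data)", 900,      0,   10),
    ("5 web servers -> 1 CoolThreads", 5 * 700, 400,   20),
]

total_saved_w = 0
for name, old_w, new_w, count in swaps:
    saved_w = (old_w - new_w) * count
    total_saved_w += saved_w
    print(f"{name:35s} saves {saved_w/1000:8.1f} kW")

kwh_per_year = total_saved_w / 1000 * HOURS_PER_YEAR
print(f"Total: {total_saved_w/1000:.1f} kW, "
      f"{kwh_per_year:,.0f} kWh/year, "
      f"~{kwh_per_year * PRICE_PER_KWH:,.0f} per year at {PRICE_PER_KWH}/kWh")

And every watt not drawn by the device is also a watt the cooling plant does not have to remove, so the real saving is higher still.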

You can get this from most of the major IT suppliers. Some technologies are unique to Sun, like CoolThreads servers and SunRay thin clients; I wish others could supply these too.

Biofuel backlash prompts Brussels back-pedal

Valdis Filks

Let's not kill a new industry; biofuel vs food is too simple.

Biofuel versus food shortage is an oversimplification. If we kill the industry now we risk losing a future source of fuel, and lots of research, efficiency improvements and alternative sources will be lost. Stopping biofuel production is an extreme reaction; we need compromise and some pragmatism. Some biological sources are better sources of biofuel than others, so do not criticise them all; see the summary:

US - uses corn: not good, as it is a food.

UK - uses sugar beet: good for fuel/ethanol, not a direct food.

Sweden - uses wood chips: good for ethanol/diesel production.

Brazil - uses sugar cane: good for ethanol, not a direct food.

So we cannot paint every producer with the same brush. I like the UK and Swedish sources of biofuel, then Brazil, and lastly the US source.

Lots of land is set aside; it is the EU CAP that distorts all agriculture in Europe. We have never been able to solve the Common Agricultural Policy problems in Europe.

As Jeremy explains above "A commercial bio-diesel plant just opened near family and so its caught my attention. They use the waste from the neighboring pork processing plant to produce bio-diesel and glycerin as the major byproduct which is used as a food supplement for the pigs. The community seems to love it due to the huge reduction in solid waste coming out of the pork plant. Ive been unable to locate any studies regarding the carbon cost of producing such a fuel or emissions when burned, though and was wondering if anyone has seen any study results for this."

We should use waste products and non foodstuffs to make fuels, look at Sweden and the UK. There may be other good examples that I have missed.

Red Hat scurries away from consumer desktop market

Valdis Filks

All our hopes with open source ended; an opportunity for others.

This is all I ever wanted from Linux: a desktop. I have used and been relatively happy with Mandrake, RH, SUSE and Sun's Linux (JDS). My hopes have been dashed; I will need to look at Ubuntu.

We have AIX, Solaris and HP-UX, which scale to more CPUs and threads and offer better file-system integrity, diagnostics, security, scheduling etc. than Linux servers. One of the above is even open source and has backward binary compatibility. Shame that the desktop Linux is dropped.

Some of the above Unices have lower licence fees than the Linux distros. Always worth checking out: it is one of those fallacies that Linux on the server gives you a competitive cost edge. It costs more, and you can't migrate to another Linux without great pain. How many commercial organisations make or create their own Linux system? I have never met one; every distro tries to make a proprietary distro with lock-in. Where did all the open Linux promises disappear to? I hope the other Linux distros do not abandon the desktop.

So if I install RH I still cannot easily migrate to SUSE or any other Linux distro. Can someone give me the cost of a migration from Linux distro A to Linux distro B? It takes me about two days for a Linux desktop migration; I would not like to guess the time to migrate one Linux server to another.

This is what needs to be fixed: make it easy to migrate from one Linux to another. The discussion is always about open source, but we want choice, and the Linux distributions tie you in and lock you in.

Is the Linux community forcing every Linux fan to go to Mac OS, which is a Unix just like Linux, except that Apple admits the lock-in? Well, this is a good chance for Ubuntu to take the lead and help us Linux desktop fans. NB I do have to use Mac and WinXP at home due to family pressure. Am I naive and did I expect too much from Linux? Sad, sad day. I hope the other distros do not follow.

Virtualization: nothing new on the Sun or the mainframe

Valdis Filks

Quality of egg basket carrying people - the real deal

I agree a lot with this article, and virtualisation really is old hat. The difference is that it is now available on commodity hardware.

With all systems there is a small difference, say 20%, which gives the server hardware leverage and a competitive edge. But that 20% of expensive extra reliability, from special hardware such as memory mirroring and dual internal crossbars, gives 80% of the value. A badly managed mainframe with unskilled operators has availability equivalent to a desktop PC. A high-end Unix server now has all the RAS features of a mainframe, e.g. RAM mirroring and instruction retry, but you need good people to manage those high-end Unix boxes. Mainframes and Unix servers have been virtualising and consolidating small systems for a long time, doing what VMware does now. Aren't the VMware developers old Unix/mainframe people? The equation is hardware + people skills + virtualisation software.

Also, as all old timers know, virtualisation has a cost, the overhead of the virtualisation layer. We should not forget this.

So it is the quality of the support staff that makes systems reliable. Outsourcers manage this by trying never to touch or change a system. The trick is to be able to change and upgrade systems and add new apps while maintaining availability. Datacenter-grade Unix servers allow hot swap and dynamic changes, which is ideal for a changing virtualisation environment, plus hardware to cope with failure of chips, memory, I/O etc. So there is very little risk here. I do not see this available yet on commodity x86 kit; these extra features cost money, and you get what you pay for.

So the bleeding edge is virtualisation on x86, but put all your print servers on one x64 box and you have a single point of failure, i.e. risk. Can't print any more? Save the trees, this is green IT. Virtualise with low-paid, unskilled staff on commodity hardware and you are treading a fine line: all eggs in one basket, with people not used to carrying eggs.

How long will this fashion continue? I reckon on another 18 months; virtualisation is already about 18 months into its cycle, and most IT fashions last about three years.

What is the next fashion, easy, black. Black is the new black.

What will the future be like, simple it will be different.

IBM smacks rivals with 5.0GHz Power6 beast

Valdis Filks

Will we have an iCooler next?

The SunMD is a standard design with water cooling built in at the factory; every SunMD is the same. No local plumbers need to change anything inside when a customer receives it: just plug it into the power and the external cooling pipes. When you buy a Sun Modular Datacenter (aka Blackbox), you do not have to change anything. Put a water-cooled server in a computer room and, as explained many times, you need to do lots of extra work and take on ongoing complexity.

As far as I know all the servers in a SunMD are air cooled; none are water cooled. But I am sure that if someone paid Sun enough money we would be able to connect a water-cooled Power6 to the pipes in the SunMD. I would advise against this; guess why: it adds complexity.

The SunMD keeps the water cooling outside the computer chassis/enclosure; the water-cooled Power6 server has the water cooling inside the chassis/enclosure.

My point is about the added complexity, which water causes if you have to put it into an existing datacenter.

Now if people want to make hot chips or more elegant designs, then we technologists have a challenge: produce a coolant that is safer to mix with electrical devices, and those nice little towers we put on CPUs can become a selling point; let's call it the iCooler. I remember that some of the IBM, Hitachi or Amdahl mainframes had those elegant circular tower heatsinks.

Well, it was pretty, but complex. Maybe many overclockers like to cool their PCs at home with this type of thing; modders always like new gadgets and spending time tweaking their systems. Commercial datacenters do not.

NB I built my latest PC with the criterion of least power usage. It is based on an AMD BE-2540 dual-core CPU; on dollars per performance per watt it was the most efficient. It may not be the fastest, but I was being Mr Sensible. No water anywhere near it.

In commercial datacenters I do not think we can overclock, mod and customise our servers with lights, water-coolant towers etc. But maybe the first person to do so could turn the datacenter into a work of art and a light show.

Me no iModder.

Valdis Filks

A good plumber is hard to find.

My issue with water cooling is the whole system: the server, the maintenance of the server, the redesign of the computer room, the outages required to do all of this, and so on. From a thermal-conductivity perspective water may remove more heat than air, but you need to get the water all the way to the server and plumb it all in. I like plumbing, but this is expensive and difficult.

How about not causing the problem in the first place, with cooler CPUs?

I think I mentioned and gave examples of coolants that are thermodynamically better but expensive to implement, e.g. the exotic cooling systems in nuclear reactors. Plain air is not good for cooling a nuclear reactor; I think I made that clear.

The cost of ripping up floors and installing pipes is more than the cost of not ripping up floors and installing pipes.

If I want to move an air-cooled server, I unplug the electrics, network and SAN, move it, then reconnect.

If I want to move a water-cooled server: call the plumber, book a weekend, possibly shut down the whole datacenter, move the server, lay new pipes, pressure-test the new pipes, reconnect the electrics. Do I need this complexity and these extra constraints?

Water is already used in the chillers on the periphery of many computer rooms, and you can already use that hot water to heat your office as a green by-product. Do we want a water-piping grid in addition to all the other infrastructure and cabling in a computer room? Every computer room is different and needs bespoke plumbing to bring water cooling to an individual server. For the sake of one hot Power6 server, am I going to re-design the whole datacenter?

If, in a couple of years' time, the whole industry has moved to cooler multi-threaded CPUs and servers, will your investment in water pipes all over the computer room have been a good thing? Or will the computer industry employ legions of plumbers to rip out the water pipes they installed 24 months earlier?

I may be totally wrong, and water cooling within servers may become more popular and economical; at the moment I find it difficult to believe.

Valdis Filks

Evian for the datacenter - not good

I am still open to being convinced that water cooling is better than air. It may be better, but not one post has explained the advantages yet.

Why do we make such hot chips? A hot chip implies lots of power. Then we use more electricity to drive a water chiller, and more again to drive an electric pump. If the chip design were not this hot in the first place we might not need all this extra cooling. This is the prevention-rather-than-cure argument, and it is why I say the design is flawed: it was made too hot to start with, and then we have to get the plumbers in to cool it down.

In these times of global warming we should be looking for ways to make cooler chips. Can a company install a water-cooled Power6 server, add new water cooling, do all the extra work and then say that all these extra resources are good for the planet? Alternatively, install a low-power, multi-threaded CPU/server which uses air cooling.

If people think that the Power6 is good for heating buildings, then we can equally use the existing air-cooling systems, and the heat they take out of the datacenter, to heat the buildings as well. Just redirect the cooling mechanism.

The new Power6 has a clock rate roughly twice that of the Power5, but Power6 applications may not run twice as fast: the increase in GHz does not match the increase in performance (a toy model below illustrates why). So why do we have such hot, high-clock-rate CPUs? Most of the industry is moving away from high-GHz, hot CPUs.
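As a toy model of why a doubled clock rarely doubles application speed (the stall fractions are illustrative assumptions, not Power5/Power6 measurements):

# Toy model: only the compute-bound fraction of run time scales with clock.
# The stall fractions are illustrative assumptions, not measured figures.

def speedup(clock_ratio, stall_fraction):
    """Overall speedup when 'stall_fraction' of run time (memory/I-O waits)
    does not improve with a faster clock (Amdahl-style)."""
    return 1.0 / (stall_fraction + (1.0 - stall_fraction) / clock_ratio)

for stall in (0.0, 0.2, 0.4, 0.6):
    print(f"2x clock, {stall:.0%} stalls -> {speedup(2.0, stall):.2f}x application speedup")
# 0% stalls -> 2.00x; 40% stalls -> only ~1.43x; 60% stalls -> ~1.25x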

The majority of CPUs from Intel, Sun, AMD and IBM's Cell team do not need water cooling. I am a fan of IBM's Cell chip, an excellent design; I am not a fan of water cooling. Strangely, Sun, Intel, AMD and IBM's Cell are all moving to more parallel, multi-threaded designs that run cool. What is the Power6 doing?

IBM for some reason made a variation of the Power6 which requires water cooling; there is no answer to that here.

Because we cool cars with water, does that make it good for CPUs? In cars we actually use oil for cooling as well. The next thing, already hinted at in this discussion, is that we will have several liquids in a computer to cool it. More components to me means more complexity, and with complexity come extra costs.

There are many comparisons here of a new IBM server to old Sun servers; please compare new IBM servers to new Sun servers. My post was about the environment and the implications of all the extra resources required to install water cooling in a datacenter. Obviously we need to look at CPU design and the hot-chip GHz issue, but my main interest was the water-cooling issues.

Computer history repeats itself, we may well see more water cooled systems. But I do not think that is a good idea for all the reasons already outlined.

I have been to the US more times than I can remember; half my family are US citizens. I am an alien (see previous comments in The Reg about me), born in a very democratic country and living in another very democratic country where free speech is very much respected. I have seen and worked in hundreds of datacenters and computer rooms. I thought we had seen the end of water cooling, just like ECL and bipolar chip design.

Valdis Filks

Water cooling adds complexity.

Agreed, water is used in chillers in the datacenter; in most of the computer rooms I have been in, the air-con units/chillers sit at the periphery of the room. What water cooling to individual servers does is put pipes and plumbing all over and under the floor, mixed in with the network, SAN and electrical power. Do we need an extra substance and its piping under the floor? Does it create more problems than it solves? I have seen many companies go through projects to simplify and reduce their underfloor and overhead wiring; some datacenters even have alarmed floor tiles. Why would you want to mix water pipes into this?

The point of this is to reduce complexity, less is more, simplicity is better than complexity.

All computer manufacturers should help give the computer industry a good reputation for technology leadership by reducing complexity. That is the point about water cooling: it does not make things simpler for customers. Water cooling makes computing more complex.

Do you want to say to your customer: here is a new server, and do you know a good plumber on 24-hour call-out, 7 days a week?

NB I do not write anonymously; I am open and stand behind my beliefs. I was born in a country, and live in a country, where free speech is respected. Do anonymous writers have something to hide?

Valdis Filks

Parallelism and water cooling.

Water cooling is a risk assessment: reduce the components involved; simplicity is always the better option.

For example you can have, in your infrastructure:

Design A = Power (electricity), Network (electricity), SAN (optics mainly), cooling (air)

Design B = Power (electricity), Network (electricity), SAN (optics) and cooling (air + water)

Design B has more components, plus a potentially catastrophic mixture of water and electricity. That is the problem: even if you use the waste heat to warm your building, your infrastructure has just become very complex and dangerous by adding water, and the safety regulations and suchlike have just increased your costs.

Parallelism, this is also a virtualisation play.

With a low-power, air-cooled server that has eight cores you can virtualise eight single-threaded apps: use the free software and the system to create eight domains. As some of these cores run more than one thread (e.g. two), you could even consolidate 16 servers onto one of these highly multithreaded servers, which are 2U in size and go by the name of CoolThreads. No water cooling required.

However, let's take any mainstream existing database, e.g. MySQL, Oracle, Postgres or DB2.

These already have many parallel processes (often more than eight), e.g. the DB writer, lock manager, transaction manager and so on. Put these on a multi-core/multi-thread server and you have a good match: no coding, changes or migration required. A very large majority of servers run OLTP workloads with a database engine as described above, so a very large segment of the server market is already ideal for parallel computing.

Long term, agreed, we need to write better parallel apps. But a lot of apps (OLTP) out there are parallel in nature already; a minimal sketch of that kind of task-level parallelism follows.
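A minimal sketch of the point that a pile of independent single-threaded jobs maps naturally onto a many-core box (the dummy workload and the job count of 64 are assumptions; nothing here is specific to any database or to CoolThreads hardware):

# Many independent single-threaded jobs spread across the cores/threads
# of one box. The dummy workload and job count are illustrative assumptions.
from concurrent.futures import ProcessPoolExecutor
import os

def single_threaded_job(n: int) -> int:
    """Stand-in for one independent job (a web request, a DB writer, ...)."""
    total = 0
    for i in range(200_000):
        total += i * n
    return total

if __name__ == "__main__":
    jobs = list(range(64))  # e.g. 64 jobs that used to live on separate small servers
    # One worker per hardware thread; the OS scheduler does the placement.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(single_threaded_job, jobs))
    print(f"ran {len(results)} independent jobs across {os.cpu_count()} hardware threads")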

Can't quite agree on the water cooling, but I am open to suggestions.

Valdis Filks

Water cooling

A couple of points raised in these comments, which I will try to answer:

1) Yes, we used water cooling last century in datacenters, but only because we did not have a choice. When air-cooled systems became available we had a choice, and we kicked the water-cooled computers out very quickly and replaced them with air-cooled ones.

2) Water cooling occasionally failed or leaked and water-cooled servers had to be switched off. Things got really ugly; I worked in very large datacenters and saw these problems. Someone mentions that we used water without too many people being electrocuted; to me, one such water-cooling disaster is one too many. Avoidance is a better policy.

3) Air-cooled servers could continue to run while water-cooled servers had to be switched off; when the safety people decided the water leaks were too much of a risk in a high-voltage datacenter, we had to switch off everything.

4) Yes, water may be a better coolant than air, but it adds complexity. In nuclear reactors, the gas coolant in a Magnox reactor may be better in some respects than the water in a PWR, but a Magnox plant is much more complex to maintain. In the same way, water is more complex to manage in a datacenter than air, and using oil is just another level of complexity.

5) Are we really proposing to justify buying a computer because it helps heat the building? When we decommission a water-cooled server, do we then need to buy extra boilers to heat the building? This is a plumbing nightmare; let's keep things simple. Again, water cooling adds complexity.

We are back at the starting point: do not make hot chips that require water cooling. Maybe I am too risk-averse, but I worked with big computers (mainframes); then Amdahl and Hitachi came along and made air-cooled machines, which sold like hot cakes because they did not carry all the extra baggage of water pipes, cooling, power, monitoring and maintenance. If you want to scare the hell out of a datacenter manager, tell him he has a water leak in the datacenter.

Prevention is better than cure, do not use water cooling.

I am not that old, but I strive for simpler computer architectures that avoid the mistakes of the past. Have we learnt nothing from our computing heritage?

Valdis Filks

Water cooling P6 adds extra costs and complexity.

Water cooling for CPUs is an admission of a design failure. The extra pipework and the electrical power to run the water cooling just add cost and complexity. Extra electricity is required to run all the water cooling, so there are no savings there; this needs to be taken into account and cannot be treated as a hidden infrastructure cost. Adding a water infrastructure to a computer room full of electrical wiring is also dangerous: computers and water do not mix well. Using water cooling for computers is a technological step backwards.

All described here: http://blogs.sun.com/ValdisFilks/entry/water_and_electricity_do_not

IBM gives mainframe another push

Valdis Filks

Mainframes fail and you have no competitive open bid process

I worked with mainframes; they fail just like any other large server. There is no unique or differentiating hardware or software in them any more. With the E10K, Sun overtook the mainframe in scalability about 10 years ago; a 64-way mainframe is a medium-sized Unix enterprise server, and many large Unix servers have over 100 cores. Amdahl and Hitachi built larger and often better mainframes than IBM; they saw that other companies would overtake the mainframe and got out of the business.

The mystique surrounding mainframes comes from the fact that you cannot find anyone to explain them except IBM, so not many people can refute IBM's claims.

MIPS = Meaningless Indicator Of Performance. It is not a useful measurement of performance or metric.

The term mainframe is misused enormously; any large Unix server is a mainframe.

If you run a mainframe you are totally locked into IBM: you cannot run an open bid to upgrade it, there is only one bidder (IBM), and you cannot go anywhere else for the software, since IBM supplies z/OS. Most TCO/ROI calculations will be done by IBM Global Services, who have more people than your company, so you will not have a chance, or the resources, to analyse their cost model or question every detail.

For any new application being installed on the mainframe, calculate the cost of exit: you have nowhere to go. With Unix mainframes you have several companies that will supply you a platform, so you can truly compare costs and ROI, and move to another supplier if you are not satisfied with your Unix server.

Mainframes are good, but so are many large Unix servers/mainframes.

Caveat Emptor, buyer beware.

Sun's Rock chip waves goodbye to 2008 ship date

Valdis Filks

Valdis from Mars, no but close.

This could be a first scoop for The Register: yes, I am an alien. But maybe I am not allowed to call myself that for politically correct reasons, as my alien status has been revoked since the country I live in joined the EU. I did live in a large EU financial metropolis and work in the middle of a large concentration of IT, where most of the rest of that country looked upon those city dwellers as aliens, and I believe it still does.

I also now live closer to the northern lights than most of the population of Europe, apparently where the "blond" aliens have integrated into the population. But I have no starship, just a SAAB that runs on alcohol (serious clue here). When I drink my car's fuel, I do believe I have my own starship.

Well, we have had some good chip discussions. A quick wrap-up for people from another planet: Itanium is a truck nowadays, not a sinking ship; Sun's Rock chip will be edible in the form of a pizza; and people still believe that power per thread is not important.

I love the computer industry.

Summary, get yourself a pizza, buy a new cooling device to support the old hot CPU designs and be careful of aliens.

Valdis Filks

Scope creep, TPC, Itanium, cores etc

Too many subjects to cover, but good questions raised.

Yes, OK, it was 10 mainframes and not 5, but you get the idea: underestimation. Have we underestimated multi-core? Intel is now following Sun with multi-core CPUs.

Is TPC a good measure of performance? That question will run and run. All CPUs wait at the same speed: put a Ferrari in a traffic jam and see how fast it can go.

How long can Itanium last before HP/Intel kill it? Will Itanium move to x86 with a virtualisation engine above it, overheads included? Itanium is a dangerous path to follow; I did not invent the Itanic slogan.

Sun now has the T2 with more floating-point units, so any comparison with the previous-generation T1 chip is outdated.

Future issues will be power and cooling. IBM and HP recommend more cooling; why not stop the power consumption of hot chips first? Install a Sun T5220 and you may not have to upgrade your cooling system. Use prevention rather than cure: Sun can stop you going down the hot-datacenter path, and that saves money. If you are not a drug/power addict, you may not need to go into rehab/extra cooling. Get the idea: do not make the high-power mistake in the first place and you may never have a cooling problem. Many web servers can be consolidated onto the T2, see:

http://blogs.sun.com/ValdisFilks/entry/another_win_for_ecological_computing

http://blogs.sun.com/ValdisFilks/entry/analysts_get_out_of_the

Valdis Filks

Sun, IBM, SMP and innovation

I am afraid that innovation does not come from, and cannot be sustained or maintained by, the large companies alone. They all have innovation: IBM's labs are innovative, and Intel can make smaller chips, but reducing a chip's die size is not architecture or industry leadership. Sun is very innovative and its products prove it; go buy one.

As far as SMP goes, yes, MVS (or z/OS as it is now called) has a long heritage, but Solaris has always been designed to multi-thread. Other Unixes (including Linux, which is a Unix) may not multitask or thread as well as Solaris. Maybe this is why competitors say that multithreading with multi-cores is not the way. Ask why competitors are scared and talk down multithreaded systems.

If you cannot do it, then criticise those that can.

Competitors cannot create and design CPUs or operating systems like Sun can, so they criticise Sun.

HP and IBM cannot design a well-cooled, fast and compact 4-CPU Intel system like Sun's X4450.

If you want to consolidate many single-threaded apps spread over several servers, then Solaris on T5000-series servers is the way to go. Many customers are consolidating hundreds of web servers onto tens of Sun T5000 servers.

Someone from IBM once said that the world does not need more than five mainframes. If we now say a CPU does not need more than 16 cores, will that statement be valid in 10 years? Will it survive the test of time?

Read my blog post from a couple of months ago about Intel and Sun's role reversal: Sun takes CPU design leadership and Intel moves to software.

http://blogs.sun.com/ValdisFilks/entry/sics_multicore_day_feed_the

Valdis Filks

Beliefs and facts, buy a low power, multi-thread CPU from Sun.

Points raised, and some facts:

Assumption: only IBM and Intel can make and design chips/CPUs.

Fact: Sun's Niagara (N1) chip has sold better than Itanium, and the new N2 is ramping up strongly. If Sun can make and design the N1 and N2, it follows that Sun makes and designs CPUs, and the design of Sun's chips is industry-leading. Go buy one from Sun if you believe they cannot make one.

Assumption: Intel superior architecture

Fact: every system has its place and a different purpose. Intel does not have Sun's multi-core design or on-CPU encryption and Ethernet; Sun does not have Intel's smaller die size.

Assumption: Reliability

Fact/anecdote: Intel servers run hotter than Sun's, and anything that runs hot is, from a physics perspective, under greater strain; anything under greater strain and heat will have a shorter lifespan. Look at the X4450 from Sun: neither IBM nor HP can make an equivalent Intel server this small.

Customers who grew up with Intel came from the low end upwards. Sun came from low end to high end and does both. You get what you pay for.

Sun is leading the market.

IBM and Sun started with multi-core chips.

But Sun put more cores on a chip than IBM, and IBM has not been able to compete.

Sun made the chip use less power. Intel followed.

Sun has encryption and Ethernet on the chip. Intel and IBM either cannot do this or have not done it.

Summary, let everyone pick on Sun, but look at the facts.

City to Intel: Kick the rest of the tech industry into line

Valdis Filks

Sun already ahead with T5000

Sun T5000 servers already have Ethernet and encryption on the CPU, and the previous-generation UltraSPARC T1 already had multi-core, way ahead of competitors; Intel is following AMD and Sun here. Also, the power usage of Sun servers is way below Intel's. Intel is in catch-up mode, so it cannot kick the rest of the industry into line.

Sun kicked the industry into multi-core and low power. Ever heard of a SunRay thin client? Thinner than any other thin client.

Hospital's brand new '£1m' server room goes up in smoke

Valdis Filks

It is an accident; we are all experts now.

Accidents happen, and this was an accident. We should stop bashing the organisations involved; the NHS is there to help us and we should help them. I think we all owe the NHS a lot; I have had two broken arms fixed by them.

This is an opportunity to start thinking about data and cooling.

We can be smarter in future and store the PACS data on low-temperature storage, e.g. tape. We need a smarter storage hierarchy where data sits on disk for x days and then goes to tape for months or years. There is no point paying to cool spinning disk that no one uses.

I know this is a shameless plug, but we have a spanner/tool that fits the job; people may just not know about it. SAM makes the media transparent, so your applications think all the data is on disk (a toy sketch of this transparent behaviour follows after the links below). You can take a copy and send it to tape at another site, which does not need the cooling that disk does. Disk is good for short-term storage and active data, tape is good for inactive data, and SAM glues it all together so you never notice.

The solution is here: http://www.sun.com/storagetek/management_software/data_management/sam/index.xml

An example is here:

http://www.serverwatch.com/hreviews/article.php/3696256
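And as a toy illustration of the "applications think all the data is on disk" idea (the real SAM-FS does the recall transparently inside the filesystem; the paths and the helper function here are purely hypothetical):

# Toy illustration of transparent recall: if a file is not on the disk
# tier, stage it back from the archive tier before handing it to the app.
# In real SAM-FS this happens inside the filesystem; these paths and this
# helper are hypothetical.
import os
import shutil

DISK_TIER = "/data/disk_pool"        # hypothetical online (disk) tier
ARCHIVE_TIER = "/data/tape_staging"  # hypothetical offline (tape) tier

def open_transparently(relpath, mode="rb"):
    """Open a file as if everything lived on disk, recalling it if needed."""
    disk_path = os.path.join(DISK_TIER, relpath)
    if not os.path.exists(disk_path):
        archive_path = os.path.join(ARCHIVE_TIER, relpath)
        os.makedirs(os.path.dirname(disk_path), exist_ok=True)
        shutil.copy2(archive_path, disk_path)   # "recall" from the archive tier
    return open(disk_path, mode)

# Usage: the application never needs to know which tier the file was on.
# with open_transparently("scans/patient42.dcm") as f:
#     data = f.read()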

Valdis Filks

It probably is not the first time or the last

I just hope this does not happen any more; some of these medical records can help save people's lives. We need to move away from the hot servers with high-GHz CPUs and the larger and larger disk stores that have become more prevalent in the last five years.

These power and cooling requirements are not sustainable. We could be heading for more than the meltdown seen at Jimmy's.

See: http://blogs.sun.com/ValdisFilks/entry/fighting_fire_with_fire

We need to be responsible IT people and build systems that are balanced from a power and cooling perspective.

Data center efficiency - the good, the bad and the way too hot

Valdis Filks

Simple: use less power. Prevention, not cure.

Why do we make this so complicated, spending money on ever more energy-hungry cooling systems? Fix it at source: use lower-power servers that are as fast as or faster than the existing ones. Also, delete data on disks, or archive the stuff you do not access daily onto tape. All discussed here: http://blogs.sun.com/ValdisFilks/category/Environment

IBM's AIX 6 drops 'L,' adds 'S'

Valdis Filks

Does AIX file encryption stop de-duplication from working?

If AIX encrypts the data in the filesystem, at source, will a de-duplication device still be able to detect duplicate blocks of data and reduce the backup window? Or will de-duplication stop working? Will the de-duplication device have to decrypt the data before backing it up, then encrypt it again when storing the backup copy? What are the overhead and the security exposure of that? (A small sketch below illustrates the problem.)

More details here: http://blogs.sun.com/ValdisFilks/entry/the_dupe_in_de_duplication
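As a small sketch of why encrypting at source can defeat block-level de-duplication: identical plaintext blocks fingerprint identically, but once each copy is encrypted with its own random IV the ciphertext blocks no longer match (the XOR keystream below is a deliberately crude stand-in, not real cryptography):

# Why encrypt-at-source can defeat de-duplication: identical plaintext
# blocks hash to the same fingerprint, but per-file random IVs make the
# ciphertext blocks all different. The XOR keystream here is a crude
# stand-in for a real cipher, purely to illustrate the effect.
import hashlib
import os

def fingerprint(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()[:16]

def toy_encrypt(block: bytes, iv: bytes) -> bytes:
    keystream = hashlib.sha256(iv).digest() * (len(block) // 32 + 1)
    return bytes(b ^ k for b, k in zip(block, keystream))

# Three copies of the same 4 KB block, as a dedup appliance would see them.
plain_block = b"A" * 4096
plain_copies = [plain_block] * 3
print("plaintext fingerprints:",
      {fingerprint(b) for b in plain_copies})        # one unique fingerprint

encrypted_copies = [toy_encrypt(plain_block, os.urandom(16)) for _ in range(3)]
print("ciphertext fingerprints:",
      {fingerprint(b) for b in encrypted_copies})    # three unique fingerprints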

Excellent article; however, the proctologists section was painful, and I am not sure I want to delve further into that area.

Carmakers tout green motors in Geneva

Valdis Filks

Use cellulose to make ethanol, not foodstuffs.

I drive an ethanol car, a SAAB 2.0t BioPower; the new 2.3t is even more efficient. In the medium to long term we should stop using foodstuffs to make fuel and use cellulose instead. I live in the vodka belt (Norway, Sweden, Finland, the Baltics and Russia), where most people know how to make home-brew ethanol from food by-products (e.g. potato skins).

I have just discussed all these ethanol issues on my blog: http://blogs.sun.com/ValdisFilks/entry/the_truth_is_rarely_pure