* Posts by Steven Jones

1526 publicly visible posts • joined 21 May 2007

World o' data centers burns less juice than expected

Steven Jones

Mistaken assumptions...

Quite simply he got it wrong by not taking into account fairly obvious commercial pressures to reduce operating costs. It's always a mistake to perform an extrapolation without realising that suppliers adapt. We can look at a close parallel in the use of fuel for transport - there are commercial pressures on manufacturers to reduce fuel consumption and hence, rather than going up, total fuel consumption in the UK has declined.

In the case of IT equipment there is still a lot of scope to reduce the total power requirements through better basic technology and improved management.

HCL discloses 'email deletion' requests from News International

Steven Jones

Data Retention Policy

It would be interesting to know if NI or any of its subsidiaries had a formalised data retention policy and if any actions were outside this. Given the importance of communication within a newspaper, it would be surprising if they did not have a fairly long-term archiving policy. However, many corporates will have much more aggressive deletion policies.

Four illegal ways to sort out the Euro finance crisis

Steven Jones

@Mark65

I understand the point, but it's not quite true that current bond holders won't suffer a loss. A loss is part of the new deal, albeit nothing like as bad as it could have been (and might yet be). That France is notably keener on assisting Greece than is Germany might be something to do with the former's banks being particularly exposed to Greek sovereign debt (albeit German banks are the second largest creditors).

In any case, the notion that all state-issued bonds in the Eurozone are equally sound has now been disproved, the importance of that being that it should be possible to contain risks in the future. However, we are where we are, so there's an awful lot of reckless lending to be unwound. For instance, the UK banking sector is particularly exposed to Irish sovereign debt.

Steven Jones

All about faulty risk assessment

I'm not sure I wholly agree with this. The US has a system that allows individual states to go broke and default within a single currency area without upsetting the whole. What happened here is that institutions (private and state) have been happily buying Greek bonds on the basis that the Euro area would not allow a default. Pretty well everybody who has looked at the Greek situation will realise that some form of default is inevitable, however it is dressed up. The Greek national debt has simply reached unsustainable levels compared to its GDP. When the bond markets finally cottoned on to the fact that a default is most certainly possible, it triggered the current crisis.

If those responsible for buying Greek bonds in the first place had properly factored in the risks involved several years ago, then the Greek deficit would not have been allowed to grow to these levels, simply because the bond markets would have acted as a control as the costs of state borrowing would have increased. That (mostly) Eurozone-based institutions thought they could treat Greek bonds in much the same way as German government ones was a serious mistake and is at the heart of this mess. More generally, it was loose credit control that was at the heart of the financial crisis which we are currently in. That institutions thought they could immunise themselves against these risks using increasingly complex financial instruments (like the dreaded credit default swaps) served to produce a horrible interconnected, yet opaque finance system almost designed to produce the current crisis.

What is surely required is a much less interconnected system where the risks are more easily assessed and contained.

As for the idea that several percent of the northern states' GDP can be transferred to subsidise the south, that is surely never going to be sustainable. If we look at this sort of transfer within the UK, it has essentially just made some parts of the country dependent on state subsidy. That's simply not viable for whole countries.

iPhone plunges 13,500 ft from skydiver's pocket - and lives

Steven Jones

Mixing up your energies

@atippey

The formula you give (half mass times the square of the velocity) is that for kinetic energy. The formula for potential energy in this case is m x g x h where m is the mass, g the acceleration due to gravity and h the height above the reference level you are measuring the potential energy (ie where this blessed phone would land).
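
As a rough illustration of the difference, here is the potential-energy sum sketched in Python. The ~137g phone mass is my own assumption, not a figure from the article.

# Potential energy of a phone dropped from 13,500 ft, using PE = m * g * h.
# The mass is an assumed value; in practice the phone quickly reaches terminal
# velocity, so drag dissipates most of this before impact.
m = 0.137            # kg, assumed mass of the phone
g = 9.81             # m/s^2, acceleration due to gravity
h = 13_500 * 0.3048  # height above the landing point, in metres

potential_energy = m * g * h
print(f"Potential energy at release: {potential_energy:.0f} J")   # ~5,500 J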

Leica X1 APS-C compact camera

Steven Jones

Badge envy...

If you go to DPReview and do a side-by-side comparison of the Fujifilm X100 with the Leica X1 (which you can do from the former's test), it's difficult to support the conclusion that it "easily delivers the best image quality in its category". At high ISO the X100 is surely a bit better, and both cameras are rated highly for lens quality (and the Fujifilm is a full stop faster).

Of course the X100 is physically a bit larger, but then it has a built-in viewfinder and something of a handgrip (both cost-extras on the Leica). With both cameras there appear to be quirks, and neither appears to be up with the best on AF or speed of use. Both are also expensive, albeit the Leica hugely so.

Of course there will always be those sold by the badge on a camera, and I suspect that is far and away the most important issue when it comes to premium-priced products like the Leica. Until I see some direct side-by-side objective evidence under comparable conditions, I'm inclined to think the Leica does not easily deliver the best images - if anything it's subtly the other way.

Before 'the cloud' was cool: Virtualising the un-virtualisable

Steven Jones

One thing that can't be virtualised

There is one thing that can't be virtualised, and that is time - or at least it can't be virtualised where an OS has to interact with the real world. This can have some unfortunate side effects in terms of performance, clock slip and so on. For instance, any OS using "wall clock" time for things like timeouts, task switching and the like can produce some undesirable behaviour on a heavily stressed machine. This is especially true when the hypervisor is able to page part of the guest environment. This causes erratic and very lengthy (by CPU standards) lumps of time to appear to be used during execution if "wall clock" time is used.
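
A minimal way to see the distinction (my own sketch, not from the original comment) is to compare wall-clock time with the CPU time a process actually consumed; on an overcommitted hypervisor the first can be many times the second, which is exactly what upsets timeouts and scheduling decisions based on wall-clock time.

import time

def do_some_work(n=100_000):
    # Stand-in workload; any CPU-bound loop will do.
    return sum(i * i for i in range(n))

wall_start = time.time()           # "wall clock" time
cpu_start = time.process_time()    # CPU time consumed by this process

do_some_work()

wall_elapsed = time.time() - wall_start
cpu_elapsed = time.process_time() - cpu_start

# On a lightly loaded host the two figures are close; if the guest is
# de-scheduled or paged by the hypervisor mid-run, only the wall-clock
# figure balloons.
print(f"wall: {wall_elapsed:.4f}s  cpu: {cpu_elapsed:.4f}s")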

From my experience, all OSs which are expected to run under hypervisors eventually have to be modified in some way to be "hypervisor aware" in order to iron out these wrinkles. Many years ago I worked on an OLTP operating system that ran under VM - in order to fix some timing issues it was necessary to modify some core timing functions in the guest OS to avoid using wall-clock time and get execution time information from the hypervisor.

You can get away with this stuff on lightly loaded environments, but not on heavily committed ones.

Intel 320 SSD bug causes forum despair

Steven Jones

Nightmare?

I've been using a 256GB Crucial drive since September 2009 and, compared to the hard drive setup I had before, it is an absolute dream; far and away the best upgrade I've ever performed.

As for write cycle limitations, then it's not going to be an issue that affects typical PC users. In any case, hard drives are prone to sudden and unpredictable failures (as I've found to my cost).

I should add that I use the SSD for system and other areas with high random I/O demands. For bulk data I use HDDs.

So one thing I can say for sure, my experience of an SSD is anything but a nightmare. It has transformed the usability of my PC.

ESA unveils billion pixel camera that will map the Milky Way

Steven Jones

In a word no...

The minimum size of a photosensor element in a CCD is effectively dictated by the optics and the nature of visible light, not the ability to fabricate smaller elements. Indeed a 40nm element would be something like an order of magnitude smaller than what any visible-light optical instrument could theoretically resolve. Indeed for many types of optical instrument, the practical resolving power is significantly worse than that. Whilst there is some value in the sensor "out-resolving" the optics, there is a law of diminishing returns and there are other issues. One of these is the ability of the photosensors to hold enough excited electrons to provide a decent dynamic range. That ability is directly related to the photosensor size. Then there is the requirement to be able to read all these elements. A single billion-cell CCD would take a long time to read out and, due to the "bucket-brigade" nature of a CCD, it would be likely to introduce more errors and noise. A CCD also has to allow for space around the photocells for insulation and for circuitry. That reduces the surface area actually available for sensing light, which is very bad news.

For this sort of work, the optimal photosensor size is probably of the order of several microns, or a good two orders of magnitude larger than that used for producing processor chips.
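
As a back-of-envelope check (the wavelength and f-number below are my own illustrative assumptions), the diffraction-limited spot size is roughly 1.22 x wavelength x f-number, which already comes to a couple of microns for visible light:

# Diffraction spot versus photosite size, rough figures only.
wavelength = 550e-9   # m, green light
f_number = 4          # assumed f/4 optics

spot_size = 1.22 * wavelength * f_number
print(f"diffraction spot: ~{spot_size * 1e6:.1f} microns")        # ~2.7 microns
print(f"ratio to a 40nm photosite: ~{spot_size / 40e-9:.0f}x")    # ~67x
# Photosites of a few microns are therefore already well matched to the optics;
# shrinking them further mostly sacrifices well capacity and dynamic range.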

Do we really want 100Gig Ethernet?

Steven Jones

Slightly misleading...

If you compare the sequential throughput of a modern 1TB drive with that of a 10MB one, it has only gone up about 300-fold, or rather less than Ethernet throughput has in the same period. The reason for this is quite simple - disk capacity goes up as the square of linear bit density (so capacity has gone up 100,000-fold) whilst sequential throughput goes up only linearly. In fact it's misleading to claim that Ethernet is only 1,000-fold faster - that 10Mb used to be delivered in a single collision domain (and if there were many hosts it was impossible to get anything near 10Mb). A modern LAN is implemented with non-blocking switches and may contain many hundreds or thousands of ports. Indeed there are enterprise switches rated at several terabits - just not all down the same wire at the same time. So (total) network capacity has gone up at a much higher rate than the data rate of a single Ethernet interface.
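
The square-law point is easy to check (my own sketch, using the round-number capacities quoted above):

import math

# Capacity scales with areal density (linear density squared); sequential
# throughput scales only with linear density, i.e. bits passing the head
# per revolution, assuming broadly similar spindle speeds.
capacity_growth = 1_000_000_000_000 / 10_000_000     # 1TB vs 10MB ~= 100,000x
linear_density_growth = math.sqrt(capacity_growth)   # ~316x

print(f"capacity up ~{capacity_growth:,.0f}x")
print(f"expected sequential throughput up ~{linear_density_growth:.0f}x")
# ~316x sits right next to the ~300-fold throughput improvement quoted above.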

Budget airlines warned over 'hidden' debit card charges

Steven Jones

Not exactly illegal

It was never "illegal" to make a charge for credit card payments. However, the credit card companies used to have a clause in their contracts with retailers which forbade them from imposing such surcharges. It was these contract clauses which were, in time, deemed to be illegal in the formal sense of the word (quite possibly because of EU or domestic competition rules). This has been the position for many years.

I can see some justification for not making debit card payers (a fixed 20p charge from the card operator) and cash payers subsidise credit card sales (usually about 2% of the value), but any surcharge should not exceed the actual costs.
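
To put rough numbers on that (the fare below is my own example figure, using the card costs quoted above):

ticket_price = 100.00              # assumed fare in pounds

debit_card_cost = 0.20             # flat fee per transaction
credit_card_cost = 0.02 * ticket_price

print(f"debit card cost:  £{debit_card_cost:.2f}")
print(f"credit card cost: £{credit_card_cost:.2f}")
# A flat surcharge well above either figure is effectively a hidden price rise.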

Note that credit card users do gain some benefits - their payments are essentially insured against the failure of the supplier, not to mention a 1-2 month credit period.

Steven Jones

Not the only ones...

Could we add ticket booking agencies to the list of those adding unwarranted charges? They also do it more than once by adding in a "booking fee" and a "transaction fee" which can easily add 20% to the price of a ticket...

Cloud 'will spur server sales'

Steven Jones

Spur server sales?

I'm not sure I understand this - surely cloud computing is meant to increase the efficient use of computing resources through increased virtualisation, dynamic load redistribution, shared redundancy, the common use of shared resources, thin provisioning and all those other good things that we are being told of. No doubt there will have to be expenditure on new resources to realise this dream, but if it doesn't result in less hardware being purchased in the long run, then something has surely failed.

Of course the suspicion that this is yet another over-sold concept by the IT industry selling impossible dreams to gullible senior management cannot possibly be true...

Oracle buying Ellison-backed Pillar Data

Steven Jones

Conflict of interest?

Just a very slight smell of a conflict of interest here, hence the careful wording about the valuation.

NB: now waiting for the Oracle DB to be optimised to work with Pillar storage using closed APIs. Look out EMC, HDS, NetApp...

Renault pledges Fluence ZE will be UK's cheapest e-car

Steven Jones

Leased batteries

The article fails to mention that the purchase price does not include the batteries. Those have to be leased at £75 per month for three years or 60,000 miles, whichever comes first. So that's £2,700 for three years - not an unreasonable cost, but if the car is used as a runabout on short journeys it would cost considerably more than the electricity used on a per-mile basis. At an average of 12,000 miles per year, the battery cost is about 7.5p per mile.
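
Checking the per-mile figure (same numbers as above):

monthly_lease = 75             # pounds per month
months = 36                    # three years
annual_mileage = 12_000        # miles per year

total_cost = monthly_lease * months                     # £2,700 over three years
cost_per_mile = (monthly_lease * 12) / annual_mileage   # pounds per mile

print(f"three-year lease cost: £{total_cost}")
print(f"battery cost per mile: {cost_per_mile * 100:.1f}p")   # 7.5p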

Oracle Solaris 11 to abandon elderly servers

Steven Jones

A bigger problem is not supporting old OS versions on new hardware...

The bigger problem for many running legacy apps is not the lack of support for Solaris 11 on old SPARC servers, but the reverse: the inability of new servers to run old versions of Solaris. Of course there are Solaris 8 & 9 containers available, but even those won't run everything.

As anybody involved in running legacy apps can tell you, swapping out the hardware is the easy bit. The hard part is software migration. Once you have a legacy app that has been running for many years, often based on third-party software which is no longer under active support or development, then you are either into serious development and testing expenditure (which business departments are reluctant to support) or eking out the life of the existing environment.

Just about the only supplier who really does provide fully backward-compatible environments, such that new OS versions don't cause problems, is IBM on their mainframes. Unfortunately doing so comes at a considerable cost - not only financially, but through the persistence of some rather outdated features of the operating system.

600 tonne asteroid in low pass above Falkland Islands - TONIGHT

Steven Jones

Not an asteroid

According to the Near Earth Object classification system, this thing isn't big enough to be classified as an asteroid, which would need to be greater than 50 metres in diameter. A 5-20m object would be a meteoroid. Of course that doesn't make such a good headline.

Microsoft loses Supreme patent fight over Word

Steven Jones

The system will not change any time soon...

The US is not going to substantively change its law regarding the patenting of software concepts until such time as it's perceived to threaten its financial interests. Among other things, the legal profession are hardly likely to support the killing of a system that regularly provides them with a lucrative income. Given the number of lawyers in US politics, they aren't likely to kill this particular golden goose.

Some might argue that this doesn't matter too much if the rest of the world doesn't allow patenting of software (with a few exceptions). However, given the domination of the software world by major US corporations, and the requirement to operate in the world's largest market, I rather suspect the rest of us are going to have to live with the stifling effects of this system on competition and innovation. It also has the effect of raising entry barriers to smaller companies introducing new products, as they are very likely to fall foul of one of these widely-drawn patents owned by the big players. The only small companies that do well out of this are, of course, the patent trolls, whose business models depend on monetising their IPRs and not actually developing products.

All this will only change when one of the really big countries, like China or India, decides that it doesn't need to play along and can service its own internal and selected export markets not covered by this patent system.

Sinking Sun scuppers Oracle server figures

Steven Jones

And the future of Sparc is?

Oh dear, and so predictable. Might we expect that Oracle will now decide that SPARC, along with Itanium will not be supported on new versions of their database and middleware software...

Well, maybe not, but it rather goes to show that if Oracle were seeking to gain market share for SPARC from HP-UX Itanium machines by their announcements, then all they were likely to do was to accelerate the migration of customers to x86/x64 platforms. Since SPARC servers (especially on the M series) are now very much premium-priced products it's not going to look like an attractive alternative. Of course it's a real pain to move large, enterprise databases from one hardware architecture to another, but if it has to be done, logic says go for the one processor architecture which is pretty well guaranteed to be around for whatever counts as the long term.

In the meantime expect Oracle to milk the SPARC market as a legacy one, much in the same way that IBM do with mainframes. There will always be a significant number of customers that can't or won't migrate for cost or risk reasons.

Win7 machines harder hit by infection as VXers change tactics

Steven Jones
Headmaster

"more immune"

You can't be "more" immune. Immune is an absolute; you are either immune or you are not. Resistant would be a more appropriate word to use.

Sky in surprise duct-and-pole-sharing trial with BT

Steven Jones

O2 (or rather Cellnet) did not exist in 1984

The O2 network (or Cellnet as it was then) did not exist at the time BT was privatised and the first tranche of shares was sold. It was created as a partnership in 1985 with Securicor owning 40% (although the latter was essentially a sleeping partner). So there isn't really any sense in which the O2 network was built by the state and bought by shareholders. The state had no role in the funding of the Cellnet network.

Of course there's an argument that the bandwidth was "gifted" to Cellnet, but that also applied to Vodafone too.

Of course the other thing that has to be taken into account in the capitalisation is the rights issue in 2001, when almost £6bn was raised from shareholders (probably the equivalent of £8bn or more now).

Steven Jones

@Tom 28

A while ago I worked out how much BT shareholders paid for the company, adjusted for inflation, across the three tranches. Note that the price per share went up with each tranche - £1.30 for the first, £4.00 for the last.

Year    Raised (£m)    RPI index    2010 £m
1984    3,916          2.4          9,397
1991    5,000          1.6          8,000
1993    5,000          1.5          7,500

Total (in 2010 £m)                  24,897

Note, index is RPI relative to 2010 pounds.

Current BT Market cap is around £24bn
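
The totals above can be reproduced directly (same figures as the table):

# Each tranche's proceeds multiplied by the quoted RPI factor to 2010 pounds.
tranches = {             # year: (proceeds in £m, RPI factor to 2010)
    1984: (3_916, 2.4),
    1991: (5_000, 1.6),
    1993: (5_000, 1.5),
}

total_2010 = sum(raised * factor for raised, factor in tranches.values())
print(f"total in 2010 pounds: £{total_2010:,.0f}m")   # ~£24,900m, i.e. ~£25bn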

Steven Jones

Bought not "given"

The BT network was bought by the shareholders. Adjusted for inflation the government received £25bn for the company in the three tranches of share sales. Those shareholders had a right to expect the rights to the assets they bought to be respected.

ARM jingling with cash as its chips get everywhere

Steven Jones

In perspective...

Much as I admire ARM (ever since I had an Acorn Archimedes), it has to be said that a £50m operating profit on a £185m turnover doesn't make for an industrial giant. Unfortunately it's not the sort of scale of business that's going to pull the UK out of the economic doldrums. Well, unless we can come up with a few thousand more (Germany is very good at this sort of specialist medium/large company with a product lead in its sector).

However, for ARM itself, the prospects must look good. With such a low cost base, and the ability to charge such low royalties, they have a huge commercial advantage. The entire cost base of ARM will be a tiny fraction of what Intel would have to spend to develop this sort of market.

The market cap of ARM is only about 1.5% of that of Apple for instance, and a small fraction of the latter's cash pile.

HP ProLiant power supplies 'may die when dormant'

Steven Jones

Rehearsals

Whilst a full scale failover is to be applauded (but try getting approval for that on 24 hour systems), the idea of just powering servers up once a year is none too clever. As a confidence measure it would be extremely sensible to test them at least monthly. Even if it's not possible to test the full app, it's still possible to test basic hardware operation, connectivity and the like.

Apple seals $66bn in Jobsian wallet

Steven Jones

Growth Stocks

In two words: capital growth. Investors buy shares in companies such as Apple on the basis that they are growth stocks. The principle is that the money freed up by not paying dividends can be invested to grow faster. It was an approach taken by many US technology stocks. Of course, how this fits with a company sitting on a huge cash pile is another question. Should Apple be unable to maintain the momentum of its growth and be unable to find anything useful to do with the money, then the stock price will stall or fall and there will be a clamour to return these funds.

It's storing up a problem for the future - it's difficult to see how Apple can continue to grow at the same rate and many investors will count on getting out before any share price fall. However, there are always more losers than gainers.

Fujitsu £2bn broadband project throttled at both ends

Steven Jones

Common infrastructure

I don't think there's an issue as such about opening up the infrastructure - it's just that it has to be done at a level which is a realistic representation of its value. The assets belong to shareholders, and they've paid for them - however it was originally put in place. As you point out, if this is done at the wrong level all it does is disincentivise infrastructure investment.

I wouldn't support monopoly provision, especially state monopoly provision. The old Post Office telecommunications organisation was monumentally inefficient and, at the time of privatisation, the vast majority of local telephone exchanges were Strowger based. Government ownership does not equal lots of investment - the exchequer has a habit of siphoning off funds.

As it is, the government does quite well out of telecoms taxation - the (for the mobile industry) disastrous 3G auctions raised about £24bn (from memory).

Steven Jones

They are not Local Distribution Networks

Running fibre down telephone lines or along the grid is fine for trunk networks (and, indeed it's done). However, it's no use whatsoever in distribution networks where you need poles or ducts. There have been schemes of various levels of practicality to use sewage pipes, gas pipes and the like, but none of those have come to much.

Steven Jones

Gifted?

It's interesting that the Virgin spokesman talks of the government "gifting" BT the infrastructure. Nope, that infrastructure was sold to the shareholders of BT when it was privatised. Indeed BT was sold at a higher value, in real terms, than its current capitalisation. Add up the three tranches of shares sold and it came to £13.9bn. Adjust for inflation and it's the equivalent of £25bn, versus BT's current market cap of £24bn.

Of course the government also unloaded a big pension liability onto the shareholders, as nobody at the time much appreciated how large this would become given the annoying habit of the populace of living longer. Just look at the Royal Mail pension deficit to see what the government could have been lumbered with.

Steven Jones

Aluminium

BT have never used aluminium - that was the Post Office...

So, what's the best sci-fi film never made?

Steven Jones

Alien

It has to be Alien - although some would say it's not truly sci-fi but horror-suspense.

Nissan Leaf electric car

Steven Jones

Not oil....

"the power they use is still probably coming from oil or gas in the first place". The electricity will only come from oil if you live in Saudi or the like. Far, far too expensive - London got a nice big venue for modern arts at the old Bankside generating station largely because generating electricity from oil became increasingly uneconomic during the late 1970s.

The primary fossil fuels used for electricity generation are gas and coal. Gas isn't too bad (from a CO2 and pollution perspective), but coal is fairly dreadful - not just CO2, but other pollutants, not to mention putting quite a lot of radiation into the atmosphere.

Anyway, the general point holds true that if electric cars had to pay the same duty/VAT for their power as do IC vehicles (at least in the UK) it would put the per-mile costs up from about 2p to about 6p (assuming optimal range and off-peak electricity).
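
A rough, hedged reconstruction of the 2p to 6p figure (the consumption and tariff numbers below are my own assumptions, not figures from the review):

consumption = 0.25        # kWh per mile, assumed for a small EV in good conditions
off_peak_tariff = 0.08    # pounds per kWh, assumed off-peak rate

untaxed_cost = consumption * off_peak_tariff      # ~2p per mile
# Duty plus VAT is roughly two-thirds of the UK petrol pump price, so taxing
# electricity "for the road" at a comparable rate roughly triples the cost.
taxed_cost = untaxed_cost / (1 - 2 / 3)

print(f"untaxed: {untaxed_cost * 100:.0f}p/mile, "
      f"with fuel-style tax: {taxed_cost * 100:.0f}p/mile")   # ~2p vs ~6p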

Oracle, Fujitsu goose Sparc M3000 entry box

Steven Jones

CMT

There is no way that a CMT chip can compete with the single-thread speed of x64, Power, Itanium or SPARC64. CMT chips are all about maximising the throughput of processor cores by presenting virtual CPUs and using up otherwise "dead time" spent waiting for main memory access. They simply are not optimised for single-thread processing, as to do so means allocating lots of silicon real estate to superscalar techniques such as out-of-order execution, concurrent execution and the like. To do so with the T series would be impractical - the entire processor design is based on a completely different philosophy. Unfortunately there are many workloads out there where single-thread speed matters, for example to optimise response times or for large, high-throughput Oracle databases. Indeed it is telling that Oracle Exadata is built round Intel x64 architecture processors, not the T series.

Now that AMD and Intel are putting many high-speed cores into single chips (with two threads per core in the case of Intel), the only real advantage of Oracle's T series processors is binary compatibility with SPARC code (important for a lot of legacy apps). On almost all other criteria the advantage will be with x64 architecture servers, and on price/performance there is just no comparison.

DARPA aims to make renewable power practical at last

Steven Jones

Hydrogen economy myth

The idea that hydrogen makes any sense as an energy storage system is a persistent myth. When you take the full life cycle into account, only about 25% of the original energy used to produce it will be available for use in a fuel cell. On top of that, it's bulky - even under very high pressure or when liquefied. The compression (or liquefaction) of hydrogen alone uses a large proportion of the energy content. Because hydrogen atoms are so small it is notoriously difficult to keep the gas under very high pressure - it tends to percolate through liners and does bad things to the tank material over time. Diving tanks are simply unsuited to this purpose - tanks have to be built to a much higher standard.
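
A rough round-trip estimate (the efficiencies below are my own illustrative figures, not numbers from the linked article):

electrolysis = 0.70     # electricity -> hydrogen
compression = 0.90      # fraction left after compressing/storing the gas
fuel_cell = 0.50        # hydrogen -> electricity in a fuel cell

round_trip = electrolysis * compression * fuel_cell
print(f"usable fraction of the original energy: ~{round_trip:.0%}")   # ~32%
# Add transport and conversion losses and the overall figure soon ends up
# in the region of 25%.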

Only for a few uses does hydrogen make much sense - whilst it's bulky, liquid hydrogen is at least light, so it makes sense in space rockets where the energy lost in liquefying it is not the most important issue.

Pretty well nothing (at least chemically) beats the energy density of liquid hydrocarbons. It would make more sense to find an artificial way of generating those (perhaps using genetically engineered organisms and sunlight) rather than taking the high thermodynamic losses in splitting water by electrolysis, compressing or liquefying the hydrogen, and then somehow finding a way of flying a 'plane using the stuff.

Small nuclear reactors also don't make much sense - for those to operate you need copious supplies of cooling water. That's fine in a submarine, or a ship which are, literally, surrounded by oceans of the stuff, but it's hardly going to work in a desert where even drinking water might have to be shipped in.

Here's one simple link to the basic overheads.

http://www.physorg.com/news85074285.html

Yes, and for those who know about these things, there is talk of using electrolysis under high-pressure to avoid much of the compression energy costs, but it's all rather speculative.

Paramount to recount The Martian Chronicles

Steven Jones

A tribute - I suppose

Some Bradbury fans are willing to go a bit further...

http://www.youtube.com/watch?v=e1IxOS4VzKM

H2O water-powered shower radio

Steven Jones

Just another eco-gimmick

Any saving in energy compared to using rechargeable batteries is going to be tiny in comparison with the extra energy used in producing the turbine and generator. There's less than 0.003 kWh stored in a typical NiMH AA battery.
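
For reference, that 0.003 kWh figure follows from typical NiMH AA numbers (the cell voltage and capacity below are my assumptions):

voltage = 1.2        # volts, nominal NiMH cell voltage
capacity_ah = 2.5    # amp-hours, a typical high-capacity AA cell

energy_kwh = voltage * capacity_ah / 1000
print(f"stored energy: {energy_kwh:.4f} kWh")   # ~0.003 kWh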

Facebook 'open sources' custom server and data center designs

Steven Jones

Microwaves?

Microwave frequencies are generally taken to start at around 1GHz. Given that system bus speeds are generally in the few hundreds of MHz, there's not much danger of significant amounts of microwaves being emitted. Once they start powering buses from a cavity magnetron, that's maybe the time to worry.

However, I'm sure they will have had to pay attention to RFI issues in the motherboard designs. Otherwise it's not conducive to reliable operation.

Praying for meltdown: The media and the nukes

Steven Jones

@AC

I'm not quite sure how the designers of the plant having got their design criteria wrong makes me an idiot, but if it makes you happy.

Incidentally, it's not unusual for coastal areas to slump after a rupture of this type in a subduction zone. There's an interesting bit in Aubrey Manning's superb "Earth Story" series. It showed how, following the Great Alaska "megathrust" earthquake of 1964 (magnitude 9.2), some parts of the coastal region rose and some slumped. Some parts dropped by 2.4 metres. That earthquake also caused a tsunami which was measured at 9.1m at one village it destroyed, over 200 miles from the epicentre. Given how sparsely populated the Alaskan coast is, it might well have been higher elsewhere.

So the possibility of coastal zones slumping following a megathrust quake was known quite a long time before Fukushima was built. The circumstances of the two earthquakes are fairly similar, although there will always be local variations of course.

Steven Jones

Coastal geometry

There were some exceptionally high tsunami levels, but those were caused because the local coastal geometry amplified the wave height (as, indeed, happened in some parts of Japan). It ought to be possible to avoid the worst locations when placing plant.

Steven Jones

The EPR design

You can waterproof buildings and still provide an air inlet above the water level (via a snorkel). Also, it might not be necessary for the generators to operate during the actual period of the tsunami - there are battery backup systems which apparently lasted for about 8 hours, enough time for the water to recede. What you don't want is flood water damaging the secondary cooling. In any case, it's relatively trivial to mount the secondary generators out of harm's way.

This is the specification for the UK EPR diesel generators, and two of them are installed on the roof of the diesel room, 30 metres above ground level.

http://www.epr-reactor.co.uk/ssmod/liblocal/docs/Supporting%20Documents/PPC%20application%20-%20diesel%20generators.pdf

It's just an engineering problem - perfectly viable, but at a cost. If a design for the UK, in a non-seismically active area, can have diesel generators 30 metres above ground level, then why were the Japanese facilities, in a much more at-risk position, not designed to at least these levels? This is hardly hindsight - the EPR design has been around for a while.

Steven Jones

Landslides?

I have seen no signs of landslides following the tsunami. Indeed there were some who filmed from hillsides not that much above the level of the tsunami. It's certainly expensive to carve out a ledge 30 metres up a cliff but, subject to checks on local geology, it shouldn't be subject to landslides. It's not like China, where the hillsides are so often made of loose soil.

If it turns out that the geology isn't suitable, then fine - but it's still not a stupid question to ask.

As for an afterthought, then that's simply not true. They just built to an inadequate standard. You don't design plants like this to one-in-a-hundred year standards, but to much higher ones - perhaps one in ten thousand.

Quite simply they got it wrong.

Steven Jones

Not a stupid question

'Instead we heard from a Green who wondered why the reactor hadn't been built "above tsunami level"'

That's actually quite a reasonable question. The US has several nuclear facilities built on bluffs by the coastline. That way they can be above tsunami level yet still use seawater cooling. Of course there is an increase in energy usage for pumping the cooling water, but it's relatively modest. Also, the local geology has to be taken into account and it is undoubtedly more expensive to carve a platform into a cliffside, but it's far from a silly idea. Of course a higher tsunami wall could have done the job too, as would properly protecting the auxiliary generators in waterproof buildings (as the EPR design does).

These are perfectly sensible points, and what can definitely be said (at the least) is that the tsunami danger was underestimated or, as some of us might think, compromised for cost reasons. Following the 2004 tsunami it was evident just what tsunamis could do (and the US has since done a safety review). Did the Japanese review their facilities too? The Japanese nuclear regulatory authorities have had a notoriously cosy relationship with the generating companies.

Artificial leaf produces electricity through photosynthesis

Steven Jones

Finely tuned volumes

"If it's placed in a gallon of water in bright sunlight, the artificial leaf splits water molecules into their component gasses: hydrogen and oxygen."

Wow - do you think it will scale up? I've only got a two gallon bucket.

Oracle's Itanium gambit: A play for HP's checkbook

Steven Jones

EU Power

The EU has plenty of power to investigate market abuse and raise fines if necessary. Even Microsoft were hit by this. Given that there is at least one very large European company (SAP) which has a lot to lose if Oracle abused their market power, then the EU could well take a lot of interest in this.

As anybody who runs a large IT shop can tell you, the costs of migrating from one database to another are enormous. The market is therefore quite "sticky" in terms of changing vendors.

Fuel foolery, merger warnings and Budgetary boons

Steven Jones

Mental ability

For the great majority, mental ability, the ability to concentrate and energy levels have deteriorated notably by the age of 65 (there are exceptions of course). It's at its most extreme with scientists and mathematicians, where productivity is demonstrably lower past the age of about 40 (again, with exceptions). The accumulation of knowledge and experience can still be beneficial in some areas up to the age of 60 (according to a number of studies), but basic cognitive skills start dropping from 27.

If anybody is able to operate as a top-notch IT technician by the age of 80, then hats off to them, but they will be very much the exception.

The workload challenge

Steven Jones

@dic.usa

Nothing new to me there. The point wasn't what you can run on Z-Linux, it's which concepts were pioneered on mainframes. TCP/IP simply was not pioneered on mainframes - an early port hardly counts and, in any case, many of the concepts didn't fit MVS, or the whole centralisation philosophy of that OS, very well.

I'm also fully aware that the I/O channel architecture was developed to offload processing to specialist I/O controllers, so you could do things like searching for index blocks without the CPU being involved, but it was an architectural dead end. Yes, there was a need for efficiency, and I know why it was done, but the point is that proper file systems were pioneered away from the mainframe.

Also, I knew 360/370 architecture inside out, as I used to write operating system code right down to issuing I/O commands in kernel code and dealing with interrupts, condition codes (I think that was the term) and so on. My company had a high-performance OLTP system using a proper pre-emptive multi-tasking kernel, software disk mirroring, dual logging and atomic transactions on this architecture back in 1971 (and there is one system still running in production, albeit it should have been re-written years ago). I can still read most 360/370 programs from the binaries, and I can certainly at least align all the instructions by eye, as it's a nice predictable architecture.

Yes, and I know about TCBs and all that stuff - it's just that TSO and the like were hardly the last word in sophistication.

So go back to the original challenge - somebody claimed that there were no major computing concepts pioneered on mid-range servers. I just gave some examples of ones that were, and of mainframe concepts that became outmoded and were replaced by others (like the IBM mainframe channel architecture, or the arcane mainframe access methods).

Steven Jones

I'll take up the challenge

IBM mainframes certainly did pioneer quite a lot of technologies, but very far from all of them. TCP/IP and the whole opening up of networks was most certainly not seen on mainframes first - indeed its very philosophy of decentralisation runs completely contrary to the way IBM saw networks in the 1970s and 80s.

Also mainframes did not pioneer decent, hierarchical file systems. The very structure of mainframe operating system I/O, allowing direct access to I/O commands from user programs (albeit with add-ons to impose security), did not allow for a properly layered I/O system. It also left mainframes with a bewildering number of different and incompatible ways of holding file data (all those "sams" - VSAM, ISAM etc.) with no common command set. Even apparently simple operations like deleting or copying a file (dataset) couldn't be achieved using a common, straightforward command. You had to be aware of the organisation, use the right utility and remember all the quirks.

Mainframes also did not pioneer multi-threaded development environments. Being stuck in TSO land, you were generally limited to what you were doing in foreground and the ability to submit batch jobs. Even CMS under VM was essentially single threaded as, for that matter, were TP monitors like CICS and IDMS-DC.

Mainframe programming models also did not pioneer good, flexible inter-process communications. That very much came through work on UNIX.

ASCII was also not pioneered on mainframes - instead the rather inconsistent and nasty EBCDIC held sway with its odd gaps in character codes due to the legacy of punched card compatibility.

Also fixed block disk architectures were not pioneered on mainframes - there is a legacy of nasty CKD formats. Any remotely modern software treats devices as logical block access. Even if the disks aren't truly CKD, the backward compatibility makes it very difficult to produce advanced file systems which can be used by legacy programs.

Also mainframes were stuck with a nasty 24/32-bit hybrid architecture long after true 32-bit alternatives were available in the mid-range world, and mainframes were also late to true 64-bit.

Mainframes did not pioneer desktop or WYSIWYG environments. Yes, there were graphics - of a sort - using specialised terminals, but the whole windowing/mouse type of user environment with which we are now familiar emerged from the mid-range arena.

I'm sure there are more - yes there were good things, but a lot of mainframe software looks very old and anachronistic these days.

Oracle to HP: 'Liar, liar, pants on fire'

Steven Jones

Superdome II

The Superdome 2 (which is based on a modified form of the C7000 blade chassis used on Proliant) currently scales to 128 cores (256 threads) and 4TB. It has 32 sockets each capable of carrying a 4 core CPU.

However, I'd certainly agree it isn't going to be in the same price/performance area as a BL980 G7, but the issue for many users of big iron equipment like this is stability and ease of migration. Moving a 10-20TB Oracle DB from HP-UX to OEL is no minor thing, especially if the migration has to be done with only a few hours of downtime and be reversible if it goes wrong.

Speaking from a company that has a huge amount of Integrity, ProLiant and Solaris hardware, the RAS characteristics of the former have undoubtedly been better than those of the x86/x64 architecture machines, although the latter are improving through the generations. One issue with very large configurations is that RAS characteristics have to improve as, with many times the components, there are proportionately more things to go wrong. Even things like Oracle RAC won't resolve all those issues, as node failures can still lead to brown-outs with stale cache and so on.

Of course big iron will turn into more and more of a niche, but for some companies the cost of the hardware is sometimes only a modest part of the entire system cost. Stability is often what matters most.

Steven Jones

@Oliver Jones

What on earth makes you think there is any real personality-type difference between those who thrive in capitalist and authoritarian regimes? Generally speaking, it requires a degree of ruthlessness and enormous ambition to do either. It's no accident that the hugely rich Russian oligarchs were also the ones that were well represented under the old regime. Also the matter of egos can't be missed out - it's no coincidence that rows and break-ups get dirtier the higher up the management chain you go.

However, that's not the point here - quite apart from the commercial matters involved here (and this move certainly undermines competition), it's a real slap in the face for a lot of big companies running big-iron HP-UX. It's hugely disruptive and expensive to migrate very large databases on mission critical systems. Oracle will not have pleased many of their big customers with this move.

Fukushima's toxic legacy: Ignorance and fear

Steven Jones

Lewis was addressing only the nuclear issues

Lewis is referring to the nuclear element of the disaster, not the whole thing. Just how much the nuclear clean-up will cost, I don't know. But if we take the cost of cleaning up Three Mile Island (about $1.7bn in today's money), allow for there being three reactors at least as badly damaged and a fourth where the building is badly damaged, and allow for the fact that several of the buildings were wrecked and this has affected the storage pools (not something that happened at Three Mile Island), then I can imagine the bill approaching $10bn to clean up this mess. To that you can add the costs of replacing the lost generating capacity (or at least the proportion of the life lost - some of these facilities had, or were likely to have, their operating lives extended by another decade) plus the economic impact of supply disruption, and we might be looking at maybe $15-$20bn. Not, by any means, in the same league as the whole clean-up cost, but also not insignificant.

So it's simply not a minor event. It's just not a catastrophic one.

Interestingly, the European Pressurised Reactor would have survived this, as it has several backup diesel generators dispersed across several locations, all of which are in watertight concrete enclosures. The French have been complaining for some time that they've lost business to the Americans because the latter have taken short-cuts with their safety standards. This would appear to be some evidence to support that, with the proviso that no French reactors have had to survive a tsunami.

There is, interestingly, a litany of Japanese nuclear accidents (some leading to deaths), and the local nuclear regulator has been accused of being far too cosy with the operators.