* Posts by Steven Jones

1526 publicly visible posts • joined 21 May 2007

Tape users wait for news of LTO 7 and 8

Steven Jones

Not a charity

So there are a handful of organisations running supercomputers that generate volumes of data which are logistically difficult to deal with on the prospective LTO6, and the question is what they are meant to do if the LTO roadmap stops at LTO6. Well, in a real shocker of a news flash, the storage supplier market isn't a charity designed to meet every requirement. It is, believe it or not, driven by businesses that will seek a return on their investment. There may well be a handful of organisations in need of tape backup solutions which scale into the tens of PB without taking up warehouses full of cartridges. However, such a handful of organisations does not necessarily make up a viable market.

To put (say) a requirement for a gross 100PB into perspective (allowing for redundancy like multiple copies, disaster copies and so on) with 3TB LTO6, that's about 33,000 cartridges. Which is a lot, but there are systems out there where individual tape libraries scale to approaching 10,000 cartridges. Most of those 33,000 cartridges will be duplicated offline somewhere for archive and safe keeping. It's a bit of a logistical problem of course - archive copies have to be regularly refreshed as there simply isn't a long-term archive storage medium that can be trusted.
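Just to show the working (a rough sketch; the 3TB native capacity and the 10,000-slot library size are the figures assumed above):

```python
# Back-of-envelope check on the cartridge count quoted above.
gross_pb = 100          # gross requirement including duplicate/disaster copies
cartridge_tb = 3.0      # assumed native LTO6 capacity

cartridges = gross_pb * 1000 / cartridge_tb     # 1PB = 1000TB
print(f"{cartridges:,.0f} cartridges")          # ~33,333

# With libraries scaling towards 10,000 slots, the on-site copy is only
# a few libraries' worth; the rest is an off-site logistics exercise.
print(f"~{cartridges / 10_000:.1f} libraries of 10,000 slots")
```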

Of course it's very unlikely that data is processed direct from tape, apart from the initial selection. The days when data was sequentially processed direct from tape are probably gone for the vast majority of organisations - the data has to be brought online to disk first. There is the real problem - as many people struggling with very large data volumes can tell you, it's the transfer of data to and from tape that is often the biggest bottleneck. The throughput of an LTO4 tape drive is such that it considerably exceeds the throughput capability of any single disk (apart from SSDs). It even stretches the capability of many RAID controllers. Run several LTO4s flat out at the same time and it doesn't take many of them to saturate a medium-sized storage array. Of course the supercomputer user will have all sorts of highly specialised parallelised and distributed storage systems across their cluster to handle the huge I/O rates. However, that's not a route open to most companies, and consequently the market for tape drives delivering many hundreds of MBps each is very limited.

The problem with tape is often not the basic capacity, or even the sequential throughput, it's that issue of moving data between nearline and online storage media. Hence LTO is concentrating on capacity and not throughput.

One more calculation - 100PB per year is an average of about 3GB per second. If we assume that all data is accessed 10 times per year, that's 30GB per second, or about 120 LTO6 drives. Double that up for 50% utilisation and we get about 250 drives. If you are an organisation with this much data, then 250 drives and libraries isn't an impossibly large bill - but coming up with the supercomputer that could deal with the implied peak bandwidth of over 120GB/s (assuming data is moving between disk and tape) is going to be some engineering challenge.
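For anybody who wants to re-run those numbers, here's the arithmetic as a quick sketch (the 250MB/s per-drive rate is my assumption to match the drive count above):

```python
# Rough re-run of the drive-count estimate above.
SECONDS_PER_YEAR = 365 * 24 * 3600

write_gbps = 100e6 / SECONDS_PER_YEAR    # 100PB/year expressed in GB/s, ~3.2
read_gbps = write_gbps * 10              # every byte read back 10 times/year, ~32
drive_gbps = 0.25                        # assumed sustained rate per LTO6 drive

drives_flat_out = read_gbps / drive_gbps        # ~127
drives_at_50pct = drives_flat_out * 2           # ~254
print(f"{write_gbps:.1f} GB/s written, {read_gbps:.0f} GB/s read back")
print(f"~{drives_flat_out:.0f} drives flat out, ~{drives_at_50pct:.0f} at 50% utilisation")
```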

James Martin apologises for cyclist outrage

Steven Jones

@Cameron Colley

Nope - it is quite simply dangerous driving (or worse). Driving without due care and attention is just that. If the intention was to cause the cyclist to end up in the hedge (as was implied by the original wording) then it is at least dangerous driving, and might well qualify as reckless. Of course it is far more likely he was just making the whole story up.

Oh yes - and just because others are not complying with the Highway Code, it doesn't give you the right to break the law in retaliation, especially in a manner likely to lead to an accident.

Steven Jones

Quite simple really

It's extremely simple - either this guy did what he said - that is, attempted to scare a bunch of cyclists into having accidents - or he made the story up. In the first case he is a dangerous idiot who ought to be prosecuted for dangerous or reckless driving. Intentionally trying to cause accidents is an offence and needs to be treated as such. I don't care what anybody thinks of any other class of road user, or what they are offended by, or what others of their ilk might behave like, or how much they pay. It's dangerous, and if you support this then you have no place on the road either. Period - full stop. Just grow up.

If, as seems more likely, he's made it up for the column and to let off a bit of steam, then he's just an idiot. If the judgement he showed in propagating this dangerous tripe, pandering to the worst instincts of some of the less intellectually advanced members of the human race (of which this site has a fair number judging by the comments), is anything to go by, then one wonders how he manages to tie his own shoe laces.

SA pigeon outpaces broadband

Steven Jones

Latency

Unfortunately this solution isn't suitable for all network applications. Network ping times can be long, so online gamers can be at a disadvantage. In addition, the presence of Falco Peregrinus along the transmission path can lead to unacceptable levels of packet loss.

Adaptec bolts SSDs onto lightning RAIDs

Steven Jones

@Ian Michael Gumby & Annihilator

Well thanks for that, but you hardly added anything to what I said, and I was aware of those issues. Oracle for one can spot full table scans and avoid populating the cache with unwanted data, but that isn't foolproof (and it has changed again in more recent Oracle versions). No caching algorithm is perfect - they can only forecast the future from past behaviour.

I was also fully aware of hybrid hard drives and that they've not provided the expected benefit. However, that is surely just because of the state of technology. I'm no fan of requiring motherboard and operating system support for such things - they really ought to be transparent, much as the cache in the controller will be. Whilst the former might produce some improvements through optimising for the peculiarities of particular environments, it's usually a technological dead end as it gets locked into your particular server hardware. Much better to be able to swap out with a fast I/O device.

The particular issue here was over the special characteristic of limited write cycles on SSD. Now I've never much worried about it for "true" write activity, as most I/O on most systems is read (there are exceptions where state information is updated and where reads are readily served from cache in a DB). However, if this device really does turn a whole bunch of what would have been reads into writes to the SSD, then it could wear out faster. I would always assume the things fail safe; it's really the cost implications that concern me.

Steven Jones

Possible Downside

I see a possible downside to using an SSD as a read cache in the way described. It would seem that every uncached read from disk will get written to the SSD, if the description is correct. Of course if the data is re-accessed later, then that's all fine and dandy. However, if there is an I/O pattern that frustrates this - sequential reads, such as backups, being one example - then it will churn a lot of data through the cache without any effect. With a volatile RAM cache that only has the effect of wiping out data that might be usefully cached, but with an SSD cache that's a lot of write activity, and flash has a limited number of write cycles.

There are ways of dealing with this sort of cache issue - one of the things that can be done is to detect large sequential reads and not cache them. Certainly this thing will depend heavily on the cache algorithms being used. Running a disk backup and re-writing the limited-size SSD several times over would not be great for either SSD longevity or cache effectiveness.
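For what it's worth, the sort of thing I mean looks roughly like this (a minimal sketch, not Adaptec's actual algorithm - the per-stream tracking and the threshold are assumptions):

```python
# Sketch of sequential-read detection used to bypass an SSD read cache.
class ReadCacheFilter:
    def __init__(self, seq_threshold=8):
        self.last_lba = {}       # last logical block address seen per stream
        self.run_length = {}     # length of the current sequential run per stream
        self.seq_threshold = seq_threshold

    def should_cache(self, stream_id, lba):
        """Return False once a stream looks like a large sequential scan."""
        if self.last_lba.get(stream_id) == lba - 1:
            self.run_length[stream_id] = self.run_length.get(stream_id, 0) + 1
        else:
            self.run_length[stream_id] = 0
        self.last_lba[stream_id] = lba
        # Long sequential runs (backups, table scans) skip the flash cache,
        # sparing its limited write cycles for genuinely re-read data.
        return self.run_length[stream_id] < self.seq_threshold
```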

Oh yes - and if a disk controller can do this sort of caching, then it can be done in a drive. Add a few gig of SSD flash and some clever cache algorithms to a standard drive and we might well have something of use to the average PC user. Whatever is done, this needs to be kept off the motherboard and kept in the storage devices.

Labour calls for free Wi-Fi on trains

Steven Jones

Economically inefficient

It's about time some of these people who produce these ideas learnt a little about economics. Centrally planned expenditure of this sort is always in considerable danger of introducing economic inefficiencies.

Of course there are some cases where the market cannot deliver essential services. For those, there are such mechanisms as universal service obligations, localised subsidies and so on. However, once that moves outside the realm of essential services to one of consumer choice (and surely WiFi on trains isn't an essential service) then you are into the area of economically inefficient expenditure.

Israelis offer unmanned robo smart-missile 8-pack

Steven Jones

@By This 15:43

8 is one less than 9 and that is a square number...

Windows 7 versus Snow Leopard — The poison taste test

Steven Jones

The Most Important Difference

Of course the most important difference of all is not the functionality or usability of the interface, it's that (hardware-wise) Windows is a more open environment than Snow Leopard. Of course that brings both strengths and weaknesses, but I figure that plays a considerable part in many people's decision making.

Arms biz: Your taxes mainly go on our fat salaries! Ha ha!

Steven Jones

@me

And I meant Iraq of course, not Iran...

Steven Jones

@Tom Welsh

The Taliban were directly involved in the support of Al-Qaeda and provided a base for its operations. It's not a matter of debate or controversy; that is an established fact. What is also an established fact is that the Taliban (then the de facto Afghan government) either refused, or were unable, to do anything about the presence of Al-Qaeda camps and activities in the country. The presence of such establishments may not be essential to all aspects of terrorist warfare, but it is a great enabler, and not something that could be ignored after the events of 9/11. In any event, the very symbolism of the attack on the Twin Towers guaranteed a US response. Merely tightening up security, without any direct attempt at retaliation, was never going to be acceptable to the American people as a whole.

Afghanistan is not to be mixed up with Iran. Involvement in the latter, beyond containment, was a strategic and financial mistake of the first order, driven by the zealotry of the Bush government. Afghanistan and the neighbouring Pashtun areas of Pakistan remain a potent threat; not just to the West, but to India and much of Pakistan. It appears likely to remain so unless some political containment deal can be worked out. That probably remains the West's best hope, but such negotiations do not occur in a vacuum. As with Northern Ireland, what is impossible at some points in history becomes reality later, as circumstances and attitudes change.

Steven Jones

@AC (why anonymous?)

Spelling was never a great thing with me, but as for just being out of my teens, I wish. My excuse is that it was written quickly.

As for the general points: yes, I didn't have time to find out the numbers employed in the arms and related industries in WWII - just a matter of time to do the research (which is why I said I suspected that was the position). However, the point that western powers fighting major wars are underpinned by industry is inarguable. Just try to find a credible historian anywhere who disagrees on that point. It doesn't matter whether it was WWI, WWII, Korea, Vietnam or any other major conflict in which western powers have been engaged - they are underpinned by huge industrial inputs of resources. In comparison, the Chinese (in Korea), the Vietnamese and the Taliban used far higher proportions of people in the direct fighting. Those combatants seek to neutralise the West's equipment advantages by changing the manner in which such wars are fought.

As for India and China being able both to manufacture and to research at cheaper rates than the West - well, that one is a virtual truism. I work in an industry where off-shoring such activities is absolutely the norm for economic (and sometimes educational) reasons. There are strategic reasons why this isn't done with defence industries. However, there is also a very good argument that we have already given up the strategic technologies required to make this credible. Some might say there's not much point in being able to design the software for a missile guidance system if you can't, for instance, secure your supplies of semiconductor technology.

I've no doubt Lewis knows about the practicalities of being in the armed services and dealing with inadequate equipment. However, he's no economist, and is fond of repeating selective stories on environmental matters in anything but an objective manner. If you engage in polemics, as he does, then expect some robust responses.

None of this covers the issue of whether we should be directly involved in Afghanistan. There is certainly a security threat from that area, but we are long past the days when there were large numbers of conscripts that we could throw at manpower-intensive wars (as there were when my father was sent to Malaysia during his national service). Fighting wars like the one in Afghanistan is very expensive, even when the West seeks to do so on its own terms. Outsourcing the manufacture of arms to the US does not fix that problem, and there is a solid case to be made that it will damage the economy and reduce national defence capability. That's not to say that weapon procurement needs to be all "in house", but we are not in the position of, say, Saudi Arabia, where extensive oil revenues can be used to pay for such expensive imports.

Steven Jones

No economist

A simplistic analysis by Lewis, who I think hasn't learnt much about the nature of modern warfare. Firstly, it's not obvious that paying the salaries of highly paid US arms workers and salesmen rather than UK ones is any great gain. After all, those paid salaries in the UK, and their employers, at least pay some of that back in tax. The issue of the net cost to the economy needs to be looked at too. Perhaps this analysis ought to consider how many US workers there are in the arms industry.

The second point is that modern warfare, as conducted by western countries, is a highly capital-intensive business. The first and second world wars were conducted as much in the factories and industries of the combatant countries as in the field. I'm not sure what the ratio was in WWII between workers in arms factories, shipyards, support industries and the like and the numbers in the armed services, but I rather suspect it was much higher than you would think. These were wars of national logistics and economic, industrial and technological resources, rather than just those of armed services.

If you want to take this to a far more logical conclusion over the cost-effectiveness of military hardware supplies, you might like to consider the way it is done with consumer goods. I'm sure the huge workforces of China, India and the other workshops of the world could supply a lot of very high tech equipment, including the research, much cheaper than the US. Take this to an even more logical conclusion and you'd outsource the ground fighting as a whole, although no doubt Joanna Lumley would undermine that (one of the things that western governments have lost the knack of is getting client states to do their dirty work - the fight against Napoleon was financed by Georgian commerce and trade).

The UK is already suffering hugely from an imbalanced economy by following this self-same line that others can produce things cheaper and better than us. We see it in IT, in the car industry and much else. Only in a very few sectors, like pharmaceuticals, are we competitive. The real problem is sorting out industry, and not what amounts to a dialogue of despair.

Brit inventor wants prison for patent crims

Steven Jones

Sauce for the Goose

If those found to be in breach of a patent are to be liable to be jailed, as is proposed, then might I suggest that this would need to be balanced by similar risks for those filing patents which are later found to be invalid.

It should not be forgotten that a patent is essentially a legalised monopoly. The presence of such a system is a special case of the suspension of competition law. It is also not to be mixed up with copyright law, which provides a different sort of IPR protection.

If what Trevor Baylis is really talking about is industrial espionage, then that's a completely different thing. I might just about agree that there should be a criminal sanction if a company deliberately attempts to defraud a patent owner, but turning patent violation into a strict liability criminal offence, as this article implies, is a recipe for bringing technological development to a grinding halt. It's already a major issue for companies falling foul of patent trolls, but if that then became a matter of criminal law then imagine the repercussions.

(For those that don't know, "strict liability offences" do not require the prosecution to prove intent - many minor offences, like speeding, fall into that category.)

VMware plots world data centre domination

Steven Jones

Perceptive Article

That's an extremely perceptive article - if the VMware abstraction essentially becomes the virtual data centre then that will answer a huge number of data centre management issues. I'm not quite so convinced about the hypervisor usurping the role of the operating system by providing applications with direct access to resources. However, it is very easy to imagine databases, J2EE environments and the like delivered as virtual appliances with an integrated, highly tailored and optimised operating system layer designed to run directly within a VM. At the moment this is a tricky thing to do with "real" x86 hardware due to the install process needing to tailor itself to a massive range of different device drivers and other hardware environmental factors. By running on a VM and going through the better-controlled VM abstraction layer that problem is vastly reduced.

As far as supporting non-x86 processors goes, there is an answer to this, at least for the support of legacy applications not requiring efficiency, and that is hardware emulation/run-time code conversion in the hypervisor. It's a tried and tested method that has been used at the application layer for migrations between processor architectures. It has also been used for emulation right down to the level of supporting operating systems - in the past, UNIX vendors have provided mainframe emulators (indeed there is a very good freeware mainframe emulator). Porting applications is the ideal way round this, but it isn't always practical or cost-effective. As the performance of x86 CPUs becomes ever faster, emulation approaches that were previously considered unviable become practicable.

It's not going to be suitable for high-throughput systems, but every large shop is going to have its legacy of old hardware with processors which are either out of development or whose future looks dubious. That includes, of course, such processors as VAX, Alpha, PA-RISC, MIPS (any of those left in production?) and arguably SPARC or Itanium. Many shops also have a rump of IBM mainframe or Power machines which they would rather eliminate as they've moved strategic applications to Linux or Windows. Of course the bugbear here is the rights holders for the operating systems and other licensed software, as costs, support and legal obstacles become an issue.

The most promising candidate for such emulation was snapped up by IBM in the form of Transitive. I rather suspect that an emulator of some sort embedded within VMware would be considered highly desirable by many shops, and I wonder if this is being looked at (clearly Intel could do things to assist emulation in hypervisors as well). Emulation is just the ultimate level of virtualisation.

Intel says data centers much too cold

Steven Jones

@Destroy All Monsters

As has been pointed out already, 48V DC powered servers are available (mostly for the telco industry). Incidentally, 48V feeds do not eliminate the need for switch-mode power supplies - virtually none of the electronics in a server actually runs directly at that voltage, and stepping down from 48V DC to the required levels isn't really much more efficient than stepping down from 240V AC.

However, there is a big potential saving in using DC-powered data centres, and that is because there is currently a lot of wastage in the area of uninterruptible power supplies. Most current practice has the inbound line power from the electricity supply (and from standby generators) being stepped down to battery levels and then stepped back up again to 240V AC. That double conversion is wasteful. Supply your servers via 48V DC feeds and you can avoid the step-up phase.

However, there is a fundamental problem of physics which makes supplying a data centre at 48V wasteful. As any schoolboy will tell you, to deliver the same power at 48V DC as at 240V AC (RMS into a resistive load) you have to deliver 5 times the current. As power loss in wiring goes with the square of the current, if you want to keep the loss in the data centre wiring at the same level then the cross-section of the wiring has to be 25 times larger. Wire is actually impractical in those cases - you have to use very thick and heavy copper bus bars. Start equipping a 2,000 square metre data hall with that sort of thing and you have a major problem.
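The arithmetic, for anyone who wants to check it (idealised resistive-load numbers; the 10kW per-feed load is just an illustration):

```python
# Conductor sizing at 48V DC versus 240V AC for the same delivered power.
power_w = 10_000                 # an arbitrary per-feed load
i_240 = power_w / 240            # ~42A
i_48 = power_w / 48              # ~208A

print(f"Current ratio: {i_48 / i_240:.0f}x")                       # 5x
# Loss = I^2 * R, and R is inversely proportional to cross-section,
# so holding the loss constant needs ~25x the copper cross-section.
print(f"Cross-section ratio for equal loss: {(i_48 / i_240) ** 2:.0f}x")
```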

It is, of course, possible to run at a higher DC voltage (more batteries in series) to avoid this, and still get the benefit of feeding your data centre at the uninterruptible DC battery level, but 240V DC is extremely dangerous stuff (a 240V AC shock will mostly not prove fatal unless you are in poor health or are unfortunate). High-voltage DC has the very dangerous feature that it tends to lock the muscles onto the conductor, whereas AC tends to make the muscles spasm and release.

Another approach is to provide each aisle of servers with its own uninterruptible power supply (ie batteries) and step down from the mains to 48V feeds for the local servers, but that means putting a lot of lead-acid batteries into your data hall with your servers - not a great place to put something that needs maintenance and can produce nasty, corrosive fumes.

The availability of DC-powered servers isn't so much of an issue. If there were the demand for it, it would be a trivial matter to equip a server or blade rack with a DC-DC power supply rather than the normal 240V AC-DC version.

US music publishers sue online lyrics sites

Steven Jones

@Steve Swann

Blake, Shakespeare, John Donne and so on are all out of copyright, so you can distribute them as much as you like. However, works by more contemporary poets such as John Betjeman will remain in copyright until 70 years after the author's death (in the UK - the term of copyright varies by country; the works of HG Wells are out of copyright in the US but will remain copyrighted in the UK until 2016). The copyright of such works remains with the author's estate (or whoever those rights have been sold to) for that period.

Steven Jones

@DavCrav

I think you've missed the point - this is not the recording industry that is taking this action. It's a body that represents composers - a completely different thing. The recording industry does not, in general, own the copyright to the lyrics or compositions, only particular recordings. Copy a classical recording within the copyright period and the recording industry might be interested.

On another point, all those amateur musicians who have recorded covers of copyrighted songs on YouTube without authorisation are technically in breach (as, of course, is YouTube). However, I haven't heard of any copyright holders (or their representatives) chasing these down. What I have seen is live cover versions of Hallelujah (written by Leonard Cohen) taken down off YouTube. Not amateur covers, but some live recordings of that song by Brandi Carlile (I know somebody who had an account suspended for that reason). Live recordings of Brandi's own songs remain up, so this will almost certainly be down to action by those representing the rights to that song. The rights to Leonard Cohen's songs have been the subject of some bitter legal proceedings, which may well be connected.

Kettle car breaks speed record

Steven Jones

@_wtf_

I think running a full-scale steam piston engine at red heat in order to get the gas expansion rate (and hence rev rate) up to what you would see in an IC engine would be pretty near impossible to achieve reliably. High-powered IC engines tend to use liquid cooling of the cylinder walls in order to keep the lubricant working and stop the engine seizing up. If you use red-hot steam in a reciprocating engine and cool the walls down to a reasonable temperature, then the efficiency of the whole thing is going to suffer horribly. Thermodynamic efficiency would suffer even more as it would effectively rule out using a compound engine (even if the cooler cylinder walls didn't slow down the steam expansion in the primary, the exhaust temperature is going to be much lower so the expansion rate would be slowed down).

There are very good reasons why steam piston engines tend to run at lower temperatures and lower RPM than IC engines. Even if somebody can get a small-scale steam piston engine revving into the stratosphere, that isn't going to be a practical thing at full scale. Keep the really high temperature stuff to turbines, where you can run them hot as there is no direct contact with lubricants.

As far as thermodynamic efficiency goes, I think that for ship propulsion at least, a diesel engine is still the thing to beat. Thermodynamic efficiencies of over 52% have been demonstrated, whereas the best steam turbines are still below that figure. In commercial shipping, diesels tend to dominate over steam turbines (the various navies of the world have different priorities - especially where their source of power is nuclear).

Steven Jones

@brakepad

You are right that if the acceleration were constant it would average about 0.5m/s² (which is fairly pathetic - a family car will manage around 4 times that). I suspect there are two main issues. Number one is the mass of water it has to carry. Given that a run would have lasted about 2.5 minutes (of which 2 minutes is that fairly leisurely acceleration), that is apparently a tonne of water it has to carry - and that's assuming they don't have to carry enough water for both runs; I believe the LSR rules allow for refuelling (not that the water is fuel, of course). However, to do so would require bringing a tonne of water up to full operating temperature between runs. Just heating 1 tonne of water to the boil takes about 334MJ of sensible heat, before the far larger latent heat of evaporation and whatever further increase in temperature is required (I guess they could have recharged the boiler with near-boiling water, although that strikes me as dangerous). Pumping (say) 600MJ into the boiler in about 3,000 seconds (about the maximum turnaround time they had) is going to require about 200kW to be transferred into the water in the boiler. That might sound a lot, but given that IC engines are maybe 25% efficient, that's about the rate of fuel usage of a 70bhp petrol engine.

For those who think an instantaneous boiler is used - well, that would require a heat source well upwards of 2MW (the latent heat alone implies something like 15MW) if it was to turn 1 tonne of water into steam over a 2.5 minute run. Really rather too much for a vehicle that size.
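A quick sanity check on those figures (textbook constants for water; the 20C refill temperature and the 3,000 second turnaround are my assumptions):

```python
# Energy and power budget for the boiler recharge between runs.
mass_kg = 1000
sensible_mj = 4.186e-3 * (100 - 20) * mass_kg   # ~335MJ just to reach the boil
latent_mj = 2.26 * mass_kg                      # ~2,260MJ to evaporate the lot

print(f"Heating to 100C: {sensible_mj:.0f} MJ")
print(f"Evaporating it all: {latent_mj:.0f} MJ")

# Re-heating the charge over a ~3,000s turnaround (minimum figure; more is
# needed to reach full operating temperature and pressure):
print(f"Re-heat power: {sensible_mj / 3000 * 1000:.0f} kW")     # ~112kW
# A flash boiler raising all the steam during a 150s run:
print(f"Flash-boil power: {latent_mj / 150:.1f} MW")            # ~15MW
```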

The second issue is surely the use of a steam turbine - those things are great at running at high RPM and producing a lot of power whilst not occupying much space. However, what they are truly terrible at is producing torque at low RPM. Even equipped with a gearbox it is likely to be operating outside its peak power output area for long periods of time.

In contrast, piston-driven steam engines develop maximum torque at low revs (look at the way a powerful steam loco tends to spin its drive wheels at rest). That would mean it would accelerate faster, it would take less time to reach top speed, and would very likely need to carry less water in the first place. Steam turbine driven rail locomotives never really succeeded either.

I can't help but think the designers may have made a mistake choosing a steam turbine, and I rather think an updated version of a Stanley-like steam-powered car with modern materials would have got the job done rather better in the available length of track. Of course piston engines don't rev very high (steam doesn't expand that quickly, and the cylinders tend to be large), so a piston-driven steamer would have its own gearing issues. However, I suspect that current steam technology is all designed around steam turbines and nobody has much experience of designing high-power piston-driven steam engines, whilst you can buy a steam turbine off the shelf. (Stanley gave up record attempts when they had a near-fatal crash, so maybe there are serious problems with the drive train on such beasts - after all, coincidentally or not, Mallard reached pretty well the same top speed as the 1906 Stanley record.)

In the case of some steam-turbine-driven ships and submarines, turbo-electric drive trains are used. That might work in this case: electric motors generate maximum torque at 0 RPM and the steam turbine can run at its most efficient RPM, which might overcome the extra penalty of carrying a generator and motors (it would save weight in water and gearbox). However, would such a beast count as a steam-driven or an electrically-driven vehicle? Or maybe I could build my own and claim a turbo-electric LSR even if I only got to 5mph...

Steven Jones

Not a great rate of progress...

So that's over 100 years to achieve an increase in top speed of about 13 mph. Even allowing for the old record being set under less rigorous rules, that's not exactly a breathtaking rate of improvement.

Electric motorcycle world championships planned

Steven Jones

@Hame Micallef

The lack of noise issue is one that primarily affects pedestrians. The number of pedestrians killed in collisions with motorcycles is way out of proportion to their numbers and annual mileage. Per mile travelled, collisions with motorcycles kill 3.7 times as many pedestrians as those with cars. Making it much more difficult to hear the things seems very likely to make that worse.

The multicore future, and how to survive it

Steven Jones

Unconventional Thoughts

Besides his take on parallel processing, Louis Savain would appear to have some other ideas not strictly in keeping with current scientific orthodoxy.

http://rebelscience.blogspot.com/2008/01/war-between-believers-and-deniers.html

Shock jock blames Britain for hack attack

Steven Jones

@Richard 102

"To be honest, as a taxpaying agnostic libertarian white male American, I'm telling the rest of the world to go away and solve your own damn problems, to hell with you. Of course, the last time we did that, it lead to World War II."

Before the US reintroduces the policy of isolation that it held between the two World Wars, the country will need to solve its little problem of dependency on Middle Eastern oil (not that the rest of the western world doesn't have this issue too, of course). As it happens, this particular issue has little to do with Europe and, like many things these days, everything to do with Middle Eastern politics. Given US interests and involvement with Saudi Arabia, Iraq, Kuwait, Iran, Israel and Afghanistan (to name but a few), retreating back to the continental USA isn't really going to happen.

Remember it was a US airliner that was targeted, nominally by Libyans, although many point the finger back at Iran. It has to be remembered that in July 1988 the US Navy's cruiser USS Vincennes somehow mistook an Iran Air Airbus for a fighter-bomber and shot it down with the loss of 290 lives, only a few months before Pan Am Flight 103 was bombed. It's not the work of a conspiracy theorist to suspect that the latter event was at least in part retaliation for the former, even if it wasn't directly perpetrated by the Iranian government.

I'm afraid pinning the Pan Am bombing down as a European issue is missing the point. It was aimed primarily at the US and is heavily connected with US interests in the Middle East.

US Navy aims to make jetfuel from seawater uranium

Steven Jones

The "Fi" part stoof for "fiction"...

I'm not sure what Lewis has been sniffing, putting the poor coverage of basic scientific principles down to people not reading "proper" Sci-Fi these days. Heavens, I read enough of that in my day (before I went on to do a Physics degree) and relying on that stuff for a reasonable insight into the basic principles of Physics would have got you nowhere. Yes, there were people like Arthur C Clarke who did know their stuff, but even he had to fall back on some fairly improbable devices and technologies in order to overcome fundamental limitations. I wouldn't say what he came up with was actually impossible, but lots of it was certainly conveniently invented. Most Sci-Fi writers didn't bother - they just used a few fancy words and got on with writing their space operas.

In fact the more interesting Sci-Fi writers were more interested in the effects on society of some occasionally more mundane things - like Wyndham speculating on what would happen if society discovered something which extended life expectancy (Trouble With Lichen). There were novels of overpopulation, exhausted resources and oppressive governmental systems. There's far more to learn from them than from some daft idea that Sci-Fi taught people about the second law of thermodynamics. Star Wars is "proper" science fiction on that basis, but it's scientific nonsense.

On the little seawater-into-jet-fuel story, the New Scientist article goes on to say that it would only work as a greenhouse-beater if "clean" power could be used. I guess if anybody does manage to harness the deuterium in seawater in a fusion reactor, then the complete jet fuel source would come from that most abundant substance. However, the following statement is nonsense :-

"As a result the primary limiting factor on how long a US carrier can keep flying its planes is actually the amount of jet fuel it can carry."

Gee Lewis, if the Germans managed to resupply U-Boats at sea while being heavily outnumbered by the Royal Navy, don't you think the US Navy can manage resupply at sea given that it has overwhelming control of the oceans? Yes, it may be a trifle inconvenient and might involve a lot of tankers having to be shepherded around, but surely it isn't an insurmountable issue given the current state of military power. Of course the US (and western) dependency on imported hydrocarbons is of huge significance to ongoing power, but largely because it strikes at the very heart of western economic strength.

As far as renewables go, there are outfits trying to use direct sunlight as a means of generating synthetic hydrocarbons - a long shot maybe, but those techniques can use far more of the Sun's energy than photovoltaics, which can only use a small part of the spectrum, and there is plenty of desert area on Earth. I'm not wholly convinced (even if it can be made to work, there are problems of getting the necessary water, and the capital costs are absolutely huge). However, it is maybe just possible for those applications where only the power density (by volume) of hydrocarbons will do the job. It certainly seems unlikely to replace current oil supplies.

Opaque Wi-Fi laws 'damage UK economy, social inclusion'

Steven Jones

Mixed up thinking

It seems to me that there are two very different things being dealt with here. One is about whether it is legal for a WiFi operator to offer free access to a service via their contracted ISP, whilst the other is about how you know you are authorised to use an open WiFi network in the first place.

On the first point :-

"Mac Síthigh argued in his paper that such legal uncertainty threatens not just the ability of commercial organisations to use free Wi-Fi to attract customers, but local authorities' ability to offer access to people who otherwise might not have it."

This would appear to be firmly in the area of civil law, governed by the terms and conditions of the contract that the WiFi operator has entered into with their ISP. I would have thought that any local authority or business would have considered this. It's certainly true that there will be grey areas for small businesses - for instance, can a B&B, cafe or guesthouse use a normal domestic account for offering what amounts to a business service? But that will be primarily a civil law matter and not one of criminal law, unless there is some conspiracy to defraud the ISP. I suspect similar issues arise with a householder sharing their ISP connection with their neighbours - that may fall foul of ISP T&Cs, but it is a civil matter.

The second point is different again - just how do you know that there is implicit approval by the WiFi operator for free access to a publicly available network? In some cases it will be obvious - like notices to that effect. However, in other cases it will not be. Clearly if a WiFi connection's security is hacked, then that is not authorised access - but what about those WiFi connections which are left open by accident? How do you tell whether that is deliberate or not? Maybe some explicit system (like network naming) should be used.

Of much more legal concern, in a criminal sense, with making a WiFi access point publicly available is surely the issue of when it is used for criminal activity. I doubt that this is a problem for truly public services, but a householder who allowed their WiFi service to be available to others who then used it for illegal activities could find themselves involved in unpleasant investigations (at the least). Given that these sorts of investigations are heavily reliant on ISP records and logs, the first port of call is very likely to be the person contracting the service. For this reason, I think householders would be wise to be very careful over the informal sharing of WiFi connections with anyone other than those who can be trusted. It should be added that this attention could also come from those seeking civil remedies for such things as breach of copyright. It may not be an area of strict liability in law, but dealing with the consequences could still be very unpleasant and costly.

BMW's X6 turns eco

Steven Jones

Losing weight by trimming your toenails...

Just another nice way for BMW to profit a bit more by allowing drivers of these beasts to soothe their consciences with an environmental fig leaf.

Cat awarded online high school diploma

Steven Jones

Barely qualified

Indeed, Ben Goldacre's dead cat Hettie did get awarded a diploma from the American Association of Nutritional Consultants, the very same institution that awarded Gillian McKeith a PhD and allowed her to bring a whole new level of talking shit to UK television.

Is LTO-5 the last hurrah for tape?

Steven Jones

Tape is dead??

I've heard this one before. Any big company running mission-critical systems has to off-site multi-generation backups for disaster reasons. Also, many companies are subject to compliance regimes that dictate the retention of data for long periods - often years.

Now you can run a de-dup system with remote replication to allow for off-siting of your multi-generational backups without (hopefully) using too much disk space. However, de-dup won't work for archive and log type data (such as DB archive logs) as every block tends to be unique. Also, there is a penalty in removing all the redundancy from your backup - multiple full backups to tape give you truly independent, multi-generational copies. With de-dup you are absolutely relying on the software and hardware not to screw it up. If they do, then it has the potential to render all generations of your backups unusable.

There's also the environmental issue. 10PB of tapes will only use power when being read or written. Even if you can get a 10:1 reduction in space requirements (and you'll be lucky), that 1PB of (RAID-protected) disk space is going to be chewing up perhaps 20kW with the controllers - and if you are dealing with the off-siting issue through de-dup replication, double that. MAID setups don't work too well as, with de-dup, you tend to be splattering data all over the array and can't really shut any of it down.
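To put a number on that "perhaps 20kW" (a rough sketch; the per-spindle wattage and usable capacity are assumptions):

```python
# Rough power estimate for 1PB of RAID-protected nearline disk.
usable_tb = 1000            # 1PB usable
tb_per_spindle = 1.0        # assumed usable capacity per spindle after RAID
watts_per_spindle = 15      # assumed, including enclosure share
controller_kw = 2           # assumed controller/fabric overhead

spindles = usable_tb / tb_per_spindle
total_kw = spindles * watts_per_spindle / 1000 + controller_kw
print(f"{spindles:.0f} spindles, ~{total_kw:.0f} kW")   # of the order of 20kW
# Double it if the off-site copy is a second, replicated array.
```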

Now none of this is to decry using disks where it suits. If you have a small enough business, then a couple of USB-connected portable disk drives might do.

Boffins showcase do-it-yourself flying spy drone

Steven Jones

Mounting Points

So where do I attach the Hellfire missiles to that thing?

Stephen Hawking both British and not dead

Steven Jones

@Bruce 9 & Prostate Cancer

I suggest you do a bit more reading around the reasons for the difference in prostate cancer stats between the US and the UK, as you clearly haven't any real idea what you are talking about in this respect. Scientific American carried an article on it.

Essentially the difference is due to the (controversial) practice in the US of regular PSA screening for prostate cancer. What this leads to is the US having far and away the highest measured incidence of prostate cancer in the world. This leads to early diagnosis, although the evidence that this actually makes much difference to mortality is rather poor. What does happen is that a large number of men are subjected to unpleasant and potentially dangerous treatments for a condition that would not have killed them (prostate cancer is usually a slowly progressing disease, and very often men will die of something else first).

When you start looking at the age-adjusted mortality figures for the US and UK, they are reasonably comparable - slightly in favour of the US, but only by a very small margin. Quite simply, if you diagnose a slowly developing disease early you will, of course, have a higher 5-year survival rate, but you might well just get an extra few years of worry and unpleasant treatment, and very likely die of something completely different.

http://info.cancerresearchuk.org/cancerstats/types/prostate/incidence/

PSA screening is medically controversial, but in a system driven by maximising the income of medical businesses, and riddled with eager lawyers, vast sums get spent with very little return and, arguably, lots of unpleasant treatments get inflicted on people. I'd also add that if men want to go for PSA screening in the UK, there are plenty of doctors who will do it.

Just to show that this is not a UK argument, here's an article in the Chicago Tribune on the subject. The National Cancer Institute concluded that men with life expectancies of under 10 years should not have a PSA test, as they are unlikely to benefit.

http://newsblogs.chicagotribune.com/triage/2009/03/the-prostate-cancer-screening-controversy-continues.html

In the meantime, it's fairly clear that the US, like the UK, contains a large number of people easily swayed by headline statistics who lack either the ability or the interest to actually understand what the hell they are quoting, just as long as it sounds as if it supports their argument...

Two convicted for refusal to decrypt data

Steven Jones

@nic 3

I think the real danger with this is that somebody will get caught up in some Kafkaesque saga where they are required to provide a password for some encrypted file and have genuinely forgotten it. I suspect many of us have old password-protected/encrypted files that we have forgotten about or that have lost their purpose. Certainly I have.

It may be considered unlikely that people will get drawn into "serious" investigations and end up in this position, but that's far from the case. It's only necessary to look at Operation Ore, where very many people had PCs seized following the discovery of credit card numbers on a web site carrying child porn. Of course it is far from the case that all of them were innocent, but there was certainly a very substantial number who were the victims of things like stolen credit cards or frankly erroneous statements about what they must have seen.

All it requires is a mix-up in log records for somebody to be dragged into an investigation. There have been mix-ups over such stupid things as differences in timezones (BST vs GMT) on ISP records, not to mention the possibility of Trojans, wireless networks being hijacked and any number of other things which could end up with an innocent individual being dragged into investigations of some very serious crimes.

Steven Jones

Double Encryption

There are plenty of options for the truly criminal. One is to use TrueCrypt, which has a hidden-volume system that allows for plausible deniability. There are two encryption keys. The second is optional and is used to unlock a hidden volume, the existence of which cannot be proven. So you can be forced to hand over the first key, but if you have further data hidden then its existence in what is apparently spare space cannot be proven, as it all just looks like random data.

Most importantly, TrueCrypt works in memory - but it's very easy to leave traces in other parts of your system (so you have to be careful about what applications are doing). There are still plenty of ways this could go wrong, and if you send a file to somebody else, you'll have to trust them not to make mistakes and reveal the presence of this hidden data. Of course just the presence of TrueCrypt might be enough to raise suspicion, but for a court to convict an individual for not revealing a password for something which cannot be proven to exist would, even in these days, be difficult.

Then, as an alternative, you can go for steganography. There are ways of hiding information, which may itself be encrypted, in apparently innocent files such as large media files. It can just look like the little bit of random noise that you get in any such image. The existence of such things can also be difficult to prove.
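To illustrate just how little a hidden payload disturbs the carrier (a toy sketch of least-significant-bit hiding, not a description of any particular tool):

```python
# One byte hidden across eight pixel values changes each value by at most 1,
# which is indistinguishable from ordinary sensor noise.
def hide_byte(pixels, byte):
    return [(p & ~1) | ((byte >> i) & 1) for i, p in enumerate(pixels)]

def recover_byte(pixels):
    return sum((p & 1) << i for i, p in enumerate(pixels))

carrier = [200, 13, 77, 154, 99, 42, 180, 7]   # eight sample 8-bit pixel values
stego = hide_byte(carrier, ord('A'))
assert recover_byte(stego) == ord('A')
print(carrier, stego)   # the two lists differ by at most 1 per value
```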

Of course you need to trust the software developers - if they've made a mistake in their implementation and the existence of such things can be detected, then you could be in serious trouble.

Sun's Rock is barefoot on Abbey Road

Steven Jones

A long, drawn out death...

It looks like whatever development future there might be for SPARC, it will be with Fujitsu. Oracle will seek to eke out software revenue from what will be a steady loss of market share. Whether Oracle will continue to develop Solaris (whether for SPARC or x64) will be interesting to see. Maybe Oracle will seek to put Linux and Solaris on a converging course, or then again maybe Larry will seek to steer Oracle customers towards Solaris and make it a premium product (perhaps by tailoring Solaris and Oracle together) - who knows.

The Fujitsu SPARC64 is actually not a bad chip. It's a bit on the hot side, but in performance and (reputedly) reliability it is way above Sun's old SPARC IV+. Whilst the Niagara will beat it hands down on throughput per watt, when it comes to single-thread performance (which matters for a large number of apps - including meaty databases) there is no contest. With an 8-core CPU on the way and decent single-thread performance, re-badged Fujitsu machines look like a sensible way of avoiding lots of hardware investment. Bad news for Sun hardware engineers of course, although their machines have never quite made it as the best big iron about.

Twitter sued for patent infringement

Steven Jones

@Paul Crawford

That is only the intermediate purpose, and the one quoted by Wikipedia. Virtually all products can be readily reverse engineered. It's certainly true that it is a requirement of the patent system that the patent is published (how would you know what you might be infringing if you didn't know the technical details of the patent or, indeed, what on earth was being patented?). The purpose of the legislation was to encourage enterprise and invention. Publishing the details is a necessary, and welcome, side effect, but it isn't the main purpose.

If publishing technical research were the main aim then it might extend further - for instance, just how much wasted effort would be saved if those seeking patents were required to publish the full details of the relevant research (which they are most certainly not required to do)? This is not just a theoretical issue - drug companies waste a lot of money repeating fruitless research already carried out by others (read Ben Goldacre on the subject). I happen to think he's probably wrong, but the requirement to publish technical details in patents is a very narrow, if inescapable, one.

Steven Jones

@Cyberspy

"Why was this patented?"

I guess you mean why was the patent granted, and that's a good question. Well, the answer is quite simple - just because a patent is granted doesn't mean it is actually valid. These days the patent offices don't do much more than register that the patent exists and maintain the library. Apart from a few obvious checks that you aren't trying to patent fire or a perpetual motion machine, there is no guarantee at all that the proposal is novel, might conceivably work, doesn't infringe another patent or isn't invalidated by prior art. In the US, patents are effectively granted by default unless anybody can be bothered to go through the incredibly lengthy exercise of proving that it either doesn't qualify or infringes on another patent. It's all left for lawyers to argue over and, if necessary, get settled in court.

Perhaps, like Einstein did when he worked in a patent office, all the staff are working on revolutionary theories in physics (although I rather doubt it).

Steven Jones

Mostly Junk

There appears to be nothing obviously novel about any of the patent applications, having had a quick scan, and they are full of inaccurate and erroneous claims. For instance, they claim there are no messaging and alerting systems which allow the recipient to define the method of receiving a message - that's clearly bonkers. In our own company we have many ways of sending out notifications and alerts, and have had for many years. We probably still have pager gateways somewhere. Then there are bits about the use of a database to allow for the definition of individuals, contact methods and groups. All of this is standard stuff.

Reading this stuff fills me with despair - largely because if such generic concepts as those listed in these patent applications are considered novel by the legal system then we are all doomed. The US software patent system is truly designed to be a major revenue generator for lawyers, and as the law was no doubt drafted by lawyers, perhaps that should be no surprise.

It's also fairly clear that the purpose of the patent system has been forgotten. The reason it was invented was to give a short-term monopoly so that inventors and developers could recover and profit from large investments in research and development. The problem was that, without such a system, nobody would invest in such things as a competitor would just copy the idea. So it was meant as a means of encouraging investment. Now we have a system where some companies just throw hundreds of speculative ideas into the patent system, often without any credible plan to produce anything, just on the basis that they could get really lucky in the future and win royalties from a few of these.

Murdoch says Page 3 won't be free from next year

Steven Jones

BBC

I think you can expect the government and regulatory bodies to come under even more pressure over the BBC's free internet services. There's no doubt that a combination of free services from the BBC and the Internet/Google siphoning off advertising revenue spells really bad trouble for the independent media sector (debatable whether you include Murdoch's empire in that little list). It's already the case that local papers are disappearing and that quality, independent reporting is in very short supply. ITV are losing money, and Channel 4 is also in a dire financial position. I don't think it is good for the country to have just a couple of major outlets in the form of a News International/BBC duopoly. However, that is the way we are heading.

It's not a pretty picture for journalism in this country. Yes, there are some very good bloggers, but they tend to cover niche subjects and are often issue-driven. Outfits like the Huffington Post provide some alternative, but that tends to be a collection of columnist-type articles from a host of different sources - some good, some not so good, and rather a lot that are downright dreadful.

Electric car powers across land, ice and water

Steven Jones

Financial fantasy stuff...

More fantasy-land stuff, I guess. So let's imagine this thing could be rented out for 250 days per year - that's £1,000 per year. Keep it in the fleet for 5 years and that's a gross £5,000 of income before you've paid any admin costs, maintenance and all those other little things. You can't even hire a bicycle in the UK at that daily rate, let alone an amphibious electric car of somewhat unproven technology.

TMS wins flash bragging crown with 100TB monster

Steven Jones

Comparison with IBM's TPC setup

It would be interesting to compare this with the set-up that IBM put together for their class-leading TPC-C benchmark of 6,085,166 transactions per minute, or about 100K per second. Most of the cost of that benchmark was in the storage, listing at about $20m. It had no fewer than 68 disk controllers with 11,000 disk drives (8 x 146GB and the rest all 73.4GB 15K). You might, in theory, get something over 2 million random IOPS out of that many disks (before RAID overheads), but I suspect that would be at the expense of about a quarter of a megawatt of power and maybe 20 racks or more of space. Of course you would get nowhere near that 2 million random IOPS when measured at the front end, due to the need to keep I/O queues down to a tolerable level and the write overhead.

Note that TPC rules allow for the vendors to discount the prices provided they would be available at that price at those volumes to real customers.

With the SSD it looks like you could hit 2 million IOPS (at much lower latency than the physical disks) with a list price of something less than $9m, in a couple of racks, and with what I assume is a fraction of the power requirement.

The one real downside is the amount of storage. Most of the IBM TPC-C setup was configured as RAID0 (the log files, which are written serially, were RAID5). Configured as RAID0, the 11,000 mostly 73.4GB 15K drives would have offered about 400TB of storage space, so in terms of IOPS, cost per TB, power usage and (I assume) latency the SSD configuration would already be ahead. Of course those of us in "real" data centres aren't generally allowed the luxury of populating arrays with 73GB drives just to get the IOPS up - 300GB 15Ks (or worse) are the order of the day, so SSD falls way behind on a cost per GB basis. However, the fact that RAID-protected SSD can be comparable in cost per GB to RAID0 73GB 15K drives is very interesting.

It must be about time that a vendor did a TPC-C run using SSDs. It's interesting that nobody seems to have done this yet, but if these numbers are correct, the nail must firmly be in the coffin of 73GB 15K enterprise drives. Cost these configurations over 5 years, including environmentals, and it is no contest. Enterprise 15K drives of about 300GB will be around for a while, but their days are surely numbered.
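The back-of-envelope numbers behind that claim (the per-spindle IOPS figure is an assumption; the prices are the list figures quoted above):

```python
# Crude $/IOPS comparison of the spinning-disk TPC-C rig versus the SSD array.
disk_spindles = 11_000
iops_per_15k = 180                         # assumed random IOPS per 15K spindle
disk_iops = disk_spindles * iops_per_15k   # ~2.0M theoretical, before RAID/queueing
disk_list, ssd_list = 20e6, 9e6            # approximate list prices quoted above
ssd_iops = 2e6

print(f"Disk: ~{disk_iops/1e6:.1f}M IOPS, ${disk_list/disk_iops:.1f}/IOPS")
print(f"SSD:  ~{ssd_iops/1e6:.1f}M IOPS, ${ssd_list/ssd_iops:.1f}/IOPS")
```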

UK teens bullied into sending sex texts

Steven Jones

Fake Charities

There seem to be quite a few charities out there that receive virtually all their funding from the Government in order to lobby the Government to pass some law or other. There are some interesting stats on this site:

http://fakecharities.org/

For instance, in 2007/8 Alcohol Concern got 57% of its income from the DoH, with less than £5k from private individuals. Quite a few of these charities' activities are dominated by lobbying, with relatively little done in delivering services...

Flying 'Motorbike'/Reliant Robin 'to take off next year'

Steven Jones

Centre of Gravity

Where on earth (or in the sky) is the CoG of this thing? As far as I can see, the centre of lift from the main wings is right at the back of the machine if it is as illustrated. I can't see how the tiny wings or main body could generate enough lift to make the difference, so the thing looks destined for a nose-dive given that the pilot (at least) is a long way forwards.

Well, that's unless this is yet another half-baked artist's impression owing more to form than function. Not that we've ever seen The Register feature one of those, have we?

Judge: Informal emails, phone calls did not establish a contract

Steven Jones

Misleading title and sub-title

The way the article is titled is misleading. It gives the impression that it was the medium used for the communication, not its content, which was the issue here. What it actually says is that it was the lack of formality of the content and the insufficient detail that were the main issue, not that a contract has to be written down and signed.

It is perfectly possible to find yourself in a binding contract via email, letter, fax or any other method of communication, provided that its existence and authenticity can't be repudiated. It's just that you need to include sufficient specific details of what was being offered, what was being accepted and for what payment.

Server virtualization – what could possibly go wrong?

Steven Jones

@boltar

There are plenty of reasons why you might choose to virtualise UNIX (or any other OS) rather than co-host applications. Firstly there is the issue of housekeeping and patch management - anybody who has worked on a co-hosted, service-critical environment will know about the problems of co-ordinating outages for things such as patch management, introducing OS upgrades or configuration changes that can only be done by bouncing the machine. Yes, there are times when the hypervisor needs such treatment, but with the ability to move VMs dynamically, downtime for that can be reduced.

Then there is the issue of isolating problems. Unix is actually not very good at that - a badly behaved application can bring the whole machine down. Anything from filling up swap to forking too many processes can, even if it doesn't crash the machine, bring throughput to a virtual crawl. Anybody who knows IBM mainframes will know that there are far tighter controls over workload management there, but that's a far more rigid and less fluid environment. UNIX, Linux and so on are simply not like that - it's a strength and a weakness at the same time. Doing your workload management at the hypervisor level can allow you to greatly limit the impact of badly behaved applications. Then there is the convenience factor of facilitating consolidation - yes, you might consolidate a dozen Unix apps onto a single co-hosted environment, but then you've got to sort out all those version and library differences, the clashing naming standards, the shared configuration files, the kernel settings. It's often more trouble than it is worth.

Also, virtual machines work particularly well in development and test environments. VMs can be bounced, libraries changed and the like without impacting everybody else.

Now this is not to say that co-hosting doesn't make a lot of sense. However, I'm much less convinced about co-hosting unrelated applications. For larger organisations it often makes sense to develop consolidation strategies that allow you to present "appliance-like" services. It's perfectly feasible to produce farms for co-hosting databases, J2EE environments and web services. In that case you have virtualisation at a different level - that of software services. It's much more efficient in hardware utilisation terms than having a VM per application instance. You can then have an environment which is optimised for running a given type of workload.

Eventually it seems likely that everything will be virtualised by default - look at the mainframe arena. However, that's not instead of co-hosting, it's as well as.

As for my main problems with (machine) virtualisation: firstly, there's config management, especially insofar as it affects software licensing, management, performance and capacity planning. If you are going to move your apps all over an ESX farm you had better have a way of dealing with all those issues (and the software licensing one can really bite you in the backside - there are plenty of companies out there that will seek to optimise their revenue through licensing models that don't recognise the reality of virtualisation). Then there is the support problem - I've lost count of the number of suppliers that don't support virtualised environments. Some of it is FUD, some of it is real.

Then there is the issue that machine virtualisation can be inefficient and ingrain bad practice. One thing that VMs do is chew up memory and disk space, as every OS carries a considerable overhead of both. One of the major problems is that none of the OSes that you are likely to run on VMware and the like will share code pages. For those that have used CMS on VM, that was specifically engineered so that different instances of the guest OS would share code pages. Not much chance of that with Windows or Linux (unless things have changed; IBM gave up on doing the same with Linux under VM, but if it were possible it would save huge amounts of memory).
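To put some entirely illustrative numbers on that memory point - the per-guest overhead, working set and shareable fraction below are assumptions, not measurements from any real estate:

```python
# Illustrative figures only - assumed, not measured.
GUESTS             = 20    # application instances to host (assumed)
OS_OVERHEAD_GB     = 1.0   # RAM eaten by each guest OS image (assumed)
APP_WORKING_SET_GB = 2.0   # per-application working set (assumed)
SHAREABLE_FRACTION = 0.7   # fraction of OS pages identical across guests (assumed)

cohosted     = OS_OVERHEAD_GB + GUESTS * APP_WORKING_SET_GB        # one shared OS image
one_per_vm   = GUESTS * (OS_OVERHEAD_GB + APP_WORKING_SET_GB)      # no page sharing
with_sharing = one_per_vm - (GUESTS - 1) * OS_OVERHEAD_GB * SHAREABLE_FRACTION

print(f"Co-hosted on one OS    : {cohosted:.0f} GB")
print(f"One VM per application : {one_per_vm:.0f} GB")
print(f"VMs with page sharing  : {with_sharing:.1f} GB (CMS-style sharing of guest code pages)")
```

Even with fairly generous sharing, the VM-per-application model carries a lot of dead weight compared with a single co-hosted image.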

Google hints Bing! pact will curb competition

Steven Jones

@Daniel Jarick

You really don't have too much understanding of economics or the role of competition, do you? Just to enlighten you over your apparent misapprehension that controlling a market requires the overtly criminal activities you allude to, there are plenty of ways that a dominant supplier (and Google is most certainly that in this market) can control a market through means such as pricing policies, creating barriers to entry, buying up potential competitors and so on.

There are also plenty of structural means by which markets can become uncompetitive. Some markets are natural monopolies. For instance, it's extremely difficult to come up with a sensible market for providing mains water pipes to the home (rather than sourcing the water - a different thing). In the area of computing, Microsoft were able to reach a dominant position due to a combination of astute commercial decisions and the pressing desire of many organisations to have a common means of exchanging data. That happened with the combination of a standardised PC architecture, a standardised operating system and standard data formats, of which the most important was MS Office. Unfortunately for competition, many of the required standards for this platform were proprietary, hedged around with all sorts of IPR issues and less than platform-independent. Microsoft were able to extend this dominant position through the use of pricing policies, tie-ins, cross-subsidies, loss-leading, buying up key potential competitors, or undermining them by giving away products (a straight issue of cross-subsidy).

There are a number of similar tactics being undertaken by Google. They are certainly in the game of giving away products, largely as a means of locking in further advertising revenue. Now, this sort of thing can be in the short-term interests of most people, but in the long term all organisations get fat, bloated and stuck in their ways. That's when the presence of viable competition is particularly important to encourage innovation. It's generally considered a bad thing to allow dominant suppliers to erect barriers to competition through what are broadly called anti-competitive practices (a term that covers a number of commercial tactics which, whilst harmless when practised by smaller suppliers, are devastating when carried out by dominant ones).

I think you've missed the little irony in the story - that it is Google making the point about the tie-up damaging competition when they are dominant. In a very real sense Google control the main market dynamics, and it is not necessarily about the quality of their products. Essentially Google only have one real product - in the sense that it is something they sell. It's delivering advertising - virtually everything else, especially the search engine, is a marketing tool to sell that product.

MySQL startup targets SSDs

Steven Jones

@Matt 21

We have a system with a >99.8% cache hit rate in Oracle, and OLTP transactions are still spending 65% of their time waiting on read I/O. All logging and writing goes to NV cache, so writes are sub-millisecond. The DB is heading towards the 16TB region and has many thousands of users.
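A quick illustration of why such a high hit rate still leaves transactions dominated by read I/O (the per-access times here are assumed round numbers, not figures from our system):

```python
# Assumed access times, purely illustrative.
HIT_RATE       = 0.998
CACHED_READ_US = 5        # logical read from the buffer cache (assumed)
DISK_READ_US   = 6_000    # random read serviced by a SAN-attached 15K drive (assumed)

cache_time = HIT_RATE * CACHED_READ_US
disk_time  = (1 - HIT_RATE) * DISK_READ_US
io_share   = disk_time / (cache_time + disk_time)

print(f"Average cost per read : {cache_time + disk_time:.1f} us")
print(f"Share of read time spent on physical I/O: {io_share:.0%}")
```

The 0.2% of reads that miss cost three orders of magnitude more than the hits, so they still dominate the response time.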

Much of the trouble is caused by COTS packages generated from metadata configurations. A lot of them are very resource-intensive. Many batch processes on these are essentially transactional, with lots of random I/O, and reducing latency by an order of magnitude will make a massive difference.

As for RAID-5, on enterprise arrays with huge NV caches it can work well (for sequential writes, RAID-5 has lower overheads than RAID-1). Ultimately, random reads are constrained by disk latency. RAID-1 can give you an advantage at higher utilisation rates as there are two disks to read from rather than one - simple queuing theory. However, if you are seeing 6ms service times on a SAN you aren't going to get a dramatic difference by going to RAID-1. Writes are unaffected as they are all cached anyway.

The problem is that 20TB of SSD is still too expensive. Maybe in a couple of years' time.

Steven Jones

@Matt 21

"In reality 99.99% of systems are not IO bound when using well laid out traditional disks anyway."

So only one in 10,000 databases is I/O bound on traditional disks? All I can say is that we must have almost all of them in the UK - we have lots of DBs where the most significant single wait event is the latency of read I/O. It may not be relevant to all DBs, but in the transactional area latency on random reads is a limiting factor, and it's not getting better (yes, and that is after throwing memory at the problem, using enterprise arrays with NV cache, 15K drives and all the rest). Once you get down to random access times of about 5ms there is nowhere else to go with physical disks. Of course if you have a small enough DB that it fits in cache then it's not so much of a problem (but beware the horrendous start-up performance with a cold cache).
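For what it's worth, here's roughly where that ~5ms floor comes from (the average seek figure is an assumption for a decent 15K drive, not a quoted spec):

```python
RPM         = 15_000
AVG_SEEK_MS = 3.5                   # typical average seek for a 15K drive (assumed)
ROT_LAT_MS  = (60_000 / RPM) / 2    # average rotational latency: half a revolution

service_ms = AVG_SEEK_MS + ROT_LAT_MS
print(f"Rotational latency        : {ROT_LAT_MS:.1f} ms")
print(f"Random read service time  : ~{service_ms:.1f} ms (~{1000/service_ms:.0f} IOPS per drive)")
```

Short-stroking and bigger caches only nibble at the seek component; the rotational part is fixed by the spindle speed.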

It's generally not such an issue with data mining and the like, where bandwidth is what matters rather than IOPS.

Of course Oracle and the like will benefit from SSDs. However, traditional DBs are optimised for physical disks and data is laid out to take account of it. Removing the random-access penalty by having an effectively zero seek time would open up a lot of possibilities for arranging data. Write-anywhere approaches become viable - simply avoiding over-writing data in situ, as there is no longer a penalty for doing so, which makes transaction back-outs easier. Access to data is then through a re-direction process.

Last chance to vote to cut phone termination rates

Steven Jones

@AC

"Why should mobiles be any different to landline?"

Good question. Interconnect charges for calls into landlines are regulated at fractionally above cost. Mobile operators are allowed to charge something like five times marginal cost. Mobile phone users are being subsidised by landline callers - plain and simple. That's a market distortion and hardly a just system.

Sun tripling RAID protection

Steven Jones

Disk Throughput Rates

Whilst it is true that disk throughput rates do not go up at anything like the same rate as capacity, it isn't quite as bad as described - at least for sequential access, where increased capacity comes from higher areal densities (where the capacity comes only from adding platters, the rebuild time would go up proportionately).

Where capacity comes from increased bit density, total capacity goes up as the square of linear density. Double the bits per unit length and total capacity goes up by a factor of four: basically twice as many tracks and twice the amount of data on each track (roughly - track and bit density won't necessarily go up at quite the same rate). That means if the capacity of a disk is quadrupled in this way, the rebuild time will be doubled (twice as many tracks to read). The only way to improve this would be multiple independent read/write mechanisms (which introduce cost, complexity and reliability issues, along with aerodynamic and vibrational interactions) or higher spin rates - which are already close to reasonable mechanical limits.
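In other words (a minimal sketch of the scaling argument, not specific to any particular drive):

```python
# Capacity growth from linear bit density: tracks x data-per-track.
density_factor  = 2                    # double the bits per unit length
capacity_factor = density_factor ** 2  # twice the tracks, twice the data per track
# A track still takes one revolution to read but now holds twice the data,
# so sequential throughput doubles; only the doubled track count stretches a rebuild.
rebuild_factor  = density_factor

print(f"Capacity grows x{capacity_factor}, full-drive rebuild time grows x{rebuild_factor}")
```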

(The position on random access is a lot worse than for sequential access - the number of IOPS per drive is pretty well fixed unless there are mechanical improvements.)

I'm not wholly convinced about the need for triple parity though. Double parity is important as there is a significant chance of an uncorrectable read error on your remaining copy (a complete failure of another drive in the rebuild window is a very much less likely event). However, the chances of a failure getting past the double parity protection in a RAID-6 setup are very much reduced. Against that, adding a third parity to the configuration introduces even more write overhead and reduces available capacity.
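The sort of arithmetic behind that "significant chance" - the drive size, drive count and error rate below are assumptions typical of large SATA drives, not figures for any specific product:

```python
# Assumed figures, typical of large-capacity SATA drives of this era.
URE_PER_BIT = 1e-14     # unrecoverable read errors per bit read (assumed spec)
DRIVE_TB    = 1.0       # capacity of each surviving drive (assumed)
DRIVES_READ = 7         # surviving drives read in full during a RAID-5 rebuild (assumed)

bits_read  = DRIVE_TB * 8e12 * DRIVES_READ
p_no_error = (1 - URE_PER_BIT) ** bits_read

print(f"Chance of hitting at least one URE during the rebuild: {1 - p_no_error:.0%}")
```

With double parity that same single URE is recoverable, which is why the step from single to double parity buys so much more than the step from double to triple.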

The killer problem here is that the very geometry of disk storage (which is essentially a sequential-access system with a moderately high-speed seek facility) is going to make things worse. Patching up RAID systems to cover for incredibly lengthy rebuilds, at considerable cost to write performance, is no fix for the fundamental problem that you have many TB all being funnelled through the single limited-bandwidth bottleneck of one active read/write head.

Amazon Kindle doomed to repeat Big Brother moment

Steven Jones

@Robert Long 1

Ts & Cs do form part of the contract. Read them first before you sign. As for the right of an Intellectual Property owner to have illegally made copies returned - well that right will apply in pretty well any country where copyright applies. The customer's claim will then be against the supplier. Of course the reason why this doesn't happen much in practice is that it is incredibly difficult to enforce. Hence the copyright holder will invariably go for compensation from the organisation or individual that breached copyright in the first place - it's probably the only practical approach. Of course a court would have to approve it - but there is very little doubt that they would do so.

There are lots of cases where counterfeit stock bought in good faith has been destroyed. Of course it's generally retailers that are targeted (of course they aren't always innocent buyers).

I'll repeat again - if you have an illegally made copy in breach of copyright, you have no legal right to retain it. The fact that you can in practice is simply because it is very difficult to track down and enforce the copyright in such cases. Nothing more, nothing less.

I'm always astounded by the number of posters who think the law can just be got round by a narrow reading of it in their own interests.