* Posts by Steven Jones

1287 posts • joined 21 May 2007

VAT's all folks: Telecoms and services tax to be set at consumer's homeland rate

Steven Jones
Bronze badge

Re: Next year, I will mostly be living in Luxembourg

I rather think they won't be relying on your IP address to locate you, but on details from your credit card, bank account or other payment method.

0
0

Why won't you DIE? IBM's S/360 and its legacy at 50

Steven Jones
Bronze badge

Re: The first clones...

You are quite right - DME, not DMA (it's been many years). DMA is, of course, Direct Memory Access. I was aware there was a 1900 emulation too, and there were those who swore by the merits of George (if I've remembered the name properly). Of course, the 1900 had absolutely nothing to do with the DNA of the IBM S/360.

0
0
Steven Jones
Bronze badge

The first clones...

It's worth mentioning that in 1965 RCA produced a mainframe which was a semi-clone of the S/360, and almost bankrupted the company in an attempt to compete with IBM. It was binary compatible at the non-privileged code level, but had a rather different "improved" architecture for handling interrupts faster by having multiple (partial) register sets. The idea, in the days when much application code was still written in assembly code, was that applications could be ported relatively easily.

The RCA Spectra appeared over in the UK as well, but re-badged as an English Electric System 4/70. Some of these machines were still in use in the early 1980s. Indeed, UK real-time air cargo handling and related customs clearance ran on System 4/70s during this period (as did RAF stores). Of course, English Electric had become part of ICL back in 1968. Eventually, ICL were forced to produce a microcode emulation of the System 4 to run on their 2900 mainframes (a method called DMA) in order to support legacy applications which the government was still running.

In a little bit of irony, the (bespoke) operating systems and applications mentioned were ported back onto IBM mainframes (running under VM), and at least some such applications ran well into the 21st century. Indeed, I'm not sure the RAF stores system isn't still running it...

Of course, this had little to do with the "true" IBM mainframe clone market that emerged in the late 1970s and flowered in the last part of the 20th century, mostly through Amdahl, Hitachi and Fujitsu.

0
0

David Cameron defends BT's taxpayer-funded broadband 'monopoly': It's a 'success story'

Steven Jones
Bronze badge

Re: Gaps?

It was each of the local authorities that carried out the Open Market Reviews under the BDUK framework, and they would have treated all the telecommunication suppliers equally in that respect (although clearly that means VM or BT for the vast majority), and it was they who drew up the intervention areas. Of course it's always going to be difficult for small companies with limited finances to commit expenditure, but frankly that's because in the world of major infrastructure projects the capital requirements are high, as are the risks. If they weren't, there would be hundreds of local companies doing it, and frankly there aren't. It's a game for companies with deep pockets who can absorb risks. (Like Google - who can afford a large scale commercial experiment with Google Fibre.)

What you appear to be requiring is that a commercial company releases its investment plans to competitors, and I really don't see that happening, especially in an area of investment like telecommunications, where there can be rapid take-up and change.

I suspect that the tendency in fixed line telecommunications is very similar to that for electricity, water and gas. The economies of scale are with large operators and that's a natural state of the market. What that means of course is we end up with a highly interventionist regulated environment (which is what we have), with more competition at the higher and added value levels. There will be specific areas where smaller companies will make an impact - industrial estates, new apartment blocks and so on (there have been some developments recently), but I don't think we are going to see the country somehow covered by a patchwork of small, local network suppliers. That's how both electricity and telephone provision started out, and it ended up being consolidated into national networks in virtually every country you care to name (usually nationalised, as in the UK, but privately in places like the US). By a quirk of history, Kingston upon Hull retained its own local telephone network, but that's highly unusual.

One other point. It's rather unfortunate that public money is required at all to subsidise rural roll-out. In the case of the telephone (and other utilities), that subsidy was achieved via a cross-subsidy system. That continues to this day in that the copper loops in rural areas are cross-subsidised from revenues in urban areas. That can be done, as there is a regulatory regime that is enforced via a USC, but that's not the route that Ofcom (or government policy) favours. What they went for is a highly competitive market as deep into the network as technology allows, which with ADSL was essentially from the DSLAM onwards. As penetration goes deeper into the network, costs become prohibitive and you end up with a de-facto monopoly on FTTC solutions. However, the structure of the market - with very low cost competition via LLU operators - means that there isn't the potential to cross-subsidise roll-out.

Perhaps if Ofcom had adopted a model which actually represented the differences in cost structures between urban and rural, such that customers in those areas bore the real cost of provision, then subsidies wouldn't have been required (cross or otherwise); the market would have provided. However, I rather suspect that rural dwellers wouldn't much appreciate paying the full commercial costs involved, but as the market was structured, they didn't have the choice.

0
0
Steven Jones
Bronze badge

Re: Gaps?

@ Warm Braw

EU state assistance rules do not allow any substantial overbuild of any comparable existing privately-funded system. In the case of the BDUK funded scheme, that included VM broadband, as it is capable of exceeding the chosen NGA standards for "superfast". Indeed, VM keep a close eye on this for obvious reasons and would object to any state funded competitor encroaching on their "patch" to any significant extent.

The BDUK process included an open market survey asking for any (credible) privately-funded schemes before the intervention areas were defined. The length of time required to gain EU approval was responsible for a considerable part of the delay in the scheme as, not surprisingly, politicians tend not to consider such issues before making their announcements.

So I can't be sure whether, in your area, there is a "legal" overlap with the VM network. If there is, most probably it was part of the commercial roll-out. It's extremely common (almost the norm) for some cabinets on an exchange to be part of the commercial roll-out and others to be on the BDUK scheme because they were not considered to be commercially viable. It can be very difficult to tell the difference. Some authorities (like the Bucks & Herts schemes) actually publish which cabinets are to be enabled as part of the BDUK scheme, but that's far from universal.

nb. my cabinet is similarly in a VM area and was enabled three months ago, but it was part of the commercial roll-out, whilst I expect others on the same exchange (serving smaller communities) to be BDUK enabled.

That doesn't mean there won't be a small amount of overlap, as inevitably the footprint of a cabinet won't align exactly with the edge of the VM network.

0
0
Steven Jones
Bronze badge

Re: If you're going to plow billions into telecoms infrastructure...

I'm not sure what country you are from, but it's spelt plough in the UK.

As for spending £30bn on an FTTH network (which is a credible figure, roughly in line with what the island of Jersey is spending per property), then you'll need to find a legal way of doing it. Overbuilding the VM and BT NGA networks would fall foul of EU laws on state assistance, so you have to factor in re-nationalising their access networks, which will wipe out half or more of your budget. So now you're down (optimistically) to £15bn, which is nothing like enough. So let's make your budget £45bn. Also, how are you going to get people off the copper network? The evidence is that the majority of folk stick to the copper as it's cheap (being a sunk cost) and meets most of their needs. Withdraw it, and you've got a whole bunch of LLU operators who'll want compensation for the investments they've made in kit. In reality, any government would be stuck with running both fibre and copper in parallel for many years, and wholesale charges will be forced up to recover the costs.

What you are describing is exactly what the Labour government decided to do in Australia with the National Broadband Network. That was aimed at delivering fibre to 93% of properties (so didn't cover the remote areas) and, on the latest review, was costed at $72bn (AUS), or £40bn, albeit for only about 40% of the number of properties the UK has. It's since been downgraded to a mixture of FTTP and FTTC, but it's still going to cost £24bn (that's assuming it actually delivers).

Against that, the public expenditure on broadband infrastructure in the UK is very low. In fact, many might argue that there is something wrong with the regulatory and commercial structure if the government is spending public money at all. The problem is the path that Ofcom (and most EU regulators) have gone down. As a mechanism for minimising retail pricing, they've forced down the price of copper to the point where it's very difficult to justify investment in NGAs outside "prime" areas. There is precious little incentive for private investors to put money into infrastructure.

0
0
Steven Jones
Bronze badge

Re: fudging numbers

You are just plain wrong - the percentage of EO lines in rural areas is nothing like 90% (although it could be for individual exchanges). Among other things, relatively few village exchanges serve just one village, and all the others will have cabinets. There are solutions for EO lines, but I rather suspect that they aren't priorities as other lines can be covered at lower costs.

1
0
Steven Jones
Bronze badge

Re: This is a real success story for our country

If you think you can connect more than a small minority of properties to fibre with a budget of £1.2bn, you are living in fantasy land.

4
3

UK regulators: We will be CHECKING UP on banks' IT systems

Steven Jones
Bronze badge

Re: "antiquated nature of bank IT systems"

Concentrating on the underlying hardware and OS rather misses the point. Certainly you can run rock-solid IT systems on mainframes, and characterising them as "antiquated" actually tells you nothing about the underlying resilience of the applications. However, even the most reliable and robust systems can be undermined by poorly trained and managed staff. It shouldn't be forgotten that the 2012 RBS outage was not due to dodgy Windows XP, Linux or UNIX systems, but to a problem with the support and maintenance processes around good old CA-7 on a mainframe system. It's not that CA-7 or z/OS is fundamentally unreliable; the failure was in good operational and support practices.

The real issue is that, in the drive to reduce costs and roll out new features, what is being sacrificed is the quality and experience of operational, technical support and IT management staff and resources. If good practices are not maintained, then even the most reliable hardware in the world will not prevent catastrophic outages.

6
0

MPs attack BT's 'monopolistic' grip on gov-subsidised £1.2bn rural broadband rollout

Steven Jones
Bronze badge

Re: Openness

Also the power backup for the FTTC cabinets is via internal batteries which are kept charged by the mains supply. They are sufficient to keep the broadband going for a few hours, but if the power is off for any longer, a portable generator will be required (which is, incidentally, how small telephone exchanges are powered if there's an extended outage).

Note that if/when we see fibre to the remote node (or whatever people care to call it), the small "mini-DSLAMs" that will run either VDSL or G.fast are likely to be powered over the (short) copper loop from the customer's premises, thereby avoiding the need for mains power.

1
0
Steven Jones
Bronze badge

Re: Could competition have worked?

Contention is all to do with how much bandwidth the ISP installs in the backhaul (plus things like peering). Quite how the backhaul is provisioned depends on the particular exchange and what the long haul options are, but in any event, it's "just" a matter of money. Note that some operators (mentioning no names) have the reputation of being parsimonious on exchanges they haven't unbundled as they have to buy the back-haul (at least to a hand-off point) from BT Wholesale. If they cared to buy more bandwidth, then the congestion would be eased or even eliminated.

Anyway, the point is to pester the ISP with complaints, and if all else fails, use an ISP with a better reputation for managing congestion.

0
0
Steven Jones
Bronze badge

Re: You're an idiot.

Like most things in telecommunications, there's usually a reason why things are the way they are, and often it's historical. That applies to lots of old infrastructure - for instance, if we were building the railways now, they wouldn't have the bottle-necks and load gauge restrictions that have been inherited. To the simple-minded, it all seems so easy to rip it up and start again, but it's never that simple.

Of all people, those involved in IT ought to know this. Any very large organisation will, over the years, have inherited a vast legacy of systems which it is often not economic to just replace, and almost never possible to just change overnight. So it is with major bits of national infrastructure. Changes will tend to be evolutionary, not revolutionary. For long periods, old and new will co-exist.

If people want some idea of what the problems are with grand national telecommunication projects in western countries, I'd invite them to look at the (highly politicised) Australian National Broadband Network, which was originally planned to bring FTTP to 93% of Australian properties (so didn't cover the really remote areas). Estimated costs escalated to $73bn (AUS) with timescales disappearing over the horizon. Following a strategic review, this is meant to be coming down to a "mere" $41bn (AUS) using a mixture of FTTP and FTTC. Of course, even suburban Australia is less densely populated than the UK, but bear in mind it has only about 40% of the number of premises.

So that approx £23bn for 10m properties rather puts the BDUK (and related) public broadband expenditure of about £1.4bn in perspective as it covers roughly the same number of premises in the intervention areas, and will deliver rather earlier.

2
0
Steven Jones
Bronze badge

Re: You're an idiot.

VDSL is not allowed over EO lines due to the ANFP, which is administered by the NICC. This sets power and frequency profiles which are designed to allow different services to co-exist by limiting cross-talk. The concern over running VDSL over EO lines is the possibility of conflict with ADSL (and maybe a few other services which use carrier modulation). Similarly, you can't run ADSL from street cabinets.

The NICC is effectively a trade body controlling the technical rules for use on BT's copper network.

4
0
Steven Jones
Bronze badge

Re: Were Fujitsu ever really serious?

The BDUK framework required that the successful bidder had to provide non-discriminatory wholesale services. Indeed, it would have been virtually impossible to have got anything else through EU state subsidy rules if ISPs were locked out. Of course, this means that the commercial case is poorer as the winner couldn't count on retail level revenue (and hence the gap funding is higher).

1
0
Steven Jones
Bronze badge

Re: I've just been told by a little bird

I've no idea where the distance from the exchange comes into it. There are lots of cabinets in the country which have been enabled which are far more than 1 mile from the exchange (like the one I'm attached to). Of course, the further the cabinet is from the exchange, the more it costs to run the fibre to it, but what is probably far more important is how many properties can be usefully serviced from the cabinet. Of course, there may be particular obstacles - like the cost of running power, or blocked ducts - but these aren't directly associated with the distance from the exchange.

Of course, if what you meant was the distance from the cabinet, then the speed available will be greatly reduced at 1 mile (or 1.6km), as the limit for the 24Mbps BDUK threshold is at around 1.2km line length, but you can get useful speeds up to 2-2.5km from the cabinet.

There are also trials being performed on fibre-to-the-remote-node. Basically a very small DSLAM up a pole which is connected by fibre to the exchange and which might be line powered (maybe from the customer premises, as it's perfectly feasible to provide a few watts over a few hundred metres, where it's not possible over km-type distances). All experimental just at the moment.

0
0

China's rare earth supply crimp plan ruled to be illegal

Steven Jones
Bronze badge

If you join the club, you keep to the rules

Of course a country is free to decide who to sell goods to. However, if a country voluntarily signs up to the WTO, then it has to abide by the rules. If China wishes to be able to sell goods and services into other markets without undue discrimination by using WTO rules, then it is obliged to work within them.

Some of the rules involve not seeking to create an artificial advantage to local industries by attempting to monopolise local raw materials. Applying arbitrary quotas which apply only to exports of raw materials falls under this. Indeed this does affect the US as well, as they are unable, under WTO rules, to prevent the export of their cheap "fracked" gas, and some US gas terminal ports are being refitted for export.

Note this doesn't mean that there has to be a free for all which means any resource can be plundered without consideration to the environment, but what it does mean is that a country can't discriminate in favour of its own industry. Of course, it's all vastly complex in practice, but that's the basis of the principle.

4
0

Think drone delivery is hot air? A BREWERY just proved you wrong

Steven Jones
Bronze badge

Light beer

"Think drone delivery is hot air? A BREWERY just proved you wrong"

Oh yes, of course it did...

4
0

Don't stare: SHRUNKEN Mercury lost 7km, but only 'cos it's COOLING

Steven Jones
Bronze badge

Extending the methodology

Using the same basis of estimating how much the planet has shrunk by the extent and size of wrinkles, my head has got an awful lot smaller since my youth.

4
0

BT caught in data gaffe drama: Whistleblower squeals over alleged email fail

Steven Jones
Bronze badge

Re: HTTPS compulsory?

From what I can see, from the user end the logon credentials for btinternet are sent over HTTPS. Of course, that doesn't mean it's HTTPS end-to-end. The HTTPS termination point can easily be at a different point in the communication chain to the actual email web server - it just goes through some form of proxy service. However, it's unclear from the article where the exposure is meant to be.

0
0

X-IO to heat up ISE storage bricks with iSCSI access

Steven Jones
Bronze badge

Re: "two triple disk RAID 6 failures"

Their website describes it quite well, although they do use some dubious terminology. For instance, under self-healing they claim the disk is "remanufactured"

"using our patented software technology, every drive in an ISE can be taken into repair mode, power-cycled, and remanufactured just as it would be if it were RMA’d to the drive’s supplier — all while the ISE continues to operate at full performance and full capacity."

That's a weird use of the term "remanufactured". Of course what they will really be doing is performing a full low-level format, identifying bad tracks/sectors and so on. It's hardly the normal use of the word "remanufactured", which usually means replacing parts of the device (like bearings) which have fallen out of manufacturing tolerances. Of course, they can get away with these weasel terms as no manufacturer actually remanufactures disk drives these days - they simply aren't serviceable that way. So yes, they might do what a manufacturer does when faced with a returned disk, but remanufacturing it isn't. I think re-certifying, as you had it described to you, is far more accurate.

However, they also say that if part of the disk is unusable then they will take that element out of service. I've no doubt that extends to a full HDD failing when the entire disk will have to be retired. At that point there's no way a user can restore it to full resilience as the units are sealed.

nb. the likes of NetApp have schemes which monitor drives and dispatch replacements to be user-replaced (not a hard job). The returned drives will then be assessed and go through reformats, map-outs and re-certification. If they don't meet standards, they can be taken out of the pool of replacements for dispatch to customers. I've no idea of the exact methods used, but I suspect it's possible to do more extensive work back at the factory so to speak. I doubt what NetApp do is much different in principle to other enterprise storage suppliers.

1
0
Steven Jones
Bronze badge

Re: "two triple disk RAID 6 failures"

It's fairly easy to see how they achieve such low failure rates. They simply put the redundancy in the package and put in a system that supports multiple disk failures, probably via some sort of virtual RAID. That way you could provide full access, even if two of the actual drives have suffered physical failures.

Of course, you can't replace the internal drives yourself to restore the full resilience - it's a sealed package. After 5 years, you could well start suffering more failures if some of the internal redundancy has been compromised.

The secret here is that the failure rate is measured by the entire ensemble failing, not by the rate of failure of the internal disks. The proper comparison is with the failure rate of a RAID array, not an individual disk.
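
For a rough sense of how internal redundancy drives the quoted failure rate down, here's a minimal Python sketch. All the numbers (ten internal drives, a package that tolerates two dead drives, a 3% annual failure rate per drive) are illustrative assumptions, not X-IO figures.

from math import comb

# Illustrative assumptions only - not X-IO's real figures.
n_drives = 10        # drives sealed inside the package
p_fail = 0.03        # annual failure probability of one internal drive
tolerated = 2        # package keeps working with up to this many dead drives

# P(package fails) = P(more than 'tolerated' drives fail in a year),
# ignoring any internal rebuild, so this is pessimistic.
p_package = sum(comb(n_drives, k) * p_fail**k * (1 - p_fail)**(n_drives - k)
                for k in range(tolerated + 1, n_drives + 1))

print(f"per-drive annual failure rate : {p_fail:.1%}")
print(f"package annual failure rate   : {p_package:.2%}")   # roughly 0.3% vs 3%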

0
0

Battery vendors push ultracapacitor wrappers to give Li-ions more bite

Steven Jones
Bronze badge

Good idea, but maybe not an optimal solution

Given that the ultracapacitor can only provide power for transient peaks, I assume that this will require some form of close control with the device being powered where it's capable of sustained high demand. For instance, on a smartphone it might be necessary to "throttle back" the processor in order to reduce power consumption to the sustainable level. I suppose that it could be done without a full control interface (e.g. the device could monitor demand using "dead reckoning" rather than a closed-loop control system), but I don't think such a system would be optimal.

Given that this logic has to be embedded in the device's management systems anyway, it would surely make more sense to put the ultracapacitor in the device itself. Then the battery can remain just that, which means not having to invent a whole new interface. It would also be cheaper where batteries are replaceable rather than embedded. Also, this is something that device manufacturers could implement without a new battery specification. Indeed, maybe they already do.
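
To make the "dead reckoning" idea concrete, here's a toy Python sketch of that sort of control loop. Every name and number in it is hypothetical - it isn't any real battery or handset API - it just shows the device covering peaks from the capacitor and throttling back to the battery's sustainable level once the capacitor is drained.

# Toy model of "dead reckoning" power management: the device estimates how much
# transient headroom the ultracapacitor has left and throttles itself when the
# sustained draw exceeds what the battery alone can deliver. Figures made up.

BATTERY_SUSTAINED_W = 3.0    # what the cell can supply continuously
ULTRACAP_ENERGY_J = 20.0     # usable energy stored in the ultracapacitor

def run(demand_watts, dt=0.1):
    cap_energy = ULTRACAP_ENERGY_J
    for demand in demand_watts:
        surplus = demand - BATTERY_SUSTAINED_W
        if surplus > 0:
            if cap_energy >= surplus * dt:
                cap_energy -= surplus * dt       # peak covered by the capacitor
                allowed = demand
            else:
                allowed = BATTERY_SUSTAINED_W    # capacitor exhausted: throttle back
        else:
            # spare battery capacity recharges the capacitor
            cap_energy = min(ULTRACAP_ENERGY_J, cap_energy - surplus * dt)
            allowed = demand
        yield allowed, cap_energy

# Example: a sustained 5 W burst is granted until the 20 J run out, then clamped to 3 W.
for allowed, cap in run([5.0] * 120):
    pass
print(f"power granted at end of burst: {allowed:.1f} W, capacitor energy left: {cap:.1f} J")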

1
0

Indonesia plans 10 Gbps FTTP as part of 20-million-premises broadband project

Steven Jones
Bronze badge

Re: I applaud their ambition.

You might, but at what cost? You are already cross-subsidised by other consumers on your phone line, as the much longer line will represent more capital investment, a higher maintenance overhead and so on. The same is probably true of your water and electricity, assuming those lines are longer than the UK average. Similarly, delivering mail to you probably loses money too.

It's a matter of simple economics. Certain services are just cheaper to provide in high density areas. If a company could see a viable market for high speed broadband in your area at a price that consumers would be willing to pay, then it would be done. Of course, the reality is that people expect to pay the same price wherever they are, and the regulation of the market is such that it doesn't allow for significant market differentiation. Hence it's only going to happen through a subsidy. Either a cross-subsidy via some form of duty incumbent on operators (as happens with telephone, electricity and water) or an explicit state subsidy.

So, if consumers in your area are willing to pay - say - £50 per premises for a high speed broadband service then it would be likely that a means would be found to provide it. However, I suspect that most won't pay that much. Hence the need for a subsidy.

0
0
Steven Jones
Bronze badge

Re: I applaud their ambition.

Or "I want somebody to subsidise where I live"...

2
1
Steven Jones
Bronze badge

" using the connerie-vian* rule-of-thumb formulated in Australia, which suggests 10 million FTTP connections cost $AU90 billion, Indonesia will need over $100bn for this project, or about eight per cent of GDP."

It's simply ridiculous using Australian models for projecting costs for infrastructure provision in Indonesia. Whilst the technology costs will be similar, it is well known that 80% of the cost comes down to installation. That's roadworks, installing cables, putting up poles, running cables and the like. Those costs depend heavily on local labour rates and not those in Australia (some of the highest in the world).

Also, the Australian National Broadband Network is not exactly a shining example of efficiency, not to mention that its cost includes effectively re-nationalising a lot of assets. Indeed the network has been subject to a strategic review (which now recommends a mix of technologies, not just FTTP).

http://www.nbnco.com.au/about-us/media/news/strategic-review.html

So yes, this is going to be expensive, but 8% of GDP? I think not.

0
0

Neil Young touts MP3 player that's no Piece of Crap

Steven Jones
Bronze badge

Re: CAPS LOCK MUSIC

CAT 6 for speaker cables? You're joking, right? It might be good for noise rejection, but that's hardly an issue for speakers. The problem is the cross-section and the insertion loss due to resistance. It's only 0.58 mm2, which is only good for about a metre at best on a 4 ohm speaker system. Anything longer than that and you'll want a much bigger cross-sectional area.
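
For anyone who wants to check the arithmetic, here's a rough Python sketch using the standard resistivity of copper and the cross-section quoted above as an assumed parameter; plug in the real conductor area of whatever cable you're judging.

# Back-of-the-envelope series-resistance check for speaker cable.
RHO_CU = 1.68e-8          # ohm-metres, resistivity of copper
AREA_MM2 = 0.58           # conductor cross-section quoted above (assumed), mm^2
SPEAKER_OHMS = 4.0

def cable_resistance(length_m, area_mm2=AREA_MM2):
    """Round-trip (there and back) resistance of a two-conductor run."""
    return RHO_CU * (2 * length_m) / (area_mm2 * 1e-6)

for length in (1, 3, 5, 10):
    r = cable_resistance(length)
    loss_pct = 100 * r / (r + SPEAKER_OHMS)
    print(f"{length:>2} m: {r * 1000:5.0f} mOhm series resistance, "
          f"~{loss_pct:.1f}% of the drive lost in the cable")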

1
0

20 Freescale staff on vanished Malaysia Airlines flight MH370

Steven Jones
Bronze badge

Re: Black Box

Whilst the cost of uploading flight recorder information to satellite is certainly a factor, the technology is not. There exist a number of proven ways to provide robust data transfer over unreliable data links and, in any event, none of this would preclude the use of local storage on the data logger.

In fact the subject of real-time upload of flight recorder information has been around for many years. It's just a matter of the authorities mandating its use. Looking around, the cost of satellite data transmission is of the order of several dollars per GB, albeit I've no doubt it depends on the nature of the services. Flight data recorders seem to have capacities measured in the several GB region, so it seems to me to be perfectly practicable.
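
Putting very rough numbers on that (the per-GB price and per-flight data volume are just the ballpark figures mentioned above, and the flights-per-day figure is an arbitrary assumption for a busy short-haul airframe):

# Very rough cost estimate for streaming flight-recorder data over satellite.
COST_PER_GB_USD = 5.0        # "several dollars per GB"
DATA_PER_FLIGHT_GB = 4.0     # "capacities measured in the several GB region"
FLIGHTS_PER_DAY = 10         # hypothetical utilisation of one airframe

per_flight = COST_PER_GB_USD * DATA_PER_FLIGHT_GB
per_year = per_flight * FLIGHTS_PER_DAY * 365
print(f"per flight : ${per_flight:.0f}")
print(f"per year   : ${per_year:,.0f} for one heavily-used airframe")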

The archive storage of such data is hardly a major cost issue. Such data can be discarded as soon as it's no longer relevant, although I rather think that summarised extracts of safety-related information would be useful. Also, it has been the practice to archive some flight data recorder and airline engine operating information for operational analysis to optimise such things as engine reliability and fuel economy.

1
0
Steven Jones
Bronze badge

Re: Very sad indeed...

@ James Hughes 1

""The odds of dying per mile are

by car, 1 in 100,000

by plane,1.6 in 100,000,000,000"

Really? For car driving it's a chance of 1 in 100,000 per mile? I rather think not, as the average UK annual car mileage is about 12,000, which would imply a better than one-in-nine chance of dying every year. In fact, the chance of being killed in a car is about 5 per billion miles (in the UK) as against 0.08 per billion miles in an airliner. That's 60 times safer (per mile) by scheduled airline, not 625,000 times safer.

However, that isn't really comparing like with like. Nobody flies an airliner from their front door (well, unless your name is John Travolta). In practice, safety stats like this only make sense where they are genuine alternatives. If you take long distance car journeys on motorways, then the death rate per billion miles is reduced by a factor of 3 (in the UK). So that's more like 17:1 in favour of the air alternative. However, factor in the much higher fatality rate per journey of air over car (the stats I find show that), and the safety comparison of short-haul air versus long car journeys closes again. Indeed, you might well be safer travelling by coach than on short-haul plane flights.
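
For reference, the correction is just this arithmetic, using the UK per-billion-mile figures quoted above:

car_per_bn_miles = 5.0
air_per_bn_miles = 0.08
print(f"airline vs average car driving: {car_per_bn_miles / air_per_bn_miles:.0f}x safer per mile")  # ~60x

# And the claim being rebutted: 1 death per 100,000 car miles would imply,
# over a typical 12,000-mile year, better than a one-in-nine chance of dying.
p_per_mile = 1 / 100_000
p_per_year = 1 - (1 - p_per_mile) ** 12_000
print(f"annual risk implied by the original claim: {p_per_year:.1%}")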

0
0
Steven Jones
Bronze badge

This is not an appropriate time for that comment.

18
2

Nutanix photo-bombs VMware's selfie with delayed patent release

Steven Jones
Bronze badge

There must be a better way...

So this is a patent that has been granted for running the sort of abstracted IP-based I/O services that have been run on physical servers in one guise or another for many years, just on the basis that it is now being run on a virtual machine? That's absurd.

It's about time that some other means was found of adjudicating what is, or is not, a patentable IT product than this charade, which only serves to enrich lawyers and produce this sort of nonsense. Frankly the number of true IT innovations, rather than just natural developments of existing technologies, is relatively small. Perhaps what's required is an independent group of IT specialists who can draw up some rigorous guidelines and provide an early sifting service on what is truly innovative. There are peer-review systems for academic journals; perhaps something equivalent is needed here. It might cost more money in the first place, but it is surely a more economic system in the long run than this apparent free-for-all in granting trivial patents.

0
0

Traditional RAID is outdated and dying on its feet

Steven Jones
Bronze badge

Re: Eh?

You are quite right, you can't rebuild a 4TB single disk in minutes. It's utterly impossible. However, what you can do is have a distributed system such that 4TB of "lost" data is actually lots of much smaller segments. Let's say 100 x 40GB segments, each of which is part of a redundant set of some sort. Then you re-assemble those 100 x 40GB segments on 100 different disks using spare capacity. It's certainly possible (in theory) to write 40GB to a single disk in less than 10 minutes.

That's the basic principle: you use some form of distributed redundancy system such that if you ever lose one disk, you can involve a very large number of disks in both the recreation of the "lost" data and in the reconstruction.

Of course you can do this with a clever file system, but it's certainly possible with block-mode arrays too. File systems can be rather more clever than that, as it's only the active data that needs to be recovered. A native block mode device, at best, can only know what blocks have been written to via a bit map. However, doing all this requires a lot of heavy processing and data shuffling. You can't simply delegate the task to some dedicated hardware RAID controller to do in the background.

That 4TB recovered in 10 minutes still involves (at least) reading and writing a total of 8TB of data (depending on how the redundancy is generated), and that's at least 13GB per second. With more complex, and space-efficient redundancy schemes, you may have to read several times that to recreate the data, so it would be even higher. That's very demanding, and will require a lot of CPU and memory bandwidth which also, of course, has to carry the normal workload at the same time. Of course, if you relax that 10 mins restore time, it's less demanding.
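
The arithmetic behind those figures, as a minimal sketch (the disk count, segment size, target time and simple 2x read/write factor are the illustrative numbers used above):

# Arithmetic behind the distributed rebuild example: 4 TB lost,
# spread as 100 x 40 GB segments, rebuilt within 10 minutes.
lost_tb = 4.0
segments = 100
target_minutes = 10.0
read_write_factor = 2.0   # at minimum, 4 TB read plus 4 TB written

per_peer_gb = lost_tb * 1000 / segments
total_moved_tb = lost_tb * read_write_factor
aggregate_gb_s = total_moved_tb * 1000 / (target_minutes * 60)
per_peer_write_mb_s = per_peer_gb * 1000 / (target_minutes * 60)

print(f"each peer rebuilds {per_peer_gb:.0f} GB")
print(f"aggregate data moved: {total_moved_tb:.0f} TB -> {aggregate_gb_s:.1f} GB/s system-wide")
print(f"per-peer write rate: {per_peer_write_mb_s:.0f} MB/s, comfortably within a single drive")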

It was my experience of storage arrays (mid and enterprise) that the processing capacity was often the limiting factor on I/O performance (depending on access patterns).

2
0

BT demands end to Ofcom wholesale broadband subsidies for BSkyB, TalkTalk

Steven Jones
Bronze badge

Re: @ NinjasFTW -- As much as I hate to say this....

The original competition model against BT was at the infrastructure level. Hence all the cable franchises which were granted. To make these more attractive, they were granted local monopolies on TV transmission along with the right to offer fixed line telecommunication services. Of course the powers that be vastly underestimated the cost and complexity of doing this (not to mention the fragmentation). Then there were also fixed radio link models (which failed economically), not to mention what Mercury was meant to do.

Back in the 1980s, nobody gave any thought to the copper pair network for high speed data or video. The future was meant to be cable, and BT were essentially prevented from delivering a fibre network due to the threat to cable investment.

Hence the new regulatory regime which came about with Ofcom. The option was open to have taken this to the Monopolies Commission who may have enforced a separation between infrastructure and retail. However, even that isn't so simple. The regulatory regime already has two separate layers of wholesale services (Openreach and BTW), and to some extent they overlap, especially now as the FTTC broadband wholesale service now falls under Openreach versus the ADSL exchange services which are regulated under BTW. Such complexities would also affect full separation models.

Of course the new regulatory regime was not what was sold to the original BT shareholders.

2
0
Steven Jones
Bronze badge

Re: When BT can...

Except the Plusnet subsidiary would go broke if it persisted in making losses as it's a separate company, albeit wholly owned. It would be blindingly obvious to regulators if it wasn't economically viable as there's no way to cross-subsidise.

3
0
Steven Jones
Bronze badge

The network was sold. It wasn't a gift.

@ninjasFTW

"Except BTs network was originally provided by the tax payer."

There's always one idiot that makes this statement. Firstly, the network was not paid for by the tax payer. It was funded by the Post Office (and before that the GPO) using the charges levied against what the state organisation cared to call subscribers. Indeed, in most years the UK government took a levy from the PO telecommunication operations, which were profitable due to the high charges (whilst simultaneously limiting investment through borrowing caps, as it was considered part of public finances - that's why party lines persisted well into the 1970s and beyond).

However, even if the tax payer had paid for the network, it's irrelevant. The whole operation, network and all, was sold to private investors. It wasn't a gift. Indeed, if you adjust for inflation, the amount raised by the sale of BT shares over three tranches is considerably more than BT's market capitalisation. This is not helped because the shareholders were landed with a huge pension liability, largely because of the inflated UK workforce (over 250,000 at the time of privatisation). Be glad it's not like the Royal Mail, where the pension black hole has had to be taken over by the government to make that organisation financially viable.

Bear in mind, those that bought shares originally did so in a completely different regulatory environment, and this was changed retrospectively, much to the shareholders' disadvantage.

11
0

Fukushima radioactivity a complete non-issue on West Coast: Also for Fukushima locals, in fact

Steven Jones
Bronze badge

Re: Cancer from radiation

You clearly don't understand what you are talking about. One of the reasons that the subject of radiation-induced fatalities from accidents like Fukushima is so open to argument is that any excess cancers will not be measurable. In 10, 20 or 30 years' time, there will not be a sudden spike in cancers caused by this incident, as even the worst case numbers are so small they will be many orders of magnitude below natural rates of cancer. How you expect to detect an excess cancer rate of even a few hundred (much higher than any model predicts), spread over tens of years in a population where the natural rate will be in the millions, escapes me.
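
A rough sketch of why that is, with deliberately round, illustrative numbers (not epidemiological data): even a pessimistic few hundred excess cases disappears into the ordinary statistical fluctuation of the baseline cancer count.

from math import sqrt

population = 10_000_000        # people in the monitored region (illustrative)
lifetime_baseline = 0.40       # rough share who get cancer anyway (illustrative)
excess_cases = 500             # deliberately pessimistic excess

expected = population * lifetime_baseline
noise = sqrt(expected)         # Poisson-style statistical fluctuation

print(f"baseline cases expected       : {expected:,.0f}")
print(f"random fluctuation (~1 sigma) : +/- {noise:,.0f}")
print(f"hypothetical excess           : {excess_cases}")
print(f"excess / fluctuation          : {excess_cases / noise:.2f} sigma")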

There are more measurable spikes caused by radioactive iodine (which has a short half life). The excess cancers (of the thyroid) were very clear after Chernobyl, although the problem was made much worse by the lack of action by the authorities in that case. As far as I'm aware, nobody has seen such a spike in Japan.

Fortunately, thyroid cancer treatment has a very high success rate or the death toll from Chernobyl would have been a lot higher.

2
0
Steven Jones
Bronze badge

Re: Oh fsck. Not Lewis Page again.

I see I got a thumbs down on this post, although I'm not quite sure what's wrong about pointing out innumeracy. Perhaps it's considered bad manners.

0
1
Steven Jones
Bronze badge

Re: Oh fsck. Not Lewis Page again.

@AC

Amazing that a post with a fundamental and obvious mistake in the units chosen gets 19 upvotes. It just rather proves that people vote on the sentiment, not the content. Here is the extract in question :-

"Drax puts out 1.2 million tonnes of ash per year. Let's assume it's solid carbon. That puts its density between about 1.5 and 3.5kg/m^3, so the waste-cube would be between ~700m and 1km each side."

Leaving aside the highly questionable assumption that Drax ash is essentially carbon (it's actually largely made up of incombustible minerals, like aluminium silicates, calcium oxide etc.), the volume of waste is over-stated by three orders of magnitude. That's simply because the density of carbon is more like 1.5-3.5 tonnes per cubic metre, not kg. In fact, the density of coal ash will be reasonably close to the higher of these figures, giving a cube of about 70 metres on each side. Still very large of course, but roughly 0.1% of the volume above.
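
The corrected arithmetic, for reference, using the densities quoted above:

# Corrected waste-cube arithmetic.
ash_tonnes = 1_200_000
density_t_per_m3 = 3.5                   # upper end of the 1.5-3.5 t/m^3 range

volume_m3 = ash_tonnes / density_t_per_m3
print(f"volume: {volume_m3:,.0f} m^3, a cube ~{volume_m3 ** (1 / 3):.0f} m on a side")

# The original post used kg/m^3 by mistake, inflating the volume 1,000-fold:
for wrong_density in (0.0015, 0.0035):   # 1.5 and 3.5 kg/m^3, expressed in t/m^3
    side = (ash_tonnes / wrong_density) ** (1 / 3)
    print(f"at {wrong_density * 1000:.1f} kg/m^3: cube ~{side:.0f} m on a side")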

Of course, anybody with any real sense of scale and numbers would have thought something was wrong in the first place. If the ash alone was 1 cubic kilometre, then for heaven's sake, how big was the hole in the ground from where the coal came, as it would surely be many times larger? Of course, if you are somebody hopped up on the excitement of finding a "fact" which overwhelmingly supports your particular preconception, without bothering to wonder about its credibility, then you end up posting this howler (or upvoting it).

That's not to say that furnace ash isn't rather nasty stuff, but it is reused in some building materials where it's pretty well trapped. Indeed, using it in place of some portland cement reduces the CO2 impact of producing the latter. Radiation hazards of such materials are tiny. Yes, there will be the leaching out of some radon from the very low concentrations of uranium, thorium and the like, but the actual amounts are minute when spread out over where it's deployed.

3
3
Steven Jones
Bronze badge

Re: Financial disasaters..

Nuclear cleanups are inherently horribly expensive, although some might argue that these are overdone. However, for good or ill, nuclear power is intimately associated with nuclear weapons, and those of us who lived through the most dangerous parts of the cold war know well why standards have to be so high. Indeed, early civilian reactors were deliberately designed to produce materials for nuclear weapons.

It's also worth noting that Britain's AGR and Magnox reactors are particularly expensive to clear up, even though they would not have suffered the particular problems of the Fukushima plants, as they have very large cores full of irradiated graphite.

It is also not helped that Sellafield was found to have consistently faked safety records in the past, and early attitudes to dumping radioactive waste into the sea were hardly great examples either. There's also an estimated bill of £70bn to come for the decommissioning of Sellafield. Not exactly loose change.

http://news.bbc.co.uk/1/hi/uk/646230.stm

I'd like to emphasise, that I'm not against nuclear power as such, but that it presents particular challenges and dangers which will always require enormous care.

3
0
Steven Jones
Bronze badge

How about a financial disaster

The economic cost of the Fukushima failures is estimated at between 250 and 500 billion dollars. If it had been fossil fuel plants that had been inundated, then the costs would have been a fraction of this.

So, whilst this may not have been a human disaster, it's an awful long way from being a triumph. It also came dangerously close to a much worse problem. The chief failure is the abysmal underestimation of the risks of tsunamis from off-shore earthquakes. Despite what some people claim, this is not mere hindsight. There was ample historical evidence of tsunamis of this size hitting the Japanese coast (and, of course, tsunami is, appropriately, a Japanese word).

The Fukushima reactor design had known vulnerabilities, especially if the cooling system failed. There was a clear common-mode failure scenario, as was proven when the tsunami disabled the backup cooling system.

The fact is, that if the facility had been protected properly, or placed in a less vulnerable location, the vast majority of the economic costs would have been avoided. Indeed, the plant could probably have been restored to operation.

It's simply a case where short cuts were made in the placement and construction of the plant in what is one of the most seismically active areas on the planet.

It's far, far from a triumph.

nb. I'd fully agree those worried Californians are just neurotic, but that's a different issue.

13
6

Better late than never: Monster 15-core Xeon chips let loose by Intel

Steven Jones
Bronze badge

Re: SPARC

Leaving out the "technobabbly blather" as you call it, those of use with actual experience of Sparc T series processors, can attest to the fact that if you have an application which requires fast single-threat processing, then they perform horribly. That's especially true if you try and load up all those threads, as each of those threads will be competing for just two execution units. Also the Sparc T processors were designed to be relatively simple, and lack many of the superscalar features that make processors like Power and Intel's XEONs run very fast in a single threaded mode.

Indeed, for those Solaris-based applications (including many Oracle apps) which require very fast single thread performance, then you have to use M series machines, which are rebadged Fujitsus using Fujitsu's processor design. Unfortunately, excellent as they are, Oracle's M series machines are much more expensive than servers based on Intel's XEON processor.

The T series machines are designed for throughput, not speed. They make very clever use of otherwise "dead" time when a thread is stalled waiting for data from memory, but speedy they are not. Only if you have an application that scales very well over multiple threads will you see good performance. Intel's own implementation of this, hyperthreading, can also be problematical for some applications, although it's rather less ambitious with only two threads. The Fujitsu SPARC64 processor, in later incarnations, had two threads per core, and I've had time-sensitive applications where that feature had to be turned off as response times became erratic once there was a significant statistical chance that two threads were competing for the same core (hyperthreading and similar implementations distort CPU utilisation figures as, once threads start competing, threads in execution start slowing down).
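
A toy model makes the throughput-versus-speed trade-off, and the way utilisation figures mislead, easy to see. The stall fraction and thread counts below are purely illustrative, and the model assumes a single shared pipeline per core; they are not measurements of any Sparc or Xeon part.

# Toy model of hardware multi-threading: each thread alone would spend a
# fraction of its time stalled on memory; extra threads soak up that dead
# time, but once the core's shared pipeline is saturated, each thread slows
# down even though "CPU utilisation" looks higher. Illustrative numbers only.

STALL_FRACTION = 0.6      # fraction of time a lone thread waits on memory

def core_model(threads):
    work_demand = threads * (1 - STALL_FRACTION)   # pipeline time wanted
    core_throughput = min(1.0, work_demand)        # pipeline can't exceed 100%
    per_thread = core_throughput / threads
    return core_throughput, per_thread

for n in (1, 2, 4, 8):
    total, each = core_model(n)
    print(f"{n} threads/core: pipeline busy {total:4.0%}, "
          f"each thread at {each / (1 - STALL_FRACTION):4.0%} of its solo speed")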

I recall a senior manager who once signed up to buy a lot of these Sun T series machines, having been sold the line that they were incredibly power efficient and could be used to consolidate workloads. Unfortunately, when they were implemented, he found out the hard way that these servers were wholly unsuitable for many of the applications that were in use. Ironically, many of these were based on Oracle databases. Indeed, some of Oracle's own applications perform rather poorly on T series. Of course, if he'd actually cared to ask people who understood these issues before committing, we could have told him, but he believed the particular line he was spun by the sales team.

A case of horses for courses.

3
0

Mathematicians spark debate with 13 GB proof for Erdős problem

Steven Jones
Bronze badge

Not a new problem...

Surely this is hardly new. When Kenneth Appel and Wolfgang Haken proved the four colour theorem in 1976, it was originally not accepted by mathematicians, as the computer-assisted proof was far too large to be proof-read by human beings. It gained a sort of grudging acceptance.

However, I always thought the objection was flawed in principle. Firstly, it's possible to consider a computer program (and the hardware on which it runs) as a mathematical entity which, itself, is subject to inspection and being proof-read. There have even been attempts to produce mathematically provable programs (although that is only possible in principle for some sorts of programs) and even processors - anybody remember VIPER? Of course, any proof-reading of a program, especially one which cannot be mathematically verified, is open to human error. However, that is also true of human proof-reading of mathematical proofs. The possibility is always there of a human error, especially for very complex proofs.

It's therefore true, in principle, that the body of mathematics is open to human errors, albeit not on the scale of complex computer programs. It is therefore a problem of scale, and not principle.

1
0

Virgin Media sales flat: Firm bags fewer winter sign-ups than last year

Steven Jones
Bronze badge

Re: Virgin Media watched subscriptions to its broadband service plummet?

One reason why everybody in an area, signed up or not, gets multicasted with VM sales guff is that it has to be unaddressed in order to avoid the Mailing Preference Service (MPS) restrictions. No company is supposed to send sales material via direct mail to any address registered under the MPS (with a few exceptions, such as if you are a customer of the company concerned). By doing a blanket street-by-street delivery, it doesn't count as direct mail and the Royal Mail are under no obligation not to deliver such stuff to MPS registered addresses. Also, the RM will provide blanket street-by-street deliveries at a far lower cost than addressed direct mail.

It would be relatively easy for the government to introduce a scheme for indirect sales material. A simple scheme, whereby householders could fit a small preference notice to their door, would do the job cheaply, and with minimum overhead. My suspicion is no such scheme has been introduced as it's a useful source of income for the Royal Mail.

Of course it's not just the RM that delivers junk mail. My letter box gets filled with such stuff for pizza delivery, taxi firms and the like.

2
0

Million-dollar new disk tech could be USELESS for array vendors

Steven Jones
Bronze badge

Re: Wow...so intriguing...

Every time helium filled drives come up, somebody has to explain, yet again, that the heads have to "fly" on a cushion of gas. It's simply impossible to design a drive using a vacuum as the head fly height is now measured in nanometres on the capacity drives.

It is also perfectly possible to store helium. After all, it's kept in cylinders under pressure. What you have to do is to create a hermetically sealed environment with no rotating-part seals.

3
3

Fibre Channel Industry Association extends roadmap to 128G bps

Steven Jones
Bronze badge

Re: Is it me being thick or this makes no sense

I've worked in data centres where 128Gbit FC could have been fully used with ease. Not for an individual server (although we had some using well over 32Gbit of throughput), but for connections to storage arrays, tape libraries, VM farms and for interconnects - essentially the faster the better.

Yes, arrays and switching can, of course, generate the required throughput using multiple interfaces, but as anybody who's worked on networking can tell you, dealing with the configuration management, pathing, subtle bottlenecks and sheer cabling complexities is a major pain. It is much, much easier to have a few very large paths into your major switching and storage backbones than it is to deal with hundreds of connections. Of course, nobody should expect your average little server to require this (and you probably wouldn't use FC anyway), but there are plenty of places where it is extremely useful. Of course, the ability of storage, and to a lesser extent switch, manufacturers to fully exploit this capacity is a different matter. I saw plenty of storage arrays where the amount of front end throughput theoretically possible bore no resemblance to actual capability.

0
0

Just how solid is cloud storage in 2014

Steven Jones
Bronze badge

Cloud suppliers are just another single point of failure...

You'd be mad to rely on any single cloud supplier for mission critical data. It is, after all, a single point of failure no matter what the quality of their operations or finances. Also, if a cloud supplier has technical problems, then they are not necessarily going to prioritise recovering your data, if they can do it at all.

Of course there's an even bigger structural issue. If a single cloud supplier fails, it could bring down the operations of many companies and could cripple entire industries.

It's a fundamental principle of data management to provide protection at multiple, independent levels. Using a cloud storage provider is no different.

0
0

IBM nearly HALVES its effective tax rate in 2013 - report

Steven Jones
Bronze badge

Directors' fiduciary duties

I don't know about the US, but in the UK directors' responsibilities are covered by the 2006 Companies Act. That covers a lot of things, including a requirement to operate within the law and duties about the treatment of employees, avoidance of conflicts of interest and so on. However, one of the most important duties is "to promote the long-term success of the company (rather than the interests of, say, the majority shareholder)". That is, the interests of the shareholders at large. The point in brackets is about the protection of minority shareholders.

A perfectly reasonable interpretation of this condition is to optimise the financial performance of the company in the long term, which would include running the company in a tax-efficient manner. Of course it might be that being too aggressive in pursuing tax minimisation policies attracts unwanted and damaging publicity or unwelcome legislative changes, so it's not an absolutely black-and-white picture. However, the requirement to promote the long-term success of the company would certainly appear to include running a tax-efficient business as part of being financially successful.

Another point is that the directors are answerable to shareholders (often moderated through non-executive directors) and, as such, if they are appointed with the aim of maximising shareholder return, that is what they have to do. Also bear in mind that these shareholders are often institutions representing the investments of ordinary people in such things as pension schemes.

One thing to note is that the 2006 act rather widened the scope of directors' duties to wider interest groups. It superseded the 1985 act, where the duty to the shareholders was even stronger.

0
0

London's King of Clamps shuts down numberplate camera site

Steven Jones
Bronze badge

Re: Scary Stuff

ANPR average speed cameras do not rely on number plate recognition alone for proof of speeding. They are backed up with photographic evidence. In that respect they have the same issues as fixed location speed cameras. So misidentification can still happen, even with humans reading plates, not to mention the problem of cloned plates. However, the system is not dependent on the number plate recognition algorithms for convictions, as they would certainly not be sufficient evidence.

Also, it doesn't require 100% accuracy, or anything approaching that, to be a deterrent to speeding. It only requires a credible chance of being caught. Even if there was only a 10% chance of being caught each time (which, with the plate needing to be read at both ends of a section, would require only about 30% read accuracy), how many people would be willing to take the chance, especially as you might go through hundreds of such sections every year (long roadworks often have several of these sections)?
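
A quick sketch of that deterrence arithmetic (assuming, as above, that a plate has to be read at both ends of a section, and taking the 30% read accuracy as given):

# Deterrence arithmetic for average-speed cameras: the plate must be read at
# both ends of a section, and a habitual speeder passes many sections a year.
read_accuracy = 0.30
p_caught_per_section = read_accuracy ** 2          # ~9%, roughly the 10% above

for sections_per_year in (10, 50, 200):
    p_at_least_once = 1 - (1 - p_caught_per_section) ** sections_per_year
    print(f"{sections_per_year:>3} speeding passes/year -> "
          f"{p_at_least_once:.0%} chance of at least one ticket")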

1
0

Achtung NIMBYs! BT splurges extra £50m on fibre broadband rollout

Steven Jones
Bronze badge

Reply to another fantasist...

The wholesale cost of a copper local loop is fractionally over £7 per month. Quite who you take the retail package from is up to the consumer, and there are deals around which are a lot less than £15. Also, that retail price includes VAT, so that's about £2.50 off the price you quote.

In any event, the wholesale line cost includes all the costs of rates, maintenance of ducts, poles, the building where all this terminates and much else. That's not to mention recovering the costs of capital expenditure and, dare I mention it, some return to the shareholders who bought this stuff off the government. In practice, phone revenue used to cover much of these costs, but no longer.

So the long and short of it is that, whether it's fibre or copper, the fixed plant costs will remain and aren't going to be substantially cheaper than utilising the copper. Indeed, given the high costs of running fibre to premises, the costs would go up. If it was cheap to do, then you'd see other operators queuing up to run in local fibre loops. It isn't.

1
0

Sweet work, fellas: Boffins build high-density battery powered by sugar

Steven Jones
Bronze badge

Re: Energy density

"El Reg is not at fault here though. Whoever wrote that press release either was being willfully misleading or ignorant of the matter at hand."

Well, I'm afraid they are. What you raise are excellent points that occurred to me too, and it doesn't take a great deal of knowledge to note that Ah is not a unit of energy storage. Regurgitating tracts of a press release without asking obvious questions is surely closer to churnalism than journalism.
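
For anyone unsure why, Ah is a measure of charge, not energy; the missing ingredient is the cell voltage. A two-line illustration with typical (assumed) cell voltages:

# Ah is charge, not energy: the same 2 Ah is a very different amount of energy
# depending on the cell voltage. Cell voltages below are typical, assumed values.
capacity_ah = 2.0
for chemistry, volts in (("Li-ion cell", 3.7), ("NiMH cell", 1.2)):
    print(f"{chemistry}: {capacity_ah} Ah x {volts} V = {capacity_ah * volts:.1f} Wh")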

2
0

The future of storage: disk-based or just discombobulated?

Steven Jones
Bronze badge

Smaller is better...

Of course you can increase capacity by increasing the disk diameter. Indeed, until not so long ago 5.25 inch drives were commonly available. Going back rather further, hard drives could be as large as 14 inches.

However, there are serious problems with doing this. Firstly, there is the simple issue that you have to spin these disks slower in order to maintain dimensional stability: larger platters will distort more at high RPM due to the higher peripheral speeds and (therefore) increased forces involved. Go back three decades, and large disks were effectively limited to 3,600rpm, over four times slower than the fastest current enterprise drives, which run at up to 15,000rpm. (Whilst 15K drives are nominally 3.5 inch form factors, in practice the actual platter diameters are smaller to allow this speed to be reached.) There's also a more subtle reason why larger disks have to be spun slower. Quite simply, even if a 5.25" platter could be spun at 15K rpm, bit density would have to be reduced, as it wouldn't be possible to write the data fast enough - there's insufficient time to polarise the substrate. Reducing the bit density would reduce the capacity, so would, at least in part, undo much of the value of increasing the form factor.

Large format drives take longer to seek to any given track as there's further to travel, and achieving the required level of accuracy for high track density becomes increasingly difficult with larger form factors.

Then there is the issue that disk drives are essentially serial storage devices in that they can only read and write one track at a time (albeit with relatively fast access to a given track). It's not possible to put multiple read/write heads into an HDD due to vibration, airflow and other issues (other than old-fashioned fixed head drives, not feasible with modern bit densities). Note this applies even if you increase the form factor in the other dimension by just adding more platters - which also makes the mass to be moved higher.

The upshot of all this is that your multi, multi-TB megadrive is going to be horribly slow due to very high rotational latency and seek times. Even sequential access will be slow in terms of the amount of time it would take to read the entire drive, as you'd be stuck with approximately the same transfer rate as we see currently. In fact tape has an advantage for sequential access, as it is at least possible to add more heads to increase the sequential data rate through parallelism (which is why sequential data rates on modern LTO formats are so much higher than for disk drives - they effectively read/write 16 tracks in parallel).

So the large format drive is dying out for good reason. Already the 3.5 inch drive is in slow decline as the access density (IOPS per TB) and total device read time inevitably worsen with increased areal density. (It's inevitable, as capacity increases linearly with areal density, sequential data rates only with the square root of areal density, and IOPS are essentially fixed at a given RPM.)
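
That scaling argument can be put in a few lines; the baseline drive below is arbitrary, chosen only to show the trend:

# How access density worsens as areal density grows: capacity scales linearly,
# sequential rate with the square root, and IOPS stay roughly fixed at a given RPM.
baseline_tb, baseline_mb_s, iops = 1.0, 150.0, 200   # arbitrary baseline drive

for areal_factor in (1, 2, 4, 8):
    capacity = baseline_tb * areal_factor
    seq_rate = baseline_mb_s * areal_factor ** 0.5
    full_read_hours = capacity * 1e6 / seq_rate / 3600
    print(f"areal density x{areal_factor}: {capacity:.0f} TB, "
          f"{seq_rate:.0f} MB/s, {iops / capacity:.0f} IOPS/TB, "
          f"{full_read_hours:.1f} h to read the whole drive")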

9
0
