Re: the one-time state monopoly
So you managed to forget that the shareholders actually bought the infrastructure off the government. In what sense is that being given it?
"Blighty's spending watchdog"
A strange description for the Competition and Markets Authority (CMA), which oversees issues of business markets and trading standards. Those words are more appropriate for the National Audit Office.
What this all seems to ignore is where the processing is relative to the data. Even given unlimited bandwidth, physics dictates that there is a minimum latency defined by the speed of light and the distance the data has to travel (over fibre it's about 1ms of round-trip time for every 100km). So if you have an application that requires low latency access to its data, then that application is going to have to be in the cloud too.
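As a rough sanity check, here's a minimal sketch of that arithmetic (200,000 km/s is the usual rule of thumb for light in fibre, not a measured figure):

```python
# Rough fibre round-trip latency estimate.
# Assumes light propagates at ~200,000 km/s in fibre (~2/3 of c in vacuum).
FIBRE_SPEED_KM_PER_MS = 200.0  # km per millisecond

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time over fibre, ignoring switching and queuing delays."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_MS

for d in (100, 500, 2000):
    print(f"{d:>5} km: >= {round_trip_ms(d):.1f} ms round trip")
# 100 km -> 1.0 ms, 500 km -> 5.0 ms, 2000 km -> 20.0 ms
```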
Of course that's no impediment to applications where latency is not critical to performance, or for backups, archives and so on, but it is to a lot of transactional applications. For those you need at least the latency-sensitive parts of your application "in the cloud" and very close to your data. Not much point in those flash drives with latency measured in 100s (or even 10s) of microseconds if your app is sitting on a few ms of network latency.
So I have my doubts that a pure commoditised cloud storage system is going to suit a large proportion of applications.
I have news for you. All robust backup regimes for commercial IT systems require multi-generational copies. Yes, you can look at using de-dup and incremental techniques, but if you don't have more than one completely independent copy, you are playing with fire. Backup also provides protection against corruption, whether hardware or software induced.
As far as multiple online copies and redundancy is concerned, major mission-critical systems will have those too, but that's largely to address availability and does not substitute for the recovery of last resort that true backup provides.
It should also be borne in mind that online disk copies require power, and in a modern data centre, power and heat are serious issues. There is always talk of disks that power down when not being used, but frequent power-downs and restarts are not recommended.
And finally, tapes are physically robust and easily transported off-site. I would not recommend doing that with disk drives (especially the sort of high-capacity ones suited to backup); they aren't designed for frequent handling. Yes, you can do offsite backups using disk storage, but the demands on networks can be immense. Not too much of an issue for incrementals, but heaven help you if you need to recover a 100TB database over a network.
Tapes aren't suited for many loads, but they have a niche best described as write once and read virtually never.
If you pump 100mA DC down a 5km loop (960 ohms loop resistance), that's a 96V drop. It might draw 100mA at 96V, but the power delivered to the load will be zero, as the entire voltage is dropped across the copper and none is left at the load. I suppose you might consider using earth return, but not by using DC. If you tried that, it would cause enormous problems with electrolytic reaction at the earthing rod. Try it with AC and it will induce horrible signals into adjacent loops.
I'm afraid the maximum power you can deliver at 96V over a single 5km loop is 2.4 watts at 50mA (48V drop over the feed, 48V at the load). Those twelve pairs will deliver just 28.8W and draw 57.6W.
Of course there's not a snowball's chance in hell that anybody is going to agree to put 96V DC down a phone line pair mixed up with all those 48V lines. A voltage of 96V DC is plenty enough to give a nasty shock, especially when stuck down a wet hole.
Those 12 pairs at 5km, fed at 48V, will therefore deliver just 0.6W each, or 7.2W in total.
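For what it's worth, here's a minimal sketch of the maximum-power-transfer arithmetic behind those figures, assuming a purely resistive 960 ohm loop and ignoring the electronics at either end:

```python
# Maximum power transfer over a resistive loop: the load can never receive
# more than V^2 / (4 * R_loop), achieved when the load resistance equals
# the loop resistance (half the source power is then lost in the copper).
def max_load_power(volts: float, loop_ohms: float) -> float:
    return volts ** 2 / (4 * loop_ohms)

LOOP_OHMS = 960.0  # ~5 km of 0.5 mm pair, round trip

for v in (48.0, 96.0):
    p = max_load_power(v, LOOP_OHMS)
    print(f"{v:.0f} V feed: {p:.1f} W per pair, {12 * p:.1f} W from 12 pairs")
# 48 V -> 0.6 W per pair (7.2 W total); 96 V -> 2.4 W per pair (28.8 W total)
```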
Is it possible somebody technically competent could be asked to analyse this, rather than just writing a scare story? Don't you think those looking at the design of what are called reverse power systems have actually thought of some of these issues?
Any RP system will not be dependent on any single line. It will scavenge power off several lines, and no single line (or even several lines) failing will cause loss of service. Here's a posting I made on another forum, based on calculations of the properties of phone lines.
To put all this in perspective, it's worth doing some calculations to see what level of power can be delivered this way and over what distance. The first thing to note is that the telephone system is (relatively) low voltage and works at a nominal 50V DC. The second thing to note is the resistance of the relatively light gauge of typical phone wire (0.5mm diameter approx). That works out at about 9.8 ohms per 100 metres, although this has to be doubled as there are two wires. So that's 19.6 ohms per 100 metres "round trip". The maximum power delivered at the load for a 50m line would be about 64W, dropping to 32W at 100m and just 11W at 300m. However, this is highly inefficient, as it means drawing double that power at the source, with half the power wasted in transmission losses.
A more practical solution is to draw the same current off each line. If we set that at (say) 100mA, it would provide 4.9W over a 50m line yet still provide 4.4W from a 300m line. The power required at each customer's site would be 5W. At current UK retail power rates that would be of the order of £5 a year, or about 40p per month. (The node mentioned in the article consumes rather less than 3W per line.) Note that the power conversion required at both the customer's premises and the node will incur some losses, but modern conversion circuitry is now quite efficient at doing this.
The biggest problem is possibly that the node will only become practical once a certain minimum number of lines are active. Whilst a large amount of the power required will be proportional to the number of lines actually in use (largely to power the line amps), there will also be a moderately large fixed element to power the optical circuitry, network switches and so on. It's that minimum power requirement that might be an issue. Clearly my nominal 100mA could be boosted to (say) 300mA, which would provide 13.2W at the load over a 100m line (but only 9.7W over a 300m line). If the node's minimum power requirement was, say, 50W, then it might be possible to power it with just 5 active lines. Of course the subscribers would then be faced with another £15 per year on their electricity bills.
It may well be this "minimum subscriber" problem that's the biggest issue. Clearly, designing nodes with the lowest possible base power requirement will be essential. Ideally you want a node capable of operating at less than 10W with a single line. That's just about a practical amount of power to deliver over a single line of up to about 250m (that customer would find about £20 on their annual electricity bill).
So there’s the problem. Design a node that can work off of a 10W power budget when supporting a single line and which scales up power consumption at about 2.5W per line added.
Nb. battery backup could be provided at the customer premises or the remote node. It would be more efficient at the remote node. Of course if there is a power cut, then most likely the customer nodes would go down as well, so this may only be required for powering phones. In this case I see little need for a very substantial battery.
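As a rough check on the figures above, here's a minimal sketch of the constant-current arithmetic. It assumes the 19.6 ohm per 100m loop resistance used above and a retail electricity price of around 12p/kWh (an assumption, roughly consistent with the £5-a-year figure), and it ignores the conversion losses mentioned:

```python
# Power delivered to a remote node when each line is loaded at a fixed current.
# Assumptions: 50 V DC line feed, ~19.6 ohm loop resistance per 100 m of
# 0.5 mm pair, ~12p/kWh retail electricity (illustrative, not a quoted tariff).
LINE_VOLTS = 50.0
LOOP_OHMS_PER_100M = 19.6
PENCE_PER_KWH = 12.0

def delivered_watts(length_m: float, amps: float) -> float:
    """Power reaching the node from one line after resistive losses."""
    loop_ohms = LOOP_OHMS_PER_100M * length_m / 100.0
    return LINE_VOLTS * amps - amps ** 2 * loop_ohms

def annual_cost_pounds(source_watts: float) -> float:
    """What the subscriber pays per year to source that power continuously."""
    return source_watts / 1000.0 * 24 * 365 * PENCE_PER_KWH / 100.0

for length, amps in ((50, 0.1), (300, 0.1), (100, 0.3), (300, 0.3)):
    w = delivered_watts(length, amps)
    cost = annual_cost_pounds(LINE_VOLTS * amps)
    print(f"{length:>3} m at {amps * 1000:.0f} mA: {w:.1f} W delivered, ~£{cost:.0f}/yr per line")
# 50 m @ 100 mA -> 4.9 W; 300 m @ 100 mA -> 4.4 W
# 100 m @ 300 mA -> 13.2 W; 300 m @ 300 mA -> 9.7 W
```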
Agreed that they were crappy scripts, but then they were playing to a political agenda and he was part of that. When you subjugate story lines to something completely unconnected, then that's what happens. Of course it's not that the most recent incarnations haven't been showing tendencies that way, but it has tried to be a little subversive to its own liberal line (although rather failing recently).
Surely the simple answer is that for the great majority of people, the great majority of the time, current PC technology is "fast enough" once you combine a half-decent CPU, graphics card, 8GB of memory and (almost any) SSD into the mix.
That's not to deny there are a number of people who will demand as much power as the industry can deliver: ultimate gamers, video producers and so on. It's simply that they don't constitute a large enough market to revive overall sales. That's rather different to a decade ago, when using a PC could be painfully slow and the technology limiting.
So until somebody comes up with a killer "must have" that absolutely requires a radical new technology (and what might that be? A true AI system?), then we are just into a mature market with incremental changes.
You aren't. Prime Ministers are always at the mercy of the more unruly members of their party. Governments can, and do, lose votes. The party whip is not a guaranteed instrument and there's only so much patronage to go round to buy off the troublemakers.
Of course it's governments with thin majorities that are most at risk. John Major's experience of office was torrid, with a constant threat of rebellions. James Callaghan had major issues with his party too, and was prevented from dealing with matters just as he would have wished.
There is no tradition in the UK of prime ministers who have true presidential powers. They are always vulnerable and many have been brought down by their own side. There was one such, and her name was Margaret Thatcher. If it could happen to her, it can surely happen to anybody if they fail to command confidence. That is how the UK parliamentary system works.
Oracle's revenues are dominated by software licencing, support and services. For years they have increased turnover by buying up companies selling enterprise applications. So yes, they have plenty of very large corporate customers, but the considerable majority of that is not in hardware sales. (Also, cloud services revenues aren't, as yet, very high either, which rather questions the number of SPARC servers that Oracle might be able to absorb internally unless there was a vast change.)
In US $ the hardware turnover is on a gentle decline. There might still be decent returns, but there's no sign of any recovery in SPARC platforms. It's just the continuation of a story from the time when Sun was the dominant player in the UNIX market. They are simply losing out to x86 servers and will, no doubt, continue to do so.
I don't think evidence about Oracle's revenue growth necessarily contradicts the notion that the high-end market (for specialist SPARC servers) is declining as much cheaper x86 hardware continues to make inroads in that area. The market for servers using architectures other than x86 is increasingly a niche one.
However, I'm not sure the author's hypothesis holds: that Oracle's internal use of SPARC for cloud-based services is enough to sustain the current level of investment.
Of course there is still a market for what might be called "legacy" customers of SPARC servers, but in my experience most would wish to move to a more cost-effective platform (read x86 architecture), yet are deterred by the cost and complexity of migration. If you have an existing application running on an Oracle DB on SPARC servers and need to keep a service going with minimal downtime during a migration to x86, then you are faced with a major technical and logistical challenge. Yes, there's GoldenGate, but that's neither cheap to licence nor simple compared to uplifting a SPARC server.
SPARC will limp on (like many legacy platforms), but I can't see its fortunes reviving. Not many new projects are pointed towards SPARC.
Unnoticed by whom? If you didn't notice, then I'd worry about your ability to manage your finances. However, if you did notice it and just let it sit there, hoping nobody else would spot it, with the intention of spending it eventually, then at the very least it's repayable through the civil law and, most likely, it's actually an offence. It's roughly the equivalent of finding a wallet stuffed with money in the street and not handing it in to the police (although nobody is going to care much over trivial amounts, like a lost £1 coin). People have been prosecuted for such things, like this couple who found a winning lottery ticket and cashed it in.
Of course laws will vary across the World. This is the position in the UK.
You don't need to explain to an older user how to use a "D" drive. You simply map the HDD partition to be used as MyDocs, or if you want something more sophisticated, keep MyDocs on the SSD and map HDD partitions or folders for bulkier data like videos or photos. Personally I always create a separate data partition on the SSD for MyDocs folders anyway as I prefer to be able to do an image restore of a systems/programs partition without impinging on data files.
Then, of course, there's the use of libraries. If you do all this properly, you never need to see a "D: partition".
A lot of people seem to be wholly unaware that with NTFS it's very easy to map partitions into the file system or use symbolic links. Personally I prefer to use partitions as it fits my backup strategies better.
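By way of illustration only (the paths below are made up, and creating symlinks on Windows needs admin rights or developer mode enabled; an NTFS mount point or junction set up through Disk Management achieves much the same thing):

```python
# Illustrative only: point a folder in the user profile at a bulk-data
# partition, so applications keep using the familiar path while the bytes
# live on the HDD. Paths are hypothetical.
import os

BULK = r"D:\Media\Videos"          # folder on the HDD data partition
LINK = r"C:\Users\alice\Videos"    # the path applications expect

os.makedirs(BULK, exist_ok=True)
if not os.path.exists(LINK):
    # Directory symbolic link; a junction or NTFS mount point would also do.
    os.symlink(BULK, LINK, target_is_directory=True)
```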
SSDs don't need to reach the same price per GB level as HDDs. They just need to be cheap enough. A bicycle beats a car hands down on price per seat (at least at the utility end of the market), but people still buy cars as they do things that cycles can't.
An SSD just has to be "cheap enough" and its overwhelming advantages in terms of throughput, latency, IOPs, ruggedness, power consumption and the generally much more responsive nature of applications and systems take over. I for one will never, ever buy another computer with an HDD as the system disk, and I can see the day approaching when I stop using them even for bulk storage. At a certain point I won't care that I'm paying more per GB, as the cost will be outweighed by the utility.
There is no inevitable leakage. At least not at any rate that will matter during the lifetime of a drive. The engineers who design these drives will be aware of the issues. These aren't party balloons.
If you doubt helium can be contained for very long periods, consider how the stuff is actually extracted: it's separated from natural gas reserves, where it accumulates over millions of years from the radioactive decay of uranium and thorium. The same conditions that trap natural gas also trap the helium. If helium can be trapped over millions of years (under pressure), there won't be any problem in containing a small amount in an aluminium casting. I suspect it's the gasket which is the only real problem.
No amount of increasing of bit density is going to resolve the fundamental problem with HDDs, and that's IOPs and, of course, latency. Indeed the higher the areal bit density, the worse (in relative terms) the problem gets. I suppose if HDDs are just to end up as a semi-archival repository for very rarely accessed data then price/capacity is the only advantage left, so it makes sense. It might well be in five years time HDDs are reduced to a niche product.
Helium filled or not, the fundamental issues with HDDs remain, as they are dictated by geometry and materials, and engineering can only make marginal improvements in ameliorating them. Adding caching (whether device, array, system or application based) helps to a certain level, but at a certain point it hits the law of diminishing returns. The future looks bleak for the makers of spinning storage of all sorts, whether it's HDDs, CDs, DVDs or anything related.
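A rough back-of-the-envelope on why more areal density doesn't help random I/O (the seek and spin figures below are typical assumptions, not measurements of any particular drive):

```python
# Why more bits per platter doesn't fix random I/O: each random access still
# pays a seek plus, on average, half a rotation. Typical figures assumed for
# a 7,200 rpm drive.
AVG_SEEK_MS = 8.0
RPM = 7200

avg_rotational_ms = 0.5 * 60_000 / RPM          # half a revolution, in ms
service_time_ms = AVG_SEEK_MS + avg_rotational_ms
iops = 1000 / service_time_ms

print(f"~{service_time_ms:.1f} ms per random access, so roughly {iops:.0f} IOPS")
# ~12.2 ms per access -> roughly 80 IOPS, versus tens of thousands for an SSD
```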
There most certainly is a "copper only" product that service providers can buy. It's called MPF and is what is used by all the major LLU operators (and is £87.48 per year). That they also provide a voice service is simply because it's virtually zero incremental cost to them, as modern MSANs effectively provide that capability along with xDSL.
Note that it's awkward if the 50V DC power isn't applied as it can be used as part of a cross-check that a line is allocated and can play a role in diagnostics.
The wholesale charge for a copper line is £87.48 per year (for the fully unbundled MPF product). The WLR product (which is the one used by BT) provides voice too, but is only fractionally more. Anything above that level is due to a mixture of mark-up by the service provider and VAT. It's a lot less than that £192 figure you headline. The choice of which service provider to use is (for the vast majority) completely open.
Note that this includes a (regulated) level of return on what Ofcom deem the network to be worth, at a rate dictated by the regulator, in addition to direct costs (workforce, maintenance, rates, power and the multitude of other items).
The NBN produce a weekly report on progress. It's not particularly fast, at about 360,000 premises passed a year (to meet its final targets it will have to go much faster). The much-maligned BDUK project is doing around 2 million properties a year. Of course it's always necessary to qualify this, as the conditions in the two countries are very different; Australia is much, much larger with a lower population density (but bear in mind the great majority of the Australian population live in urban areas, and NBN are tackling the really remote areas with satellite).
The NBN also has a rather large budget, at about £18.4bn, or the equivalent of about £46bn in the UK if adjusted for population.
Breach of copyright (unless you are actually making a business of it) is a tort covered by civil law, not an offence covered by criminal law. Civil cases in the UK are not held in front of a jury. (With the exception of some libel cases, but even that is now very rare as almost all of those are judge determined).
It's different in the US where some types of civil cases can be decided by juries.
The music industry isn't gunning for you. It's impractical to enforce. They want the government to raise a levy against things like USB sticks, smartphones, MP3 players, disk drives and the like in order to create a fund to be distributed to copyright owners. That's what happens in several EU countries. So the impact is that you'd pay more for your equipment and storage devices.
It's when domestic law conflicts with EU law. The latter takes priority.
At one level this is an argument that can't be won by those representing musicians. Even if we revert to the position prior to the change of law (under which copying music you'd already purchased for private purposes was a breach of copyright), it was always a wholly unenforceable law. Breach of copyright (unless you do it on an industrial scale) is a tort in this country, not a criminal offence. Consequently it requires the copyright holder to start civil proceedings, and the idea that they could find a way of suing people who'd ripped CDs to MP3 players and the like is ridiculous. They couldn't before, and they certainly couldn't now. Just about the best the musicians' representatives could hope for would be some form of "slush fund" to be doled out based on some sort of levy made on manufacturers producing relevant equipment, such as smartphones, USB sticks and the like. This is, of course, the practice in some EU countries. The irony is that such a case was probably only possible because the copying of music was made legal in the first place. If the law had been left as it was, it would have been much more difficult.
It is also interesting as it's an example of judicial review not just looking at the operation of executive powers, but of primary legislation where it conflicts with European law. There's another such case in the offing with the challenge to the law on the emergency Data Retention and Investigatory Powers Act.
It's fairly easy to bypass that too. To avoid this it would be necessary for Google to enforce the use of an authentication system before using its search engine. It's all a matter of how many hoops that somebody has to jump through.
So does this Parisian court ruling go beyond even filtering google.com results by originating IP address? (Which, I realise, they don't do at the moment, but presumably could be done.) Whilst I can (just about) see how a Parisian court might be considered to have jurisdiction over what is seen in France, I can't see how it can extend to the world at large. Indeed, I would have thought there is a good case it would run slap bang into a conflict with US constitutional law with regard to freedom of expression.
Of course, even if a watered-down filtering by origin address location was implemented, it's easily enough bypassed by use of proxies and VPNs. All that's happening is it makes life a bit more difficult.
It strikes me that Google could make life more difficult for the French if they just decided to remove their services from the country. If they have no legal presence, then there's nothing to fine. It would mean forgoing income, but at some point the cost of doing business outweighs the benefits, especially if it has knock-on consequences.
Not even China has sought to enforce what Google do in the US. It will be interesting to see how this goes.
There are two perceived main issues. The first is that alternative backhaul operators will lose a major customer as BT makes use of its own fixed network for EE's fixed line interconnections. That could well reduce competition in the backhaul market used by mobile operators, to their disadvantage, although any such BT backhaul will have to be available to other operators on the same terms under the equivalence regime. There are also more subtle issues, such as OR having a guaranteed customer for products such as those being developed to support mobile "micro-cells", which exploit much of the infrastructure put in place for FTTC/FTTP deployments. This would be difficult for other backhaul operators to replicate and might, again, make mobile operators more dependent on BT.
As Ofcom have a proposal to force OR to offer a "dark" fibre product, this may go some way to ameliorating this issue, although it's unlikely any competitor will be able to replicate the "micro-cell" products in much of the country.
A second issue is whether BT retail will be able to exploit its fixed line customer base using bundles, although I've no doubt that Ofcom will look at those very closely.
There's also a general point from the point of view of other operators, and that is that they are going to do their utmost to oppose anything which will create a stronger competitor. Companies will always raise objections where they see themselves disadvantaged.
"BT's ducts and cable runs were, to a large extent, paid for by the taxpayer before privatisation."
Yes, and the government sold BT for (corrected for inflation) about £30bn, which is not so far off BT's current market capitalisation (about £38bn). There was also virtually no fibre in the network in 1984, it being used only for some voice trunk links. Even then, that fibre will have been long obsolete, as the early stuff was very lossy. So, in effect, all the fibre investment is post-privatisation.
In the 1960s, that would have been a Bond Minicar, not a Bond Bug. The latter was a 1970s bright-orange wedge of a two-seater powered by a Reliant engine driving the rear wheels. The Bond Minicar was powered by a Villiers two-stroke engine driving the front wheel and was unrelated. A friend at school managed to have two of them in succession, and I recall helping to spray paint one. Some models of the Bond Minicar could, indeed, seat five, although it wouldn't have taken them anywhere very fast.
The Villiers engine was mounted on a vertical, swivelling post driving the front wheel (on a trailing arm system) via a chain. There was no reverse, but some models were equipped with a starter motor and two sets of points such that the engine could be started in the opposite direction.
Quite. It is perfectly open to developers to negotiate with any network supplier they so choose. Indeed, some do, which is why some developers have done deals with fibre suppliers.
Of course many developers will just follow existing practices, but there's nothing stopping them dealing with VM if they so wish.
Most people have no idea what "net neutrality" actually means in practice, apart from something that sounds desirable. Once you try and map this aspiration onto the reality of the Internet, with its complex web of content delivery networks and peering connections, it becomes immensely complicated. For instance, can a content supplier who optimises their own costs at the expense of an ISP expect the same service as one that assists ISPs by interconnecting at more points?
Of course it might be reasonable to treat all traffic equally at any one point in an ISP's network, but that's far from the same thing as all network traffic being treated the same overall.
The Internet isn't just a cloud to which an ISP simply leases a bit of bandwidth. The reality, both technically and commercially, is very different.
You ought to do a bit more reading. For the most part, g.fast nodes will be installed much closer to the property, typically in a DP (atop a pole, or down a hole). Power will be line-fed, with two possibilities: forward-fed from a cabinet or other power point, or using reverse power, that is, power taken from the terminating device in the customer's premises.
Of course, with only 48V available, distance is an issue with typical 0.5mm diameter copper. To some extent this is dealt with by circuits which extract power from a number of lines and produce a combined output. You might expect a typical g.fast node to support possibly a dozen lines or so. Of course this means running fibre deeper into the D-side of the network, which is expensive, but nothing like as much as running new fibre to every property, digging up gardens, replacing master sockets, installing an ONT etc. Another point about running fibre deep into the D-side network is that it allows for a GPON node to be installed (which is just a beam splitter). That allows for the potential of a much cheaper form of FTTP on demand, as the customer would only have to pay for the work to run fibre over a shorter distance than from a current fibre concentrator. It might even be possible that combined g.fast/GPON nodes could be designed which are serviced by just one fibre.
A further point to note, it is not just the money required to run fibre to every property that would be the issue, but the resource and timescale. There are only a limited number of people in the country trained to do this work, and it's ridiculously expensive to try and increase workforces several times over for just a few years. So a technology that can be rolled out faster is to be preferred.
A lot has to be proved, but these are all stepping stones. It would get fibre deeper into the network and will benefit from all the work that has got fibre to the VDSL cabinets as the E-side network has already been extensively upgraded.
As to distances to DPs, these are typically in the 10s of metres, not the hundreds, or even thousands of metres (like cabinets).
Not necessarily. ADSL has higher natural latency, and it gets worse if interleaving is turned on. With VDSL the difference is insignificant unless interleaving is turned on (interleaving being a time-domain error-correction technique which adds latency).
From where I am, 20km west of London, I get a 9ms ping time to www.bbc.com. I think that FTTP might shave a millisecond or thereabouts, as it would omit one physical "hop".
The physics of it is that the speed of propagation of light down fibre (about 65% of the speed of light in vacuum) is not too far different from the speed of propagation of a signal down a transmission line (cat 5 cable is about 64%, and twisted pair phone cable much the same). The signal isn't actually carried by electrons whizzing back and forth from the source to the destination. It's essentially an electromagnetic signal which interacts with electrons in the conduction band of the copper. Or something rather like that.
Strange, I could have sworn I'd read stories of Israeli air strikes where there had been "collateral damage".
There is a product called "SLU", or sub-loop unbundling, which has been used for FTTC (although the number of non-BT FTTC deployments has been very low, there have been some). However, there's very little difference in wholesale cost. That might seem odd, but the great majority of the cost of a network is where it "fans out" from the consolidation points. There may be less copper in that part of the network, but there are far more joints, miles of ducts, telegraph poles and so on.
In any event, the route back to the exchange has to be paid for, and those costs will just get transferred back onto the fibre backhaul.
There's a myth that the main cost of the network is in the copper. It's not. It's in all the infrastructure required to support it all: the manpower, rates, power, poles, cabinets, footway boxes, ducts, buildings and so on. All those (or close equivalents) are required for fibre too.
A few years ago Tim Worstall on these very pages produced a laughable estimate of the value of the copper in BT's network (overestimating it by a factor of 20 or more). The value of the "raw" copper is around the £2.5bn mark, although when fashioned into cable it's perhaps double that.
So the return on capital employed in the copper in the "E side" of the network is a relatively small proportion of the total costs of the network infrastructure.
(Somewhere around there is a report that OpenReach has to produce annually on the "book value" of the network assets, albeit that isn't the one that Ofcom uses to regulate prices directly).
Call termination charges (that's what a network operator is allowed to charge for connecting a call on their network) have historically been far higher on mobile networks than landlines. Termination charges on landline numbers are, in comparison, almost insignificant. (They used to be about 0.2p/min, but have been reduced to 0.034p/min.)
Indeed, Ofcom specifically engineered it that way as a method of financing the build-out of the mobile networks. For a long time landline users have been paying for mobile networks, although this is changing as mobile networks now have more tightly regulated call termination pricing.
The above is the reason why landline packages don't include calls to mobiles, whilst mobile packages do include landline calls. Mobile packages can afford to include mobile minutes as, on average, the calls into and out of a mobile network balance.
You might, but many object in principle to the retention of logs, backdoors into encryption and much else, even with judicial oversight. We have developed such systems over hundreds of years for physical records, yet virtual ones are considered sacrosanct by many.
I still see from the reaction to my comment that some are still unwilling to accept the logical consequences of opposition to record keeping, even with judicial oversight.
So for those that promote untraceable financial transaction systems, be aware that this is the enabler and motivation for crimes such as this. Be careful what you wish for, because it may be granted; everything has consequences.
Given the number of people who appear to object in principle to the whole idea of any state surveillance capability over use of the Internet (evidenced by the number of comments on this site, others, Twitter and the mainstream media any time these issues come up), it's pretty near impossible to track down the source of these scams, at least without a huge amount of technical and manpower resources, and even then it's doubtful.
These objections are based on the whole issue of traceability, privacy and (I often suspect) a good deal of egotistically driven paranoia. Unfortunately, those very same measures which make it virtually impossible for the state to snoop on your activities also make it easier for scumbags to prey on the vulnerable.
So, decide what you want. It's simply not possible to have an Internet landscape with both effective policing and complete protection of personal privacy. You can have untraceable electronic transactions and currency. You can have unbreakable encryption, Internet anonymity and the like. But you can't have all that and still track down Internet crooks. Something has to give.
Those exchanges are only market 1 because other operators don't deem them cost effective to deploy equipment into. LLU operators have the ability to cherry-pick which exchanges to enable, and you can't really blame them.
So market 1 exchanges exist by default, not because they were built that way.
As far as I'm aware, councils can't prevent utilities digging up pavements, as they have "code powers". That includes VM.
The council does have powers over the placement of cabinets, but even then those have limits.
Of course, planning issues are a useful excuse for utilities not to continue with projects which they don't deem to be remunerative.
Look up bonding, although your ISP has to support it. Failing that, it's possible to do line balancing but that doesn't allow for a single data stream with 2 x the bandwidth. What it allows is several independent data streams which can be useful if the problem is congestion due to multiple users.
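To illustrate the distinction, here's a toy model only (the line rates are arbitrary figures in Mbit/s):

```python
# Toy illustration: load balancing pins each stream to one line, so a single
# stream tops out at one line's rate; bonding aggregates the lines, so one
# stream can use the combined rate. Figures are arbitrary, in Mbit/s.
LINE_RATES = [10.0, 10.0]  # two hypothetical broadband lines

def balanced_throughput(stream_count: int) -> float:
    """Streams spread round-robin across lines; a line only contributes if used."""
    per_line = [0] * len(LINE_RATES)
    for i in range(stream_count):
        per_line[i % len(LINE_RATES)] += 1
    return sum(rate for rate, n in zip(LINE_RATES, per_line) if n > 0)

print("1 stream, load balanced:", balanced_throughput(1), "Mbit/s")   # 10.0
print("4 streams, load balanced:", balanced_throughput(4), "Mbit/s")  # 20.0
print("1 stream, bonded:", sum(LINE_RATES), "Mbit/s")                 # 20.0
```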
Of course, it's expensive: two lines, two broadband accounts and a modem/router which supports bonding.
Failing all this, you are wholly dependent on your telco bringing fibre closer to your property.
That is rather clever. Of course, once you realise that a flash storage device is really a miniaturised storage system with its own logical mapping, it becomes obvious.
However, one thing occurs to me, and that is that it will be necessary to be able to coordinate these functions over multiple devices. For example, it's very easy to see that single point-in-time consistent snapshots might be required across multiple devices, and it would be nice to be able to delegate that functionality without invoking higher layers.
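As a purely hypothetical sketch of the sort of coordination I mean (every name here is invented; no real drive exposes exactly this interface), the usual pattern is to quiesce every device before snapshotting any of them:

```python
# Hypothetical sketch: a single point-in-time snapshot across several devices
# by freezing writes everywhere, snapshotting each device, then thawing.
from contextlib import ExitStack

class Device:
    def __init__(self, name: str):
        self.name = name
    def freeze(self):              # stop acknowledging new writes
        print(f"{self.name}: frozen")
    def snapshot(self, tag: str):
        print(f"{self.name}: snapshot '{tag}' taken")
    def thaw(self):                # resume I/O
        print(f"{self.name}: thawed")

def consistent_snapshot(devices, tag: str):
    """Freeze every device before snapshotting any, so the copies line up."""
    with ExitStack() as stack:
        for d in devices:
            d.freeze()
            stack.callback(d.thaw)   # guarantee a thaw even if a snapshot fails
        for d in devices:
            d.snapshot(tag)

consistent_snapshot([Device("ssd0"), Device("ssd1")], "nightly")
```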
This is Healthcare Triage's take on the Singapore system (they have an excellent series on various national health care systems). Despite the videos all being produced in the US (largely by doctors), they aren't too impressed with their own system.
One thing to note is that the Singapore system has a huge amount of state intervention in order to minimise costs. They tried open market supply, but found that competition was increasingly through expensive technology and then changed the system to stop it. In effect, avoiding the way the US went.
So this is a long way from being a free market system. The Singapore government is pragmatic if it is anything, but I can't imagine the system of co-pays and enforced medical saving being accepted in the UK. In effect, Singaporeans are forced down a route of compulsory saving for any number of things that are at least partly covered by state welfare systems in most Western European nations. (Of course it helps that income tax is so much lower). The whole philosophy appears to be to minimise state exposure to welfare costs through enforced savings and incentivising citizens to not make demands on the state.
People dying early and quickly are actually rather subsidising everybody else. The biggest cost is the treatment of long term chronic diseases, and particularly of the elderly. It is said that type II diabetes is a problem, not so much because it kills, but because it kills very slowly but involves huge expense over time dealing with all the related chronic diseases. That's just the medical costs. Add in pensions, welfare, free bus travel, heating allowances and so on and it gets worse. The deficit is largely down to us all living longer.
So those Glaswegians expiring early of heart attacks, lung cancer and stabbings are good value. Well, if you're an accountant (and we all know bean counters have no compassion).
Indeed, there's already an Ofcom requirement for BTW and BTOR to supply wholesale communication services on an equivalence basis. This would simply roll over to any purchase of EE as it would not be part of BTW or BTOR. Of course, BTOR and BTW don't have to discriminate in favour of any BT-owned EE anyway. The fact that BTOR & BTW will gain a captive customer is a major advantage. However, I'm pretty sure that BT will not want to alienate other mobile operators who are significant purchasers of fixed line network services in their own right. It's in BT's interests to have a very strong offering for mobile operators.
It's also an interesting point that the BDUK programme has, unwittingly or not, allowed for a considerable extension of fibre deep into the network. Those new fibre concentration points could be very useful, although no doubt the state aid aspects could get rather complex.
It means the electric motor is being used as a generator driven by the petrol engine.
Possibly just as well that the Germans have decided to close down their nuclear power plants if they can't keep critical control systems safe from hackers. Although I would hope that things would at least fail safely, even if not cheaply.
nb. I initially found a few elements of this story not entirely plausible, but as it seems to be official then so must it be.
So now you know what to take as a gift to apartment 4A, 2311 North Los Robles Avenue, Pasadena, should you ever be invited for Christmas. Personally I'd take a bottle or two of pinot noir to apartment 4B.
If the Register's summary is correct, Richard Deakin didn't make a statement that "90s kit isn't 'ancient'". What he said was that the system had its roots in the 90s. To put this in context, the World Wide Web has its roots in the late 80s. For that matter, the first draft definition of TCP/IP dates from the early 70s.
It's wholly irrelevant when the technology originated. What matters is how it has been developed. After all, we still base our day-to-day use of geometry on what Euclid set out over 2,000 years ago. Roots matter. They stop trees falling over when the wind blows.
In the meantime, please don't misrepresent what was said. The kit isn't from the 90s, and nobody seems to be seriously claiming this was a hardware failure.
Gordon Brown set the objective, which was quite simply to maximise the sale value of the 3G licences. That he didn't personally design the auction is not relevant. Although, given Gordon Brown loved nothing better than to manipulate figures (like expensive PFI contracts to keep debt off the books), I'd be amazed if he didn't personally approve the final form of the auction.
nb. the economist who advised on the format of the auction was Paul Klemperer, an Oxford academic, who has been very active in defending the decisions made.
Even if it's conceded that the original 3G licence auction maximised the prices paid by the operators, that this didn't result in higher prices to the consumer on the grounds that they were sunk costs (more debatable), and that it didn't adversely impact other aspects, like network investment and thereby economic activity (even more debatable), there is a much more fundamental reason why the exercise can't be repeated.
That's because at the time of the 3G auction, there were more potential bidders for bandwidth than there were available chunks of spectrum. In addition to the incumbents, there were a number of other operators seeking entry into the UK market, including the (state-backed) France Telecom and Deutsche Telekom. It was this unique blend of ambitious operators and limited supply, backed by inflated telecom valuations (and some de facto state guarantees), that drove bid prices far past their economic value. Once the shareholders and financiers came round to noticing this, the supply of ready money dried up and auctions all across Europe then got fractions of what was achieved in the UK and Germany.
These circumstances will never happen again. It doesn't matter if there are 3 or 4 operators. The costs of entry into the UK and building a new network are immense. The only way that spectrum prices could be manipulated upwards would be to offer fewer chunks of spectrum than there are operators. By definition, that will lose one operator from the new spectrum. It's quite possible that one of the weaker players might decide the whole thing is not worth pursuing anyway and seek to either run as a low cost operator on existing spectrum or pursue other options. Of course if the spectrum is auctioned off such that all operators can get a chunk, then that's less of an issue, but it will not, of course, recreate the circumstances of the 3G auction.
So now that the fit of hubris of 2000 is over, there is no way that the telecom companies are ever going to fall for this again. The 2013 auction fell short of government targets by about £1bn (it raised £2.5bn vs the £22.5bn of the 3G auction). The circumstances at the turn of the millennium are not going to repeat themselves.
There's also another issue. Seeking to maximise the value of the spectrum to the state simply in the capital cost of the license, rather than through more continuous revenues from taxation on increased economic activity is surely short sighted.
In any event, 3 or 4 operators is not going to make a great difference to state revenues. The CEO of Telefónica, César Alierta, has noted that the industry is not going to play ball with states that manipulate the circumstances of an auction in order to maximise a one-off return.
(nb. in the US, a similar auction approach to that which was eventually taken by the UK government in 2000 was ruled illegal and had to be retracted.)