1312 posts • joined 21 May 2007
What's the real market size of x86 vs ARM
ARM is a minnow in comparison with Intel. However, this surely gives an entirely misleading view of the relative value of the ARM semiconductor processor/gpu market vs that of the Intel x86/gpu one. It would be interesting to see such a comparison to give an idea of the relative financial strengths of the two competing architectures.
"El Reg wants to know: Could a disk read/write head work on more than 1 track at a time? Wouldn’t that increase disk I/O bandwidth?"
In theory, yes, but in practice (with modern high-density drives) it's not practicable. It's certainly not possible to put two completely independent head mechanisms on a disk due to vibration and air-flow issues. The other option, a single head assembly with multiple read/write elements, comes right up against the problem that the tracks are simply far too close together on the disk to be written simultaneously. I suppose you might conceivably have some sort of staggered arrangement whereby the "parallel" tracks are being written a little distance apart, but I suspect problems will remain over heat dissipation, the size and weight of the head, error recovery and so on.
The reason that reading/writing multiple tracks on tape is practical is that they are much further apart (and they don't have to be moved fast).
nb. I seem to recall back in the days of fixed-head disks there were some that could work in parallel, although that may be my imagination.
Re: Why is BT relying on a US supplier for webmail?
What in-house resources? The great majority of development and support is off-shored in an attempt to keep costs down. Buying in solutions from specialists is the norm in business these days due to economies of scale (did you not notice the previous system was run by Yahoo! ?).
Re: "...can't I pay a regular ADSL type service charge?"
ISPs already offer TV & Films over broadband in the UK (as do the likes of NetFlix). You can buy them as bundles. However, there's a major difference in the regulatory regime in that there's no option to cross-subsidise infrastructure roll-out from retail revenues. Wholesale line rental is subject to extremely tight regulation, and whilst FTTC wholesale pricing is not regulated as yet, it has to be sold as a wholesale service to all operators.
The consequence? A lot of retail competition, but not much money for infrastructure investment.
Re: not round here
VDSL and FTTC make no difference to the copper run to the exchange. Apart from passing through a low-pass filter at the cabinet, there's no difference. Indeed, if you had ADSL before, the line would have passed through a similar low-pass filter at the exchange (plus a similar low-pass filter in your house). Unless there was a poor connection made at the green box, I can't see why there would have been any difference to call quality at all.
They could make business more difficult.
The Russians might not easily be able to prevent their citizens using Facebook, Twitter or the like, but they can make it very difficult for such services to make money from local advertising or local financial transactions. Of course, with services like eBay or Amazon, it's essentially impossible to operate without some form of local presence.
Bear in mind the US authorities became very aggressive with companies that offered on-line gambling to their citizens.
Re: That old horse:
BT ducting is available under PIA.
Indeed. The only way that the ICE will disappear is if somebody comes up with a cost effective fuel cell which can work off high energy content liquid fuels (perhaps ethanol). I discount liquid (or compressed) hydrogen as producing it is thermodynamically highly inefficient and it's tricky to store and distribute.
Batteries have fundamental capacity constraints dictated by electrochemistry. Lithium is already just about the best candidate we have, as it is the third lightest of the elements, but it still has very poor energy storage (in battery form) when compared to hydrocarbon fuels. Battery powered vehicles could well have a role in short range, commuting and delivery functions, but not for long range delivery or a general purpose family vehicle. For those, some form of easily transportable liquid fuel will surely still be best, and at the moment, the ICE is what we have.
Wildly misleading NASA claim. This is why...
NASA's claim is wholly misleading. A 1 metre diameter receiving dish might well subtend approximately the same angle as the diameter of a human hair at 30 feet, but that's not the most important factor. What is far more important is the degree of divergence of the laser beam, which guarantees the beam is far wider than a metre by the time it hits the Earth's surface.
Human hair isn't a great standard measure, as the size varies a lot. However, if we take 2/1000th of an inch, it will subtend an angle of about 5 micro-radians at 30 feet. To a good degree of approximation, laser beam divergence depends on the minimum (waist) radius of the beam and the (1,500nm) wavelength. If we take a reasonable beam waist radius of 1mm, that gives a beam radial divergence angle of about 470 micro-radians. In other words, the degree of precision required is, perhaps, only about 1/100th of that claimed. Also, of course, the ISS moves in a rather smoother and more predictable manner than a human being walking.
To put this in perspective, it's reckoned that competition level target rifles can manage accuracies as high as 100 micro-radians, albeit, not hand held of course.
Plug in the minimum ISS orbital height of 350km, and you get a beam radius of about 165 metres, so the receiver only has to be somewhere within that footprint. Of course the transmission was likely made at a considerably greater distance than the minimum ISS orbital height, but that doesn't change the angular accuracy required.
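As a sanity check, the figures above can be reproduced in a few lines of Python. This is a sketch under this comment's assumptions (1mm waist radius, 1,500nm wavelength); NASA's actual transmitter optics may well differ:

```python
import math

# Back-of-envelope check of the figures above. The 1mm waist radius and
# 1,500nm wavelength are assumptions from this comment, not NASA's optics.
hair_angle = (0.002 * 0.0254) / (30 * 0.3048)  # hair width / distance, radians
print(f"hair subtends ~{hair_angle * 1e6:.1f} microradians")   # ~5.6

# Gaussian beam far-field divergence (half-angle): theta = lambda / (pi * w0)
wavelength = 1500e-9
w0 = 1e-3                                      # assumed waist radius, metres
theta = wavelength / (math.pi * w0)
print(f"divergence ~{theta * 1e6:.0f} microradians")           # ~477

altitude = 350e3                               # minimum ISS orbital height
print(f"beam radius on the ground ~{theta * altitude:.0f} m")  # ~167
```

The divergence dwarfs the ~5.6 micro-radian "hair" angle by nearly two orders of magnitude, which is the whole point.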
ps. sorry about the mixed units, but that's NASA for you, quoting wavelengths in nm and distances in feet. You'd have thought by now, having crashed a probe due to mixing up systems, they'd have stuck with the metric system.
Re: Stop taking GCHQ money the first place Vodafone Executives!!!
There's no "allowing" involved. It will be mandated by government.
A bit late for Paul Chambers
If the UK authorities had such a filter, it might just have saved Paul Chambers quite a lot of trouble, as it was fairly clear that the police, prosecution and lower levels of the judiciary were completely unable to do so using normal judgement.
Re: Why Taiwan?
It's very common for multi-national companies to spread their research around the globe. It's particularly important for the likes of ARM, which has a business model that involves licensing technology and working as a partner with customers. By having research facilities close to their main customers, they can have a much closer and more responsive relationship than would be possible from a purely UK base, with all the travel, language and time-zone issues. It also has the side-benefit of being seen as a true partner, and not just a foreign supplier.
As far as China is concerned, then that's a bit of a different issue. Having research facilities in Taiwan might not be looked upon very favourably by the Chinese government.
Sensational headline trumps reading The Register's own news items...
Indeed, it's hardly unusual for The Register to confuse telecoms products. In 2012 The Register actually published an item on Ofcom's proposed regulation of BT's leased line pricing outside of London. However, a sensationalist headline seems to trump proper research, even when it's stuff they've published themselves.
Re: if BT are delighted......
Whilst it was true that the first tranche of BT shares was sold at £1.30, there were two further tranches at £3.35 and £4.10 respectively. Also, looking at the RPI index, you have to apply a factor of 2.46 to that first tranche price to correct for inflation. So that £1.30 is equal to £3.20 at today's prices. As the two other tranches were in 1991 and 1993 respectively, the effect of inflation was not so high, but you are still looking at equivalent prices well over £5 (although there were staged payments for all offerings). Against that, there are more shares in circulation now (due to share options issued), which does dilute the price, but then the shareholders stumped up that extra cash so it roughly balances out. A further calculation ought to involve the O2 offload, but as that business scarcely existed in 1985, it's not really very relevant.
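For anyone checking the arithmetic, it's a one-liner (the 2.46 RPI factor is the figure quoted above, not an official ONS series):

```python
# Inflation-adjusting the 1984 tranche price. The RPI factor of 2.46 is
# the figure quoted in the comment above, not an official ONS series.
rpi_factor = 2.46
first_tranche = 1.30              # 1984 offer price in pounds
adjusted = first_tranche * rpi_factor
print(f"£{adjusted:.2f}")         # £3.20 at today's prices
```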
If you start doing all the mathematics on this, you find that BT's capitalisation value is, inflation adjusted, considerably below what it was when the final tranche of shares was issued, even making allowances for all the factors I've mentioned. At the time of privatisation, BT had over 250,000 UK employees, whilst these days it's more like 90,000. As far as UK operations are concerned, it's a much smaller company than before despite the product range being vastly larger, such is the state of what is now a highly diverse telecoms industry.
I'm not questioning the future of tapes for very large archives and very large backups. For those purposes it's still unbeatable for cost, power consumption and robustness in transport. However, what I'm pointing out is that there's something inherently unscalable with both tapes and disks and performance becomes a big issue.
nb. I made a bit of an error with the requirement for more heads. If, in fact, the number of heads is increased linearly with bit density, then the total read/write time remains fixed. My analysis only works if the number of heads is kept fixed (track counts still have to go up, but that's a different matter). LTO has doubled the number of heads in the past, which explains why the total read/write time hasn't increased as much as expected, but this can't go on for ever. So another doubling of head numbers before this theoretical LTO-12 arrives might allow for a total read/write time of about 16 hours.
"Vulture View: With its LTO connections, such an IBM technology could well banish Sony’s impressive 185TB effort to the tape wilderness. Reckoning on LTO generations doubling capacity we could see a hypothetical LTO-22 pass the 154TB mark."
I think Vulture Centre needs a little bit of work on his mathematics. If capacity doubles every generation, it will take 6 generations to go from LTO-6's native 2.5TB to 160TB, which would make it LTO-12 (or LTO-11 if it's compressed capacity at 2:1).
Of course, one of the big problems with a 160TB tape is just how long is required to read/write the whole thing. Assuming that the bit density is the same in both directions, if you double the linear density, you quadruple the capacity. Even if you double the number of tracks (and therefore read/write heads), you only double the read/write speed, so it will take twice as long to read/write an entire tape as compared to 4 drives of the previous generation. Scale up LTO-6 to LTO-12 and that would, theoretically, mean an uncompressed read/write speed of about 1.28GB per second, or 2.56GB per second on 2:1 compressible data. That means it would take about 35 hours to read/write the whole 160TB. A saving in media space and drives of course, but you create a bottleneck, not unrelated to that of (linear) write speed on ever-larger disks: capacity goes as the square of bit density whilst linear access speed rises only linearly. And all that assumes you can keep doubling the number of heads per generation (unlikely, I would say).
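The scaling argument above can be made explicit in a short sketch (the LTO-6 native figures of 2.5TB and 160MB/s are my assumptions; real generations haven't doubled quite this neatly):

```python
import math

# LTO-6 native figures (assumed): 2.5 TB capacity, 160 MB/s streaming rate.
cap_tb, speed_mbs = 2.5, 160

# Capacity doubles each generation, so reaching 160 TB native takes:
gens = math.ceil(math.log2(160 / cap_tb))
print(gens)                            # 6, i.e. LTO-12

# Capacity goes as the square of linear bit density, so six capacity
# doublings mean linear density (and streaming speed, for a fixed head
# count) rises by 2**(6/2) = 8.
speed_12 = speed_mbs * 2 ** (gens // 2)
print(f"{speed_12 / 1000:.2f} GB/s")   # 1.28 GB/s native

# Time to stream the whole 160 TB tape at that rate:
hours = (160e6 / speed_12) / 3600      # 160e6 MB / (MB/s) / 3600
print(f"{hours:.0f} hours")            # ~35 hours
```

Halve the time again for each extra head doubling, which is the ~16 hour figure mentioned earlier.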
(Random disk access is another issue which is even worse).
Re: "zippy performance, something the original BBC Micro was not famous for"
@ Jim 59
Complacent? As in designed a brand new RISC processor and (for those days) very fast computer complete with a full graphical operating system using such things as scalable typefaces in the shape of the Archimedes? All on a small fraction of the resources available to the industry giants.
On reflection, it was probably inevitable that Acorn were going to get steamrollered by the sheer mass and inertia of the IBM PC and all the clones that followed. Staying alive in a niche market is about all companies can do in such a tidal wave of commoditisation, and even IBM had to bail out eventually. Only Apple have retained any sizeable market share for an alternative personal computer standard (at least prior to the smart phone/pad revolution). However, that complacent company you describe left the legacy of ARM, which now designs and licenses by far the most popular CPU architecture (by number) that has ever existed.
Indeed, I've no idea what era the author first became aware of computers, but the BBC Micro was faster than virtually all the direct competitors at the time. BBC Basic (which was advanced for the time) was famously much faster than the competition.
The 6502 was clocked at 2MHz whilst some of the competitors ran at 1MHz. Whilst the Z80-based competition was often clocked at 4MHz, this was a somewhat misleading comparison, as the Z80 used more "clock ticks" for most operations (like memory access). The Z80 could be faster in some circumstances as it had more registers to play with, but the 6502 had some tricks of its own in referring to low memory. Generally I found that my 2MHz BBC Micro outran the 4MHz Nascom II I built.
Anyway, this was long ago, but the BBC was considered pretty fast for its time.
What's "ultra fast"?
That's impressive, although perhaps the author of the piece might care to define rather more precisely what "ultra fast" means as, literally speaking, it just means "beyond fast". As I'm not at all sure what the limit of "fast" is, I don't know if this is equivalent to a gentle lob, a fast bowler or a speeding bullet.
Re: Barmy (@Psyx)
Driving too fast is most certainly a criminal offence. A criminal offence is (usually) when the state prosecutes somebody (or an organisation) for breaking the law. Exceptionally, there can be private prosecutions, when a private individual (or some types of organisation) takes on the role of the state as prosecutor. The penalties include such things as fines, community service, being bound over or custodial sentences.
In contrast, civil law is between private individuals or organisations to resolve issues such as contract law, nuisance, libel and so on. The penalty for losing such cases is often a financial award to the plaintiff or various sorts of court orders. If there's a fine involved, it is not a civil offence.
What you are probably confusing is whether an offence gets you a criminal record or not. Driving too fast doesn't (in general). However, it most certainly is not a civil offence.
The US does distinguish between (minor) misdemeanours and (major) felonies in criminal law (but neither are anything to do with civil law).
It would only violate US law if the filtering extended to those accessing Google from US territory, which, almost certainly, it will not. Indeed if Google were to do such a thing, they'd be in trouble.
Of course that makes it simple to bypass the filtering - just use a proxy service based in the US. I rather suspect any European journalist will use this technique if filtering becomes at all common.
Never mind the nominal scale, it's detail that matters.
1:1 as a definition is clear nonsense. What matters is resolution. In principle, you can zoom any source of mapping information to whatever scaling you like, but it's the mapping resolution that gives you the detail. If 1:1 scaling is meant to imply that you can expect to see on your computer screen the level of detail you would see if there in person, then it's clearly not the case. Such a model would require vastly more information than a mere 1TB.
So, as always, don't swallow the headline, but find out what it really means.
Simply because the pound has appreciated against the dollar. Of course, if you go back and recalculate the conversion using today's rate rather than the historical conversion rate then applicable, you'll get exactly the same growth rate.
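A toy example with invented figures shows the effect: a growth rate is identical whichever single exchange rate you apply to both endpoints; the distortion only appears when each year is converted at its own historical rate.

```python
# All figures invented for illustration.
rev_usd_2012, rev_usd_2013 = 100.0, 110.0   # 10% growth in dollars
rate_2012, rate_2013 = 1.60, 1.70           # hypothetical USD per GBP

# Same (today's) rate at both ends: the growth rate is preserved.
growth_fixed = (rev_usd_2013 / rate_2013) / (rev_usd_2012 / rate_2013) - 1
print(f"{growth_fixed:.1%}")                # 10.0%

# Each year at its own rate: the pound's rise shrinks apparent growth.
growth_hist = (rev_usd_2013 / rate_2013) / (rev_usd_2012 / rate_2012) - 1
print(f"{growth_hist:.1%}")                 # ~3.5%
```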
Re: One for Newton...
nb. just checking my maths, I'd made a slip, so this is the formal derivation. Always show your working, I was told...
Total distance = 440m
Total time is 43 secs
Top Speed is 20 m/s
Rate of acceleration and deceleration are identical
a = rate of acceleration (will be -a for deceleration)
t = time taken for acceleration (clearly the time for deceleration will also be t)
Clearly the acceleration time (t) is simply the top speed (20 m/s) divided by acceleration rate so :-
t = 20 / a
The distance travelled during the acceleration phase is given by the equation
s = ut + 1/2 * at^2
where s = distance travelled
u = initial speed
a = rate of acceleration
t = time
but u = 0, so we get
s = 1/2 * at^2
the distance (and time) travelled during deceleration will be identical. The time spent travelling at top speed will be the total time (43) less the time spent accelerating and decelerating (2t). Substituting for t with 20/a (see earlier), and adding in the distance in acceleration and deceleration, we get:
(43 - 2t) * 20 + 1/2 * at^2 + 1/2 * at^2 = 440
860 - 40t + at^2 = 440
420 = 40t - at^2
But, we know that t = 20/a so, substituting for t
420 = 800/a - 400/a
420a = 400
a = 20 / 21 m/s^2
But t = 20/a, so t = 21 secs
So this actually means accelerating at approx 0.95 m/s^2 for 21 seconds, travelling at 20 m/s for 1 second and then decelerating at approx 0.95 m/s^2 for 21 seconds. So about +/- 0.097g.
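The derivation above is easy to verify numerically; the distances should add back up to 440m:

```python
# Numerical check of the derivation above: solve 420 = 400/a for a and
# confirm the three phase distances sum to the full 440 m.
total_t, v_top = 43.0, 20.0

a = 400.0 / 420.0            # = 20/21 m/s^2
t = v_top / a                # acceleration time = 21 s
cruise_t = total_t - 2 * t   # 1 s at top speed

d = 0.5 * a * t**2 + cruise_t * v_top + 0.5 * a * t**2
print(a, t, cruise_t, d)     # 0.952..., 21.0, 1.0, 440.0
print(a / 9.81)              # ~0.097 g
```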
One for Newton...
Your analysis is flawed as it has a single period of acceleration, which means you'll be exiting the top of the tower at some considerable speed. Modelling the travel as a single period of acceleration is wrong - you have to use two equal periods of acceleration (with opposite signs), use the lift's maximum speed (20 m/s) as the limit, and then have the remaining time travelling at top speed.
Assuming acceleration/deceleration at a constant rate, a maximum speed of 20 m/s, a height travelled of 440m and a time taken of 43s, the lift accelerates for 11 secs at approx 1.82 m/s^2 (110m), then travels for 21 secs at 20 m/s (420m) and then decelerates at approx -1.82 m/s^2 for 11 seconds (110m). The acceleration encountered (net of gravity) is therefore about +/- 0.185g.
Re: Next year, I will mostly be living in Luxembourg
I rather think they won't be relying on your IP address to locate you, but details from your credit card, bank account or other payment method.
Re: The first clones...
You are quite right - DME, not DMA (it's been many years). DMA is, of course, Direct Memory Access. I was aware there was also a 1900 emulation, and there were those who swore by the merits of George (if I've remembered the name properly). Of course, the 1900 had absolutely nothing to do with the DNA of the IBM S/360.
The first clones...
It's worth mentioning that in 1965 RCA produced a mainframe which was a semi-clone of the S/360, and almost bankrupted the company in an attempt to compete with IBM. It was binary compatible at the non-privileged code level, but had a rather different "improved" architecture for handling interrupts faster by having multiple (in part) register sets. The idea, in the days when much application code was still written in assembly code, was that applications could be ported relatively easily.
The RCA Spectra appeared over in the UK as well, but re-badged as an English Electric System 4/70. Some of these machines were still in use in the early 1980s. Indeed, UK real-time air cargo handling and related customs clearance ran on System 4/70s during this period (as did RAF stores). Of course, English Electric had become part of ICL back in 1968. Eventually, ICL were forced to produce a microcode emulation of the System 4 to run on their 2900 mainframes (a method called DMA) in order to support legacy applications which the government was still running.
In a little bit of irony, the (bespoke) operating systems and applications mentioned were ported back onto IBM mainframes (running under VM), and at least some such applications ran well into the 21st century. Indeed, I'm not sure the RAF stores system isn't still running it...
Of course, this had little to do with the "true" IBM mainframe clone market that emerged in the late 1970s and flowered in the last part of the 20th century, mostly through Amdahl, Hitachi and Fujitsu.
It was each of the local authorities that carried out the Open Market Reviews under the BDUK framework, and they would have treated all the telecommunications suppliers alike in that respect (although clearly that means VM or BT for the vast majority), and it was they that drew up the intervention areas. Of course it's always going to be difficult for small companies with limited finances to commit expenditure, but frankly that's because in the world of major infrastructure projects the capital requirements are high, as are the risks. If they weren't, there would be hundreds of local companies doing it, and frankly there aren't. It's a game for companies with deep pockets who can absorb risks. (Like Google - who can afford a large-scale commercial experiment with Google Fibre.)
What you appear to be requiring is that a commercial company releases its investment plans to competitors, and I really don't see that happening, especially in an area of investment like telecommunications, where there can be rapid take-up and change.
I suspect that the tendency in fixed line telecommunications is very similar to that for electricity, water and gas. The economies of scale are with large operators and that's a natural state of the market. What that means of course is we end up with a highly interventionist regulated environment (which is what we have), with more competition at the higher and added-value levels. There will be specific areas where smaller companies will make an impact - industrial estates, new apartment blocks and so on (there have been some developments recently), but I don't think we are going to see the country somehow covered by a patchwork of small, local network suppliers. That's how both electricity and telephone provision started out, and it ended up being consolidated into national networks in virtually every country you care to name (usually nationalised, as in the UK, but privately in places like the US). By a quirk of history, Kingston upon Hull retained its own local telephone network, but that's highly unusual.
One other point. It's rather unfortunate that public money is required at all to subsidise rural roll-out. In the case of the telephone (and other utilities), that subsidy was achieved via a cross-subsidy system. That continues to this day in that the copper loops in rural areas are cross-subsidised from revenues in urban areas. That can be done, as there is a regulatory regime that is enforced via a USC, but that's not the route that Ofcom (or government policy) favours. What they went for is a highly competitive market as deep into the network as technology allows, which with ADSL was essentially from the DSLAM onwards. As penetration goes deeper into the network, costs become prohibitive and you end up with a de-facto monopoly on FTTC solutions. However, the structure of the market - with very low-cost competition via LLU operators - means that there isn't the potential to cross-subsidise roll-out.
Perhaps if Ofcom had adopted a model which actually represented the differences in cost structures between urban and rural, such that customers in those areas bore the real cost of provision, then subsidies wouldn't have been required (cross or otherwise), the market would have provided. However, I rather suspect that rural dwellers wouldn't much appreciate paying the full commercial costs involved, but as the market was structured, they didn't have the choice.
@ Warm Braw
EU state assistance rules do not allow any substantial overbuild of any comparable existing privately-funded system. In the case of the BDUK funded scheme, that included VM broadband, as that is capable of exceeding the chosen NGA standard for "superfast". Indeed, VM keep a close eye on this for obvious reasons and would object to any state-funded competitor encroaching on their "patch" to any significant extent.
The BDUK process included an open market survey asking for any (credible) privately-funded schemes before the intervention areas were defined. The length of time required to gain EU approval was responsible for a considerable part of the delay in the scheme as, not surprisingly, politicians tend not to consider such issues before making their announcements.
So I can't be sure whether, in your area, there is a "legal" overlap with the VM network. If there is, most probably it was part of the commercial roll-out. It's extremely common (almost the norm) for some cabinets on an exchange to be part of the commercial roll-out and others to be on the BDUK scheme because they were not considered to be commercially viable. It can be very difficult to tell the difference. Some authorities (like the Bucks & Herts schemes) actually publish which cabinets are to be enabled as part of the BDUK scheme, but that's far from universal.
nb. my cabinet is similarly in a VM area and was enabled three months ago, but it was part of the commercial roll-out, whilst I expect others on the same exchange (serving smaller communities) to be BDUK enabled.
That doesn't mean there won't be a small amount of overlap, as inevitably the footprint of a cabinet will rarely align exactly with the edge of the VM network.
Re: If you're going to plow billions into telecoms infrastructure...
I'm not sure what country you are from, but it's spelt plough in the UK.
As for spending £30bn on an FTTH network (a credible figure, roughly equivalent to what the island of Jersey is spending per property), then you'll need to find a legal way of doing it. Overbuilding the VM and BT NGA networks would fall foul of EU laws on state assistance, so you have to factor in re-nationalising their access networks, which will wipe out half or more of your budget. So now you're down (optimistically) to £15bn, which is nothing like enough. So let's make your budget £45bn. Also, how are you going to get people off the copper network? The evidence is that the majority of folk stick to the copper as it's cheap (being a sunk cost) and meets most of their needs. Withdraw it, and you've got a whole bunch of LLU operators who'll want compensation for the investments they've made in kit. In reality, any government would be stuck with running both fibre and copper in parallel for many years, and wholesale charges will be forced up to recover the costs.
What you are describing is exactly what the Labour government decided to do in Australia with the National Broadband Network. That was aimed at delivering fibre to 93% of properties (so didn't cover the remote areas) and, on the latest review, was costed at $72bn (AUS), or £40bn, albeit for a country with only about 40% of the number of premises the UK has. It's since been downgraded to a mixture of FTTP and FTTC, but it's still going to cost £24bn (and that's assuming it actually delivers).
Against that, the public expenditure on broadband infrastructure in the UK is very low. In fact, many might argue that there is something wrong with the regulatory and commercial structure if the government is spending public money anyway. The problem is the path that Ofcom (and most EU regulators) have gone down. They've forced down the price of copper to the point where it's very difficult to justify investment in NGAs outside "prime" areas as a mechanism for minimising retail pricing. There is precious little incentive for private investors to put money into infrastructure.
Re: fudging numbers
You are just plain wrong - the percentage of EO lines in rural areas is nothing like 90% (although it could be for individual exchanges). Among other things, relatively few village exchanges serve just one village, and all the others will have cabinets. There are solutions for EO lines, but I rather suspect that they aren't priorities as other lines can be covered at lower costs.
Re: This is a real success story for our country
If you think you can connect more than a small minority of properties to fibre with a budget of £1.2bn, you are living in fantasy land.
Re: "antiquated nature of bank IT systems"
Concentrating on the underlying hardware and OS rather misses the point. Certainly you can run rock-solid IT systems on mainframes, and characterising them as "antiquated" actually tells you nothing about the underlying resilience of the applications. However, even the most reliable and robust systems can be undermined by poorly trained and managed staff. It shouldn't be forgotten that the 2012 RBS outage was not due to dodgy Windows XP, Linux or UNIX systems, but a problem with the support and maintenance processes of good old CA-7 on a mainframe system. It's not that CA-7 or z/OS is fundamentally unreliable; it was a failure of good operational and support practices.
The real issue is that, in the drive to reduce costs and roll out new features, what is being sacrificed is the quality and experience of operational, technical support and IT management staff and resources. If good practices are not maintained, then even the most reliable hardware in the world will not prevent catastrophic outages.
Also the power backup for the FTTC cabinets is via internal batteries which are kept charged by the mains supply. They are sufficient to keep the broadband going for a few hours, but if the power is off for any longer, a portable generator will be required (which is, incidentally, how small telephone exchanges are powered if there's an extended outage).
Note that if/when we see fibre to the remote node (or whatever people care to call it), the small "mini-DSLAMs" that will run either VDSL or G.fast are likely to be powered over the (short) copper loop from the customer's premises, thereby avoiding the need for mains power.
Re: Could competition have worked?
Contention is all to do with how much bandwidth the ISP installs in the backhaul (plus things like peering). Quite how the backhaul is provisioned depends on the particular exchange and what the long haul options are, but in any event, it's "just" a matter of money. Note that some operators (mentioning no names) have the reputation of being parsimonious on exchanges they haven't unbundled as they have to buy the back-haul (at least to a hand-off point) from BT Wholesale. If they cared to buy more bandwidth, then the congestion would be eased or even eliminated.
Anyway, the point is to pester the ISP with complaints, and if all else fails, use an ISP with a better reputation for managing congestion.
Re: You're an idiot.
Like most things in telecommunications, there's usually a reason why things are the way they are, and often it's historical. That applies to lots of old infrastructure - for instance, if we were building the railways now, they wouldn't have the bottle-necks and load gauge restrictions that have been inherited. To the simple-minded, it all seems so easy to rip it up and start again, but it's never that simple.
Of all people, those involved in IT ought to know this. Any very large organisation will, over the years, have inherited a vast legacy of systems which is often not economic to replace, and almost never possible to change overnight. So it is with major bits of national infrastructure. Changes will tend to be evolutionary, not revolutionary. For long periods, old and new will co-exist.
If people want some idea of what the problems are with grand national telecommunication projects in western countries, I'd invite them to look at the (highly politicised) Australian National Broadband Network, which was originally planned to bring FTTP to 93% of Australian properties (so didn't cover the really remote areas). Estimated costs escalated to $73bn (AUS) with timescales disappearing over the horizon. Following a strategic review, this is meant to be coming down to a "mere" $41bn (AUS) using a mixture of FTTP and FTTC. Of course, even suburban Australia is less densely populated than the UK, but bear in mind there are only about 40% of the number of premises.
So that approximately £23bn for 10m properties rather puts the BDUK (and related) public broadband expenditure of about £1.4bn into perspective, as it covers roughly the same number of premises in the intervention areas and will deliver rather earlier.
Re: You're an idiot.
VDSL is not allowed over exchange-only (EO) lines due to the ANFP, which is administered by the NICC. This sets power and frequency profiles which are designed to allow different services to co-exist by limiting cross-talk. The concern over running VDSL over EO lines is the possibility of conflict with ADSL (and maybe a few other services which use carrier modulation). Similarly, you can't run ADSL from street cabinets.
The NICC is effectively a trade body controlling the technical rules for use on BT's copper network.
Re: Were Fujitsu ever really serious?
The BDUK framework required that the successful bidder provide non-discriminatory wholesale services. Indeed, it would have been virtually impossible to get anything else through EU state subsidy rules if ISPs were locked out. Of course, this means the commercial case is poorer, as the winner couldn't count on retail-level revenue (and hence the gap funding is higher).
Re: I've just been told by a little bird
I've no idea where the distance from the exchange comes into it. There are lots of cabinets in the country which have been enabled and which are far more than 1 mile from the exchange (like the one I'm attached to). Of course, the further the cabinet is from the exchange, the more it costs to run the fibre to it, but what is probably far more important is how many properties can be usefully served from the cabinet. There may be particular obstacles - like the cost of running power, or blocked ducts - but these aren't directly associated with the distance from the exchange.
Of course, if what you meant was the distance from the cabinet, then the speed available will be greatly reduced at 1 mile (or 1.6km), as the limit for the 24Mbps BDUK threshold is reached at around 1.2km of line length, but you can get useful speeds up to 2-2.5km from the cabinet.
There are also trials being performed on fibre-to-the-remote-node. Basically a very small DSLAM up a pole which is connected by fibre to the exchange and which might be line powered (maybe from the customer premises, as it's perfectly feasible to provide a few watts over a few hundred metres, where it's not possible over km type distances). All experimental just at the moment.
If you join the club, you keep to the rules
Of course a country is free to decide who it sells goods to. However, if a country voluntarily signs up to the WTO, then it has to abide by the rules. If China wishes to be able to sell goods and services into other markets without undue discrimination by using WTO rules, then it is obliged to work within them.
Some of the rules involve not seeking to create an artificial advantage for local industries by attempting to monopolise local raw materials. Imposing arbitrary quotas which apply only to exports of raw materials falls under this. Indeed this affects the US as well: it is unable, under WTO rules, to prevent the export of its cheap "fracked" gas, and some US gas terminal ports are being refitted for export.
Note this doesn't mean that there has to be a free for all which means any resource can be plundered without consideration to the environment, but what it does mean is that a country can't discriminate in favour of its own industry. Of course, it's all vastly complex in practice, but that's the basis of the principle.
"Think drone delivery is hot air? A BREWERY just proved you wrong"
Oh yes, of course it did...
Extending the methodology
Using the same basis of estimating how much the planet has shrunk by the extent and size of wrinkles, my head has got an awful lot smaller since my youth.
Re: HTTPS compulsory?
From what I can see, from the user end the logon credentials for btinternet are sent over HTTPS. Of course, that doesn't mean it's HTTPS end-to-end. The HTTPS termination point can easily be at a different point in the communication chain to the actual email web server - it just goes through some form of proxy service. However, it's unclear from the article where the exposure is meant to be.
Re: "two triple disk RAID 6 failures"
Their website describes it quite well, although they do use some dubious terminology. For instance, under self-healing they claim the disk is "remanufactured"
"using our patented software technology, every drive in an ISE can be taken into repair mode, power-cycled, and remanufactured just as it would be if it were RMA’d to the drive’s supplier — all while the ISE continues to operate at full performance and full capacity."
That's a weird use of the term "remanufactured". Of course, what they will really be doing is performing a full low-level format, identifying bad tracks/sectors and so on. It's hardly the normal use of the word "remanufactured", which usually means replacing parts of the device (like bearings) which have fallen out of manufacturing tolerances. Of course, they can get away with these weasel terms as no manufacturer actually remanufactures disk drives these days - they simply aren't serviceable that way. So yes, they might do what a manufacturer does when faced with a returned disk, but remanufacturing it isn't. I think re-certifying, as you had it described to you, is far more accurate.
However, they also say that if part of the disk is unusable then they will take that element out of service. I've no doubt that extends to a full HDD failing when the entire disk will have to be retired. At that point there's no way a user can restore it to full resilience as the units are sealed.
nb. the likes of NetApp have schemes which monitor drives and dispatch replacements to be user-replaced (not a hard job). The returned drives will then be assessed and go through reformats, map-outs and re-certification. If they don't meet standards, they can be taken out of the pool of replacements for dispatch to customers. I've no idea of the exact methods used, but I suspect it's possible to do more extensive work back at the factory so to speak. I doubt what NetApp do is much different in principle to other enterprise storage suppliers.
Re: "two triple disk RAID 6 failures"
It's fairly easy to see how they achieve such low failure rates. They simply put the redundancy in the package and put in a system that supports multiple disk failures, probably via some sort of virtual RAID. That way they can provide full access even if two of the actual drives have suffered physical failures.
Of course, you can't replace the internal drives yourself to restore the full resilience - it's a sealed package. After 5 years, you could well start suffering more failures if some of the internal redundancy has been compromised.
The secret here is that the failure rate is measured by failure of the entire ensemble, not the rate of failure of internal disks. The proper comparison is with the failure rate of a RAID array, not an individual disk.
Good idea, but maybe not an optimal solution
Given that the ultracapacitor can only provide power for transient peaks, I assume that this will require some form of close control of the device being powered where it's capable of sustained high demand. For instance, on a smartphone it might be necessary to "throttle back" the processor in order to reduce power consumption to the sustainable level. I suppose it could be done without a full control interface (e.g. the device could monitor demand using "dead reckoning" rather than a closed-loop control system), but I don't think such a system would be optimal.
Given that this logic has to be embedded in the device's management systems, it would surely make more sense to embed it in the device itself. Then the battery can remain just that, which means not having to invent a whole new interface. It would also be cheaper where batteries are replaceable rather than embedded. Also, this is something that device manufacturers could implement without a new battery specification. Indeed, maybe they already do it.
Re: I applaud their ambition.
You might, but at what cost? You are already cross-subsidised by other consumers on your phone line, as the much longer line represents more capital investment, a higher maintenance overhead and so on. The same is probably true of your water and electricity, assuming those are longer than the UK average. Similarly, delivering mail to you probably loses money too.
It's a matter of simple economics. Certain services are just cheaper to provide in high density areas. If a company could see a viable market for high speed broadband in your area at a price that consumers would be willing to pay, then it would be done. Of course, the reality is that people expect to pay the same price wherever they are, and the regulation of the market is such that it doesn't allow for significant market differentiation. Hence it's only going to happen through a subsidy: either a cross-subsidy via some form of duty incumbent on operators (as happens with telephone, electricity and water) or an explicit state subsidy.
So, if consumers in your area are willing to pay - say - £50 per premises for a high speed broadband service then it would be likely that a means would be found to provide it. However, I suspect that most won't pay that much. Hence the need for a subsidy.
Re: I applaud their ambition.
Or "I want somebody to subsidise where I live"...
" using the connerie-vian* rule-of-thumb formulated in Australia, which suggests 10 million FTTP connections cost $AU90 billion, Indonesia will need over $100bn for this project, or about eight per cent of GDP."
It's simply ridiculous using Australian models for projecting costs for infrastructure provision in Indonesia. Whilst the technology costs will be similar, it is well known that 80% of the cost comes down to installation. That's roadworks, installing cables, putting up poles, running cables and the like. Those costs depend heavily on local labour rates and not those in Australia (some of the highest in the world).
Also, the Australian National Broadband Network is not exactly a shining example of efficiency, not to mention that its cost includes effectively re-nationalising lots of assets. Indeed the network has been subject to a strategic review (which now recommends a mix of technologies, not just FTTP).
So yes, this is going to be expensive, but 8% of GDP? I think not.