1219 posts • joined Monday 21st May 2007 21:57 GMT
The patent system was never invented for any purpose of justice. It was meant to be a pragmatic, economic instrument designed to encourage investment and innovation by granting a period of monopoly in order to realise a return. Once the operation of the patent system acts to stifle innovation and competition, it is not operating correctly. Of course, these days it is often used just as a method of restricting competition, or as something close to extortion in the case of patent trolls.
nb, of course, prior to the patent system, government (or, more often, crown) granted monopolies were very often handed out as a means of rewarding political friends.
Re: You missed the point
No you can't. At least not in my experience. I've already put a hybrid drive in my laptop which uses SSD as a cache (a Momentus XT), and it performs nothing like my desktop, where I have a separate SSD for the system disk and use 2TB HDDs for data. Yes, it's better than a normal laptop drive, but it still struggles with things like system updates. Admittedly, the hybrid drives have nothing like 128GB of flash, but I, personally, would like to manage what is placed where. Cache algorithms are all very well, but they tend to fail where there are periods of intensive activity (such as system updates or software installs) which are relatively infrequent, yet cripplingly slow when they occur.
Also, if you assign all that 128GB in a "traditional" cache arrangement (as the hybrid drives do), then you lose that much space. If you go for a more sophisticated approach to avoid that loss of space (which effectively migrates data seamlessly between slow and fast store), then you have to put up with the overhead and the unpredictability.
Of course separating your data and OS is essential to this approach, but frankly it's good practice to do so. In fact, when I get a new machine, my first step is always to re-partition to get functional separation. That makes it much, much easier to run robust backup and restore regimes. That separation is good practice, whether you are running a massive server or a laptop.
I had been wondering just how long it would take for a product like this to appear. I had even been wondering if it was technically possible to even present two devices down a single SATA connection.
Perfect for my purposes as it roughly echoes how my desktop is configured.
Commercial reality, not technical capability
The primary reason that Intel is so uncompetitive in the mobile space isn't that it lacks the technological capability. It's essentially a commercial issue. Intel is simply far too big to be sustained by a mobile market which has grown on the back of the incredibly cost-effective ARM eco-system. ARM has a small fraction of the market capitalisation and cost-base of Intel, charging relatively small royalties for use of its technology, and this has driven massive growth. Quite simply, even if Intel did manage to duplicate the ARM eco-system, with all its third-party support and flexibility, it would only garner a fraction of the income it derives from the x86 market.
There is a certain irony here. Intel managed to do much the same thing to other processor architectures by offering a more cost-effective option and leaving any survivors with what are now niche markets. Intel itself may come to be dependent on just such a niche, albeit lucrative, market. The mass market for processors embedded into virtually everything is structurally incapable of supporting the sort of margins that Intel became used to.
The lesson is, once a company locks in a high cost structure, it is always vulnerable to those that have not. It's no doubt something that the folk at ARM are well aware of.
Re: There's a difference
Scriptwriters' rights aren't the same as performance rights. The former would stop you making a new production based on the original idea and scripts (without payment), but don't stop the performance itself being copied.
Of course there are all sorts of other IPRs related to the characters, costumes and so on, but they generally relate to new productions.
Re: What is the ISPs' position?
The ISPs basically need a court order. If there wasn't one, then they'd be liable to legal action from the blocked site. Of course, they might still be able to win such a case (after all, ISPs routinely block sites carrying "illegal" content using subscription lists without a court order), but I can guarantee no ISP wants to be responsible for policing the Internet for material infringing copyright. It would be a thankless, fruitless and incredibly expensive job.
So they'll leave it to the rights holders to get the court orders, probably only objecting where there are obvious grounds to do so. Otherwise, it's just a waste of money for the ISPs.
Re: A start
You are clearly not very up to date with the law. Whilst copyright infringement is, for the most part, a civil issue, it can be criminal if it is deliberate and systematic, especially for commercial gain.
This is a link to the CPS guidelines
So, to summarise, if an individual infringes copyright by making casual copies, then it's not going to end up in a criminal case, but if you knowingly make a business of it, then it could be a different story.
Again, yet another poster not bothering to research freely available information.
Why bother reading when you can post crap...
The Government's definition of superfast is neither 2Mbps nor 1Mbps. Insofar as any speed is quoted, it is the former, and that as a basic broadband speed for where the faster options are disproportionately expensive. So yes, the BDUK funding goes towards a 2Mbps minimum speed, but nowhere is that termed "superfast".
And here's the exact wording
"Our ambition is to provide superfast broadband to at least 90% of premises in the UK and to provide universal access to standard broadband with a speed of at least 2Mbps."
And you can find the original here
Re: Short term gain, long term pain
So are you prepared to pay the full economic cost of providing higher speed broadband in your area, or are you demanding it is subsidised by the state (or other customers)?
What's an order of magnitude in a joke?
Somebody on #HIGNFY did indeed say that the Indian Mars mission is costing 0.01% of the bill for the HS2 link from London to Birmingham. Of course they are wrong - it's 0.1% (about £42m vs £42bn, not £420bn). With such cavalier disregard of mere orders of magnitude in costing, the writers of Alexander Armstrong's autocue joke might just be moonlighting from a job in MOD weapon procurement.
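For the record, the arithmetic is simple enough to sanity-check (a quick sketch in Python, using the figures quoted above):

```python
# Rough cost comparison: Indian Mars mission vs HS2 (figures as quoted above)
mars_mission = 42e6   # about £42m
hs2 = 42e9            # about £42bn

ratio_percent = mars_mission / hs2 * 100
print(round(ratio_percent, 3))  # 0.1 -- i.e. 0.1%, not the 0.01% claimed on the show
```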
Re: No surprise
Of course, HG Wells got there first - the Martians in War of the Worlds succumbed to Earthly bacteria. You'd have thought they'd have had their jabs before venturing across the void.
Surely we already knew from Independence Day that space craft are peculiarly vulnerable to computer viruses.
There's a healthy interest and then...
I'm beginning to think that Tony Smith ought to get out more.
Re: As a note
"a lot of anime fans have a weird hatred of calling anime cartoons"
Indeed, no lesser an authority than Dr. Sheldon Cooper would agree.
For something a little larger
Of course, if you want to scale this up to something rather larger, you might require a correspondingly larger 3D printer. For example, this particular "pinhole" camera starts with finding an empty aircraft hangar...
Re: He isn't that cheap
"Helium isn't that cheap. Liquid He is the biggest annual budget item for our physics dept."
Ahead of wages? I rather think that's unlikely.
Rhode Island is 3,140sq km in area, so 1.99 nano-Rhode Islands would be 6.25sq m. A 450mm diameter wafer, at about 0.16sq m, would therefore be roughly 0.05 nano-Rhode Islands...
I'm inclined to think that if the inhabitants of Rhode Island were only to reclaim another 2sq km from the sea, they could rename their island kilo-Pi. That would then become a transcendental place to live.
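Out of curiosity, the wafer arithmetic can be checked with a few lines of Python (the nano prefix does all the heavy lifting):

```python
import math

RHODE_ISLAND_M2 = 3140 * 1e6         # 3,140 sq km expressed in square metres
NANO_RI_M2 = RHODE_ISLAND_M2 * 1e-9  # one nano-Rhode Island is just 3.14 sq m

wafer_area_m2 = math.pi * (0.450 / 2) ** 2  # 450mm diameter wafer
wafer_in_nano_ri = wafer_area_m2 / NANO_RI_M2

print(round(wafer_area_m2, 3))       # ~0.159 sq m
print(round(wafer_in_nano_ri, 3))    # ~0.051 nano-Rhode Islands
```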
The definition of "synchronous" is not just "low latency"...
"But if you’ve got a dedicated link, transferring data from one SAN to another, the latency could be as low as a few tens of milliseconds. That would be typically referred to as a synchronous link."
Synchronous replication is not defined by how long the latency is - that's a matter of the configuration of the replication software. Many replication protocols, whether based on SAN (including SRDF), database, logical volume etc., support both synchronous and asynchronous replication. Asynchronous protocols often support techniques which respect the time order in which writes are performed (so the target is at least consistent).
What the latency does determine, along with application requirements, is whether synchronous replication is viable at all. As fibre-based comms travels at about 2/3rds the speed of light (around 1ms round-trip time per 100km), the delays can be substantial. If you have an application that requires low latency writes (typical of many transaction systems), then your write latency might have to be measured in low single-digit milliseconds. That's easily achievable using local enterprise arrays with non-volatile write-back cache (or flash of course), but it's not going to be possible if your replication target is several hundred kilometres away, let alone thousands. Once the effective write latency goes up into the tens of milliseconds typical of replication to a remote DR site, it can kill application performance, especially if the granularity of committed transactions is very fine.
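To put rough numbers on the light-speed argument, here's a minimal sketch (assuming propagation at roughly two-thirds of c in fibre, and ignoring switching and protocol overheads, which only make things worse):

```python
# Propagation-only round-trip time over fibre at ~2/3 the speed of light
C = 299_792_458          # speed of light in vacuum, m/s
FIBRE_SPEED = C * 2 / 3  # approximate signal speed in optical fibre, m/s

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds over the given distance."""
    return 2 * distance_km * 1000 / FIBRE_SPEED * 1000

for km in (100, 500, 3000):
    print(km, "km:", round(round_trip_ms(km), 1), "ms")
# Roughly 1ms of round trip per 100km: a 3,000km replica adds ~30ms to every
# synchronous write, which is fatal for an application expecting 1-2ms commits.
```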
There are ways of dealing with this by having "relay" systems which have a more local short-term replication target (outside the so-called "blast radius") and then asynchronously replicating to the disaster site, but that gets hideously expensive.
There's another downside of synchronous replication in that it can make your production system vulnerable to failures in the replication process. If you absolutely must guarantee writes are performed to the DR site, then the main production system will stall if anything goes wrong with replication. It is usually possible to configure replication so that processing will continue should replication fail, but then you've lost your guarantee of an up-to-date remote replica.
Having dealt with many dozens of DR systems, the most cost-effective option is usually to design applications and recovery processes so that they can tolerate a certain amount of data loss in the event of a DR failover, even if that eventually involves some manual correction. It's vastly easier (and cheaper) to design systems where a small amount of data loss is tolerated than to rely on synchronous replication.
It's also worth noting that synchronous replication (especially at array level) will also synchronously replicate data corruptions caused by software errors. It's often desirable to replicate at logical/transactional levels rather than array level for this reason.
Producing highly available systems with built in disaster recovery and zero data loss is non-trivial and very expensive. Done wrong, and it can make things much worse.
Re: Recorded music has no value
"If an artist is quite happy to watch a video of a surgeon operating on their fatal disease rather than have the surgeon perform in person for them, I'll buy a download of their last album and say farewell, until then, I'll not be buying recorded music."
The very epitome of an asinine analogy.
Re: Soldering exercise
I seem to remember that I got my self-build kit through a cheap offer in Electronics Today International. I was in my first year studying physics at Imperial College at the time, and everybody lusted after the HP calculators in the labs (solidly fixed in their cradles with security cables).
Of course the Sinclair shared RPN with the HP but, besides the lack of functions, suffered from poor accuracy (only about three-and-a-half significant figures could be relied upon in practice). From what I recall, it involved some very convoluted exercises to do something as simple as produce the inverse of the number currently displayed. Of course, within a year or so, you could buy fully-fledged scientific calculators for pocket money.
Happy memories - and perspective
Nice to see Nascom being recognised as the UK's first microcomputer (I'm not sure "arguably" comes into it - if there was an earlier commercially available option with full keyboard/screen, then I don't know of it). I was a little late into the game as I started with the Nascom II.
Many years ago I designed and built a control and timing system for a car racing track using the Gemini system; Gemini was basically a company set up by some Nascom staff when Nascom went into receivership (Nascom was eventually bought by Lucas). Gemini basically adopted the NAS-BUS, renamed as 80-BUS, and, for a while, there were compatible cards produced which would work with either Gemini or Lucas-Nascom systems. Sadly, whilst the race track was built and the system installed, the business was fundamentally flawed, and the site now operates as a kart track. However, I still have my original Nascom II and the prototyped Gemini-based timing system, complete with the multi-tasking real-time control system I designed for it.
As for the know-it-all who blames the failure of Nascom on not choosing Intel: that shows an enormous amount of ignorance. At the time the Nascom was produced, the 8086/8088 family wasn't available (the Z80 was an improved 8080), and many manufacturers in both the US and the UK used other chips. Indeed the industry-standard OS at the time was CP/M, and that was very much centred on the Z80 & 8080 architectures. At the time the Nascom was designed, it could easily be said that it was based around what looked like an industry-standard processor.
Meanwhile, many other companies used the Z80, some the Motorola 6800, and yet others the MOS Technology 6502, including one called Apple. Shame they came to nothing because they adopted the wrong chip for their first computer...
Of course the 8086 family only became the industry standard through marketing and the happy fact (coincidental or not) that IBM chose it as the basis of their belated entry into the business PC market, thereby inadvertently setting an industry standard. (In IBM's rush to jump aboard a boat that they very nearly missed, they weren't able to do what they'd done in the past and tie the design up with IBM proprietary standards - by the time they realised it and sought to regain control - remember Micro Channel? - it was too late.) If it wasn't for this action by IBM, the standard PC architecture might well have been based on another processor family. Indeed, in many ways, the Intel 8086 was one of the least likely choices on purely technical grounds - it was a flawed architecture, and arguably several of the alternatives at the time were better. For that matter, we may have had a different OS model - MS-DOS was little more than a CP/M look-alike for the 8086 family, and it's well known that IBM's adoption of the OS could easily have gone another way. It is not always the case that the "best" solutions win through in industry. Much also relies on luck, or seemingly arbitrary decisions by people who just happen to be in the right position at the right time.
Of course there were many UK computer companies that did seek to adopt the Intel 8086 family and produce industry standard designs, and they have pretty well all disappeared as they got squashed by competition. The problem with just producing industry standard designs is that there's little room for true innovation. It becomes a commodity market, and the spoils go to those with the lowest cost base.
Finally, one of the few UK survivors in the computer market is ARM, the offspring of Acorn Computers, and they did innovate. For the record, Acorn Computers did not adopt Intel processors; they started with the 6502.
Re: Different rider?
Nothing different would happen if Chris Hoy was on the bike, as it was the Ford Zephyr that ran out of steam first. In any event, it's probably more suited to a pursuiter than an out-and-out sprinter.
Of course the thing that made Fred Rompelberg's 167mph possible was the traditional Brooks leather saddle he used. Proper British craftsmanship...
The calculation rather disregards the immense potential of compression. After all, the basic blueprint of a human being is written in the DNA, which is rather less than one gigabyte. Of course, this doesn't encode for all the experiences and environmental and random factors that lead to a particular human being at a particular point in time. However, it would seem that a vast amount of compression could be achieved by encoding, for instance, the cell types and locations of the approximately 10 trillion cells. Recording the state and configuration of the brain would require, for instance, about 10,000 trillion items of information, but still vastly less than the three-dimensional high-res photocopy approach.
Of course, this lossy compression - "JPEGing" a human - means the result wouldn't be exactly the same as the original. However, it also shows that this isn't really teleporting a human being. It's faxing one to make a copy. Real teleportation would actually require some form of manipulation of space-time to "move" the original.
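As a back-of-envelope check on the genome figure above (a sketch; the roughly 3.2 billion base-pair count is the usual rough estimate, not a figure from the article):

```python
# Raw information content of the human genome, back-of-envelope
BASE_PAIRS = 3.2e9   # approximate length of the human genome
BITS_PER_BASE = 2    # four bases (A, C, G, T) -> 2 bits each

genome_bytes = BASE_PAIRS * BITS_PER_BASE / 8
print(genome_bytes / 1e9)  # 0.8 -- i.e. ~0.8GB, "rather less than one gigabyte"
```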
Indeed - both VM and FTTC services should be called fibre/copper hybrid. I think it was VM that started debasing the concept of fibre in this way. Once one starts it, competitors have to follow.
Not that there's anything wrong with hybrid provision in principle. After all, how many people are using fibre LAN infrastructure in their home? It's the distance over the copper that matters. Making major changes to convert the last few hundred metres to fibre gets disproportionately expensive compared with simply exploiting the copper loop. For most domestic customers, I suspect it will be more than enough where there's a cabinet within 500m or so; arguably even 1,000m.
Re: Hate paying LINE RENTAL? Sign this...
A particularly idiotic e-petition showing zero understanding of the costs involved. The vast majority of the cost of the phone line is in the infrastructure of the local distribution network. Quite apart from the fact the local distribution network has a real value (bear in mind it was sold by the government to the shareholders), and those people, pension funds and the like, are due at least some return, there is the little matter that it costs a lot to maintain, as ducts, connections, telegraph poles and drop cables all deteriorate. Then rates have to be paid on the network infrastructure, including the necessary ducts, cabinets, poles and exchange buildings.
15% of the cost of a phone line? Forget it. Do you really think any organisation can provide a physical path to each property for about £14 per year exc VAT (which is about 15% of the regulated price of providing the loop)?
It's perfectly possible for any ISP to offer a non-voice ADSL service (in fact SDSL does just that) - just provide a loop. That LLU operators find it necessary to provide a phone service too in order to make the entire package financially viable is just a fact of economic provision. Inventing random percentages of existing line rental without taking into account the cost of provision is simply clueless.
A mere $3.5bn, or 0.18% of the $1.9T cost of the Iraq War (according to the Congressional Budget Office - some others estimate it much higher). That sounds like a bargain.
Re: It's not all twisted pair…
I know that the distribution cable isn't as highly specified as modern data cables, of course. It's a fairly loose twist. It is therefore not as resistant to EMI (and will therefore also "leak" more signal), but this has to be kept in perspective - if it can be considered an antenna, then it's an incredibly inefficient one. The biggest problem is with cross-talk within multi-pair cables, and I've yet to see any evidence that external EMI is a significant issue to other communications with existing DSL (which, after all, goes up to 30MHz). As people will know, lower frequency radio waves (in general) travel further with less attenuation than higher ones, so I'd expect frequencies up to 100MHz to propagate even less.
As far as the sub-loop is concerned (the bit from the "green cabinets" to the household), the ANFP-S for VDSL (currently only carried over the sub-loop) certainly refers to this as a twisted pair network.
"it is applicable to all sub-loops in the BT access network provided using unscreened twisted metallic pairs." It goes on to specifically exclude fibre provision (which is not relevant here, of course). Maybe there are non-twisted sub-loops (but I'd be surprised), but then they aren't going to be suitable for VDSL (and, of course, G.fast) either.
All BT local access loop cables have a specification as regards the balance to earth, mutual capacitance and so on. The main standards are CW 1128 & CW 1128/1179.
There are other suppliers with single and dual pair cables to the same spec.
There's also a well-known spec for extension cabling (and there are external and armoured versions available). This is also twisted pair and is known as CW1308, albeit I don't think it is used as part of the sub-loop itself.
The standard for "drop wire" is CW1411/CW1417, and whilst these may not be twisted pair as such, they still include relevant specifications for balance to earth, mutual capacitance and so on.
In general, the cables are designed to have considerable rejection of interference and, by dint of this, make very poor antennas.
From what I can find, egress interference from VDSL signals to amateur bands is not usually much of an issue due to relatively low VDSL power densities. Indeed the reverse - ingress interference - is more of an issue, due to the much higher power densities of amateur transmitters.
In general, I can't find much in the way of actual, rather than theoretical, egress interference from existing DSL services, despite the fact they overlap a considerable number of bands up to 30MHz. In other words, is this really a problem in practice (unlike powerline transmission)?
Is FM interference a real issue?
Just how much leakage of signal will there be? After all, phone lines use twisted pair cables. If they didn't, then existing DSL wouldn't travel very far at all due to signal loss and interference (especially cross-talk). Indeed voice frequencies wouldn't travel that far either - something the Victorians picked up on fairly rapidly. The whole point of transmission lines, like the humble twisted-pair phone cable, is that it is simultaneously designed to minimise signal loss and susceptibility to EMI (even if DSL pushes things way beyond what the original designers ever anticipated).
As it is, existing ADSL frequencies overlap AM bands without, apparently, causing significant problems (albeit AM is a minority interest these days). That's despite the fact that ADSL is often carried into domestic extensions which often don't use twisted pair. With VDSL (and beyond) frequencies, the termination point is at the point of entry to the house (where it's filtered), and it doesn't travel down domestic extension wiring. VDSL overlaps shortwave frequencies, and again nobody seems to be complaining of interference in the real world.
For those that think this is anything like the issue of pumping high frequency data-carrying signals down mains wiring, forget it. Mains wiring is almost designed for propagating EMI. Telephone wiring is not.
"In contrast, anyone can understand water flowing in a pipe and that is why our drinking water system is in such a shambles - with much of it running to waste through leaks."
Simply speaking, that's complete nonsense. There's no reason leakage in drinking water systems needs to achieve anything remotely like loss rates of one part in many orders of magnitude. It's simply a cost tradeoff. It's perfectly feasible to achieve very low levels of loss in systems using fluids (how many people have refrigerators or freezers running happily after a couple of decades, as mine does?), but it makes no sense at all to do the same with water delivery, where (depending on the costs and availability of collection vs demand) less rigorous standards of loss are acceptable. It also makes no financial sense to use expensive engineering, scientific and mathematical specialists on such a requirement.
Quite simply, a completely pointless - and indeed, misleading comparison. The reason high water loss levels are tolerated is nothing at all to do with the easy understanding of the technical issues, and everything to do with the costs involved. This simply reads as a smug pop at another discipline for no good reason as the criteria of success are wildly different.
Incidentally, no error correction is "perfect". All have tolerance levels - in fact, as others have pointed out, the theoretical error levels which can be achieved on disk drives are such that, on very large and very active storage systems, uncorrected (or, much worse, undetected) errors are a realistic possibility. (And not just storage systems - data transmission, electrical interfaces and others can suffer this way.) Not to say that the theoreticians, engineers and solid state physicists who have achieved this haven't done something approaching the miraculous, but perfect it isn't. Indeed, it's an impossibility.
Re: Not a monopoly
Most telco conduits in the access network are not watertight. It would be a waste of money and involve a huge amount of maintenance cost as tree roots, ground movement and so on would constantly cause leaks. It's the cable that is waterproofed, and provided the joints are made above ground, then the conduits could be filled to the top with water and it would have zero effect on the performance of copper loops.
Note that this is not true of all conduits of course - major ones near and between exchanges which require regular access may indeed be fully watertight, but for most of the network, it's the integrity of the cable that matters.
Size (and slowness) is everything
A simple bit of maths and physics shows why this thing has to be huge. Heavier-than-air flight (in open air - ground-effect flight is different) is achieved by accelerating a mass of air downwards, and it's the counterbalancing force upwards which keeps the craft airborne. As the force is the mass of air times acceleration, there is the choice of accelerating a small amount of air very fast or a large mass of air relatively slowly. If you accelerate half the mass of air per unit time at twice the rate to achieve the same thrust, you impart twice as much kinetic energy in total (as kinetic energy goes linearly with mass and with the square of velocity). It's all a bit more complicated, but that's the nub of it. That is roughly why high-bypass jet engines are more efficient than low-bypass ones, why jets are more fuel-efficient than rockets for the same thrust, and so on. In short, all those wondering why this sort of technology can't be scaled down to more manageable sizes in energy-efficient aircraft (or flying cars) are wishing for the impossible. If you want fuel-efficient heavier-than-air flight, then you need to accelerate large masses of air gently - that means large and slow; large wings, large turbines and so on. None of your little rotors embedded in a flying car body. It simply won't scale (downwards).
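The momentum-versus-energy trade at the heart of that argument can be sketched in a few lines (illustrative numbers only):

```python
# Same thrust, different power: thrust is momentum flux (mdot * v), while the
# power spent on the jet is kinetic-energy flux (0.5 * mdot * v**2).

def thrust(mdot: float, v: float) -> float:
    return mdot * v  # newtons, for mass flow mdot (kg/s) at exit velocity v (m/s)

def jet_power(mdot: float, v: float) -> float:
    return 0.5 * mdot * v ** 2  # watts

# Baseline: 100 kg/s of air accelerated to 50 m/s...
t1, p1 = thrust(100, 50), jet_power(100, 50)
# ...versus half the mass flow at twice the velocity
t2, p2 = thrust(50, 100), jet_power(50, 100)

print(t1 == t2)  # True: identical thrust of 5,000 N
print(p2 / p1)   # 2.0: but twice the power -- hence big, slow airflows win
```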
Re: Short answer
BT did not get it for nothing as you put it. The shareholders of BT bought the assets from the government if you remember. The total paid at the time (over three tranches) amounts to about 75% of BT's current total capitalisation before adjusting for inflation. You can't sell something then claim it was given away...
Of course TVs and set top boxes do filter out signals outside their specified range, but the new 4G services will be intruding into frequencies that were previously used for TV, so any general band-pass filter will not get rid of the 4G signal. Of course there is a tuning mechanism, but if there's any wide-band amplification stage before that, it could be overloaded by a nearby 4G transmitter.
However, the primary problem isn't with TVs or set-top boxes; it's with signal boosters, often installed on the masthead, which are often required in marginal areas. These were, of course, designed with a band-pass filter for TV channels. However, now that there are 4G services transmitting on what were previously TV frequencies, these boosters will also amplify the 4G signals. If the 4G transmitter is nearby, it could drive the amplifier into overload and render the Freeview signals unusable due to distortion. As masthead amps were only intended for use on TV (or, sometimes, VHF audio) channels in marginal areas, they may well not have the headroom to cope with a combination of very weak Freeview signals and local 4G ones received at much higher power levels. The designers of these things, often installed many years ago, can't really be blamed for not foreseeing such a radical change in the transmission environment.
The simple solution is a new bandpass filter placed before any signal booster, but as that may be on the masthead, then it involves people up ladders. For a small minority, even this might not work though.
The figure raised by the Treasury in the flotation of BT (over three tranches) was more like £14bn. That was £4bn from tranche 1 and £5bn each from tranches 2 & 3 (the last one of which was at £4.20 per share). Your figure of £25bn is the approximate amount raised by the 3G auction at the height of the telecom bubble.
Of course, £14bn is (inflation adjusted) far higher than BT's current £20bn market capitalisation, so it would appear that the government got good value. However, it's never that simple. Following the split of O2 from the main BT group (with the latter carrying the enormous debts built up largely by developing the then Cellnet, buying out Securicor's share, the purchase of 3G licences and buying out various foreign partners), the shareholders owned both companies. O2 was then taken over by Telefonica, and the shareholders thereby gained about £17.7bn. On that basis, BT shareholders did a bit better than these figures indicate. Then again, it cuts both ways - BT launched a rights issue in 2001 which raised £5.9bn from shareholders. Overall, for those who bought into the original three tranches in proportion, inflation adjusted, it might - just - break even.
Of course the shareholders were somewhat cheated - they were sold shares under one regulatory regime and competition model but the rules were changed enormously in the late 1990s to an increasingly tougher one.
nb. one other thing the government is probably grateful for - they don't have to cover the enormous pension deficit (the liabilities for which were largely incurred pre-privatisation), unlike the position with Royal Mail. That's assuming they aren't silly enough to allow the company to go broke through an over-zealous regulator, of course, in which case a privatisation-time state guarantee will be called upon.
Re: Why is line rental so much?
Wholesale line rental is £94.75 per year (or £7.90 per month) - that's just the cost of the line. However, that doesn't include VAT, which would make it the equivalent of £9.50 per month. The BT Retail price of £15 is equivalent to £12.50 per month net of VAT, so a theoretical £4.60 per month gross margin before operational costs (and the initial Openreach connection costs). However, if you care to pay a year in advance, then the line rental cost is £10.75 per month - or £8.96 net of VAT, allowing a margin of just over £1 per month before any other retail costs.
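The VAT arithmetic is easy to mangle, so here it is spelled out (UK VAT at 20%, prices as quoted; purely a sanity check):

```python
# Retail line-rental margin over the wholesale line cost (UK VAT at 20%)
VAT = 0.20
wholesale_monthly = 94.75 / 12   # wholesale rental, ex VAT (~£7.90/month)

retail_net = 15.00 / (1 + VAT)   # £15 inc VAT -> £12.50 net
print(round(retail_net - wholesale_monthly, 2))   # ~4.6: gross margin per month

prepaid_net = 10.75 / (1 + VAT)  # pay-a-year-in-advance rate, net of VAT
print(round(prepaid_net - wholesale_monthly, 2))  # ~1.06: margin almost gone
```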
Of course the line doesn't have to be retailed by BT - indeed full LLU operators make use of the wholesale costs. However, the fact that no other retail operator cares to rent lines and allow other ISPs to provide the broadband or call services is simply because it isn't economic for them, although the costs would be identical. The margins in the fixed line business are very thin.
Given that Virgin (or rather the original cable franchise holders) targeted areas of high population density, then it's hardly surprising that FTTC/FTTP areas often overlap with VM for similar economic reasons.
Getting back to basics on the purposes of patents.
The simple thing here is for patents to only be maintained where the owner is either exploiting the technology, or has credible plans for its potential use. That could be tested in court. It would immediately get rid of the issue of patent trolls just sitting on patents, often of dubious merit (but still costly to fight in court). In a lot of cases, patent trolls are counting on being paid off as an alternative to a costly action.
It would also be of economic value. It should be remembered that patents, which grant a limited-life monopoly on a technology, were only introduced in order to provide incentives for companies to innovate and develop, by allowing them a window of opportunity to recoup their investment. This temporary monopoly was never intended to provide some form of tradeable capital asset or some form of moral right to "intellectual capital".
Re: Abusing the legislation
There is no legislation as yet. It's still at the consultation stage...
Re: Cal me thick, but...
In that part of London all the services are already underground. That makes the problem of finding space below the pavement in narrow London streets - with sufficient access for an engineer and room for the relevant VDSL DSLAMs and power supplies - even more difficult. (In principle, if PON were used, no power supply would be required, although that would mean running optical fibre to each property requiring the new service.)
nb. last time I looked, overhead supply of power in the US was far more common than in the UK. Indeed much of the skyline seems to be festooned with cables.
Re: Cal me thick, but...
Yes, it could be done, but it would be a great deal more expensive, as any underground chamber would have to be completely waterproof and would require active cooling and good access for engineers - it would be more like a small basement. It would also take a great deal longer to install.
The reason that cost matters is that adding perhaps £100-200K per box (given the cost of underground construction in London - where the pavements and roads are full of services) would make the roll-out financially non-viable unless a very considerable premium could be charged in the borough. As FTTC/FTTP is in competition with cable and exchange-based ADSL services, take-up is likely to be much lower, which would push the required premium higher still.
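To see why the per-box cost dominates, a back-of-the-envelope amortisation helps. All the inputs below except the capex range are illustrative assumptions, not BT figures:

```python
# Illustrative amortisation of an underground FTTC chamber's extra cost.
# Every input here is an assumption for the sake of the example.
extra_capex = 150_000      # mid-point of the £100-200K estimate above
homes_passed = 300         # lines served by one cabinet (assumed)
take_up = 0.25             # fraction actually subscribing (assumed)
payback_years = 10         # period over which to recover the cost (assumed)

subscribers = homes_passed * take_up
premium_per_month = extra_capex / (subscribers * payback_years * 12)
print(f"required premium: £{premium_per_month:.2f}/month per subscriber")
```

Even ignoring financing costs, that works out at well over £15 a month on top of the normal price, and lower take-up makes it worse, since the same capex is spread over fewer subscribers.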
Maintenance is the key.
The BT FTTC boxes look quite good to me, and provided that they are properly maintained they should continue to do so, as they seem to be made of better materials than the standard green cabinets - possibly because they need to be, as they will be stuffed full of active electronics powered off the mains. One of these has just been installed at the bottom of my road and, within a few days, some oik sprayed graffiti on it in silver paint. A few days later it had been cleaned off. Let's hope that they are kept maintained. Given what's in them, it will be in the company's interests to do so.
I've also not seen any that are 6 foot tall - more like shoulder height to an average person. The FTTC cabinets in the adjacent borough of Hammersmith & Fulham look OK to me.
Re: are just scratching the surface
There's a very good reason why multiple read/write heads are not available on HDDs (and it's been tried), and that is because the vibration introduced by moving one read/write head disrupts the others. I'd also imagine that, given the heads "fly" incredibly close to the surface of the disc, aerodynamic interference could be an issue too.
Multiple read/write heads were used in the dim and distant past on a type of disk that used fixed heads (rather like an alternative to drums). These were used as paging store on mainframes back in the 1970s, but were inherently very expensive and had low capacity - even compared with moving-head drives of the same era. I have a vague recollection that ICL's ill-fated CAFS (Content Addressable File Store) of the 1970s made use of multiple read heads. It used logic at the disk controller level to perform searches on data content, but improvements in processor speed meant it was a commercial failure.
(nb. the integration of search logic into disk controllers was once commonplace in the form of CKD - count-key-data - drives, which could embed searchable key fields before every data block. Typically this was used for things such as index data for indexed sequential files, and the searches could be despatched to a channel controller using a very limited, special-purpose "channel program". To this day, IBM mainframe disk controllers have to emulate this function, as "legacy" access methods require it. The norm used to be that programs did not access storage through a file abstraction layer but assembled the channel program directly, as this saved CPU cycles. This still happens in "legacy" programs, but the O/S has long had a role in "vetting" the channel programs for security reasons.
CKD techniques have long been replaced by software and logical block addressing, but the traces still remain)...
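As a rough illustration of the idea (this is a toy model, not real channel-program syntax - the record keys and data are invented): the point of a CKD "search key equal" was that the controller scanned the key fields itself and only transferred the matching data block to the host.

```python
# Toy model of a count-key-data "search key equal" operation.
# Real channel programs are chains of CCWs; this just mimics the idea
# that the search runs in the controller, not on the host CPU.
records = [
    ("CUST0001", b"data for customer 1"),
    ("CUST0042", b"data for customer 42"),
    ("CUST0099", b"data for customer 99"),
]

def search_key_equal(track, key):
    """Scan the key fields on a 'track'; return the data block on a match."""
    for rec_key, data in track:
        if rec_key == key:       # comparison done "at the controller"
            return data          # only the hit is transferred to the host
    return None

print(search_key_equal(records, "CUST0042"))
```

The win was that the host issued one channel program and slept until the hit (or a miss) came back, instead of reading every block and comparing keys itself.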
HDD performance scaling with capacity
100x the (areal) density does not mean 100x the performance on disk. As 100x the areal density corresponds to 10x the linear density, sequential throughput increases by only 10x. Areal density hardly improves random access at all, so IOPS remain virtually identical (IOPS are controlled by mechanical factors, and it's generally recognised that these are already close to the limits of what can be achieved, for reasons such as power consumption, material stability, bearing reliability etc.). Such mechanical issues are subject only to marginal gains.
What's worse is the effect on IO access density - that is, the number of IOPS per GB. For random access, this gets 100x worse for 100x the capacity, and the sequential access speed per GB stored gets 10x worse. It already takes several hours to read a full disk in the 2-3TB region. For a 200-300TB drive the time taken would be measured in days. This would mean that a rebuild of a RAID set using such drives would take many days - it's bad enough with current disks in the TB region.
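The scaling argument can be made concrete with rough numbers (the 150 MB/s sustained rate for a current multi-TB drive is an assumed, ballpark figure):

```python
# How full-disk read time scales if areal density goes up 100x.
# Sequential rate scales with linear density, i.e. only 10x (the square
# root of the areal gain); the 150 MB/s baseline is an assumption.
current_capacity = 3e12          # 3 TB, in bytes
current_rate = 150e6             # ~150 MB/s sustained (assumed)

future_capacity = current_capacity * 100   # 300 TB
future_rate = current_rate * 10            # only 10x faster sequentially

hours_now = current_capacity / current_rate / 3600
days_future = future_capacity / future_rate / 86400
print(f"full read today: {hours_now:.1f} hours")
print(f"full read at 100x density: {days_future:.1f} days")
```

Roughly five and a half hours today versus well over two days at 100x the density - and that is the best case of a pure sequential scan, before any RAID rebuild overhead or competing random IO.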
The inherent problem with physical disk drives is one of basic geometry combined with physical material limitations. No doubt there are some types of data where access requirements are low enough for this to be tolerable, but it is already a major problem.
As far as flash write performance is concerned, it may not be as good as read performance, but it is still incomparably better than physical drives, especially when combined with intelligent controllers, write caching etc. (albeit HDDs benefit from this too).
nb. in the unlikely event that 100x the areal density was achievable, it's most likely that this would result in smaller form factor drives, the continuance of a current trend. However, costs do not reduce linearly with reduced form factor, as individual unit complexity remains high, and it's inconceivable that there will be (say) 10x the number of physical units in the space occupied by a current one.
So where is the justification that this saves 97% of data centre cooling costs? I suppose you might (just) make a tenuous case that it saves 97% of the associated energy costs, but given that this solution appears to require fitting special servers, storage devices and comms equipment (all of which produce lots of heat) into this thermally conductive material, plus a whole lot of new infrastructure, I would think the up-front costs are going to be very significant. Even if it can use simple heat exchangers in the outside environment, they are still going to have to be very large because, unless you have a convenient nearby water source, everything ultimately depends on how fast heat can be transferred to the air.
Then there is all the practical stuff about being able to move and upgrade equipment without this liquid leaking out everywhere. It's possible to see how this might work with something large and fixed (like an old-fashioned mainframe - some were liquid-cooled in the past), but it is rather problematic with smaller kit. Maybe somebody can design a blade server of some sort with tight thermal coupling to this liquid-cooled infrastructure, but it is not going to be easy.
I rather think that a far more important approach is to improve the energy efficiency of IT infrastructure as that reduces total energy costs, not just those of cooling.
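A quick PUE-style calculation shows why the 97% claim needs qualifying, and why IT efficiency matters at least as much (the PUE of 1.5 used here is an assumed, fairly typical figure, not one from the article):

```python
# What "97% of cooling energy" means for the total bill, assuming a
# PUE of 1.5: for every 1.0 unit of IT energy, 0.5 units go on cooling.
it = 1.0
cooling = 0.5              # assumed PUE of 1.5
total = it + cooling

# Saving 97% of the cooling energy only touches the cooling slice:
cooling_saving = 0.97 * cooling
print(f"cooling-only saving: {cooling_saving / total:.0%} of total energy")

# Making the IT kit 30% more efficient shrinks the cooling load too,
# since cooling demand scales with the heat the IT kit produces:
it_saving = 0.30 * it + 0.30 * cooling
print(f"30% IT efficiency gain: {it_saving / total:.0%} of total energy")
```

Under these assumptions, even a perfect cooling fix saves around a third of the total bill, while efficiency gains on the IT side cut both the IT and the cooling slices at once - which is the point being made above.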
Re: Water has a refractive index of about 1.33; a brick would be much higher.
Refractive index is a completely different thing to opacity. A brick is moderately transparent in some parts of the electromagnetic spectrum (think of parts of the radio spectrum, for instance). As it happens, refractive index is just the ratio of the speed of light in vacuum to its propagation speed in the medium, and is not directly related to opacity. For most media the speed of light is also a function of wavelength (or, equivalently, frequency). That's why, in most materials transparent to light, a degree of dispersion occurs between different colours. Indeed, that's a potential issue with fibre optics, which is why the narrow spectral emission lines of lasers are preferred.
Anyway, the upshot of this is that the refractive index of brick, in those parts of the spectrum where it's relevant, is nothing like infinite. (For parts of the spectrum where it is opaque, refractive index is a meaningless term.) For very high refractive indices you need a Bose-Einstein condensate, not a brick...
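The relationship described above - refractive index as the ratio of the vacuum speed of light to the speed in the medium - is easy to check numerically (the fibre index of ~1.47 is an assumed typical value for silica):

```python
# Refractive index n = c / v, so the speed in a medium is v = c / n.
C = 299_792_458            # speed of light in vacuum, m/s

def speed_in(n):
    """Propagation speed of light in a medium of refractive index n."""
    return C / n

print(f"water (n=1.33): {speed_in(1.33):.3e} m/s")
print(f"silica fibre (n~1.47, assumed): {speed_in(1.47):.3e} m/s")
```

So light in water still travels at about three-quarters of its vacuum speed - nowhere near the near-standstill that an "infinite" refractive index would imply.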
Probably they are measuring the round-trip time. It's a good idea to get confirmation from the far end...
I always worked on a latency of 1ms per 100km (60 miles) round trip.
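That rule of thumb is consistent with light in glass fibre travelling at roughly two-thirds of c (the refractive index of ~1.47 for silica fibre is an assumed typical value):

```python
# Rule-of-thumb check: round-trip latency over 100 km of fibre.
C = 299_792_458                   # speed of light in vacuum, m/s
v = C / 1.47                      # ~2.0e8 m/s in silica fibre (assumed n)

distance = 100_000                # 100 km one way, in metres
round_trip_ms = 2 * distance / v * 1000
print(f"round trip over 100 km: {round_trip_ms:.2f} ms")
```

That comes out at a shade under 1ms, ignoring any switching or serialisation delay along the way, so 1ms per 100km is a sensible planning figure.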