Of course TVs and set top boxes do filter out signals outside their specified range, but the new 4G services will be intruding into frequencies that were previously used for TV, so any general band-pass filter will not get rid of the 4G signal. Of course there is a tuning mechanism, but if there's any wide-band amplification stage before that, it could be overloaded by a nearby 4G transmitter.
However, the primary problem isn't with TVs or set-top boxes, it's with signal boosters, often installed on the masthead, which are frequently required in marginal areas. These were, of course, designed with a band-pass filter covering the TV channels. Now that 4G services will be transmitting on what were previously TV frequencies, these boosters will amplify the 4G signals too. If the 4G transmitter is nearby, it could drive the amplifier into overload and render the Freeview signals unusable due to distortion. As masthead amps were only intended for use on TV (or, sometimes, VHF audio) channels in marginal areas, they may well not have the headroom to cope with a combination of very weak Freeview signals and local 4G ones received at much higher power levels. The designers of these things, often installed many years ago, can't really be blamed for not foreseeing such a radical change in the transmission environment.
The simple solution is a new bandpass filter placed before any signal booster, but as that may be on the masthead, it involves people up ladders. For a small minority, even this might not work.
The figure raised by the Treasury in the flotation of BT (over three tranches) was more like £14bn. That was £4bn from tranche 1 and £5bn each from tranches 2 & 3 (the last one of which was at £4.20 per share). Your figure of £25bn is the approximate amount raised by the 3G auction at the height of the telecom bubble.
Of course, £14bn is (inflation adjusted) far higher than BT's current £20bn market capitalisation, so it would appear that the government got good value. However, it's never that simple. Following the split of O2 from the main BT group (with the latter carrying the enormous debts built up largely by developing the then Cellnet, buying out Securicor's share, purchasing 3G licences and buying out various foreign partners), the shareholders owned both companies. O2 was then taken over by Telefonica, and shareholders thereby gained about £17.7bn. On that basis, BT shareholders did a bit better than these figures indicate. Again, though, it's not that simple - BT launched a rights issue in 2001 which raised £5.9bn from shareholders. Overall, for those who bought into the original three tranches in proportion, inflation adjusted, it might - just - break even.
Of course the shareholders were somewhat cheated - they were sold shares under one regulatory regime and competition model, but the rules were changed enormously in the late 1990s to an increasingly tougher regime.
nb. one other thing the government is probably grateful for - they don't have to cover the enormous pension deficit (the liabilities for which were largely incurred pre-privatisation), unlike the position with Royal Mail. That's assuming they aren't silly enough to allow the company to go broke through an over-zealous regulator, of course, in which case a privatisation-time state guarantee would be called upon.
Re: Why is line rental so much?
Wholesale line rental is £94.75 per year (or £7.90 per month) - that's just the cost of the line, and it doesn't include VAT, which would bring it to the equivalent of £9.50 per month. The BT Retail price of £15 is equivalent to £12.50 per month net of VAT, so a theoretical £4.60 or so per month margin before operational costs (and the initial Openreach connection costs). However, if you care to pay a year in advance, then the line rental cost is £10.75 per month - or about £8.95 net of VAT, allowing for a margin of just over £1 per month before any other retail costs.
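A quick back-of-envelope check of those figures (a minimal sketch in Python; the 20% VAT rate and the prices quoted above are the only inputs):

# Rough margin check on the line rental figures quoted above (assumes 20% VAT).
VAT = 1.20

wholesale_monthly = 94.75 / 12            # ~7.90/month, ex VAT
retail_monthly_ex_vat = 15.00 / VAT       # ~12.50/month
advance_monthly_ex_vat = 10.75 / VAT      # ~8.96/month if paid a year up front

print(f"Wholesale (ex VAT): {wholesale_monthly:.2f}")
print(f"Standard margin:    {retail_monthly_ex_vat - wholesale_monthly:.2f}")   # ~4.60
print(f"Advance margin:     {advance_monthly_ex_vat - wholesale_monthly:.2f}")  # ~1.06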
Of course the line doesn't have to be retailed by BT - indeed full LLU operators make use of the wholesale costs. However, no other retail operator cares to rent lines and allow other ISPs to provide the broadband or call services, simply because it isn't economic for them, even though the costs would be identical. The margins in the fixed line business are very thin.
Given that Virgin (or rather the original cable franchise holders) targeted areas of high population density, then it's hardly surprising that FTTC/FTTP areas often overlap with VM for similar economic reasons.
Getting back to basics on the purposes of patents.
The simple thing here is for patents to only be maintained where the owner is either exploiting the technology, or has credible plans for its potential use. That could be tested in court. It would immediately get rid of the issue of patent trolls just sitting on patents, often of dubious merit (but still costly to fight in court). In a lot of cases, patent trolls are counting on being paid off as an alternative to a costly action.
It would also be of economic value. It should be remembered that patents, which grant a limited-life monopoly on a technology, were only introduced in order to provide incentives for companies to innovate and develop by allowing them a window of opportunity to recoup their investment. This temporary monopoly was never intended to provide some form of tradeable capital asset or some form of moral right to "intellectual capital".
Re: Abusing the legislation
There is no legislation as yet. It's still at the consultation stage...
Re: Cal me thick, but...
In that part of London all the services are already underground. That makes the problem of finding underground space below the narrow pavements, with sufficient access for an engineer and room for the relevant VDSL DSLAMs and power supplies, even more difficult. (In principle, if PON was used, no power supply would be required, although that would require running optical fibre to each property requiring the new service.)
nb. last time I looked, overhead supply of power in the US was far more common than in the UK. Indeed much of the skyline seems to be festooned with cables.
Re: Cal me thick, but...
Yes, it could be done, but it would be a great deal more expensive, as any underground chamber would have to be completely waterproof and have active cooling and good access for engineers. You might expect it to be more like a small basement. It would also take a great deal longer to install.
The reason that cost matters is that adding perhaps £100-200K per box (given the cost of underground construction in London - where the pavements and roads are full of services) would make the roll-out financially non-viable unless a very considerable premium could be charged in the borough. As FTTC/FTTP is in competition with cable and exchange-based ADSL services, take-up is likely to be much lower, which would push that premium up even further.
Maintenance is the key.
The BT FTTC boxes look quite good to me, and provided that they are properly maintained they should continue to do so, as they seem to be made of better materials than the standard green cabinets - possibly because they need to be, being stuffed full of active electronics powered off the mains. One of these has just been installed at the bottom of my road and, within a few days, some oik sprayed graffiti on it in silver paint. A few days later it had been cleaned off. Let's hope that they are kept maintained. Given what's in them, it will be in the company's interests to do so.
I've also not seen any that are 6 foot tall - more like shoulder height to an average person. The FTTC cabinets in the adjacent borough of Hammersmith & Fulham look OK to me.
Re: are just scratching the surface
There's a very good reason why multiple read/write heads are not available on HDDs (and it's been tried) and that is because the vibration introduced by moving one read/write head disrupts the others. I'd also imagine that given that the heads "fly" incredibly close to the surface of the disc, that aerodynamic interference could also be an issue.
Multiple read/write heads were used in the dim and distant past on a type of disk that used fixed heads (rather like an alternative to drums). These were used as paging store on mainframes back in the 1970s, but were inherently very expensive and had low capacity - even compared with moving head drives of the same era. I have a vague recollection that ICL's ill-fated CAFS (Content Addressable File Store) of the 1970s made use of multiple read heads. It used logic at the disk controller level to perform searches on data content, but improvements in processor speed meant it was a commercial failure.
(nb. the integration of search logic into disk controllers was once commonplace in the form of CKD - count-key-data - drives, which could embed certain searchable data into key fields before every data block. Typically this was used for things such as index data for indexed sequential files, and programs to search for such data could be despatched to a channel controller using a very limited and special "channel program". To this day, IBM mainframe disk controllers have to emulate this function as "legacy" access methods require it. The norm used to be that programs did not access storage by going through a file abstraction layer, but assembled the channel program directly, as this saved CPU cycles. This still happens in "legacy" programs, but the O/S has long had a role in "vetting" the channel programs for security reasons.
CKD techniques have long been replaced by software and logical block addressing, but the traces still remain)...
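Purely as a conceptual illustration of that controller-level searching (a minimal sketch in Python, not real channel-program or CCW syntax; the record layout and names are made up for illustration):

# Conceptual count-key-data layout: each record carries a count area, an
# optional key area and the data area; a "search key equal" could be run by
# the control unit so the host only sees matching records.
from dataclasses import dataclass

@dataclass
class CkdRecord:
    count: tuple      # (cylinder, head, record, key_len, data_len)
    key: bytes        # searchable key field, e.g. an ISAM index entry
    data: bytes

def search_key_equal(track: list[CkdRecord], wanted: bytes) -> CkdRecord | None:
    # In real hardware this scan happened in the control unit, not the CPU.
    for rec in track:
        if rec.key == wanted:
            return rec
    return None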
HDD performance scaling with capacity
100x the (areal) density does not mean 100x the performance on disk. As 100x the areal density corresponds to 10x the linear density, sequential throughput increases by only 10x. Areal density hardly improves random access at all, so the IOPS are virtually identical (IOPS are controlled by mechanical factors, and it's generally recognised that these are already close to the limits of what can be achieved for reasons such as power consumption, material stability, bearing reliability etc.). Such mechanical issues are subject only to marginal gains.
What's worse is the effect on I/O access density - that is, the number of IOPS per GB. For random access, this gets 100x worse for 100x the capacity; the sequential access speed per GB stored gets 10x worse. Already it takes several hours to read a full disk in the 2-3TB region. For a 200-300TB drive the time taken would be measured in days. This would mean that a rebuild of a RAID set using such drives would take many days - it's bad enough with current disks in the TB region.
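To put rough numbers on that (a minimal sketch in Python; the 150MB/s sustained rate and the 10x/100x scaling step are illustrative assumptions, not measured figures):

# Illustrative full-disk sequential read times, assuming a 100x areal density
# step gives 100x the capacity but only 10x the sequential throughput.
def full_read_hours(capacity_tb, throughput_mb_s):
    return capacity_tb * 1e12 / (throughput_mb_s * 1e6) / 3600

today = full_read_hours(3, 150)       # ~5.6 hours for a 3TB drive at 150MB/s
future = full_read_hours(300, 1500)   # ~56 hours (over 2 days) at 100x capacity, 10x speed

print(f"3TB @ 150MB/s:   {today:.1f} h")
print(f"300TB @ 1.5GB/s: {future:.1f} h ({future/24:.1f} days)")
# IOPS barely change over the same step, so IOPS per GB fall by ~100x.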
The inherent problem with physical disk drives is one of basic geometry combined with physical material limitations. No doubt there are some types of data where access requirements are low enough that this will be tolerable, but it is already a major problem.
As far as flash write performance is concerned, it may not be as good as read performance, but it is still incomparably better than physical drives, especially when combined with intelligent controllers, write caching etc. (albeit HDDs benefit from this too).
nb. in the unlikely event that 100x the areal density was achievable, it's most likely that this would result in smaller form factor drives, continuing a current trend. However, costs do not reduce linearly with reduced form factor as individual unit complexity remains high, and it's inconceivable that there will be (say) 10x the number of physical units in the space taken by a current one.
So where is the justification that this saves 97% of data centre cooling costs? I suppose you might (just) make a tenuous case that it saves 97% of the associated energy costs, but given this solution appears to require fitting special servers, storage devices and comms equipment (all of which produce lots of heat) into this thermally conductive infrastructure, plus a whole lot of new plumbing, I would think the up-front infrastructure costs are going to be very significant. Even if it can use simple heat exchangers in the outside environment, they are still going to have to be very large as, unless you have a convenient nearby water source, it will ultimately depend on how fast heat can be transferred to the air.
Then there is all the practical stuff about being able to move and upgrade equipment without this liquid leaking out everywhere. It's possible to see how this might work with something large and fixed (like an old-fashioned mainframe, which used to be liquid cooled in the past), but it's rather problematic with smaller stuff. Maybe somebody can design a blade server of some sort with tight thermal coupling to this sort of liquid-cooled infrastructure, but it is not going to be easy.
I rather think that a far more important approach is to improve the energy efficiency of IT infrastructure as that reduces total energy costs, not just those of cooling.
Re: Water has a refractive index of about 1.33; a brick would be much higher.
Refractive index is a completely different thing to opacity. A brick is moderately transparent in some parts of the electromagnetic spectrum (think of parts of the radio spectrum, for instance). As it happens, refractive index is just the ratio of the propagation speed of light in vacuum to that in the medium, and is not directly related to opacity. For most media the speed of light is also a function of wavelength (or frequency, of course). That's why, in (most) materials transparent to light, a degree of dispersion occurs with different colours. Indeed, that's a potential issue with fibre optics, which is why the narrow spectrum emission lines of lasers are preferred.
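As a quick illustration (a minimal sketch using the textbook definition; the water figure in the title above is the only input):

# Refractive index n = c / v, so v = c / n. It says nothing about absorption.
c = 299_792_458            # speed of light in vacuum, m/s
n_water = 1.33
v_water = c / n_water      # ~2.25e8 m/s - light in water is slower, not dimmer

print(f"Speed of light in water: {v_water:.3e} m/s ({1/n_water:.0%} of c)")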
Anyway, the upshot of this is that the refractive index of brick, in those parts of the spectrum where it's relevant, is nothing like infinite. (For parts of the spectrum where it is opaque, refractive index is a meaningless term.) For very high refractive indices you need a Bose-Einstein condensate, not a brick...
Probably they are measuring the round-trip time. It's a good idea to get confirmation from the far end...
I always worked on a latency of 1ms per 100km or 60 miles (round trip).
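That rule of thumb drops straight out of the speed of light in glass (a minimal sketch; the fibre refractive index of roughly 1.47 is the assumption):

# 1ms per 100km round trip, assuming light in fibre travels at roughly c/1.47.
c = 299_792_458                      # m/s in vacuum
v_fibre = c / 1.47                   # ~2.0e8 m/s in glass
rtt_ms = 2 * 100_000 / v_fibre * 1000

print(f"100km round trip: {rtt_ms:.2f} ms")   # ~0.98 ms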
I'd be concerned if potentially libellous comments in the transcripts were turned into a bean feast for lawyers. Yet more taxpayers' money up in smoke (the licence fee is, technically speaking, a hypothecated tax).
That's not to say that those responsible should be protected, but it's not obvious that publishing the redacted bits would achieve that. It might just end up with yet more expensive legal fees.
Re: Oh god
You're talking complete nonsense. You can sample as fast as you like and as deep as you like, but unless the original analogue source has sufficient bandwidth and SNR you are just digitising noise. The signal to noise ratio of a vinyl LP is, at best, 60-70dB, and that's assuming it's absolutely pristine, pressed from a top quality master in prime condition (and most will not be after a few thousand pressings) and the vinyl is of the best quality. Those sort of discs are expensive, and are damaged during playback. Bandwidth is limited by a mixture of the materials used, the ability to actually cut masters with very high frequencies and the ability of vinyl to actually record such fine details. A 20kHz signal near the inside of an LP will require features only about 12 microns in size. Indeed, in producing masters for vinyl pressing, higher frequencies will actually be filtered out as the presence of such signals can cause other problems.
It has been mathematically proved that it's only necessary to sample at twice the highest frequency component in order to reproduce the waveform. Go beyond that, and you are digitising randomness. There is some utility in higher sampling rates and bit depths in the actual mastering, but purely to avoid "aliasing" problems in the production process (which these days tends to be done using digital, not analogue, processing such as filters). There's absolutely no point if the aim is just to digitise vinyl output, as there simply isn't sufficient information in the analogue output to justify it.
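The arithmetic behind that (a minimal sketch; the 6.02dB-per-bit rule of thumb and the ~70dB vinyl figure above are the inputs):

import math

# Nyquist: to capture everything up to 20kHz you need to sample at >= 40kHz.
highest_audible_hz = 20_000
min_sample_rate = 2 * highest_audible_hz        # 40kHz; CD uses 44.1kHz

# Dynamic range of linear PCM is roughly 6.02*bits + 1.76 dB.
cd_dynamic_range = 6.02 * 16 + 1.76             # ~98dB for 16-bit CD

# Bits actually needed to cover a ~70dB vinyl SNR:
bits_for_vinyl = math.ceil((70 - 1.76) / 6.02)  # ~12 bits

print(min_sample_rate, round(cd_dynamic_range, 1), bits_for_vinyl)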
This is quite apart from the little matter of the ear being unable to distinguish signals above about 20kHz - and the ability to hear high frequencies deteriorates with age anyway.
These aren't negotiable things - they are fundamental to information theory and the nature of the materials in use. Vinyl pressings simply don't contain more information than a CD - indeed, quite the reverse.
Re: People forget...
"it could nowhere near beat the more detailed sound quality". Oh dear, an audophile using the sort of vague, undefined qualitative words like "detailed" without any form of technical definition or measurability in an area which has been thoroughly defined by Claude Shannon (and others). This is the audio version of homeopathy or aromatherapy.
Possibly the author has mixed up production quality with reproduction capability, as many CDs were poorly produced. However, that's nothing to do with the technical quality of the media and everything to do with poor techniques and some faults in some early digitisation processes.
Also, the vast majority of audio (even that committed to vinyl) is now digitally processed, albeit using higher sampling rates and bit depths than CD. However, these higher rates are only really required during the production process, which inevitably reduces the original information content (as with digital photography), so the higher initial resolution helps avoid aliasing effects. Once committed, the differences between CD rates and higher rates are inaudible - unless you count maxing out the volume control in very quiet parts, when you might just hear some quantisation noise; but then your ears would bleed during the louder sections.
Most likely, reported differences are down to care during the recording/production process, which vastly outweighs technical differences in the digital replay (assuming it's been done competently).
Re: Oh god
It's worth noting that analogue recordings are subject to a form of compression vs capacity trade-off. If the producer wishes to squeeze a longer run time onto a side of a vinyl record, this can only be achieved by reducing the modulation, especially at the bass end of the spectrum, or adjacent parts of the spiral groove will interfere. For this reason, many "budget" compilation vinyls sounded rather "thin", as the bass had to be disproportionately reduced in amplitude. Taken to extremes, this has to be done further up the frequency range too (that is, amplitude reduced) and thus the SNR will suffer. So compression is an issue on vinyl as well - it's a trade-off with capacity.

Note that vinyl recordings roll off amplitude at lower frequencies anyway, using the RIAA equalisation curve, or the lower frequencies would lead to adjacent parts of the groove "interfering" with one another as the physical modulation has to be greater. The lower frequencies were boosted on replay, at least with magnetic pickups; to a certain extent ceramic/crystal pickups provided their own natural boosting of bass frequencies. This boosting of bass frequencies can lead to "rumble" being heard, where low frequencies due to the turntable's rotation are also boosted.
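For reference, the scale of the replay boost can be sketched numerically (a minimal sketch in Python; the 3180/318/75 microsecond time constants are the commonly published RIAA values, and the normalisation to 1kHz is my own choice):

import math

# RIAA playback (de-emphasis) response: zero at 318us, poles at 3180us and 75us.
T1, T2, T3 = 3180e-6, 318e-6, 75e-6

def riaa_playback_db(f):
    s = 1j * 2 * math.pi * f
    h = (1 + s * T2) / ((1 + s * T1) * (1 + s * T3))
    return 20 * math.log10(abs(h))

ref = riaa_playback_db(1000)   # normalise to 1kHz
for f in (20, 1000, 20000):
    print(f"{f:>5} Hz: {riaa_playback_db(f) - ref:+.1f} dB")  # ~+19.3, 0.0, ~-19.6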
Note that this is a rather different issue to dynamic range compression, which is driven by the record producer's wish to boost the quieter parts of musical tracks to make them sound "louder" or "brighter". This is a plague. Radio stations tend to do this as well, in order to hide the often rather poor SNR of the broadcast medium in marginal reception areas - or, for that matter, listening in noisy environments.
I think it best to say that vinyl sounds notably different to CD reproduction, although, in theory, a CD recording of a vinyl playback ought to sound identical assuming that it's done competently. Thus the vinyl lover should be able to hear all the peculiarities of the vinyl recording and playback without damaging their precious bit of plastic...
I rather suspect a few gun-toting hunting types will make short work of any peeping tom drones. Perhaps the American obsession with guns might have an upside after all.
Debatable bid for immortality.
Species go extinct. That will include humans; get used to the idea, although it will be almost certainly irrelevant for those of us alive today. In the meantime, maybe we ought to concentrate on making sure this planet doesn't become uninhabitable through our own actions.
Terrestrial TV doomed
The days of terrestrial broadcast TV are numbered. Once 90% of the population can get their TV through broadband, Ofcom will no doubt want to flog off the entire TV spectrum. The proportion that either can't get broadband at the right speed or don't care to pay for it can be served through Freesat.
Only radio will survive as a terrestrial broadcast service.
Re: I want the job
Untrue - our universities are stuffed full of academics looking at those subjects. We may have no active volcanoes, but we were most certainly affected by a recent one in Iceland, and a lot of work has gone into studying the effects on air traffic, forecasting ash clouds and determining under what conditions aircraft might still fly scheduled services.
As for earthquakes, you clearly missed the fuss over fracking and the rules that have been put in place regarding "artificial" quakes.
Also, potentially vulnerable infrastructure is assessed against these risks - including tsunamis. Following Fukushima, coastal nuclear power stations were further risk assessed against the danger of tsunami. Indeed, here is the official report.
Whilst we don't have an early warning system for tsunami, there are those campaigning for one.
Re: I want the job - missed link
Oops - missed the relevant link
Re: I want the job
According to the following report, there were about 45,000 fires in dwellings in Britain in 2010/11. Given that there were something over 26m households in the country, that implies a chance of a fire in any one dwelling in any one year of rather less than 1 in 550.
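Rough arithmetic with those figures (a minimal sketch; the 26m household count quoted above is the assumption):

# Chance of a dwelling fire per household per year, from the figures above.
fires = 45_000
households = 26_000_000

odds = households / fires
print(f"About 1 in {odds:.0f} per year")   # ~1 in 578, i.e. rather less than 1 in 550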
I assume on the basis you think a 1 in 200 chance per annum is not worth spending money on, that you have also forgone fire insurance on any property you might own.
Re: Helium hype
"The molecules are small and have the bad tedency to 'tunnel' through every known enclosing."
Helium exists as individual atoms, not molecules...
In any event, it's perfectly possible to design a container to hold helium at atmospheric pressure which will be more than sufficient for the expected lifetime of a hard drive. Helium diffusion rates through solids are only about 3x higher than those of ordinary air. You can, however, reasonably expect any such drive to be hermetically sealed with no spindles penetrating the casing.
Re: rather than the technical ability to do it.
You might like to support your point by providing some references, especially as you claim this has appeared in The Register.
If it's Windows indexing that you are referring to, then there are some sites that suggest turning it off on SSDs to reduce the amount of write I/O (and hence prolong the drive's useful life), but there's precious little to suggest that SSD performance degrades to HDD levels (though some older firmware/controllers did have issues with degrading performance over time). That's an issue with a particular platform rather than an indexing limit per se.
As far as the fundamentals of HDD performance are concerned, ever larger HDDs are condemned to ever poorer performance per GB, simply because capacity increases with the (areal) bit density whilst sequential throughput only increases with the square root of bit density. Worse, random access barely changes at all with increasing density. Indeed, for the very highest densities, spin speed has to be reduced in order to be able to read and (especially) write data reliably. The fastest (15K RPM) hard drives are limited to smaller capacities than slower drives. There is also very little prospect of any significant increase in spin speed due to limits being reached in available materials and excessive power consumption (albeit helium will help - a bit).
The HDD limitations are inherent - they come down to basic geometry and materials. In contrast, SSDs are not so confined, as it is possible (at least in principle) to exploit more parallelism. Also, there is the possibility of SSDs based on non-flash devices (albeit that this is speculative - for the moment there is development possible in flash and controllers).
I should add that there are multi-TB SSDs available in the form of PCI-e cards with appropriate drivers. Those are most certainly sold on performance.
Re: The question I want to ask is...
"Why doesn't Intel just buy ARM?"
Quite simply because there won't be a competition authority in the world that would allow such a thing, due to very obvious monopoly issues. Of course the way is open for another company to buy ARM (Apple has been mentioned in the past), but that might prove tricky as any threat to the independence and customer neutrality of ARM could undermine the entire basis of its business as customers started exploring strategic alternatives.
ARM Holdings a juggernaut?
I think you might reasonably consider the huge international ARM sub-economy a juggernaut, but I don't think that word can be used to describe ARM Holdings. Whilst ARM Holdings is highly profitable in terms of profit margin, it is a tiny (if technologically critical) part of the whole ARM sub-economy, let alone the whole microprocessor market.
Of course it's this very point - that the ARM IPRs are so cheap to license and produce - that makes it so very difficult for Intel to compete. Short of Intel virtually giving away technology (with consequent issues about anti-competitive behaviour), it's difficult to see how the company could repeat what it managed in the PC/server market. Even if it did, the returns would be tiny compared to what it has been used to.
SSD vs HDD per unit capacity
Whilst the price per unit argument for HDDs holds up, I'm not sure the capacity per unit one does. There is this 4TB unit which fits a standard 3.5" form factor.
Also bear in mind that "standard" SSD units tend to be 2.5" (or smaller). There is this 2TB 2.5" SSD reported, the same capacity as the largest HDDs in the same form factor (indeed the 2TB HDDs are 12.5mm thick vs the 9.5mm reported for the SSD).
If you can get 4TB in a standard 3.5" enclosure and 2TB into 2.5", then data density is not going to be the deciding factor for HDD. It's also likely that there is more scope for increasing data density on SSD than there is with HDD given the relative rate of development.
The reason that large capacity SSDs are thin on the ground is probably more a matter of cost (and consequent demand) rather than the technical ability to do it.
Re: way rad consumer devices
Indeed there are some radioactive consumer devices, but the rate of activity is tiny. To put this in perspective, there will be of the order of 10^21 helium atoms in a 3.5" disk drive enclosure. Replenish just 1% of those a year, and that's about 10^19 alpha particles required. If we use Americium-241 as a source - as commonly used in smoke detectors - then each decay yields 5 alpha particles. So that means approximately 2x10^18 nuclei will have to decay in a year. That's about 0.5g of decayed Americium per year. With a half life of over 400 years, it will require over a hundred grams of the element. In comparison, an average smoke detector only has about 0.3 micrograms.
So the equivalent radioactivity of several hundred million smoke detectors would be required to replenish just 1% of the helium required for one of these drives in a year. It would also generate about 10 watts of radioactive power. Released into the environment, that could rather spoil somebody's day...
Re: Helium leakage...
Ionising radiation made up of high energy alpha particles in a sealed box with highly sensitive semiconductor heads and minute magnetic regions. What could possibly go wrong?
nb. it would take quite a lot of something highly radioactive to produce alpha particles (and hence helium) in any appreciable quantity. It's a non-starter, but a nice bit of lateral thinking...
Re: Why not just evacuate the drive
You cannot operate a hard disk drive in a vacuum as they require the read/write head to "fly" over the surface of the platter using aerodynamic effects. Even if it was possible to engineer a lightweight head and arm assembly accurately enough to skim just a few microns above the platter surface (which is extremely unlikely), it would be impossible to provide the sort of cushioning effect against mechanical shocks that a thin film of gas gives.
Why don't people do just a little research before posting comments like this? I've lost count of the number of times this particular suggestion has been made at various times.
To add to this, after a bit of research I find that Nissan are to release the first ever production "steer-by-wire" car some time in 2013.
I've never heard of a mainstream car with purely electric steering. However, there are many with electric power assisted steering. (There are some specialist vehicles with pure fly-by-wire electric steering, but I can't find any cars of that sort - and it strikes me as unwise).
Re: Arse-end of nowhere != crap speeds
There is, indeed, no reason why fibre can't be run in from another exchange if there's suitable ducting. Indeed, I believe it is not uncommon for FTTC to be provided that way. Given that, it might well be cost effective to provide FTTC to your village if there's a sufficient concentration of housing, as it would not be necessary to upgrade your local exchange.
I'd concur that you need stats, especially downstream attenuation and SNR margin (sometimes called noise margin), ideally logged over a period of time.
Personally I would suspect either a line fault or a local source of noise. I recently suffered from a problem due to ingress of water into an underground joint (when my estate was built in the 1970s, they just buried the joints). It started out as intermittent issues and gradually got worse (audible as crackling on the line). During the final death throes, it caused so many disconnections that the line got stuck on a 256kbps profile, which did not automatically reset when the fault was cleared (that involved digging up the pavement). It required quite a lot of persistence to get through to a second level support team - generally call centre staff are just following scripts. It's the service provider's job to chase this sort of stuff through (it's unclear if the service provider here is actually BT Retail or not, but all SPs should have the ability to get at basic ADSL quality stats and reset DS profiles).
I'm about 3km from my local exchange (and the cable length is more like 3.5-4km) and get 6.5Mbps on ADSL2+ with 49dB attenuation and a 6dB SNR margin. However, I've been very careful with internal wiring, filtering at the faceplate and using twisted pair cable to get to the router/modem. Many people will get a lot worse at 49dB attenuation.
Another place to look is the service provider's user forums (assuming they have such a thing). Often there are SP reps there, and you tend to get more immediate attention from other users, who are often quite experienced. In an ideal world, SPs would have such skilled staff on the call centre. Unfortunately, with the pressure to get costs down, this is rarely the case.
Re: 300 TB HDDs are waiting in the lab, but are being suppressed by the marketing dept, eh?
The 100mpg carburettor at least (or rather, the 100mpg fuel injection car) is meant to be here within the next few years. Perhaps the others will follow.
In any event, 300TB HDDs would just exacerbate the fundamental problem that capacity goes up with the square of the linear bit density whilst sequential access only goes up linearly with it, and random access barely improves at all. Simply put, the larger the HDD the worse the I/O bottleneck becomes, as capacity increases at a much faster rate than the ability to access the data. It doesn't much matter if the drive is filled with air or helium. You simply can't spin the platters any faster, as they are pulled out of shape by the forces involved, the drives become unstable and bearings give out.
It is not the "BBC licence fee"
It is not "BBC licence fee". The TV licence for reception equipment is, indeed, a hypothecation tax (a rarity in the UK), but there is nothing in the legislation which defined the revenues as actually belonging to the BBC, whatever the corporation might want us to believe. The revenues are apportioned according to the annual appropriation act for the Department of Culture, Media and Sport and there is nothing that requires it to be only allocated to the BBC. It could, for instance, be used to fund other public broadcasting should there ever be a political will to do so. For instance, C4 is a public service TV broadcaster (which is a largely advertising funded publicly owned corporation, and not a commercial company). Of course it's in the BBC's interest to represent the licence fee as somehow belonging to it, but that's not what the statutes say. One might argue that it would serve the public interest if more public service broadcasters were supported.
Decouple handset recovery costs from data/call packages
Why not enforce de-coupling of the cost of the so-called subsidised handset from the call package itself? The handset recovery cost is completely predictable, and that portion could be allowed to continue should the subscriber decide to walk away because the call/data package price increased during the contract period.
Of course equipment promotional offers have been subsidised by artificially high termination charges into mobile networks for years. Effectively those on fixed line networks have been subsidising handsets for years on the basis that Ofcom has allowed these as promotional costs. That's a disgraceful situation which is taking years to be resolved (all so Ofcom can artificially promote the mobile market through cross-subsidies).
As for the claim that mobile operators can't reasonably forecast their costs for a period of 24 months, then that's plain ridiculous (aside from VAT changes - which obviously don't need to be in the fixed charge part). Even if they are so incompetent as to be unable to do it, then that's just what's called a commercial risk.
Of course the truth of it is that they want such a retrospective pricing policy so they can adjust their cost recovery models to allow for the cross-subsidising of new customers by old ones with special offers.
I have a Garmin Dakota 10 which you can buy on Amazon for about £93 at the moment (plus the cost of the bike mount & silicone rubber cover). It has enough memory to install a full UK map base using the openstreetmaps source compiled into the appropriate form for Garmins at http://talkytoaster.info/free-uk-maps-faq.htm. It has a touch screen and works very well for cycling (and for walking too - it's primarily a hand-held).
It also uses AA batteries (including rechargeables), which makes it easy to carry a few spares for multi-day rides.
The Dakota 20 also has a barometric altimeter, an electronic compass and an external micro-SDHC slot, but is rather more expensive. If you want the (expensive) complete 1:50K OS maps, you need that extra memory.
It's not the biggest screen in the world, and the route finding is not as good as that on car satnavs (and you can't do searches by road names - at least not with the talkytoaster files I've used to date, although road names are shown on the maps). It's also a bit more bulky, but at the price (when used with free maps), it's something of a bargain. Yup, there are lighter ones and ones with heart monitor integration etc., but you'll pay a lot more.
Re: Is the existing laws are so wonderful
Because they weren't enforced properly. Next question...
Re: Come on
Vince Cable was removed from his decision-making role in the News Corp 'takeover' of BSkyB because he stated publicly that he "would declare war on Murdoch"
Not quite. Or at least he didn't intentionally make that statement publicly. He'd actually thought it was a statement made in confidence, but unfortunately for him, it turned out to be a "sting" operation by some journalists working for the Daily Telegraph. Of course that hardly does him credit, but those are the facts. Publicly intended or not, it made his role overseeing the decision untenable. Indeed, I rather think that if he'd not been such a major figure on the Lib-Dem side of the coalition it would have been a resigning matter...
Old fogey whinge...
Oh dear. Disinterest does not mean "lack of interest". Yet another sign of declining modern standards...
Re: Tin Foil Hat time - Was the account Hacked or not?
It's quite common for people to use the same password on multiple accounts using an email address as the login ID. Whilst a dictionary attack is unlikely to succeed on any of these (for the reasons you state), they are highly vulnerable to phishing attacks (at least to the unwary). Once the combination of an email address and a password is available, it can open many accounts. Of course a Twitter account is also vulnerable to a phishing attack. (At the very least people should use a unique password on their main email account as that's where notification of account changes on other systems will normally be sent - lose control of that, and anything might happen).
It would be extremely unwise for anybody to fraudulently claim their account had been hacked in the event of a case coming to court, as if this could be shown to be untrue, it would expose the individual to charges of perjury - which are generally treated very seriously indeed.
Re: On a completely unrelated topic ...
For a criminal case, the prosecutor has to prove that the account wasn't hacked against the "beyond all reasonable doubt" test. However, it would be very unwise for anybody to just assume saying an account was hacked would be sufficient in itself to cast enough doubt.
For civil cases (which this would not appear to apply to), it's the overall balance of probabilities that matters.