Strange, I could have sworn I'd read stories of Israeli air strikes where there had been "collateral damage".
There is a product called "SLU", or sub-loop unbundling, which has been used for FTTC (although the number of non-BT FTTC deployments has been very low, there have been some). However, there's very little difference in wholesale cost. That might seem odd, but the great majority of the cost of a network is where it "fans out" from the consolidation points. There may be less copper in that part of the network, but there are far more joints, miles of ducts, telegraph poles and so on.
In any event, the route back to the exchange has to be paid for, and those costs will just get transferred back onto the fibre backhaul.
There's a myth that the main cost of the network is in the copper. It's not. It's in all the infrastructure required to support it all, the manpower, rates, power, poles, cabinets, footway boxes, ducts, builds and so on. All those (or close equivalents) are required for fibre too.
A few years ago Tim Worstall on these very pages produced a laughable estimate of the value of the copper in BT's network (overestimating it by a factor of 20 or more). The value of the "raw" copper is around the £2.5bn mark, although when fashioned into cable it's perhaps double that.
So the return on capital employed in the copper in the "E side" of the network is a relatively small proportion of the total costs of the network infrastructure.
(Somewhere out there is a report that Openreach has to produce annually on the "book value" of the network assets, albeit that isn't the one that Ofcom uses to regulate prices directly).
Call termination charges (that's what a network operator is allowed to charge for connecting a call on their network) have historically been far higher on mobile networks than landlines. Termination charges on landline numbers are, in comparison, almost insignificant. (They used to be about 0.2p/min, but have been reduced to 0.034p/min).
Indeed, Ofcom specifically engineered it that way as a method of financing the build out of the mobile networks. For a long time landline users have been paying for mobile networks, although this is changing as mobile call termination pricing is now more tightly regulated.
The above is the reason why landline packages don't include calls to mobiles, whilst mobile packages do include landline calls. Mobile packages can afford to include mobile minutes as, on average, the calls into and out of a mobile network balance.
You might, but many object in principle to the retention of logs, backdoors into encryption and much else, even with judicial oversight. We have developed such systems over hundreds of years for physical records, yet virtual ones are considered sacrosanct by many.
I still see from the reaction to my comment that some are still unwilling to accept the logical consequences of opposition to record keeping, even with judicial oversight.
So for those that promote untraceable financial transaction systems, be aware that this is the enabler and motivation for crimes such as this. Be careful what you wish for, because it may be granted; everything has consequences to consider.
Given the number of people who appear to object in principle to the whole idea of any state surveillance capability on use of the Internet (evidenced by the number of comments on this site, others, Twitter, mainstream media any time these issues come up), then it's pretty near impossible to track down the source of these scams, at least without a huge amount of technical and manpower resources, and even then it's doubtful.
These objections are based on the whole issue of traceability, privacy and (I often suspect) a good deal of egotistically driven paranoia. Unfortunately those very same measures which make it virtually impossible for the state to snoop on your activities also make it easier for scumbags to prey on the vulnerable.
So, decide what you want. It's simply not possible to have an Internet landscape where you have both effective policing and complete protection of personal privacy. You can have untraceable electronic transactions and currency. You can have unbreakable encryption, Internet anonymity and the like. But you can't have all that and still track down Internet crooks. Something has to give.
Those exchanges are only market 1 because other operators don't deem them cost-effective to deploy equipment into. LLU operators have the ability to cherry pick which exchanges to enable, and you can't really blame them.
So market 1 exchanges exist by default, not because they were built that way.
As far as I'm aware councils can't prevent utilities digging up pavements as they have "code powers". That includes VM.
The council does have powers over the placement of cabinets, but even then those have limits.
Of course, planning issues are a useful excuse for utilities not to continue with projects which they don't deem to be remunerative.
Look up bonding, although your ISP has to support it. Failing that, it's possible to do line balancing but that doesn't allow for a single data stream with 2 x the bandwidth. What it allows is several independent data streams which can be useful if the problem is congestion due to multiple users.
Of course, it's expensive: two lines, two broadband accounts and a modem/router which supports bonding.
Failing all this, you are wholly dependent on your telco bringing fibre closer to your property.
That is rather clever. Of course, once you realise that a flash storage device is really a miniaturised storage system with its own logical mapping, it becomes obvious.
However, one thing occurs to me, and that is that it will be necessary to be able to coordinate these functions over multiple devices. For example, it's very easy to see that single point-in-time consistent snapshots might be required over multiple devices, and it would be nice to be able to delegate that functionality without invoking higher layers.
This is Healthcare Triage's take on the Singapore system (they have an excellent series on various national health care systems). Despite the videos all being produced in the US (largely by doctors), they aren't too impressed with their own system.
One thing to note is that the Singapore system has a huge amount of state intervention in order to minimise costs. They tried open market supply, but found that competition was increasingly through expensive technology and then changed the system to stop it. In effect, avoiding the way the US went.
So this is a long way from being a free market system. The Singapore government is pragmatic if it is anything, but I can't imagine the system of co-pays and enforced medical saving being accepted in the UK. In effect, Singaporeans are forced down a route of compulsory saving for any number of things that are at least partly covered by state welfare systems in most Western European nations. (Of course it helps that income tax is so much lower). The whole philosophy appears to be to minimise state exposure to welfare costs through enforced savings and incentivising citizens to not make demands on the state.
People dying early and quickly are actually rather subsidising everybody else. The biggest cost is the treatment of long term chronic diseases, and particularly of the elderly. It is said that type II diabetes is a problem, not so much because it kills, but because it kills very slowly but involves huge expense over time dealing with all the related chronic diseases. That's just the medical costs. Add in pensions, welfare, free bus travel, heating allowances and so on and it gets worse. The deficit is largely down to us all living longer.
So those Glaswegians expiring early of heart attacks, lung cancer and stabbings are good value. Well, if you're an accountant (and we all know bean counters have no compassion).
Indeed, there's already an Ofcom requirement for BTW and BTOR to supply wholesale communication services on an equivalence basis. This would simply roll over to any purchase of EE as it would not be part of BTW or BTOR. Of course, BTOR and BTW don't have to discriminate in favour of any BT-owned EE anyway. The fact that BTOR & BTW will gain a captive customer is a major advantage. However, I'm pretty sure that BT will not want to alienate other mobile operators who are significant purchasers of fixed line network services in their own right. It's in BT's interests to have a very strong offering for mobile operators.
It's also an interesting point that the BDUK programme has, unwittingly or not, allowed for a considerable extension of fibre deep into the network. Those new fibre concentration points could be very useful, although no doubt the state aid aspects could get rather complex.
It means the electric motor is being used as a generator driven by the petrol engine.
Possibly just as well that the Germans have decided to close down their nuclear power plants if they can't keep critical control systems safe from hackers. Although I would hope that things would at least fail safely, even if not cheaply.
nb. I initially found a few elements of this story not entirely plausible, but as it seems to be official then so must it be.
So now you know what to take as a gift to apartment 4A 2311 North Los Robles Avenue, Pasadena should you ever be invited for Christmas. Personally I'd take a bottle or two of pinot noir to apartment 4B.
If the Register's summary is correct, Richard Deakin didn't make a statement that "90s kit isn't 'ancient'". What he said was that the system has its roots in the 90's. To put this in context, the World Wide Web has its roots in the late 80's. For that matter, the first draft definition of TCP/IP dates from the early 70's.
It's wholly irrelevant when the technology originated. What matters is how it has been developed. After all, we still base our day-to-day use of geometry on what Euclid set out over 2,000 years ago. Roots matter. They stop trees falling over when the wind blows.
In the meantime, please don't misrepresent what was said. The kit isn't from the 90's, and nobody seems to be seriously claiming this was a hardware failure.
Gordon Brown set the objective, which was quite simply to maximise the sale value of the 3G licences. That he didn't personally design the auction is not relevant. Although given Gordon Brown loved nothing better than to manipulate figures (like expensive PFI contracts to keep debt off the books), I'd be amazed if he didn't personally approve the final form of the auction.
nb. the economist who advised on the format of the auction was Paul Klemperer, an Oxford academic, who has been very active in defending the decisions made.
Even if it's conceded that the original 3G licence auction maximised the prices paid by the operators, and this didn't result in higher prices to the consumer on the grounds that they were sunk costs (more debatable) and that it didn't adversely impact other aspects, like network investment and thereby economic activity (even more debatable), then there is a much more fundamental reason why the exercise can't be repeated.
That's because at the time of the 3G auction, there were more potential bidders for bandwidth than there were available chunks of spectrum. In addition to the incumbents, there were a number of other operators seeking entry into the UK market, including the (state backed) France Telecom and Deutsche Telekom. It was this unique blend of ambitious operators and limited supply, backed by inflated telecom valuations (and some de-facto state guarantees), that drove bid prices far past their economic value. Once the shareholders and financiers came round to noticing this, the supply of ready money dried up and auctions all across Europe then got fractions of what was achieved in the UK and Germany.
These circumstances will never happen again. It doesn't matter if there are 3 or 4 operators. The costs of entry into the UK and building a new network are immense. The only way that spectrum prices could be manipulated upwards would be to offer fewer chunks of spectrum than there are operators. By definition, that will lose one operator from the new spectrum. It's quite possible that one of the weaker players might decide the whole thing is not worth pursuing anyway and seek to either run as a low cost operator on existing spectrum or pursue other options. Of course if the spectrum is auctioned off such that all operators can get a chunk, then that's less of an issue, but it will not, of course, recreate the circumstances of the 3G auction.
So now that the fit of hubris of 2000 is over, there is no way that the telecom companies are ever going to fall for this again. The 2013 auction fell short of government targets by about £1bn (it raised £2.5bn vs the £22.5bn of the 3G auction). The circumstances at the turn of the millennium are not going to repeat themselves.
There's also another issue. Seeking to maximise the value of the spectrum to the state simply in the capital cost of the license, rather than through more continuous revenues from taxation on increased economic activity is surely short sighted.
In any event, 3 or 4 operators is not going to make a great difference to state revenues. The CEO of Telefónica, César Alierta, has noted that the industry is not going to play ball with states that manipulate the circumstances of an auction in order to maximise a one-off return.
(nb. in the US, a similar auction approach to that which was eventually taken by the UK government in 2000 was ruled illegal and had to be retracted.)
So they won't call it broadband. Simple.
It's perfectly proper to include the publicly funded part of healthcare as welfare. Private expenditure on health is another issue (although you could make a case that tax breaks on health insurance could be included).
I did link to a source. I think the 10-11% figure includes private health expenditure, not just public. When it comes to private health expenditure in the UK, it's not just those BUPA policies. There's a significant part of health costs that are only partly covered by the NHS. Expenditure with Opticians is primarily private as is much of the dental work.
One reason why US welfare expenditure is so high is surely down to the incredible inefficiency (from a financial point of view) of the American medical system. It's not commonly appreciated that the US government spends almost the same % of GDP on their public systems (Medicare and Medicaid) as the UK government does on the NHS. Given the difference in coverage, this is astonishing. It's around the 8% mark in both cases, and in the same general area as many large western countries. It can't even be explained by the US having an older population - it doesn't; rather the reverse.
A lot of this must come down to the basic cost structure of the US medical industry, with all the insurance, legal indemnity, billing and other cost issues (and some very well paid medical staff).
Not to mention a motorbike powered by a gas turbine engine from a helicopter.
It's true the UK and several other European auctions were set up to maximise the auction value. The German auction was even more expensive than the UK one. It wasn't helped by a number of state-owned telco operators (like France Telecom) piling into the auction with state backing. For the existing operators, it was an existential threat: they either got one of the new bands or they were essentially dead. Unfortunately for some countries slower off the mark, like Italy, the bubble had burst by then as shareholders and banks took fright, and it all came crashing down.
Fortunately, things have calmed down a bit. To put this in perspective, the UK 3G auction raised $34bn in 2000 (about 2.5% of GDP). Correct for inflation and it's considerably more than this latest US auction raised, in a country with five times the population. Not exactly cheap, but not the insane levels of 2000 in proportion to the market size.
Vacuum cleaner tests. VACUUM CLEANER TESTS? In The Register. What the hell is happening?
How many times does it have to be pointed out: have a backup strategy, and implement it properly. No device is foolproof. No cloud storage system will be perfect. Even if the hardware is perfect, software is not, and neither are you (the user). And then there's the little issue of ransomware or other malicious software.
Have a backup system, make sure it works and that it validates what it does properly. Decide how much data you can afford to lose, and plan your system appropriately. Relying on a single storage device or service, no matter how well engineered, is insufficient. Nothing on earth can guarantee you won't lose data, but you can improve the odds.
My mechanism involves two independent external backup drives, which I rotate frequently, and I always keep one off site. It is far, far quicker to recover from than any cloud system. And that includes driving 25 miles to my parents' house (where I keep the second copy) to pick up the "disaster recovery" system. (By all means use cloud for small incremental changes as interim backups as well).
I count this as a low cost solution given the risks of losing all your data.
Count me as confused, but I don't understand why this is called a hybrid. It seems to be a straight hydrogen fuel-cell car (and there have been other examples, albeit mostly prototypes). In contrast, surely a hybrid (by definition) includes two (or maybe more) power sources.
Go look at the thinkbroadband.com site, and there's very clear evidence of increase in speeds. The upper quartile measure (which aligns quite well with an estimated 26% take-up of so-called superfast packages) has moved up a long way in the past year or so.
Of course, there will be something approaching 10% that will not get such speeds from the first phase of BDUK, but there's very good reason to believe (like Cornwall), that the original objectives will be overachieved by some margin. But that will still leave some disappointed of course, albeit there are later funding phases.
Of course if somebody could magic up the estimated £30bn required for a full fibre network, then all could change. However, nobody has managed to come up with a viable way for paying for it which is politically acceptable (equivalent to about £4 per line over a 30 year period).
You're expecting neutral, dispassionate fact reporting from the Register? It's not the BBC news you know, who at least have to pay lip service to the idea.
The only sort of digital back there is for large format is the scanning back, which works rather like a flat-bed scanner in that there's a linear array which physically moves across the focusing plane. If you want, one model produces a 1.1GB file with 48-bit output.
Of course, they are useless for moving subjects.
I'd certainly like to see a cameraphone produce a good closeup photo of a bird in flight, or a macro photo or a great closeup of an athlete. Or of a myriad of different subjects.
This trope that a great photographer will always surpass the limitations of their equipment, and outshine the mere snapper is always trotted out. Of course, a gifted photographer will beat the snapper, but it's still the case that for some sorts of photographs you need the right equipment.
There are very few system cameras with a fully electronic shutter, like the Sony A7S and the Panasonic GH4. Unfortunately, the problem is that they take a long time to scan the sensor as they lack what's called a "global shutter". With a global shutter, the sensor can, in effect, take an instantaneous "snapshot" of the scene. However, on a CMOS sensor the photosites have to be read sequentially, row by row. On even relatively low resolution sensors of 12-14MP, this process takes perhaps 30ms. In consequence, for even modestly fast shutter speeds, the sensor rows have to be cleared and read as a sort of rolling strip that passes up the sensor.

Of course, this is essentially what a focal plane shutter does, by exposing a narrow strip for higher shutter speeds. The difference is that electronic shutters take about 1/30th of a second, whilst a half-decent focal plane shutter traverses the sensor in about 1/250th sec or less. What this means is that the top of the image is exposed before the bottom, so you get "leaning verticals" on moving objects. That's called "rolling shutter". You still get it with focal plane shutters, but it's about an order of magnitude worse with electronic ones. Also, the problem gets worse the higher the resolution of the sensor, which is why you don't see the option on sensors of 16MP upwards.

(A lot of cameras do have an option for something called "EFCS", or electronic first curtain shutter. That's a partially electronic shutter which uses electronics to clear the photosites (which can be done faster than reading them), and this runs ahead of a physical second curtain which shuts off the exposure. It's quieter than a fully mechanical shutter, but far from silent.)
You see the problem with "rolling shutter" on a lot of video cameras with CMOS sensors as you get weird effects like twisted aeroplane propeller blades. It's technically possible to create a CMOS sensor with a global shutter, but (currently at least) only by creating a temporary charge storage area for each pixel, which means giving over silicon real-estate which, in turn, means compromising other aspects of sensor performance, like dynamic range and noise performance.
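For a rough sense of scale, here's a back-of-envelope sketch in Python of how much "lean" the two readout speeds produce. The readout times and subject speed are purely illustrative assumptions, not measurements of any particular camera:

# Back-of-envelope rolling-shutter skew estimate.
# Assumed, illustrative figures only.

def skew_pixels(readout_s, speed_px_per_s):
    # Horizontal displacement (in pixels) between the first and last
    # sensor rows for a subject moving sideways at speed_px_per_s.
    return readout_s * speed_px_per_s

speed = 4000.0  # a subject crossing a ~4000px-wide frame in one second

print(skew_pixels(0.030, speed))  # ~120 px of lean with a ~30ms electronic readout
print(skew_pixels(0.004, speed))  # ~16 px with a ~1/250s mechanical curtain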
Having taken more than a few photos at gigs myself, I know the problem of noisy shutters. Of course it depends on the circumstances. In a full-on rock performance, especially if it's outdoors and you are in the pit in front of the stage, no problem. If it's a folk singer or a string quartet in a quiet concert hall, it's nasty (not to mention at wedding ceremonies).
Yes, your 6D collects about 2.6 x the amount of light in total (as Canon APS-C has a crop factor of about 1.6). That translates to about 1.5 stops better performance across the ISO range.
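(For anyone wondering where those figures come from, a rough sketch of the arithmetic, taking the 1.6x crop factor as given:

\[ \frac{A_{\text{FF}}}{A_{\text{APS-C}}} \approx 1.6^2 \approx 2.6, \qquad \log_2(2.6) \approx 1.4 \text{ stops} \approx 1.5 \text{ stops in round numbers.} \]
)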
For some things, bigger is better.
The four thirds system was actually defined as a joint venture between Olympus and Kodak, not Panasonic. However, it's now a consortium which includes Panasonic. The micro four-thirds system was defined by Olympus and Panasonic and essentially defined a new lens mount with a shorter register eliminating the option of a mirror box (but the sensor format is that of the original four-thirds system).
In the medium/long term it seems to me that fully electronic system cameras (like MFT, Sony, Fuji etc.) will gradually push DSLRs into a niche market, as having mirrors flapping around doesn't seem like the future.
All that's needed now is for a manufacturer to crack the problem of the fully electronic shutters on systems cameras (existing examples all have serious shortcomings), and we can have properly silent cameras.
I see. I do remember the proposal. It also would have been CCD at the time. It hit all sorts of technical issues and, in retrospect, was a dead-end as there were all sorts of integration problems and a DSLR designed from the ground up would have lots of advantages.
The "system camera" approach is actually returning. If you take something like the Sony A7 series, they have a full-frame sensor and, because of the very short sensor-flange distance, can mount almost every 35mm (or MF) lens made via adapters, excepting only some of those which are fully "fly by wire". OK, it's not quite the same as a film back, as it includes all viewfinder and a lens mount, but it's not so much different in principle to using a digital back on an MF camera.
nb. the CCD vs CMOS argument is one of those "religious war" issues which comes up from time to time.
You are right in that image quality is ultimately limited by the total number of photons detected, but that's over the whole image area (and, for a common output resolution, that's per-pixel). In principle that's purely a function of the lens alone. A smaller sensor requires a proportionately shorter focal length in order to get the same field of view. However, to collect the same number of photons, it will need a proportionately wider aperture.

Take the example of a 35mm so-called "full frame" sensor of 24x36mm and imagine you mount a 50mm lens with an aperture of f4. Now imagine a sensor of half the dimensions, 12 x 18mm (not a usual sensor size, but it makes the arithmetic easier). You will now need a 25mm lens to get the same field of view and, to collect the same total number of photons in a given exposure time, it will now have to be f2 (and it will have the same depth of field characteristics). This is all part of what's called "the principle of equivalence". As the f-stop is simply the focal length divided by the aperture diameter, the physical diameter of the aperture will be exactly the same in both cases. As the maximum (physical) diameter of the aperture is the primary factor that dictates the lens diameter, you can see that for the same light gathering power the two lenses will be (broadly) similar in diameter (although not length).
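To put that example into symbols (a sketch of the equivalence arithmetic, nothing more): for a sensor smaller by a linear factor c, the focal length and f-number both scale by 1/c, so the physical aperture diameter D = f/N is unchanged:

\[ f_2 = \frac{f_1}{c}, \quad N_2 = \frac{N_1}{c} \quad\Rightarrow\quad D_2 = \frac{f_2}{N_2} = \frac{f_1/c}{N_1/c} = \frac{f_1}{N_1} = D_1 \]

For the 50mm f4 versus 25mm f2 example (c = 2), both lenses have an entrance pupil of 50/4 = 25/2 = 12.5mm.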
So the question might be asked, why do we need large sensors, if we can just use smaller sensors with wider lenses. Leaving aside the issue that lenses with very small f-stops become increasingly difficult and expensive to design (only partly ameliorated by the smaller image circle), there is a major sensor limitation. That is the ability of a sensor to detect photons before saturating. Broadly speaking, a sensor with 4 times the surface area can detect 4 times the number of photons before saturating (or blowing highlights). Note that this is not just sensors it applies to, but also film. Slide film, especially, "blows" highlights and to collect more light in total, you need bigger films.
Of course there is another issue, that for any given output resolution, the smaller sensor will have to have smaller photosites (clearly half the dimensions in this case) and that, in turn, means the 25mm lens would have to be able to resolve twice as well.
As the ultimate dynamic range of the sensor is defined by the ratio between the saturation level and what's called the "noise floor", there is an advantage to the larger sensor. It has the potential for four times the number of detected photons before saturation which means, all other things being equal, it can achieve a couple more EV of dynamic range.
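In rough symbols (a sketch, assuming the noise floor per image stays the same):

\[ \text{DR (stops)} \approx \log_2\!\left(\frac{\text{saturation capacity}}{\text{noise floor}}\right), \qquad \log_2(4) = 2 \text{ stops} \]

i.e. four times the area, and hence roughly four times the total saturation capacity, buys about two extra stops, all else being equal.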
There's a lot more to it than that of course but, essentially, the reason "big is better" just comes down to that ability to detect more photons by dint of the greater surface area.
Why the CCD obsession? They were used on early DSLRs, but CMOS technology has long overtaken CCD performance for the size of sensors used in SLRs. Indeed, even in the MF world CMOS sensors have started appearing (all utilising a recently released Sony sensor). Also, Leica have now adopted a CMOS sensor for their latest M (the Typ 240). For comparable cameras, current CMOS sensors (size for size) beat CCD on frame rate, high ISO performance, dynamic range and colour sensitivity. Yes, there are some exceptional specialist CCD sensors for scientific work, but not for still photography.
For example, here's a DXOmark comparison of the Leica M 240 (CMOS) vs a couple of Leica M9 (CCD) models. On all the objective criteria, the CMOS model wins out all through the ISO range.
Some people claim there's such a thing as CCD "colour". However, both CMOS and CCDs are (close to) colour blind, and what gives them the ability to distinguish colour is the filter matrix. (There is an exception, the Foveon sensor, which distinguishes colour by difference in the silicon depth penetration by different wavelength photons. To be ultra-picky, some video cameras use prismatic separation using multiple sensors, but not any current still cameras).
If you want a CCD DSLR, then there are plenty of second hand ones around. Here's a list of Nikon models with the sensor type listed. There's also the Leica S MF DSLR if you have very deep pockets and don't care about frame rates or performance at anything much above base ISO.
" A four-thirds format digital camera is unlikely to deliver more than four megapixels of information per frame, irrespective of how much data it outputs. "
This is simply nonsense. Quite apart from it bearing no resemblance to the resolving power of typical lenses, DXOmark have tested a large number of M43 lenses and measured resolutions far in excess of 4MP. A cursory glance came up with several that reached 11MP. There are also plenty of on-line comparative samples which show this.
A further point is that the limit of lens resolution isn't a binary thing (and neither is diffraction limiting for that matter). It manifests as a gradual loss in contrast. There's no sudden "cut-off". It depends on the criteria used.
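The textbook diffraction-limited case illustrates the point: for a circular aperture the MTF falls off smoothly all the way down to the cut-off frequency, rather than dropping off a cliff:

\[ \mathrm{MTF}(\nu) = \frac{2}{\pi}\left[\arccos\!\left(\frac{\nu}{\nu_c}\right) - \frac{\nu}{\nu_c}\sqrt{1-\left(\frac{\nu}{\nu_c}\right)^2}\,\right], \qquad \nu_c = \frac{1}{\lambda N} \]

So at, say, f8 and 550nm the cut-off is around 227 lp/mm, but contrast has been sliding gently long before that; where you draw the "limit" depends entirely on how much contrast you demand.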
Further, even if the sensor does "out resolve" the lens, there are still advantages, as the 2x2 Bayer matrix found on most digital cameras repeats at twice the pitch of the photosites. Thus you get better colour sampling with the higher resolution sensor.
Also, it takes far more than 4 photosites to output a single RGB pixel. Any algorithm which did so would produce horrible results, apart maybe from a 2x downsizing. In practice, demosaicing algorithms are complex beasts and make a huge difference to the final output.
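To make that concrete, here's a deliberately naive bilinear demosaic in Python/numpy. It's an illustrative sketch only (real raw converters are vastly more sophisticated), but even this crude version draws every output pixel from a whole neighbourhood of photosites:

import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw):
    # raw: float array (H x W) holding an RGGB Bayer mosaic.
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    # Normalised convolution: average whatever samples of each colour
    # fall within the 3x3 neighbourhood of every output pixel.
    kernel = np.array([[1., 2., 1.],
                       [2., 4., 2.],
                       [1., 2., 1.]])

    out = np.empty((h, w, 3))
    for i, mask in enumerate((r_mask, g_mask, b_mask)):
        num = convolve2d(raw * mask, kernel, mode="same")
        den = convolve2d(mask, kernel, mode="same")
        out[..., i] = num / den
    return out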
My 5 year old 1.6 Focus turbo diesel still averages 53-54mpg, and only costs £30 per year to tax (as it just scraped under the relevant CO2 emission target at the time). From memory it had around 112bhp, but that seems to be plenty for my purposes. I think it cost me just under £14K at the time. Yes, there are issues with diesel particulates (although it's got a particulate filter, so I don't know how much that helps).
In all, it's not obvious that much progress has been made in the last 5 years.
I recently took this picture of a Hawk (XL592) which used to sit on a concrete plinth at Booker airfield looking very sorry for itself. I came across it in a field at the back of the 15th century Ockwell's Manor, on the outskirts of Maidenhead. Since I took the photo its paint job has been completed with RAF roundels.
The Hawk was/is a very tough aircraft and, being relatively simple, it doesn't need complex electronics to fly. I suspect it will be flown by enthusiasts long after more modern fighters are grounded.
There is something about highly tuned machinery from the 60s, whether it's fighter aircraft or racing cars. They somehow "look right" whilst the modern stuff just seems to have ugly bits tacked on.
nb. I'm not sure the item mentioned that Tommy Sopwith almost won the America's Cup in 1934 and there was more than a little controversy about the result.
In fact, HDDs have to fly their heads nanometres above the disk surface, not microns.
However, to go back to the original question of why HDDs are cheaper (per GB) than flash storage, it's largely down to how the data is stored. On an HDD, the data is stored in the form of magnetic domains on a substrate. The manufacturing process does not require every single bit to be represented by a photo-lithographic process. So once the high-precision engineering needed to fly the heads into the right position has been produced (and decades of engineering have refined that process), the coating comes relatively cheap. That's also why tape storage is (per GB) cheaper than disk. The coating comes cheap.
The other issue is that each new generation of flash storage requires an immense investment in new equipment as it's dealing with fundamentally smaller elements. You can't, for instance, simply take something designed for 20nm elements and adapt it to 14nm. In contrast, the mechanical side of HDDs has remained relatively static for a long time. Platters, bearings, motors, servos and so on are pretty well the same, save that a bit more precision is required in track location and following. The heads have to be designed to fly lower and with narrower gaps, but the process is more one of refinement and evolution than having to throw out a whole plant.
Surely write endurance and reliability are two things which are only loosely connected. The first is essentially the lifetime of the device for a given workload pattern, whilst reliability is much better described using failure rate figures within the drive's anticipated lifetime.
However, I'd certainly sit up and take notice of those write endurance figures. If they are to be believed, and there aren't other factors at play, this would really open the device up for use in enterprise storage uses, especially if the storage device can balance write loads over many devices so that "hot spots" don't arise.
That's correct. In practice, most white LEDs don't produce "white" light by mixing primaries. They do it by using a phosphor coating which "downshifts" much of the blue light to longer wavelengths and mixing this with the blue light that penetrates the phosphor layer.
Of course, none of these technologies produce the continuous (black-body) spectrum of an incandescent bulb, although many people seem to believe they do.
Physical access to the machine's control system doesn't give you access to the money cassettes. That's totally different. Bank staff, for instance, will have access to some parts of the machine to recover things like "swallowed" cards. What they won't have is access to the hardened money "safe".
It's far easier to gain access to the control panels than the money safe. The real issue is that it is so easy to "infect" a machine with malware.
Unless you are doing something truly exceptional, hitting the write endurance limit is simply not an issue. Save that consideration for those running update-intensive server applications. Most likely something else will fail on your machine first. HDDs don't have an indefinite life either (and don't make the mistake of thinking MTBF gives you expected lifetime - it doesn't; it's a statistical value that applies to devices within their rated lifetime and, generally, HDD manufacturers never give you rated lifetimes).
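A quick illustration of why MTBF isn't a lifetime (the MTBF figure is assumed purely for the sake of the arithmetic): a drive quoted at 1,000,000 hours MTBF is not promising 114 years of service; within its rated service life that figure translates to an annualised failure rate of roughly

\[ \text{AFR} \approx \frac{8{,}760 \text{ h/yr}}{1{,}000{,}000 \text{ h}} \approx 0.9\% \text{ per year.} \]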
As for relocating MyDocs, MyMusic and so on, that's very easy. Assuming you are using Windows 7 or 8, then what I do is assign a system partition on the SSD large enough to take all the system files, program files and so on with plenty of room for expansion. Next I create a data partition on the SSD, which is where I place my MyDocs folder. Then, on an HDD I create partitions for my major data areas (like video, pictures). I then mount these as sub-folders in MyVideo, MyPictures. That way all these "mass storage" areas appear in subfolders in the relevant storage areas (you can also use symbolic links, but I prefer to "hard partition" the mass storage areas).
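If you'd rather go the symbolic-link route mentioned above, here's a minimal Python sketch of the idea. The paths are made up for illustration, and note that directory symlinks on Windows need admin rights or Developer Mode enabled:

import os

# Hypothetical layout: bulk data lives on HDD partitions, with links
# from the user-visible folders on the SSD. Adjust paths to taste.
links = {
    r"C:\Users\me\Videos\Archive":   r"D:\VideoArchive",
    r"C:\Users\me\Pictures\Library": r"E:\PhotoLibrary",
}

for link, target in links.items():
    if not os.path.exists(link):
        # Creates a directory symlink so the folder appears in place
        # while the files actually sit on the HDD partition.
        os.symlink(target, link, target_is_directory=True)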
Of course you don't have to place MyDocs on the SSD. You can place it on an HDD, but personally I find that it's useful to be able to place some data files on the SSD for speed purposes. For example, I place my email client files on the SSD, and some applications (like Lightroom) greatly benefit from keeping the meta-data files on the SSD. As a general point, be careful to make sure that programs (like email clients) place their data and, as far as possible, config files in the data area. That way it's much easier to move to a new machine.
I backup the system partition using an imaging product (which allows me to restore the system without wiping out data). I backup the data areas using a synchronising backup to an external disk (USB3 in my case, but NAS eSATA etc. will work too). I prefer sync type backup as it allows me to mount those onto another machine and get to my data.
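For what it's worth, the essence of a synchronising backup is simple enough to sketch. The following is a minimal, one-way Python illustration with made-up paths (not the product I actually use); it copies new and changed files and deletes nothing:

import os
import shutil
from pathlib import Path

def mirror(src, dst):
    # One-way sync: copy anything that is missing from, or newer than,
    # the copy under dst. Nothing on the destination is ever deleted.
    src, dst = Path(src), Path(dst)
    for root, _dirs, files in os.walk(src):
        rel = Path(root).relative_to(src)
        (dst / rel).mkdir(parents=True, exist_ok=True)
        for name in files:
            s = Path(root) / name
            d = dst / rel / name
            # 1-second tolerance copes with coarse FAT timestamps
            if not d.exists() or s.stat().st_mtime > d.stat().st_mtime + 1:
                shutil.copy2(s, d)

mirror(r"D:\Data", r"F:\Backup\Data")  # hypothetical source and backup drive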
The general principle is you should always have a backup regime which is designed for the worst case. Any disk can fail catastrophically, so design the regime such that you don't lose everything. Of all the things to worry about, write endurance is one of the last to consider.
Valuing storage solely by £/GB is like comparing food on the basis of how many calories you can buy. Yes, it's a factor, but far from the only one.
I did work for a company once where the accountants actually did work on that principle. They seemed to have great trouble understanding why anything else might matter...
For most users the PCIe performance gain will simply not show. Also, SATA reviews are applicable to more people. They fit laptops as well as desktops. It also doesn't involve complex issues over drivers, boot arrangements and so on.
For the most part, it's not throughput that makes the user experience so much better with SSDs, it's the vastly reduced latency and (the other side of that coin), increased IOPs. I'd venture for most people, 500MBps is going to be plenty. I think PCI-e is almost a separate market and really won't figure in considerations unless you are the ultimate speed freak or have some specialist server application.
If it's things like boot time, system responsiveness and application start-up times that matter, then the real-world difference you'll see on most PCs will be very small. That's not surprising, as most applications will have other resource bottlenecks (like cpu, network activity, or interactions with devices other than storage). These latter start to dominate response times.
For example, see this.
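A back-of-envelope example of why (all figures assumed, order-of-magnitude only): suppose starting a large application involves around 2,000 small random reads.

\[ 2{,}000 \times 10\,\text{ms} \approx 20\,\text{s (HDD)}, \qquad 2{,}000 \times 0.1\,\text{ms} \approx 0.2\,\text{s (SATA SSD)}, \qquad 2{,}000 \times 0.05\,\text{ms} \approx 0.1\,\text{s (NVMe)} \]

The HDD-to-SSD step transforms the wait; the SATA-to-PCIe step saves a tenth of a second that CPU time and everything else swallow.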
If you really must have hyper-fast copying of large files or are running an incredibly I/O intensive enterprise app, then go ahead. But my guess is that this is irrelevant to most people wanting a "consumer level" SSD. All of them will transform the user experience, and it's probably ease of migration, reliability and price that are most important for this sort of comparison.
The announcement appears to be an acceptance that SPARC is no longer a general purpose processing chip but something optimised for running Oracle applications. Admittedly that's an awful lot of the application space - it's not just databases of course. However, it does seem a retreat from what SPARC was once mooted to be: a high-speed, general-purpose, cost-effective RISC CPU that could compete on all aspects of performance and suit a wide range of applications.
Of course the real problem is that the T series essentially gave up on single thread performance in favour of increased aggregate throughput. It's a bet that applications will be developed to suit this architecture. As many of us found when deploying T series machines (often bought by senior managers who swallowed SUN's line of power efficiency, throughput and virtualisation), they were fundamentally crippled for some sorts of applications. It often showed itself up where latency was an issue. Call centres are expensive to operate, and keeping agents (and customers) hanging around for slow systems is not efficient.
As it is I would not choose SPARC except for reasons of supporting legacy applications.
It's not entrapment in the legal sense by a law enforcement agency, but entrapment according to one of the other recognised definitions. See meaning 2 (a), which would seem to cover it.
As to why quote an example from criminal law, as Andrew did, the justification would appear to be that the press can justify their actions under the press code by analogy with the way entrapment is interpreted by the courts in criminal law. Quite why he chose a US example though, I'm not sure, as the interpretation of entrapment is different in the two regimes. Generally the US allows law enforcement agencies far more freedom. Witness the various cases involving exports of arms perpetrated by the FBI. Those would not be allowed in UK law.
tr.v. en·trapped, en·trap·ping, en·traps
1. To catch in or as if in a trap.
2. a. To lure into danger, difficulty, or a compromising situation. See Synonyms at catch.
b. To lure into performing a previously or otherwise uncontemplated illegal act.
I've no doubt papers used to be lazy (remember all those stories of never-ending expense-paid lunches, corrupt employment practices for printers and so on). However, those halcyon days have gone. These days newspapers (with a few exceptions) are under enormous financial pressure, with their circulation eroded by media fragmentation and the internet. Then their other main source of income, advertising, is being strangled by competition from the on-line world. Even those papers with a successful on-line presence can't get anywhere near recovering the difference.
So the short story is, they aren't so much unwilling to do proper journalism as unable to finance it.
nb. what's true for newspapers is also true for free-to-air broadcasting financed by advertising.