1265 posts • joined 21 May 2007
Re: I applaud their ambition.
You might, but at what cost? You are already cross-subsidised by other consumers on your phone line, as the much longer line represents more capital investment, a higher maintenance overhead and so on. The same is probably true of your water and electricity, assuming those runs are longer than the UK average. Similarly, delivering mail to you probably loses money too.
It's a matter of simple economics. Certain services are just cheaper to provide in high density areas. If a company could see a viable market for high speed broadband in your area at a price that consumers would be willing to pay, then it would be done. Of course, the reality is that people expect to pay the same price wherever they are, and the regulation of the market doesn't allow for significant market differentiation. Hence it's only going to happen through a subsidy: either a cross-subsidy via some form of duty incumbent on operators (as happens with telephone, electricity and water) or an explicit state subsidy.
So, if consumers in your area are willing to pay - say - £50 per premises for a high speed broadband service then it would be likely that a means would be found to provide it. However, I suspect that most won't pay that much. Hence the need for a subsidy.
Re: I applaud their ambition.
Or "I want somebody to subsidise where I live"...
" using the connerie-vian* rule-of-thumb formulated in Australia, which suggests 10 million FTTP connections cost $AU90 billion, Indonesia will need over $100bn for this project, or about eight per cent of GDP."
It's simply ridiculous using Australian models for projecting costs for infrastructure provision in Indonesia. Whilst the technology costs will be similar, it is well known that 80% of the cost comes down to installation. That's roadworks, installing cables, putting up poles, running cables and the like. Those costs depend heavily on local labour rates and not those in Australia (some of the highest in the world).
Also, the Australian National Broadband Network is not exactly a shining example of efficiency, not to mention that its cost includes effectively re-nationalising lots of assets. Indeed, the network has been subject to a strategic review (which now recommends a mix of technologies, not just FTTP).
So yes, this is going to be expensive, but 8% of GDP? I think not.
Re: CAPS LOCK MUSIC
CAT 6 for speaker cables? You're joking, right? It might be good for noise rejection, but that's hardly an issue for speakers. The problem is the cross-section and the insertion loss due to resistance. It's only 0.58 mm², which is good for about a metre at best on a 4 ohm speaker system. Anything longer than that and you'll want a much bigger cross-sectional area.
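For anyone who wants to check the arithmetic, here's a rough sketch in Python. The figures are assumptions (standard copper resistivity, the 0.58 mm² cross-section quoted above, a 4 ohm load), and it only models the resistive loss, not damping factor:

```python
import math

# Back-of-the-envelope sketch: resistive loss of a thin speaker cable run.
# Assumed figures: copper resistivity, the ~0.58 mm^2 cross-section quoted
# above, and a 4 ohm speaker load.
RHO_CU = 1.72e-8        # resistivity of copper, ohm metres
AREA_M2 = 0.58e-6       # conductor cross-section, m^2
SPEAKER_OHMS = 4.0

def cable_loss_db(length_m: float) -> float:
    """Signal loss (dB) caused by the out-and-back cable resistance."""
    r_cable = 2 * RHO_CU * length_m / AREA_M2
    voltage_ratio = SPEAKER_OHMS / (SPEAKER_OHMS + r_cable)
    return -20 * math.log10(voltage_ratio)

for metres in (1, 3, 5, 10):
    r_mohm = 2 * RHO_CU * metres / AREA_M2 * 1000
    print(f"{metres:>2} m run: R = {r_mohm:.0f} mOhm, loss = {cable_loss_db(metres):.2f} dB")
```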
Re: Black Box
Whilst the cost of uploading flight recorder information to satellite is certainly a factor, the technology is not. There exist a number of proven ways to provide robust data transfer over unreliable data links and, in any event, none of this would preclude the use of local storage on the data logger.
In fact the subject of real-time upload of flight recorder information has been around for many years. It's just a matter of the authorities mandating its use. Looking around, the cost of satellite data transmission is of the order of several dollars per GB, albeit I've no doubt it depends on the nature of the services. Flight data recorders seem to have capacities measured in the several GB region, so it seems to me to be perfectly practicable.
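Purely as an order-of-magnitude illustration, here's a rough sketch. Every input is an assumption: the parameter count, sample rate and audio bit rate are invented, and the per-GB cost is the "several dollars" figure mentioned above:

```python
# Rough cost-per-flight estimate for streaming flight recorder data over
# satellite. All input figures are illustrative assumptions.
PARAMS = 1000                  # recorded parameters (assumed)
SAMPLES_PER_SEC = 8            # samples per parameter per second (assumed)
BYTES_PER_SAMPLE = 4
AUDIO_BYTES_PER_SEC = 2000     # ~16 kbit/s compressed cockpit audio (assumed)
COST_PER_GB_USD = 5.0          # "several dollars per GB"

def cost_per_flight_usd(hours: float) -> float:
    bytes_per_sec = PARAMS * SAMPLES_PER_SEC * BYTES_PER_SAMPLE + AUDIO_BYTES_PER_SEC
    gigabytes = bytes_per_sec * hours * 3600 / 1e9
    return gigabytes * COST_PER_GB_USD

for hrs in (2, 8, 14):
    print(f"{hrs:>2} h flight: roughly ${cost_per_flight_usd(hrs):.2f} of satellite data")
```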
The archive storage of such data is hardly a major cost issue. Such data can be discarded as soon as it's no longer relevant, although I rather think that summarised extracts of safety-related information would be useful. Also, it has been the practice to archive some flight data recorder and airline engine operating information for operational analysis to optimise such things as engine reliability and fuel economy.
Re: Very sad indeed...
@ James Hughes 1
""The odds of dying per mile are
by car, 1 in 100,000
by plane,1.6 in 100,000,000,000"
Really? For car driving it's a chance of 1 in 100,000 per mile? I rather think not: the average UK annual car mileage is about 12,000, so that figure would imply better than a one-in-ten chance of being killed every year you drive. In fact, the chance of being killed in a car is about 5 per billion miles (in the UK) as against 0.08 per billion miles on a scheduled airline. That's 60 times safer (per mile) by scheduled airline, not 625,000 times safer.
However, that isn't really comparing like with like. Nobody flies an airliner from their front door (well, unless your name is John Travolta). In practice, safety stats like this only make sense where the two are genuine alternatives. If you take long distance car journeys on motorways, then the death rate per billion miles is reduced by a factor of 3 (in the UK). So that's more like 17:1 in favour of the air alternative. However, factor in the much higher fatality rate per journey of air over car (the stats I found show that), and the safety comparison of short-haul air versus long car journeys closes again. Indeed, you might well be safer travelling by coach than on short-haul flights.
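A quick sketch of the arithmetic, using the rough figures quoted above (all approximate):

```python
# Sanity check on the per-mile fatality figures quoted above (rough UK figures).
CAR_DEATHS_PER_BN_MILES = 5.0        # all car travel
AIRLINE_DEATHS_PER_BN_MILES = 0.08   # scheduled airlines

ratio = CAR_DEATHS_PER_BN_MILES / AIRLINE_DEATHS_PER_BN_MILES
print(f"car vs scheduled airline, per mile: about {ratio:.0f}:1")   # ~60:1, not 625,000:1

# And why "1 in 100,000 per mile" for cars cannot be right:
annual_miles = 12_000
implied_yearly_risk = annual_miles / 100_000
print(f"implied chance of dying per year of average driving: {implied_yearly_risk:.0%}")  # ~12%
```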
This is not an appropriate time for that comment.
There must be a better way...
So this is a patent that has been granted for running the sort of abstracted IP-based I/O services that have been run on physical servers in one guise or another for many years, just on the basis that it is now being run on a virtual machine? That's absurd.
It's about time that some other means was found of adjudicating what is, or is not, a patentable IT product than this charade, which only serves to enrich lawyers and encourage this sort of nonsense. Frankly, the number of true IT innovations, rather than just natural developments of existing technologies, is relatively small. Perhaps what's required is an independent group of IT specialists who can draw up some rigorous guidelines and provide an early sifting service on what is truly innovative. There are peer-review systems for academic journals; perhaps something equivalent is needed here. It might cost more money in the first place, but it is surely a more economic system in the long run than this apparent free-for-all in granting trivial patents.
You are quite right, you can't rebuild a 4TB single disk in minutes. It's utterly impossible. However, what you can do is have a distributed system such that the 4TB of "lost" data is actually lots of much smaller segments - say, 100 segments of 40GB each, each of which is part of a redundant set of some sort. Then you re-assemble those 100 x 40GB segments on 100 different disks using spare capacity. It's certainly possible (in theory) to write 40GB to a single disk in less than 10 minutes.
That's the basic principle: you use some form of distributed redundancy such that, if you ever lose one disk, you can involve a very large number of disks in both the recreation of the "lost" data and in the reconstruction.
Of course you can do this with a clever file system, but it's also certainly possible with block-mode arrays too. File systems can be rather more clever than that as it's only the active data that needs to be recovered. A native block mode device, at best, can only know what blocks have been written to via a bit map. However, doing all this requires a lot of heavy processing and data shuffling. You can't simply delegate the task to some dedicated hardware RAID controller to do in the background.
That 4TB recovered in 10 minutes still involves (at least) reading and writing a total of 8TB of data (depending on how the redundancy is generated), and that's at least 13GB per second. With more complex and space-efficient redundancy schemes, you may have to read several times that to recreate the data, so it would be even higher. That's very demanding, and will require a lot of CPU and memory bandwidth which also, of course, has to carry the normal workload at the same time. Of course, if you relax that 10 minute restore time, it's less demanding.
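To put numbers on that, here's a sketch assuming the simplest case, where each lost segment is recreated from a single surviving copy; parity schemes would need several reads per rebuilt block:

```python
# Bandwidth needed for the distributed rebuild described above: recreate
# 4 TB in 10 minutes across 100 spare segments, reading at least as much
# data as is written.
LOST_TB = 4
REBUILD_SECONDS = 10 * 60
SEGMENTS = 100

total_io_tb = 2 * LOST_TB                          # read 4 TB + write 4 TB
aggregate_gb_per_s = total_io_tb * 1000 / REBUILD_SECONDS
per_disk_mb_per_s = (LOST_TB * 1e6 / SEGMENTS) / REBUILD_SECONDS

print(f"aggregate array throughput needed: ~{aggregate_gb_per_s:.1f} GB/s")    # ~13 GB/s
print(f"per-disk write rate for one 40 GB segment: ~{per_disk_mb_per_s:.0f} MB/s")
```

The per-disk rate is modest; it's the aggregate figure, plus the normal workload on top, that hammers the controllers.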
It was my experience of storage arrays (mid and enterprise) that the processing capacity was often the limiting factor on I/O performance (depending on access patterns).
Re: @ NinjasFTW -- As much as I hate to say this....
The original competition model against BT was at the infrastructure level. Hence all the cable franchises which were granted. To make these more attractive, they were granted local monopolies on TV transmission along with the right to offer fixed line telecommunication services. Of course, the powers that be vastly underestimated the cost and complexity of doing this (not to mention the fragmentation). Then there were also fixed radio link models (which failed economically), not to mention what Mercury was meant to do.
Back in the 1980s, nobody gave any thought to the copper pair network for high speed data or video. The future was meant to be cable, and BT were essentially prevented from delivering a fibre network due to the threat to cable investment.
Hence the new regulatory regime which came about with Ofcom. The option was open to have taken this to the Monopolies Commission who may have enforced a separation between infrastructure and retail. However, even that isn't so simple. The regulatory regime already has two separate layers of wholesale services (Openreach and BTW), and to some extent they overlap, especially now as the FTTC broadband wholesale service now falls under Openreach versus the ADSL exchange services which are regulated under BTW. Such complexities would also affect full separation models.
Of course the new regulatory regime was not what was sold to the original BT shareholders.
Re: When BT can...
Except the Plusnet subsidiary would go broke if it persisted in making losses as it's a separate company, albeit wholly owned. It would be blindingly obvious to regulators if it wasn't economically viable as there's no way to cross-subsidise.
The network was sold. It wasn't a gift.
"Except BTs network was originally provided by the tax payer."
There's always one idiot that makes this statement. Firstly, the network was not paid for by the tax payer. It was funded by the Post Office (and before that the GPO) using the charges levied against what the state organisation cared to call subscribers. Indeed, in most years the UK government took a levy off the PO telecommunication operations, which were profitable due to the high charges (whilst simultaneously limiting investment through borrowing caps, as it was considered part of public finances - that's why party lines persisted well into the 1970s and beyond).
However, even if the tax payer had paid for the network, it's irrelevant. The whole operation, network and all, was sold to private investors. It wasn't a gift. Indeed, if you adjust for inflation, the amount raised by the sale of BT shares over three tranches is considerably more than BT's market capitalisation. This is not helped by the shareholders being landed with a huge pension liability, largely because of the inflated UK workforce (over 250,000 at the time of privatisation). Be glad it's not like the Royal Mail, where the pension black hole has had to be taken over by the government to make that organisation financially viable.
Bear in mind, those that bought shares originally did so in a completely different regulatory environment, and this was changed retrospectively, much to the shareholders' disadvantage.
Re: Cancer from radiation
You clearly don't understand what you are talking about. One of the reasons that the subject of radiation-induced fatalities from accidents like Fukushima is so open to argument is that any excess cancers will not be measurable. In 10, 20 or 30 years' time, there will not be a sudden spike in cancers caused by this incident, as even the worst case numbers are so small they will be many orders of magnitude below natural rates of cancer. How you expect to detect an excess cancer rate of even a few hundred (much higher than any model predicts) spread over tens of years in a population where the natural rate will be in the millions escapes me.
There are more measurable spikes caused by radioactive iodine (which has a short half life). The excess cancers (of the thyroid) were very clear after Chernobyl, although it was made much worse by the lack of action by the authorities in that case. As far as I'm aware, nobody has seen such a spike in Japan.
Fortunately, thyroid cancer treatment has a very high success rate or the death toll from Chernobyl would have been a lot higher.
Re: Oh fsck. Not Lewis Page again.
I see I got a thumbs down on this post, although I'm not quite sure what's wrong about pointing out innumeracy. Perhaps it's considered bad manners.
Re: Oh fsck. Not Lewis Page again.
Amazing that a post with a fundamental and obvious mistake in the units chosen gets 19 upvotes. It just rather proves that people vote on the sentiment, not the content. Here is the extract in question :-
"Drax puts out 1.2 million tonnes of ash per year. Let's assume it's solid carbon. That puts its density between about 1.5 and 3.5kg/m^3, so the waste-cube would be between ~700m and 1km each side."
Leaving aside the highly questionable assumption that Drax ash is essentially carbon (it's actually largely made up of incombustible minerals like aluminium silicates, calcium oxide etc.), the volume of waste is over-stated by three orders of magnitude. That's simply because the density of carbon is more like 1.5-3.5 tonnes per cubic metre, not kg per cubic metre. In fact, the density of coal ash will be reasonably close to the higher of those figures, giving a cube of about 70 metres on each side. Still very large of course, but 0.1% of the volume above.
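Redoing the quoted sum with the units corrected (density assumed at 2.5 t/m³, somewhere in the middle of the range above):

```python
# The quoted calculation, redone with density in tonnes per cubic metre
# rather than kg per cubic metre.
ASH_TONNES_PER_YEAR = 1.2e6
DENSITY_T_PER_M3 = 2.5      # assumed; the minerals in coal ash are ~1.5-3.5 t/m^3

volume_m3 = ASH_TONNES_PER_YEAR / DENSITY_T_PER_M3
cube_side_m = volume_m3 ** (1 / 3)
print(f"volume: {volume_m3:,.0f} m^3, cube side: ~{cube_side_m:.0f} m")
# ~480,000 m^3, a cube roughly 80 m on a side - three orders of magnitude
# short of the ~1 km cube claimed.
```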
Of course, anybody with any real sense of scale and numbers would have thought something was wrong in the first place. If the ash alone was 1 cubic kilometre, then for heaven's sake, how big was the hole in the ground the coal came from, as it would surely be many times larger? Of course, if you are somebody hopped up on the excitement of finding a "fact" which overwhelmingly supports your particular preconception, without bothering to wonder about its credibility, then you end up posting this howler (or upvoting it).
That's not to say that furnace ash isn't rather nasty stuff, but it is reused in some building materials where it's pretty well trapped. Indeed, using it in place of some portland cement reduces the CO2 impact of producing the latter. Radiation hazards of such materials are tiny. Yes, there will be the leaching out of some radon from the very low concentrations of uranium, thorium and the like, but the actual amounts are minute when spread out over where it's deployed.
Re: Financial disasaters..
Nuclear cleanups are inherently horribly expensive, although some might argue that these are overdone. However, for good or ill, nuclear power is intimately associated with nuclear weapons, and those of us who lived through the most dangerous parts of the cold war know well why standards have to be so high. Indeed, early civilian reactors were deliberately designed to produce materials for nuclear weapons.
It's also worth noting that Britain's AGR and Magnox reactors are particularly expensive to clean up, even though they would not have suffered the particular problems of the Fukushima plants, because they have very large cores full of irradiated graphite.
It also doesn't help that Sellafield was found to have consistently faked safety records in the past, and early attitudes to dumping radioactive waste into the sea were hardly great examples either. There's also an estimated bill of £70bn to come for the decommissioning of Sellafield. Not exactly loose change.
I'd like to emphasise that I'm not against nuclear power as such, but it presents particular challenges and dangers which will always require enormous care.
How about a financial disaster
The economic cost of the Fukushima failures is estimated at between 250 and 500 billion dollars. If it had been fossil fuel plants that had been inundated, the costs would have been a fraction of this.
So, whilst this may not have been a human disaster, it's an awful long way from being a triumph. It also came dangerously close to a much worse problem. The chief failure is the abysmal underestimation of the risk of tsunamis from off-shore earthquakes. Despite what some people claim, this is not hindsight. There was ample historical evidence of tsunamis of this size hitting the Japanese coast (and, of course, tsunami is, appropriately, a Japanese word).
The Fukushima reactor design had known vulnerabilities, especially where the cooling system fails. There was a clear common-mode failure scenario, as was proven when the tsunami disabled the backup cooling system.
The fact is, that if the facility had been protected properly, or placed in a less vulnerable location, the vast majority of the economic costs would have been avoided. Indeed, the plant could probably have been restored to operation.
It's simply a case where short cuts were made in the placement and construction of the plant in what is one of the most seismically active areas on the planet.
It's far, far from a triumph.
nb. I'd fully agree those worried Californians are just neurotic, but that's a different issue.
Leaving out the "technobabbly blather" as you call it, those of use with actual experience of Sparc T series processors, can attest to the fact that if you have an application which requires fast single-threat processing, then they perform horribly. That's especially true if you try and load up all those threads, as each of those threads will be competing for just two execution units. Also the Sparc T processors were designed to be relatively simple, and lack many of the superscalar features that make processors like Power and Intel's XEONs run very fast in a single threaded mode.
Indeed, for those Solaris-based applications (including many Oracle apps) which require very fast single thread performance, then you have to use M series machines, which are rebadged Fujitsus using Fujitsu's processor design. Unfortunately, excellent as they are, Oracle's M series machines are much more expensive than servers based on Intel's XEON processor.
The T series machines are designed for throughput, not speed. They make very clever use of otherwise "dead" time when a thread is stalled waiting for data from memory, but speedy they are not. Only if you have an application that scales very well over multiple threads will you see good performance. Intel's own implementation of this, hyperthreading, can also be problematical for some applications, although it's rather less ambitious with only two threads. The Fujitsu SPARC64 processor, in later incarnations, had two threads per core, and I've had time-sensitive applications where that feature had to be turned off as response times became erratic once there was a significant statistical chance that two threads were competing for the same core (hyperthreading and similar implementations distort CPU utilisation figures as, once threads start competing, threads in execution start slowing down).
I recall a senior manager who once signed up to buy a lot of these Sun T series machines, having been sold the line that they were incredibly power efficient and could be used to consolidate workloads. Unfortunately, when they were implemented, he found out the hard way that these servers were wholly unsuitable for many of the applications that were in use. Ironically, many of these were based on Oracle databases. Indeed, some of Oracle's own applications perform rather poorly on T series. Of course, if he'd actually cared to ask people who understood these issues before committing, we could have told him, but he believed the particular line he was spun by the sales team.
A case of horses for courses.
Not a new problem...
Surely this is hardly new. When Kenneth Appel and Wolfgang Haken proved the four colour theorem in 1976, it was originally not accepted by mathematicians, as the computer-assisted proof was far too large to be proof-read by human beings. It gained only a sort of grudging acceptance.
However, I always thought the objection was flawed in principle. Firstly, it's possible to consider a computer program (and the hardware on which it runs) as a mathematical entity which is itself subject to inspection and proof-reading. There have even been attempts to produce mathematically provable programs (although that is only possible in principle for some sorts of programs) and even processors - anybody remember VIPER? Of course, any proof-reading of a program, especially one which cannot be mathematically verified, is open to human error. However, that is also true of human proof-reading of mathematical proofs. The possibility is always there of a human error, especially for very complex proofs.
It's therefore true, in principle, that the body of mathematics is open to human errors, albeit not on the scale of complex computer programs. It is therefore a problem of scale, and not principle.
Re: Virgin Media watched subscriptions to its broadband service plummet?
One reason why everybody in an area, signed up or not, gets multicasted with VM sales guff is that it has to be unaddressed in order to avoid the Mailing Preference Service (MPS) restrictions. Companies are not supposed to send sales material via direct mail to any address registered under the MPS (with a few exceptions, such as if you are a customer of the company concerned). By doing a blanket street-by-street delivery, it doesn't count as direct mail and the Royal Mail is under no obligation not to deliver such stuff to MPS registered addresses. Also, the RM will provide blanket street-by-street deliveries at a far lower cost than addressed direct mail.
It would be relatively easy for the government to introduce a scheme for indirect sales material. A simple scheme, whereby householders could fit a small preference notice to their door, would do the job cheaply, and with minimum overhead. My suspicion is no such scheme has been introduced as it's a useful source of income for the Royal Mail.
Of course it's not just the RM that delivers junk mail. My letter box gets filled with such stuff for pizza delivery, taxi firms and the like.
Re: Wow...so intriguing...
Every time helium filled drives come up, somebody has to explain, yet again, that the heads have to "fly" on a cushion of gas. It's simply impossible to design a drive using a vacuum, as head fly height is now measured in nanometres on high-capacity drives.
It is also perfectly possible to store helium. After all, it's kept in cylinders under pressure. What you have to do is create a hermetically sealed environment with no seals around rotating parts.
Re: Is it me being thick or this makes no sense
I've worked in data centres where 128Gbit FC could have been fully used with ease. Not for an individual server (although we had some using well over 32Gbit of throughput), but for connections to storage arrays, tape libraries and VM farms, and for interconnects essentially the faster the better.
Yes, arrays and switching can, of course, generate the required throughput using multiple interfaces, but as anybody who's worked on networking can tell you, dealing with the configuration management, pathing, subtle bottlenecks and sheer cabling complexity is a major pain. It is much, much easier to have a few very large paths into your major switching and storage backbones than it is to deal with hundreds of connections. Of course, nobody should expect your average little server to require this (and you probably wouldn't use FC anyway), but there are plenty of places where it is extremely useful. Of course, the ability of storage, and to a lesser extent switch, manufacturers to fully exploit this capacity is a different matter. I saw plenty of storage arrays where the amount of front end throughput theoretically possible bore no resemblance to actual capability.
Cloud suppliers are just another single point of failure...
You'd be mad to rely on any single cloud supplier for mission critical data. It is, after all, a single point of failure no matter what the quality of their operations or finances. Also, if a cloud supplier has technical problems, then they are not necessarily going to prioritise recovering your data, if they can do it at all.
Of course there's an even bigger structural issue. If a single cloud supplier fails, it could bring down the operations of many companies and could cripple entire industries.
It's a fundamental principle of data management to provide protection at multiple, independent levels. Using a cloud storage provider is no different.
Directors' fiduciary duties
I don't know about the US, but in the UK directors' responsibilities are covered by the 2006 Companies Act. That covers a lot of things, including a requirement to operate within the law and duties about the treatment of employees, avoidance of conflicts of interest and so on. However, one of the most important duties is "to promote the long-term success of the company (rather than the interests of, say, the majority shareholder)". That is, the interests of the shareholders at large. The point in brackets is about the protection of minority shareholders.
A perfectly reasonable interpretation of this condition is to optimise the financial performance of the company in the long term, which would include running the company in a tax-efficient manner. Of course, being so aggressive in pursuing tax minimisation as to attract unwanted and damaging publicity or unwelcome legislative changes could work against that, so it's not an absolutely black-and-white picture. However, the requirement to promote the long-term success of the company would certainly appear to include running the business tax-efficiently as part of being financially successful.
Another point is that the directors are answerable to shareholders (often moderated through non-executive directors) and, as such, if they are appointed with the aim of maximising shareholder return, that is what they have to do. Also bear in mind that these shareholders are often institutions representing the investments of ordinary people in such things as pension schemes.
One thing to note is that the 2006 act rather widened the scope of directors' duties to wider interest groups. It superseded the 1985 act, where the duty to the shareholders was even stronger.
Re: Scary Stuff
ANPR average speed cameras do not rely on number plate recognition alone for proof of speeding. They are backed up with photographic evidence. In that respect they have the same issues as fixed location speed cameras. So misidentification can still happen, even with humans reading plates, not to mention the problem of cloned plates. However, the system is not dependent on the number plate recognition algorithms for convictions, as those alone would certainly not be sufficient evidence.
Also, it doesn't require 100% accuracy, or anything approaching that, to be a deterrent to speeding. It only requires a credible chance of being caught. Even if there was only a 10% chance of being caught per section (which would require only about 30% read accuracy at each end of it), how many people would be willing to take the chance, especially as you might go through hundreds of such sections every year (long roadworks often have several of these sections)?
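A quick sketch of that probability argument; the 30% per-camera read accuracy is the figure mentioned above, everything else is illustrative:

```python
# Why modest per-read accuracy still deters: an average-speed check needs the
# plate read at both ends of the section, and a regular driver passes through
# many such sections in a year.
READ_ACCURACY = 0.30                 # chance one camera reads the plate correctly
PER_SECTION = READ_ACCURACY ** 2     # both ends needed, so roughly 10%

def p_caught_at_least_once(sections_per_year: int) -> float:
    return 1 - (1 - PER_SECTION) ** sections_per_year

for n in (10, 50, 200):
    print(f"{n:>3} sections/year: {p_caught_at_least_once(n):.0%} chance of being caught at least once")
```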
Reply to another fantasist...
The wholesale cost of a copper local loop is fractionally over £7 per month. Quite who you take the retail package from is up to the consumer, and there are deals around which are a lot less than £15. Also, that retail price includes VAT, so that's about £2.50 off the price you quote.
In any event, the wholesale line cost includes all the costs of rates, maintenance of ducts, poles, the building where all this terminates and much else. That's not to mention recovering the costs of capital expenditure and, dare I mention it, some return to the shareholders who bought this stuff off the government. In practice, phone revenue used to cover much of these costs, but no longer.
So the long and short of it is that, whether it's fibre or copper, the fixed plant costs will remain and aren't going to be substantially cheaper than utilising the copper. Indeed, given the high costs of running fibre to premises, the costs would go up. If it was cheap to do, then you'd see other operators queuing up to run in local fibre loops. It isn't.
Re: Energy density
"El Reg is not at fault here though. Whoever wrote that press release either was being willfully misleading or ignorant of the matter at hand."
Well, I'm afraid they are. You raise excellent points that occurred to me too, and it doesn't take a great deal of knowledge to note that Ah is not a unit of energy storage. Regurgitating tracts of a press release without asking obvious questions is surely closer to churnalism than journalism.
Smaller is better...
Of course you can increase capacity by increasing the disk diameter. Indeed, until not so long ago 5.25 inch drives were commonly available. Going back rather further, hard drives could be as large as 14 inches.
However, there are serious problems with doing this. First is the simple issue that you have to spin larger disks slower in order to maintain dimensional stability: larger platters will distort more at high RPM due to the higher peripheral speeds and (therefore) increased forces involved. Go back three decades, and large disks were effectively limited to 3,600rpm, over four times slower than the fastest current enterprise drives, which run at up to 15,000rpm. (Whilst 15K drives are nominally 3.5 inch form factor, in practice the actual platter diameters are smaller to allow this speed to be reached.) There's also a more subtle reason why larger disks have to be spun slower. Quite simply, even if a 5.25" platter could be spun at 15K rpm, bit density would have to be reduced as it wouldn't be possible to write the data fast enough - there's insufficient time to polarise the substrate. Reducing the bit density would reduce the capacity, so would, at least in part, undo much of the value of increasing the form factor.
Large format drives take longer to seek to any given track as there's further to travel, and achieving the required level of accuracy for high track density becomes increasingly more difficult with larger form factors.
Then there is the issue that disk drives are essentially serial storage devices in that they can only read and write one track at a time (albeit with relatively fast access to a given track). It's not possible to put multiple read/write heads into an HDD due to vibration, airflow and other issues (other than old-fashioned fixed head drives, not feasible with modern bit densities). Note this applies even if you increase the form factor in the other dimension by just adding more platters - which also makes the mass to be moved higher.
The upshot of all this is that your multi, multi-TB megadrive is going to be horribly slow due to very high rotational latency and seek times. Even sequential access will be slow in terms of the time it would take to read the entire drive, as you'd be stuck with approximately the same transfer rate as we see currently. In fact tape has an advantage for sequential access, as it is at least possible to add more heads and increase the sequential data rate through parallelism (which is why sequential data rates on modern LTO formats are so much higher than for disk drives - they effectively read/write 16 tracks in parallel).
So the large format drive is dying out for good reason. Already the 3.5 inch drive is in slow decline, as access density (IOPS per TB and total device read time) inevitably worsens with increased areal density. (It's inevitable as capacity increases linearly with areal density, sequential data rates with only the square root of areal density, and IOPS are essentially fixed at a given RPM.)
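To illustrate those scaling relationships, here's a sketch; the baseline drive figures are invented, only the scaling matters:

```python
# Scaling sketch: at fixed RPM and platter size, capacity grows linearly with
# areal density, sequential rate only with its square root, and random IOPS
# hardly at all - so IOPS per TB and full-drive read time both get worse.
BASE_CAPACITY_TB = 1.0     # assumed baseline drive
BASE_SEQ_MB_S = 150.0
BASE_IOPS = 120.0          # set by seek time and rotational latency

for density_factor in (1, 4, 16):
    capacity_tb = BASE_CAPACITY_TB * density_factor
    seq_mb_s = BASE_SEQ_MB_S * density_factor ** 0.5
    iops_per_tb = BASE_IOPS / capacity_tb
    full_read_hours = capacity_tb * 1e6 / seq_mb_s / 3600
    print(f"areal density x{density_factor:>2}: {capacity_tb:>4.0f} TB, {seq_mb_s:.0f} MB/s, "
          f"{iops_per_tb:.0f} IOPS/TB, full read ~{full_read_hours:.1f} h")
```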
Would UPS/Auxiliary Power be cost justified?
A more pertinent point is whether the type of work a super computing facility performs merits the expense of a UPS and auxiliary power supplies. After all, by its very nature, compute-intensive work is essentially batch-based and, arguably, the money that would be spent on UPS, auxiliary power and the required accommodation is better invested in more powerful computers. That way, even if there is the odd outage of a day or two it's likely more work can be carried out. Of course, there is still the staff cost during the outage, but even if they can't find other work to do, the more powerful computers should increase their productivity during normal working.
Note, this is a very different model to mission-critical transactional systems where a business (and customers) can be heavily disrupted, or even stopped by an outage. In that case full UPS and auxiliary power is likely to be justified due to the disruption.
Re: These are not the pixels you are looking for.
"HDMI encoding/decoding in minimal bandwidth"
What on earth are you talking about? HDMI does not have bandwidth issues, provided the cable meets the required specification (if it doesn't, you'll probably get no picture at all or, less likely, some very obvious picture breakup and/or sparklies). It's a digital transmission system which passes through the full video frame data rate. It has absolutely nothing whatsoever to do with the lossy compression and encoding of video data. All that decoding is done before it gets to the HDMI cable.
Incidentally, for what it's worth, as you are an analogue person, analogue colour transmissions were also heavily compressed, by PAL, SECAM or NTSC, due to the way colour was encoded into the video signal. It's one of the myths of analogue folk that somehow their preferred method of transmission contains more information when, in practice, it's the reverse. Just try squeezing an HD analogue video stream into the bandwidth used by a digital HD stream...
Aspirations, but not much else
This deserves a new category beyond mere vapourware. Perhaps aspirationware, or some such term. It's essentially a list of idealised requirements, almost all of which have been around for years. Any number of attempts have been made at abstracted, virtualised storage models, and it's proved to be an extremely difficult nut to crack. Merely chucking into the mix a few bits of technology which may, or may not, be appropriate does not make for a solution to the problem. At the heart of things is the extremely difficult issue of producing a generalised, robust configuration model which addresses the relationship between storage and applications and how it's accessed. At the moment, there are multiple paradigms and levels of abstraction of entities from blocks up to objects, and it's not clear that there's any obvious single model.
When there is a product in the offing, then let us know. As it is, this is not much more than another hyped buzz-phrase.
There are several reasons why a direct source of light is preferable to a diffusely reflected one. One is that the quality of the image is highly dependent on the projection surface. Unless it is specifically designed for projection, with neutral qualities and (especially with projection so far off the normal) absolutely flat, the image quality will be compromised. It is very prone to colour shifts, loss of contrast etc. Indeed, the contrast ratio will be severely impacted by the use of a white wall unless the image is projected in absolute darkness, as the black level will be compromised by reflections of ambient light.
The second is brightness. With diffuse reflection, light will be scattered in all directions and consequently a much lower proportion will reach the viewer's eyes. With "normal" projection onto a screen at right angles, it's possible to engineer the surface of the screen to reflect most light back over a narrower angle and therefore make the image brighter with more contrast. That's simply not possible with a painted wall where the light arrives at such a shallow angle.
Of course, it's possible to overcome some of this by making the projected image much brighter by using ever more powerful sources of illumination, but as those who are used to such systems will know, they are power hungry, get hot and invariably require cooling fans, which emit noise.
There's a reason why direct emission display systems dominate in normal households over projection systems, and that is simply that they are more efficient, brighter, usable in ambient light conditions and don't impose major environmental constraints. Keep this sort of system for those who can afford dedicated home theatre spaces.
Two supervolcano eruptions in the last 75,000 years is not inconsistent with a supervolcano blast every 100,000 years, assuming that such events are random in nature. The wording of the article in this respect is rather unfortunate, in that it rather implies some regularity, and might have been better expressed as a probability of 1 in 100,000 of a supervolcano eruption in any one year.
If such events can be treated as purely random, then it would follow a Poisson distribution and it would therefore be expected that there will be some periods when there are more frequent occurrences than the long run average (and some periods when there will be fewer). Of course, the likelihood is that it's not completely random as, presumably, an eruption in one place might well release pressure elsewhere, but it's even less likely that there is some sort of global clock which dictates the timing of such events at regular intervals.
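For anyone who wants to check it, a minimal sketch of the Poisson arithmetic, taking the 1-in-100,000-per-year rate at face value:

```python
import math

# Poisson check: with an average rate of one eruption per 100,000 years, how
# surprising are two or more eruptions in a 75,000-year window?
RATE_PER_YEAR = 1 / 100_000
WINDOW_YEARS = 75_000
lam = RATE_PER_YEAR * WINDOW_YEARS      # expected eruptions in the window = 0.75

def p_at_least(k: int) -> float:
    """Probability of k or more events for a Poisson process with mean lam."""
    return 1 - sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))

print(f"expected eruptions in {WINDOW_YEARS:,} years: {lam}")
print(f"P(at least 1) = {p_at_least(1):.0%}")   # ~53%
print(f"P(at least 2) = {p_at_least(2):.0%}")   # ~17% - hardly a freak occurrence
```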
Nb. the article also implies that the eruptions are on some sort of timetable, albeit in an apparently jocular (if weak) sense about the time we have left to colonise other planets.
Re: It Will Not Die
"You will have to pry it out of the cold dead fingers of a couple of accountants I know."
The difficulty will surely be establishing if an accountant is alive or dead...
Re: Interest in debt is their own fault
Now I've looked up turnover tax (plenty of references out there) and, as I suspected, VAT is categorised that way. Indeed, in South Africa small companies can elect to pay a turnover tax and make themselves exempt from VAT. From reading further, it is, as I suspected, necessary to deal with the issue of unfairly favouring vertically integrated companies, either by complex systems which identify stages of production or by systems which cascade turnover tax through the chain. VAT does this already, of course.
Here's just one link of many
Note that this is not generally a replacement for corporation tax, but (like VAT) a supplement. I suppose, in theory, the UK government could choose to extend the VAT system and effectively set corporation tax to zero (subject, of course, to being compatible with international treaty obligations, including those implied by our membership of the EU).
So this is for those, like Robert Long 1, who don't seem to recognise that a turnover tax is essentially a cost addition and will be paid by the consumer, as it will essentially hit the same people who pay VAT, as is obvious from any rational analysis. It is backed up by pretty well every informed link I followed.
Re: Interest in debt is their own fault
A turnover tax would also be a consumption tax, just like VAT. It would be an operational cost, and insofar as the market allows, will just get added onto the price charged. Hence the customer will pay.
Corporation tax is different in that it is, in principle at least, indirectly paid by the owners of profitable companies. However, it's never quite that simple. No doubt prices might be a tad lower if corporation tax was abolished (in a competitive market), as companies could operate with lower profit margins, but they'd be driven up again by the imposition of a turnover tax. The impact would be highly variable depending on the nature of the company's business and its profitability. A marginally viable company could be driven under, unless the turnover tax could be recovered in higher prices, whilst a highly profitable company would gain as the turnover tax would be less than it paid in corporation tax (assuming the whole package is fiscally neutral).
The point of all this is that a turnover tax would act very much like VAT, as it's a charge against economic activity, not profit.
Re: Interest in debt is their own fault
In a sense turnover is already taxed - it's called VAT. Of course if you are planning to change the entire corporate tax system such that it's turnover rather than profits which are taxed then it poses some very difficult questions. For instance, what rate would apply? The big supermarkets have huge turnovers, but very low profit margins whilst some companies have much lower turnovers and high profit margins. Put in a turnover tax at the same rate for high turnover, low profit margin companies, and the prices of the commodities they sell (like food) would undoubtedly go up as reductions in corporation tax would not compensate. On the other side, high margin, low turnover companies (like ARM) would pay much less tax. Trying to adapt turnover taxes with different rates for different market sectors would be a nightmare.
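A toy illustration of the point; every figure below is invented purely to show the effect of swapping corporation tax for a flat turnover tax:

```python
# Toy comparison of corporation tax vs a flat turnover tax for two very
# different businesses. All figures are invented for illustration only.
CORP_TAX_RATE = 0.20       # on profit
TURNOVER_TAX_RATE = 0.02   # hypothetical flat 2% on sales

companies = {
    "big supermarket (low margin)": {"turnover_gbp": 50_000e6, "margin": 0.03},
    "chip designer (high margin)":  {"turnover_gbp": 1_000e6,  "margin": 0.40},
}

for name, c in companies.items():
    profit = c["turnover_gbp"] * c["margin"]
    corp_tax = profit * CORP_TAX_RATE
    turnover_tax = c["turnover_gbp"] * TURNOVER_TAX_RATE
    print(f"{name}: corporation tax £{corp_tax/1e6:,.0f}m vs turnover tax £{turnover_tax/1e6:,.0f}m")
```

On those invented numbers the low-margin business pays several times more under a turnover tax, while the high-margin one pays far less, which is the distortion described above.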
Also, turnover taxes would inherently favour vertically integrated companies as it would be nigh on impossible to include internal trading as part of turnover whilst that's clearly visible if one company is buying products and services from another. Of course you could just extend the scope of VAT with multiple rates (as you'd have to include things like food) as that doesn't artificially favour vertically integrated companies (as VAT paid to suppliers can be reclaimed).
That's not to say that countries aren't moving to systems of raising tax on economic activity rather than on profitability, but if you go down that route completely, it will greatly aid profitable companies and punish the less profitable.
And the point is?
Heavens, straight out of the kindergarten school of financial journalism.
There are some corporations and private equity outfits which have engaged in practices which have artificially loaded their subsidiaries with expensive debt (often by loans from parent companies based in tax friendly regimes), but there seems to be no suggestion that this is what has happened with Vodafone. This is not a matter of "sympathy" over debt, it's simply how taxable profits are calculated. If interest on loans is not allowed to be used when calculating taxable profits, then you'll simply see private sector investment virtually disappear.
If there is evidence of artificial tax avoidance schemes, perhaps the journalist who wrote this might provide the evidence. As it is, there is nothing in the article.
Re: What Google really wants?
If the following is true, the possibility of Intel fabbing ARM cores at 14nm is there, albeit that it would require a complete redesign to fit Intel's fabrication rules and techniques. However, that's pretty well universal, albeit maybe not so drastic - ARM cores aren't simple die patterns which are reused by the fabricators. They are all customised to each foundry's design and production principles.
Re: Wot ?
Indeed, and then it's compounded by "keep in mind that this an industry that still relies in part on the PDP-11."
So what's the implication here - that somehow uranium isotopes become obsolete like computer systems? I'd venture that leaving highly enriched uranium in obsolete bombs is what would have been disturbing.
Fighting the Hydra
This rather shows Intel's problem. They aren't fighting one competitor, they are fighting many, all of whom have the freedom to implement ARM in whatever system architecture they see fit. It's something that flows and oozes into each gap in the market. It also guarantees low prices as designers are faced with multiple options from different vendors. It means real competition on both price and innovation.
Re: Why bring up guns?
Indeed. It's hardly as if improvising a working gun using fairly basic workshop machine tools is difficult. There are plenty of plans around for those.
Indeed, there are some parts of the world where "cottage" gunsmiths make a living out of producing AK-47 clones using relatively simple workshop tools. I'm not sure that the introduction of 3D steel printing will make much difference to that market.
Not perfect, but...
I was considering this, but two things put me off. The first is what looks to me like excessive cost. Knock approaching £100 off that and it makes more sense, and is surely closer to the economic rate. The second is the need for special I/O drivers. I know this is because of the inability of SATA to support multiple addressable devices down a single connection (a shame, as PATA allowed two devices), but it strikes me that another approach could have been taken, and that is to present the HDD and SSD as a single, concatenated device. Yes, it would be necessary to partition the device manually (and it might pose some interesting firmware issues - especially for bad block mapping), but I'm sure some software tools could be developed to make this easier. Such a device would not require special device drivers.
Also, why do reviewers fixate on maximum SSD throughput as the important feature for PC performance? For the vast majority of users it is not. What really matters is the vastly increased random I/O rate, which is intimately tied up with the much, much lower latency of SSDs. It's that which gives you the vastly better responsiveness of a machine with the OS (and the most active directories) mapped to SSD - which I find easy to do by mounting file systems into appropriate places in the system disk file hierarchy. It's also low latency which means you don't have to wait 10 minutes when MS decides to dump a massive system update on you. OK, there may be a few power users for whom the ability to copy large files at multi-hundred MB/s rates matters, but they probably need an all-SSD solution anyway.
In fact, the energy source for stars during their main sequence (which the Sun is most certainly on) does not come from gravitational collapse to any significant extent. Indeed, Lord Kelvin, who was of course unaware of the existence of nuclear energy, calculated the age of the Sun based on the energy available from gravitational collapse during its formation and the known power output of the Sun. He came up with a maximum age of 300 million years, with a most likely figure closer to 100 million years. Of course, this is a far shorter time than what we now know to be the current age of the Sun (about 4.5bn years). Gravitational collapse did, of course, provide the power source for "igniting" the nuclear reaction at the Sun's core, but it isn't powering it as such. The Sun is in a position of dynamic equilibrium, with the outward pressure resulting from the nuclear reactions at its core balancing the gravitational force trying to make it collapse.
Thus gravity at this stage of the Sun's life is essentially a containing force, and not something which is powering anything on Earth to any substantial degree. There will be times later in the Sun's life cycle when gravitational collapse will come into play (for instance, to provide the energy source for "igniting" the helium-carbon reaction). However, the Earth will be long, long past the point where solar energy will be powering food plants.
nb. from another reply, the direct charging of batteries using solar cells is vastly more efficient than the path from solar to plants to humans to muscle power to electrical generators. A NiMH cell will typically hold about 4-5,000 joules, or about as much energy as could be stored by lifting 10kg through 40-50 metres. Such a battery can be fully charged in a few hours in tropical sun using a small, cheap (20x20cm) solar panel. I think more can be gained by making battery and solar cell technology cheaper and more robust.
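A quick check of that comparison; the cell capacity is an assumption, roughly an AA/AAA-sized NiMH cell:

```python
# Energy stored in a small NiMH cell vs lifting a 10 kg mass.
CELL_VOLTS = 1.2
CELL_MAH = 1000            # assumed capacity
G = 9.81                   # m/s^2
MASS_KG = 10

cell_joules = CELL_VOLTS * (CELL_MAH / 1000) * 3600
lift_height_m = cell_joules / (MASS_KG * G)
print(f"cell energy: ~{cell_joules:,.0f} J")                               # ~4,300 J
print(f"equivalent to lifting {MASS_KG} kg about {lift_height_m:.0f} m")   # ~44 m
```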
Of course this thing isn't powered by gravity at all. It's a storage system using (gravitational) potential energy. It's actually powered by food calories. It would be interesting to know just what the thermodynamic efficiency of the whole cycle would be. That includes the efficiency of the device, the human body, the production and harvesting of food etc. Not great, I expect.
Of course, the device only stores about 200 joules (about 800 joules per hour running "flat out"), so that's only about 1kcal per hour - tiny amounts, but then so is the power output. I can't help but think a small rechargeable battery and solar cells would be a better approach.
nb. just what sort of device can be sensibly recharged at a tenth of a watt?
Re: That covers "non-endurance related failure"
In most cases, maintenance contracts for enterprise storage include HDD replacements. Unless there's something specific in EMC's contracts for SSDs with respect to total writes, I would expect it to be the same, in which case SSD endurance ought to be covered (especially as replacements can be pre-emptive).
The patent system was never invented for reasons of justice. It was meant to be a pragmatic economic instrument, designed to encourage investment and innovation by granting a period of monopoly in which to realise a return. Once the operation of the patent system acts to stifle innovation and competition, it is not operating correctly. Of course, these days it is often used just as a method of restricting competition, or as something close to extortion in the case of patent trolls.
nb. of course, prior to the patent system, government (or, more often, crown) monopolies were very often granted as a means of rewarding political friends.
Re: You missed the point
No you can't. At least not in my experience. I've already put a hybrid drive in my laptop which uses SSD as a cache (a Momentus XT), and it performs nothing like my desktop, where I have a separate SSD for the system disk and use 2TB HDDs for data. Yes, it's better than a normal laptop drive, but it still struggles with things like system updates. Admittedly, the hybrid drives have nothing like 128GB of flash, but I, personally, would like to manage what is placed where. Cache algorithms are all very well, but they tend to fail where there are periods of intensive activity (such as system updates or software installs) which are relatively infrequent, yet cripplingly slow when they occur.
Also, if you assign all that 128GB in a "traditional" cache arrangement (as the hybrid drives do), then you lose that much space. If you go for a more sophisticated approach to avoid that loss of space (which effectively migrates data seamlessly between slow and fast store), then you have to put up with the overhead and the unpredictability.
Of course, separating your data and OS is essential to this approach, but frankly it's good practice to do so anyway. In fact, when I get a new machine, my first step is always to re-partition to get functional separation. That makes it much, much easier to run robust backup and restore regimes. That separation is good practice whether you are running a massive server or a laptop.
I had been wondering just how long it would take for a product like this to appear. I had even been wondering if it was technically possible to even present two devices down a single SATA connection.
Perfect for my purposes as it roughly echoes how my desktop is configured.
Commercial reality, not technical capability
The primary reason that Intel is so uncompetitive in the mobile space isn't that it lacks the technological capability. It's essentially a commercial issue. Intel is simply far too big to be sustained by a mobile market which has grown on the back of the incredibly cost-effective ARM ecosystem. ARM has a small fraction of the market capitalisation and cost base of Intel, charging relatively small royalties for use of its technology, and this has driven massive growth. Quite simply, even if Intel did manage to duplicate the ARM ecosystem, with all its third party support and flexibility, it would only garner a fraction of the income it derives from the x86 market.
There is a certain irony here. Intel managed to do much the same thing to other processor architectures by offering a more cost-effective option and leaving any survivors with what are niche markets. Intel itself may come to be dependent on just such a niche, albeit lucrative market. The mass market for processors embedded into virtually everything is structurally incapable of supporting a market with the sort of margins that Intel became used to.
The lesson is, once you lock in a high cost-structure company, you are always vulnerable to those that are not. It's no doubt something that the folk at ARM are well aware of.