2006 posts • joined Wednesday 10th June 2009 11:28 GMT
Not just in Oz
I once watched an outsized load - some sort of big pressure vessel - being manoeuvred through the middle of St Albans, here in the home counties of Blighty. It probably wasn't outsized at all by Oz standards, but this is an ancient town centre with listed and very valuable real estate on both sides. Moving awkward buildings out of the way, intentionally or otherwise, was definitely not an option.
It was preceded by a flotilla of police vehicles checking the route, and a parking enforcement vehicle that lifted anything parked where it shouldn't have been out of the way. The big load crawled along at about walking pace, leaving a trail of devastation. OK, I exaggerate. Its driver really was very skilful, even inch-perfect. It left a couple of depressed kerbstones on one corner, and the gas and water companies were rather busy digging up that road in the weeks thereafter.
By the way, somewhere in the Midlands is a graveyard of huge steel cylinders. They're failed precision castings for paper-making machines. If the surface has any imperfection, the casting is a reject, but it's so friggin' huge and strong that no-one knows how to chop them up for recycling. So they are towed off as outsized loads, just as if they were perfect, and unloaded into a field to rust in peace.
Yes, this is definitely the point which matters. These (unelected, unaccountable) banks and credit companies are demonstrating that they can shut down any e-organisation that needs paying customers or supporters any time they choose. Soon, as physical cash fades into history, that'll be any organisation, e- or otherwise. Today, Wikileaks. Tomorrow ... who knows? They already have enough power to take out a small country.
A reminder from the past
First they came for the Communists / And I did not speak out / Because I was not a Communist
Then they came for the Socialists / And I did not speak out / Because I was not a Socialist
Then they came for the trade unionists / And I did not speak out / Because I was not a trade unionist
Then they came for the Jews / And I did not speak out / Because I was not a Jew
Then they came for me / And there was no one left / To speak out for me
In so many ways it's hard to know where to start.
Lithium Batteries contain (surprise) Lithium, not rare earths.
Batteries are electrochemical devices. The electrochemistry of all the Rare Earth elements is almost identical, which is why it took so long for chemists to work out how many such elements there are. NiMH batteries can utilize any rare earth or mixture thereof.
The physical-electronic properties of the rare earths are as spectacularly diverse as their chemistries are similar. For a good red phosphor, only Europium will do. For a magnet, you need Neodymium (or Samarium for a weaker but higher-temperature magnet). Other rare earth elements have other specialist uses.
Imitation is the sincerest form of flattery!
... which is why software patents should be scrapped, and why Jobs should have been happy rather than angry.
The right form of protection for software is copyright, as for books. Copying chunks of code is illegal. Creating new code that offers the end-user a similar look and feel, or which solves the same problem using the same mathematics, should be completely legal. Imitation ....
I'm sure that a combination of an accelerometer and a heat or ionization detector could trigger self-destruct of a satellite only after its re-entry. Making that fail-safe (in other words, reducing the odds of a premature explosion to the infinitesimal) is surely no harder than making a reliable satellite and its (potentially explosive) launcher in the first place. Although you would need some interesting studies to make sure that multi-year exposure to space and space radiation didn't degrade the explosive into instability (studies that I'd wager have already been done by the military, and the results filed "secret" or above).
The problem is economics. This plan would make the satellite quite a few kilogrammes heavier (the weight of a sufficient quantity of explosive). This would increase the launch cost very considerably and/or reduce the available mass for the useful payload. Better to rely on favorable statistics, and pay out compensation to any very unlucky person (or their heirs), at least until death by falling satellite happens for the first time.
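To put numbers on that trade-off, here's a rough expected-value sketch. Every figure in it (launch cost per kilogramme, explosive mass, odds of a hit, payout) is an assumption for illustration, not real data:

```python
# Rough expected-value comparison: self-destruct hardware vs just paying
# compensation. All numbers below are assumed for illustration only.
LAUNCH_COST_PER_KG = 20_000        # assumed $/kg to orbit
EXPLOSIVE_MASS_KG = 10             # assumed mass of the self-destruct package
P_HIT = 1 / 3200                   # assumed chance the re-entry hits anyone
COMPENSATION = 10_000_000          # assumed payout if it does

self_destruct_cost = EXPLOSIVE_MASS_KG * LAUNCH_COST_PER_KG
expected_payout = P_HIT * COMPENSATION

print(f"self-destruct launch cost: ${self_destruct_cost:,}")
print(f"expected compensation:     ${expected_payout:,.0f}")
```

On these (made-up) numbers the extra launch mass costs two orders of magnitude more than the expected payout, which is the economic argument for relying on favourable statistics.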
Using their monopoly while they can
They know that the high price of REEs has stimulated the development of many new mines outside China. Most are still a couple of years from production. At that time the Chinese monopoly will be broken, and the price of REEs will plummet like that of many other commodities that were in short supply in the past.
The Chinese may then switch to trying to flood the market to depress prices and put the newcomers out of business. Methinks that would fail (and if they are smart they won't try). My reasoning is that there must be a lot of latent demand for REEs at a lower price. An IT example is the lead-acid battery in UPSs, with an annoying 3-ish year life. If NiMH batteries didn't cost 3x more, they'd use NiMH instead for an effectively infinite UPS battery life (and lighter weight as well).
Rare earths - despite the name - aren't particularly rare. It's just that until the 21st century, there wasn't much demand for them, and the Chinese built themselves a position as near-monopoly supplier (based on foresight, cheap labour, and a willingness to inflict severe environmental damage in the vicinity of their mines which wouldn't be allowed anywhere in the West).
Linux kernel bugs
Don't be too complacent. Kernel vulnerabilities have existed in the past that would allow a vulnerable kernel to be root-kitted without a human doing anything ill-advised as root. Other such vulnerabilities almost certainly exist at present. A smart black-hat will scour the code for one, and when he finds it, keep quiet about it while targeting it for root-kit delivery.
Trying to deal with possible infection from inside a compromised operating system -- any system -- is a bad idea. Offline scanning, booted off trusted read-only media, is the way to go. There is just one problem with this ... absent write-protect switches on hard drives, the offline scanner itself becomes a perfect vector for malware distribution, if it can be compromised.
We can't win. Two-plus billion years of evolution has been playing the same games, and the parasites always come out on top.
One of Linux's strengths is actually the same as the one that higher organisms have come up with - diversity, rather than a monoculture of identical clones. The logical next step will be building kernels and root-mode code from source through some sort of compile-time randomizer, so that every installation has a different memory footprint, despite performing identical high-level functions.
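A toy sketch of that idea, with Python standing in for a build system: shuffle the link order with a per-installation seed, and every install gets a different memory layout for the same functions, so a canned exploit that hard-codes addresses works on one machine but not another. The function names and seeds are stand-ins, of course:

```python
# Toy illustration of per-installation layout randomisation at build time.
import random

FUNCTIONS = ["sys_open", "sys_read", "sys_write", "sys_close"]  # stand-ins

def build_kernel_image(seed):
    """Build a fake 'kernel image': a map of function name -> address."""
    rng = random.Random(seed)      # seed stands in for a per-install key
    layout = FUNCTIONS[:]
    rng.shuffle(layout)            # randomise the link order
    return {name: i * 0x1000 for i, name in enumerate(layout)}

# Two "installations" with different seeds perform identical high-level
# functions but likely place them at different offsets.
image_a = build_kernel_image(seed=1)
image_b = build_kernel_image(seed=2)
print(image_a)
print(image_b)
```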
Forgive me, but I thought that OLED display screens were printed circuitry, at a much higher density than this? Hundreds of thousands of polymer light emitters and thousands (at least) of transistors addressing them.
And even if it is preinstalled ...
Even if it's pre-installed, the first thing any corporate customer will do is boot into his corporate image install environment, and blow away whatever crap came pre-installed to the disk. That's the only way of being sure that the system doesn't have random malware, backdoors, and extra unknown security weaknesses pre-installed. (It also frequently makes the system boot five minutes faster. Time is money!)
And if your requirement is to have a boot menu so that the user can choose between Windows and Linux?
As I'm reading it, secure boot enabled - Linux cannot boot. Secure boot disabled - Windows 8 cannot boot. Utility of such a system in my environment: zero.
And of course, the last thing one wants to do is to leave the BIOS itself unprotected, for the user to poke at all the other settings, boot unapproved media, etc. etc.
If it's going to be straightforward for a Linux sysadmin to generate the appropriate certificates from his Linux system (including a custom-patched, modified kernel) and install these certificates into the system BIOS so that Linux can secure-boot, then that's (just about) OK.
I'm also thinking there's a danger that even if this is do-able, it'll take ten minutes per PC, which it won't be possible to automate like the rest of the deployment process because the only way to interact with a BIOS is by prodding its keyboard. Ten minutes times 200 PCs equals most of a man-week.
To a first approximation: Probability of someone being hit equals (area of one human being) times (human population of planet) divided by (surface area of planet Earth) times (number of fragments of the satellite expected to reach ground)
One should really use the population and area of that part of the planet over which the satellite flies. Since the excluded part of the planet is mostly polar, and fewer people live near the poles, this raises the probability somewhat above the simple calculation above.
I'll leave the choice of human area as an exercise for the reader. Plan (standing), Plan (sitting), or front elevation? Or an average of them? Then you may want to think about the considerable ground-relative horizontal velocity component of the fragments, and the probability of a random human being in the shelter of a wall, tree, mountain, or other object able to bring the fragment to a stand-still before it hits them.
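Plugging illustrative numbers into the first-approximation formula above (the human plan area and fragment count are pure assumptions, chosen as plausible round figures):

```python
# Back-of-envelope estimate of the chance that a de-orbiting satellite
# hits somebody. All inputs are illustrative assumptions, not real data.
EARTH_SURFACE_M2 = 5.1e14     # total surface area of the Earth, m^2
POPULATION = 7e9              # rough world population
HUMAN_AREA_M2 = 0.7           # assumed plan area of one person
N_FRAGMENTS = 30              # assumed fragments surviving re-entry

p_hit = HUMAN_AREA_M2 * POPULATION / EARTH_SURFACE_M2 * N_FRAGMENTS
print(f"P(someone is hit) = 1 in {1 / p_hit:,.0f}")

# And the chance it is *you* specifically:
p_you = p_hit / POPULATION
print(f"P(you are hit)    = 1 in {1 / p_you:.0e}")
```

With these assumptions the answer lands in the low thousands-to-one, the same ballpark as the published 1-in-2000-ish estimates.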
Common-mode manufacturing failures
Oops. That's both WD and the unfortunately named Seagate hit by the same natural disaster.
Anyone know where HGST do their assembling? Hoping it's not Thailand. Hoping that the answer for Samsung is Korea.
Last thing like this I can remember was the time a factory in Japan making ultrapure (radioisotope-free) resin for encapsulating DRAM chips caught fire. DRAM manufacturers thought that they had second- and third- sources, but those sources were subcontracting manufacture back to that same company in Japan! RAM prices spiked skywards for a few months.
As the tech gets higher, the risk of this sort of common-mode failure also gets higher.
I just checked the maths - found (to my surprise) that the 1 in 2000 chance was a perfectly reasonable estimate. (The chance of you personally getting hit is of course about seven billion times smaller.)
Which begs the question: given the large number of meteorites that reach the earth's surface every year, why is it so rare for anyone to be struck by one? (Maybe, we spend most of our time under roofs, and most meteors reaching the ground are small enough and slow enough not to penetrate a roof). Anyone know for sure?
Two Tribes ...
One can group these UIs into two. What I regard as "classic" window managers that make good use of a 1920x1200 screen or multiple screens, and (to be kind) "tablet" window managers that work better on a device that fits in your pocket. Unfortunately, fans of the latter seem hell-bent on forcing the tablet interface on those of us with a big monitor (or in some cases, two or even four of them).
"Classic": Windows NT, Windows 2000, Windows XP, Windows 7, Gnome 2, XFCE. The Linux ones have workspaces, which makes them N times better in my book (N = number of workspaces you use).
"Tablet": Windows 8 (judged from above), iPhone, iPad (OK, these two *are* tablets!), Gnome 3, Unity.
I've missed out the various versions of KDE and Macintosh UI on purpose, they seem to straddle the divide somewhat (I don't much like either, but I'd take them if the only other choice was "tablet").
In the Linux world, Red Hat Enterprise 6 still uses Gnome 2, so that's another five years minimum of support for it. CentOS and Scientific Linux are free-beer clones. I'll be very surprised if a Gnome-2 fork doesn't emerge soon that can be installed as yet another alternative window manager on Fedora and Ubuntu, leading to peace between the "classic" and "tablet" fans in the Linux world.
You don't need a shield against the radiation, which wouldn't be at a lethal level, and in any case nearly half the planet's surface would be screened against the gamma radiation by thousands of miles of rock. You do need a shield against the subsequent global ecological collapse caused by (a) the destruction of the ozone layer and (b) the conversion of a significant amount of the Earth's atmosphere into NO2 and the subsequent nitric acid rain.
A working (ha!) sealed Biosphere II with acid-resistant exterior and UV-blocking windows would suffice, provided it was on the right side of the planet when the big flash happened.
Maybe not quite so simples
I think you'd need a hub at the centre as well, for the communications dish that needs to stay pointed at Earth. That dish would have to be moved to the side when you re-join the three sections for orbital manoeuvring. There would be some weight penalty for the cable, and some fuel cost in generating and then shedding the angular momentum each time you re-join the modules into one.
I doubt any of these is more than an engineering challenge, especially if the G-force needed for keeping astronauts healthy is significantly less than 1G.
The biggest problem would be what to do in a solar storm. A rotating capsule would make it impossible to keep the rest of the supplies between the astronauts and the sun as shielding. They could re-join into a single craft and orient it pointed at the sun with the crew quarters in the shadow - but there's that fuel cost every time you have to do it.
That's not the why of fruit!
The fruit is intended to tempt an animal to eat it, seeds and all. The seed survives its trip through the animal's digestive system, and is deposited elsewhere, along with a nice little lump of manure. Plants particularly appreciate omnivores ... the manure of an omnivore will be richer in nitrogen.
Tomato seeds survive not just the human digestive tract but the metropolitan sewerage system, and cause serious problems when they arrive and germinate in the filter beds.
I've always wondered what sort of animal it is that eats a mango complete with the massive pip. Water buffalo?
Lots of plants spread themselves by extending tendrils or roots which then develop into another plant. Some even detach the clone and let it float away down a river. If there isn't a plant which grows clonelets with barbs to hook onto a passing animal's hair, I'll be very surprised.
But these offspring are all clones, and if that was the only form of reproduction, the parasites would catch up and wipe out the species. Plants, like animals, need sexual reproduction to shuffle their genes and keep ahead of the parasites.
No-one has mentioned hypervisors yet.
If you can boot a hypervisor, then you can run Linux, Windows, whatever under it. If you can't, then you are cut off from a lot of technologies that I expect will break out of the datacentre onto the desktop, as network bandwidth and hard disk sizes increase.
To take just one example: if you want to secure your data in a corporate environment, you want the hard disk behind locked doors in the datacentre or a data-safe-closet. Given a Gbit or faster network, that's easy. Boot a hypervisor on the desktop, then boot the disk in the datacentre across the network.
Perhaps the BIOS of the future ought to BE a hypervisor? Just as long as it's open to all client O/Ses, of course.
If only ....
"most people have wised up to crooks running unsolicited "security scans" that turn up a multitude of bogus problems on their machines."
If only they had.
If only they just turned up bogus problems, rather than actively creating very real ones!
I'd expect it to have completely crap SVGA 2D on-chip graphics. So not so much "not available" as "not suitable for". (Of course if it has enough PCIe lanes you could integrate it with a third-party graphics chip, but you'd probably lose most of the power advantage by doing that.)
BIOS write-protect jumper or switch, seconded. Best design would be like the reset switch on desktops - you'd have to hold it down while you powered on the system to enable BIOS flashing, and you couldn't accidentally leave it enabled.
Another insanity is trying to run anti-virus software within a potentially infected and subverted operating system. The right approach would be to boot off a DVD-ROM, download up-to-date virus signatures from the vendor and then scan the disks. Since the on-disk operating system is not active, there is nowhere for a rootkit to hide (except maybe in the BIOS, hence the need for mechanical protection).
Samizdat - "subversive" literature hand-typed and passed on and re-typed, chain-letter style, played a big part in bringing down the former Soviet Union.
What chance that pirate sites can ever be suppressed, or even inconvenienced?
When did you last have to replace a CPU or a DIMM?
Silicon chips are amazingly reliable. I see a DIMM that failed in service once or twice a year (8 or 16 chips per DIMM, 2 or 4 DIMMs per PC, about 400 PCs). And I suspect most of those failures are in the solder joints onto the PCB, or in the connector.
I've seen a failed CPU twice in twenty-plus years. (Maybe a few of the old boxes that went straight to the scrap-heap were CPU failures rather than MoBo failures, but either way they'd lasted well into obsolescence).
At this level of reliability, a stack of 100 will still be acceptably reliable. Possibly, more so than the same 100 chips soldered onto a board (which you don't repair anyway in most cases).
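For small per-chip failure probabilities, the failure probability of the stack is essentially the sum of the parts'. A quick sketch, with an assumed (not measured) per-chip rate:

```python
# Sketch: how reliable is a stack of 100 chips if each chip is very reliable?
# p is an assumed per-chip annual failure probability, for illustration only.
p = 1e-6                       # assume one failure per million chip-years

# The stack fails if ANY of its 100 chips fails.
p_stack = 1 - (1 - p) ** 100
print(f"stack annual failure probability = {p_stack:.2e}")

# For small p this is essentially 100 * p: failure risk scales roughly
# linearly with the number of series elements, so very reliable parts
# can be stacked deep and still give an acceptably reliable whole.
```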
Seagate drives aren't crap. I have quite a number of them (ST3100525AS mostly, 1TB and 1.5TB). No particular problems. I've experienced more failed WD drives, though probably not a greater percentage, and I don't look after enough drives to have statistically reliable numbers.
One Seagate drive started reporting an increasing number of errors through SMART (Reallocate Pending). It was in a RAID array so I swapped it out to be safe. Seagate's online warranty process was the best I've ever used, and I had a replacement drive in days.
ALL drive manufacturers have at times shipped batches of problem drives. If you bought many with near-consecutive serial numbers (or many same-spec PCs with near-consecutive serial numbers which amounts to the same thing) then you may experience a high failure rate that does not generalize to other customers whose drives were manufactured a month earlier or later. It's usually not the drive manufacturer's fault that they contain a faulty component.
ALL hard drives that you buy are in some sense prototypes. By the time they've been proved reliable in service (say five years) they are also obsolescent. Accelerated ageing tests can only get a manufacturer so far. Occasionally, the envelope may be pushed too far, and again all manufacturers have occasionally shipped a drive model with less than the hoped-for reliability.
The worst experience I ever had was with a batch of Samsung 40GB drives that went from working to brick in the blink of an eye (4-5 years back). Despite this, I gave Samsung the benefit of the doubt, and recent Spinpoints have been fine. Most hard drive problems I've experienced haven't been so suddenly terminal. They've shown signs of distress (I/O errors, or SMART reallocations) and I've managed to rescue all data off the failing drive.
Google are the only organisation I'm aware of that has published drive failure statistics for statistically significant numbers of drives. They said that they could find no evidence that any of the major manufacturers made drives that were significantly more or less reliable than the others. Their problems were with batches, not with manufacturers.
Whenever I'm constructing a RAID-1 (mirror) I always pair drives from two different manufacturers, to minimize the risk of common-mode failure (a bad batch of drives).
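A crude sketch of why mixed-vendor mirrors help. The failure probabilities are assumed for illustration, and rebuild windows are ignored, so this overstates both risks but preserves the comparison:

```python
# Why mix manufacturers in a RAID-1 pair? If one batch turns out bad,
# you'd rather not have BOTH mirror halves come from it.
# Annual failure probabilities below are assumptions, not measurements.
p_good = 0.02    # assumed typical drive
p_bad = 0.20     # assumed drive from a bad batch

# Same-batch pair: a bad batch hits both drives at once.
same_batch = p_bad * p_bad       # both halves fail => data loss
# Mixed-vendor pair: at most one half comes from the bad batch.
mixed = p_bad * p_good

print(f"both-fail risk, same bad batch: {same_batch:.4f}")
print(f"both-fail risk, mixed vendors:  {mixed:.4f}")
```

The real calculation would condition on the second failure landing inside the rebuild window of the first, but the ordering of the two risks is the same.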
This sounds pathetic even by merkin standards!
That's basically what the Merkin judge said in legalese, so make that extra-pathetic.
Could happen to anyone.
Windows, Mac fans should ask themselves ... if the same thing happened to the kernel repository inside the corporate HQ of your favorite company, would you ever get to hear about it?
More likely, the company would keep the exploit a secret. For similar reasons, you don't know what procedures are in place to detect such an exploit and recover from it. What you do know is that the number of developers eyeballing the code is way smaller. You also know that there are documented cases of the company being informed about a security-related issue, and choosing to do nothing about it for months or even years until the issue leads to real-world exploitation with malicious intent.
Still feeling smug, looking through your security-by-obscurity glasses?
BTW the greater risk by far is not corruption of the code base by penetrating the repository, but corruption of the code base by corrupting a contributor. Submission of a well-concealed backdoor in a legitimate patch. Again something that could happen to anyone, but probably easier to do to a closed-source project with relatively few people having access to the code than to an open-source project with many times more developers.
It's all a rip-off
Personally I question the sanity of anyone who thinks that an item of clothing that costs say £4 in generic form from some far-East sweatshop, becomes worth twice, ten times or a hundred times more if it has some bloody label prominently attached to it.
Anyone who thinks otherwise is free to part company with as much of their cash as it takes. Just bear in mind that the more the label says it cost, the more I think "you are a moron" when I see it!
Black and White?
Is that a monochrome picture? If it's in colour, I'm surprised that the Earth doesn't look more blue-green-ish when you average it into a few pixels.
What's the point?
This'll be unpopular, but I have to ask what's the point of the ISS?
It wasn't a waste of money doing it. We've learned quite a lot from the process. But as far as I can tell, one of those things is that there's no particular point left in keeping a few men in an orbiting laboratory.
I'd suggest that the money should be spent developing robotics and telepresence (Waldo-onics?), so we can still service useful things like orbital telescopes, without having to lug a human life-support system into orbit.
Sad, but it looks true to me.
type 1a supernova
One way of looking at it, is that a big enough diamond isn't around forever, or even for very long.
re: replies really don't need a title
This one doesn't really need a text
Jupiter is a very dark brown dwarf
Jupiter emits more radiation than it receives from the sun. The excess heat is generally attributed to slow gravitational contraction (Kelvin-Helmholtz heating) rather than fusion; even deuterium fusion needs roughly thirteen Jupiter masses to ignite. Much of Jupiter's interior is believed to be hydrogen in its theoretically predicted high-pressure metallic form.
The Earth also emits more heat than it receives. In this case we have good reason to believe that the sources are radioactivity and tidal friction, and possibly also ongoing crystallisation of the Earth's solid inner core from its liquid outer core.
For the Earth, the excess heat may have been the difference between a living planet and a snowball, in the early days when the sun was somewhat cooler and the moon was a lot closer.
Can't remember what the appropriate statistics are for N experiments with a binary outcome (success | fail) but my gut feeling is that the above is insufficient data to prove (within 2 standard deviations) that the shuttle was significantly more reliable.
Anyone care to supply the maths?
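One standard approach is a binomial (Wilson score) confidence interval on each failure rate: if the two intervals overlap, the data don't demonstrate a difference at roughly two standard deviations. The flight and failure counts below are illustrative stand-ins, not the real launch records:

```python
import math

def wilson_ci(failures, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial failure rate."""
    p = failures / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Illustrative numbers only: 2 losses in 135 flights of one launcher
# vs an assumed 5 losses in 100 flights of another.
lo_a, hi_a = wilson_ci(2, 135)
lo_b, hi_b = wilson_ci(5, 100)
print(f"launcher A failure rate 95% CI: {lo_a:.3f}..{hi_a:.3f}")
print(f"launcher B failure rate 95% CI: {lo_b:.3f}..{hi_b:.3f}")
# Overlapping intervals => no significant difference at ~2 sigma,
# which matches the gut feeling above: the sample sizes are too small.
```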
We don't know much about what was on the surface of the earth before large parts of it got covered with liquid water, but there's no evidence that life can exist without water.
All terrestrial life still extant shares a common basic biochemistry, with features such as RNA coding for proteins built from a common set of amino acids, ATP energy transport, lipid membranes, and an aqueous support medium. There must have been simpler life-systems leading up to this one (think scaffolding), but we have no evidence of what they might have been. I think it extremely unlikely that, whatever they were, they did not require liquid water to function. Water is a lowest common requirement for all the more complex subsystems.
I'm guessing that complex organisms find it hard or impossible to evolve in the atmospheres of gas giants or cool brown dwarfs. So they might remain at the single-celled stage "forever" until the brown dwarf no longer provides liquid water. Or until some exceptionally unlikely event happens, and multicellular or even intelligent life arises in the dark cold tail-end of a dying universe.
BTW if you envisage galaxies as having been "mined-out" by interstellar-scale intelligences, then think of a brown dwarf ejected from its galaxy and drifting forever alone and undetectable through one of the voids in intergalactic space. That would, in fact, be a more stable environment than one still in a chaotic orbit around the centre of a galaxy.
The most complex form of life on the planet is surely some sort of insect. Butterfly: Egg, caterpillar, chrysalis ... complete dissolution of the caterpillar to a sort of living soup, and re-birth as a butterfly. Or spider-hunting solitary wasp. Somewhere in the egg is a program which allows it to hunt and paralyze spiders without becoming prey, dig a burrow, install the spider, lay an egg. I wish someone could tell me where and how.
Natural Uranium not a risk
There's no risk attached to natural Uranium, above that of it being a somewhat toxic heavy metal like (say) Lead, and somewhat radioactive like Thorium (of which gas-lamp mantles are still made!). Uranium is pretty useless stuff outside of the nuclear industry, although armour-piercing bullets and shells are often made of depleted uranium (what's left over when the enriched stuff is made) because it's nearly as dense as gold. Refining Uranium refers to getting rid of impurities and creating pure Uranium oxide "yellowcake" or metal. Enrichment is the worry.
To make an A-bomb or nuclear fuel out of natural Uranium you have to concentrate the U235 isotope, which requires a huge investment in a centrifuge plant and associated technologies. A lesser degree of concentration creates LEU - low-enriched uranium - which can fuel a reactor. A much greater degree of enrichment is needed to create HEU that can be turned into a bomb. There's no cheap or simple process that can perform this enrichment (thank God), so fears of nuclear terrorism revolve around theft of existing nuclear weapons or bits thereof.
Same reason not many chip companies have fabs.
It's the cost of establishing the facility to make LEU. Cheaper to buy it in from countries that have already built an enrichment plant, as well as less likely to cause your country to be perceived as a threat.
Threats aside, it's the same with high-tech chips using the latest 28nm and 20nm technologies. A fab costs many billions. Debugging a process likewise. It's cheaper and less risky to design your silicon and let one of a few fab operators make it for you, unless your volumes will be enormous.
Not so bad ...
Relying on one monopoly supplier of anything is a bad idea.
However, if there are a number of countries with LEU-production facilities, then it becomes much less of a risk. What chance of simultaneously alienating all of them to the point that they'll *all* renounce their treaty and contractual obligations? And if a state is worried that this might happen, well, what conclusion should one draw from them thinking that way?
Uranium is quite a common metal
There's a lot of Uranium out there. I doubt there's any country that couldn't feasibly mine Uranium within its own borders if it wanted or needed to. Any nation that's not land-locked could also extract it from seawater. No-one does that because it's cheaper to mine it, and many countries don't mine it because it's cheaper to buy it.
Unfortunately, a uranium-fuelled reactor inevitably creates Plutonium as a by-product.
Fortunately, it creates not just the Pu isotope that can be made into bombs, but other isotopes of Pu that are more radioactive and less fissile. Pu chemically separated from a reactor isn't good for making nuclear bombs.
MOX made purely from dismantled warheads could have Pu-239 separated from it. Therefore, before giving it to suspect nations, MOX fuel would need to be partially used in a reactor or include a blend of mixed-isotope Pu separated from used reactor fuel. Either way the fuel would be rather "hot" and dangerous to transport. Best to burn up the unwanted cold-war warheads ourselves, and offer only LEU to states wanting to build reactors.
Regardless, I'd much rather see warheads turned into reactor fuel, than see them sitting around in storage, waiting to be turned back into warheads.
Last refuge of life?
Interesting to note that such a "failed star" might become the last refuge of life in a dying universe, tens or hundreds of billions of years from now. Like the sun, they stay warm by nuclear fusion, but at such a low rate that they'll probably be the last places left where liquid water (and therefore life as we know it) can exist.
They ought to be shipped disabled!
A phone ought to be shipped disabled. The retailer could enter its IMEI into an activation database upon sale. At first network connect it would check its own status (sold | stolen) and take appropriate action if stolen. Something like waiting a few days for activation clearance to propagate, and then locking down its own firmware so as to require a return-to-factory reset, or even irreparably burning itself out.
If taken outside the UK it would never be able to check its activation status, so it would never work.
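A minimal sketch of how that activation check might work. The database, status values, grace period, and IMEI are all hypothetical, standing in for whatever the carriers would actually build:

```python
# Sketch of the ship-disabled scheme described above. Everything here
# (database, states, grace period, sample IMEI) is hypothetical.
from enum import Enum

class Status(Enum):
    UNSOLD = "unsold"
    SOLD = "sold"
    STOLEN = "stolen"

ACTIVATION_DB = {}   # IMEI -> Status; stands in for the carrier database

def register_sale(imei):
    ACTIVATION_DB[imei] = Status.SOLD

def report_stolen(imei):
    ACTIVATION_DB[imei] = Status.STOLEN

def first_connect(imei, days_waiting=0, grace_days=3):
    """What the handset does on first network connection."""
    status = ACTIVATION_DB.get(imei, Status.UNSOLD)
    if status is Status.SOLD:
        return "activate"
    if status is Status.STOLEN:
        return "lock firmware"        # or irreparably burn itself out
    # Not in the database yet: wait a few days for sale clearance to
    # propagate, then assume the worst.
    return "retry later" if days_waiting < grace_days else "lock firmware"

register_sale("356938035643809")                 # hypothetical IMEI
print(first_connect("356938035643809"))          # activate
report_stolen("356938035643809")
print(first_connect("356938035643809"))          # lock firmware
```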
Burnt bridges, again
If anyone is criticising them for trying something new, they shouldn't be.
What they deserve to be flamed for, is telling the world that the completely different and (IMO) horrible new thing is merely a new release of the old thing, and implementing it in such a way that one cannot run the new thing and the old thing on the same system.
I can install KDE, XFCE, and one release of Gnome on the same system until I decide which I like the best. On a multi-user system I can install them all and let each user make that choice for him/herself.
But I cannot install Gnome 2 and Gnome 3 on the same system. The Gnome folks have deemed the change to be an upgrade, not a new product, even though in effect they've taken away a car and given us a boat. Once a distribution embraces Gnome 3, Gnome 2 becomes un-usable thereon. Which is why I now hate Gnome developers.
Hard to clean?
It *ought* to be simplicity itself. SATA has boasted a secure-erase command for quite some time, which logically writes the whole drive to zero in one command. Because of the nature of the beast, it takes the drive firmware quite a while to actually do that ... but it can't be interrupted by pulling the power; the erasure resumes as soon as power comes back. One does have to take it on trust that the drive firmware really does also zero the bad sectors that have been reallocated (or not care, since reading them is beyond most ordinary hackers).
So implement that command in SSD firmware (or use it, if it's already there). I'd expect drive-erase on flash memory to be an awful lot faster: flash is erased a whole block at a time, which is far quicker than programming the same block page by page.
Count me out
There's a huge difference between putting most of my transactions (by value) on my credit card (I do) and not carrying any cash at all. I find it very hard to believe that anyone really doesn't.
How do you do an office whip-around with plastic? How do you give a bit of pocket-money to a child? How do kids do sponsored things for charity in future?
And then there's the elephant in the room. The audit trail. Dumb cash doesn't have one, so there's nothing for journalists, private investigators, jealous spouses, bunny-boiling exes, etc. to make trouble with. Say hello to e-cash without any alternative, and (whatever they say), say goodbye to financial privacy.
Yes, I know it's possible to do trail-less e-cash. I'm just quite certain that a few years after the old sort of cash disappears, the new sort will be modified to introduce a trail, to "protect the children" or "help fight terrorism" or any of the other usual suspects.
The problem is burnt bridges.
What annoyed me most, and presumably what's annoyed Linus most, is that the Gnome developers created Gnome 3 in such a way that you can't install both Gnome 2 and Gnome 3 on the same (probably multi-user) system. They presented the utterly different UI of Gnome 3 as if it were a mere new release of Gnome 2. It was exactly the same as what Microsoft did with Vista - except Microsoft had a financial reason for shafting experienced XP users, whereas with Gnome it must have been something like arrogance and pride.
So yes, I really hope that someone goes back to a good release of Gnome 2 and renames all the entities that clash with Gnome 3, creating a "Gnome classic" fork which can then be maintained indefinitely, while the Gnome 3 developers carry on pleasuring themselves. "Maintained" shouldn't be a lot of work, because we don't want any radical changes. In particular, any UI changes should be small and incremental, so that whatever way you are used to working carries on working.
As for the big picture, one of the strengths of Linux is multiple UIs that you can install and choose between at login. I'm sure Linus isn't flaming the Gnome people because they've created something utterly different, which he hated. He's flaming them because they smashed and burned the old UI that he liked while they were doing it.
I'll use XFCE if I have to - at least it has workspaces - but I'll miss Gnome 2 if it does die rather than getting reincarnated.