One of that alleged minority is Linus.
However, in the Linux world you can pick and choose between many desktop UIs. My grouse about Gnome 3 is not that it sucks. Rather, that they developed a radically different UI without forking the source, so that if you have Gnome 3 installed on your system you can't also install or use Gnome 2.
Anyway, for now XFCE will have to do, or the Scientific Linux 6 distro (which will be sticking with Gnome 2 for many years to come). Then there's Cinnamon, which looks promising (a classic UI running atop the Gnome 3 libraries). This is what Gnome devs should have done in the first place, and only then started playing around with tablet-style UIs layered on the same library, as optional alternatives to the classic UI.
They fixed that ages ago
If you don't know about starting a shutdown, you do know about the on/off button on the front of the box. Once it became a soft button rather than a power switch the problem was solved. Briefly press the button, and Windows (or Linux) shuts down cleanly.
UI not OS
This thread is about the UI not about the OS. The XP UI remains pretty much the same as the 2000 UI and not so very different from the NT4 UI. In my book that's no bad thing - it's become second nature over the years, so I can think about what I'm using the computer *for* rather than having to think about *how* to use it. There may well be a host of incremental improvements that I don't recall, which again is how it should be. None were big enough to annoy me!
And now they've thrown all that away and given us "7" which is gratuitously different. And now they are planning to make 8 gratuitously different squared, so I won't even have a chance to make friends with "7" (which I am starting to think isn't actually all bad if you've got enough Gbytes of RAM and a modern graphics chip).
Yes, XP the OS was pretty dodgy in the early days, and maybe the 7 kernel is better than XP. It's just extremely annoying that Microsoft links changing the kernel to forcing a new UI down our throats!
Invisible start button?
But it's got a "hot corner" that works like a start button? How does this differ from a start button, apart from you have to know it's there because you can't see it?
Maybe you don't have to click it either? Oh great, so there's a corner of the screen that you have to be careful to avoid with the mouse because all sorts of annoying or even dangerous things happen if you don't.
Then someone will work out how to paint it with an icon, give it an option to require a click to un-mask the stuff underneath, and they'll call it an app and sell it to you.
Unlikely, not impossible
One of the advantages for the electricity company is remote reading, so obviously they emit RF.
I'd expect the level to be much less than a mobile phone, and the scope for causing interference to be extremely slight unless your PC / router / radio / whatever is less than a metre from the meter (for example, fitted on the other side of the wall behind your desk).
It features in Vinge's "The Peace War", published 1984. No rifling, low muzzle velocity. His design was/is still a bit in the future: it was "fire and forget", using image processing to stay locked on what it was fired at.
If you care one jot about security you shouldn't stick with any version of Fedora. They maintain it (as in security updates) for less than a year after the next version ships. Most of the other choices are maintained rather longer, but I don't think any promise the nine years or so you get with RHEL6.
It also means you can avoid Gnome 3 until 2020, by which time it might even be usable.
No, I don't mean a nuke attack.
I mean, will some crim work out that a bit of kit involving a large HV capacitor, a spark gap and an antenna might generate enough of an EMP to knock out a smart meter at point-blank range, leaving no evidence of why it died? Consider these scenarios:
1. It cuts off the power to the premises, whose owner then collects compensation, or
2. it doesn't cut off power, and the company has no way of knowing how much power has been used.
Search FB & TW
You mean, use Google? (There can't be any bigger data miner, not even in .secret.gov)
Wait and watch
I fear this will contain various chunks of patented bogosity, so that people end up storing their data in something that cannot be read except by buying Microsoft products, or perhaps by paying Microsoft hefty technology licensing fees.
For the same reason I very much doubt you'll ever be able to use ext4, btrfs, etc. on a Microsoft server.
The physics of chemistry is universal. There's no other element that has even half the versatility for building complex molecules that Carbon does. Single bonds, double bonds, rings with de-localised electrons, stereochemistry where one carbon is bonded to four different groups or the cis/trans arrangement around a double bond or the sequence of groups around a ring ....
Alien biochemistry may look nothing like ours, but I'll eat my hat if it's not based on Carbon, with Hydrogen, Nitrogen and Oxygen in important supporting roles. I also think it highly likely it'll have proteins (loosely defined as long chains of amino acids, CO-NH bonded).
Radio broadcast obsolete
6. Radio of a type capable of detection across interstellar distances becomes obsolete within a couple of centuries of its invention.
We know that's true because it's happening all around us. Analogue broadcasting (with easy-to-detect carrier frequencies) is being turned off. Digital signals are far more efficient, meaning far harder to detect at interstellar distances. Also for how long will radio broadcasting exist at all? The Internet is starting to replace broadcast in developed countries.
I doubt that 22nd-century Earth will be radio-detectable from tens of light-years out, let alone hundreds or thousands. (I'm assuming technological progress continues ... but it's also true if we blow ourselves up).
No hinge would be even better
It should be a two-part design. Tablet. Keyboard+mouse. Bluetooth link. Software that adjusts the user interface so that you don't have to touch the vertical screen to accomplish anything when the keyboard/mouse unit is present. In "enjoyment mode" just take the tablet and leave the keyboard behind.
OK, it's probably trivial to make it so that the two parts clip together for transport and for keyboard use on the move - you might call this "netbook mode". But at home there would be a plastic stand for mounting the tablet vertically above the level of one's desk, and the keyboard would go wherever the user finds most comfortable.
Perfectly good science
What's wrong with observations that fit within the current best theory? Bad science would be if nobody bothered to look.
Of course it's much more exciting on the occasions when observation shows something not accounted for by the theory, but the scientific merit of the observation is no different.
Geostationary - a bit slow.
Geostationary orbit is fine for high-latency emergency communications between meatware systems. It's also OK for transferring gigabytes of data using large buffers.
But it takes light about a quarter of a second to get up to geostationary orbit and back again (i.e. a network packet round-trip time of half a second). Skynet will need low-altitude satellites, or more probably will just hijack our fibre-optic cables here on the ground.
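The sums are easy enough to check. A quick sketch in Python, assuming the textbook GEO altitude of ~35,786 km and ignoring ground-station geometry:

    # Light-speed delay to geostationary orbit (assumed altitude, vacuum c)
    C = 299_792_458        # m/s
    GEO_ALT = 35_786_000   # m above the equator

    one_way = GEO_ALT / C          # ground -> satellite
    up_and_back = 2 * one_way      # ground -> satellite -> ground
    rtt = 2 * up_and_back          # request up/down plus the reply up/down

    print(f"one way:     {one_way * 1000:.0f} ms")      # ~119 ms
    print(f"up and back: {up_and_back * 1000:.0f} ms")  # ~239 ms
    print(f"packet RTT:  {rtt * 1000:.0f} ms")          # ~477 ms

That's the half-second round trip, and that's before any processing delays are added.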
As for aliens, they must have FTL technology or they wouldn't be here!
Diagonal versus width
The size of a screen is its visible diagonal measurement.
Not sure whether there is a standard for the "size" of a laptop, but I've always assumed they were referring to its width.
Therefore it's quite easy for a screen to be "bigger" than the machine it's in. The maximum "size" of a screen is given by Pythagoras: sqrt( width^2 + height^2 ). Then subtract the parts of the diagonal that aren't display.
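For example (hypothetical netbook dimensions, and a guessed bezel allowance):

    import math

    # Hypothetical case dimensions in inches
    case_width, case_height = 10.0, 7.0
    bezel_loss = 1.5   # assumed diagonal lost to bezel and case edges

    max_diagonal = math.hypot(case_width, case_height)  # Pythagoras
    screen_size = max_diagonal - bezel_loss

    print(f"maximum diagonal: {max_diagonal:.1f} in")  # ~12.2 in
    print(f"screen 'size':    {screen_size:.1f} in")   # ~10.7 in

So a machine 10in wide can quite happily carry a screen "bigger" than 10in.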
> "I used to like Maxtor until seagate got their grubby hands on them".
Maxtor was the company whose drives gave me so much grief that I came to avoid them, albeit with a full understanding that the probabilities favoured the hypothesis that I'd just been unlucky. My worry was the inverse of yours!
Seriously, you'd need an absolutely huge number of drives to start drawing conclusions about manufacturers in general. What's clear is that every manufacturer has on occasion shipped bad batches, caused usually by a component supplier shipping substandard components. Also that some models turn out to age less well than the manufacturer hoped. This is inevitable, given that the tech moves so fast that by the time a drive is known to be reliable in service, it's also obsolete.
This is a marketing wheel that keeps turning. First someone else cuts the warranty to save money and everyone else follows. Then someone increases the warranty to drive sales and everyone else follows.
I wonder what fraction of warranty failures are actually returned. Most people know that time is money. I don't bother for an 18-month-old drive if a replacement costs £30 or less. Well, not for the first one. I'd stick it on a shelf for the remaining warranty time just in case it was the first of a batch of lemons. And buy the replacement from a different manufacturer because I'm not going to let anyone explicitly profit from shipping a dud.
Odd they don't seem to want people buying green drives, though. Wonder why? I'd have thought slow and cool would mean more reliable!
Maybe not so boring
Can it be gamma rays *all* the way up? At some energy you'd have a photon with more energy than the entire observable universe. Maybe the entire universe, if it's finite. This isn't pedantry at all, if the whole universe did start very small in the big bang.
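For a feel of the numbers - very rough, order-of-magnitude assumptions throughout:

    # How energetic would one photon carrying the observable universe's
    # ordinary mass-energy have to be? (All figures are rough estimates.)
    h = 6.626e-34        # Planck constant, J*s
    c = 3.0e8            # speed of light, m/s
    m_universe = 1.5e53  # kg, a commonly quoted ballpark for ordinary matter

    E = m_universe * c**2    # mass-energy, ~1.3e70 J
    f = E / h                # required photon frequency, ~2e103 Hz
    wavelength = c / f       # ~1.5e-95 m

    print(f"E = {E:.1e} J, f = {f:.1e} Hz, wavelength = {wavelength:.1e} m")

That wavelength is some sixty orders of magnitude below the Planck length (~1.6e-35 m), which is presumably where "all the way up" stops making sense.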
Some (exotic, unproved) theories suggest that photons of sufficient energy may travel slower than low-energy light. Slower-than-light photons, sort of a counterpart to faster-than-light neutrinos? Observations of the next supernova may throw some, er, light on this.
It's named after Satyendra Nath Bose. http://en.wikipedia.org/wiki/Satyendra_Nath_Bose
Unfortunately that doesn't tell us how he pronounced his name. Anyone know the right pronunciation? I'll guess at halfway between English "s" and "z"!
Lots of unexplained observations remain.
We already know lots of stuff outside the standard model.
Gravity, for starters. One of the most stunning numbers in physics is ~10^39, the ratio of the strength of the electromagnetic force to the gravitational one. You don't really understand anything about matter until you appreciate that the electromagnetic force between an electron and a proton is that many times stronger than the gravitational force between the same.
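The separation cancels (both forces fall off as 1/r^2), so it's a one-liner to check - a sketch with rounded textbook constants:

    # Ratio of electric to gravitational attraction, electron vs proton
    k  = 8.988e9     # Coulomb constant, N*m^2/C^2
    G  = 6.674e-11   # gravitational constant, N*m^2/kg^2
    e  = 1.602e-19   # elementary charge, C
    me = 9.109e-31   # electron mass, kg
    mp = 1.673e-27   # proton mass, kg

    ratio = (k * e**2) / (G * me * mp)   # r^2 cancels top and bottom
    print(f"F_electric / F_gravity = {ratio:.2e}")   # ~2.3e39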
Then there's dark matter, inferred from the observed nature of galaxies and the impossibility of binding them gravitationally if what we see is all there is. (There are good theoretical reasons why it can't be lumps of non-luminous ordinary matter, sized somewhere between marbles and Jupiters).
And "dark energy", needed to explain why the universe appears to be not only expanding but accelerating.
We might get lucky and spot a particle or three of dark matter in the CERN detectors. (Oddly that may be more likely while the LHC is down than when it's up). Or careful observations of the things we know it can manipulate may help pin down a better theory that in turn will guide our observations. (It's easier to find a needle in a haystack if you come to suspect it may be magnetic).
And if those neutrinos really are going faster than light, it's time to tear up all the theories and start again.
That's an impressive spec. How long before the standard desktop PC drive becomes 2.5in and the 3.5in format dies out for anything smaller than a Terabyte or two? Lower power consumption, as fast or faster, saves space, intrinsically greater mechanical shock resistance, (probably) quieter. It will depend on the drive cost, but the 2.5in format has an intrinsic advantage because it's lighter: less raw material and lower shipping cost. The days when 2.5in always meant smaller capacity or slower than 3.5in (or both) are clearly gone.
Yes, I know this one is an enterprise drive. I'm looking ahead ... anything in the Enterprise market today will be on the desktop of tomorrow.
The survey is meaningless unless it told the punters how much extra they'd have to pay for a "Smart TV". I imagine that if the don't-knows were left to guess, they'd think a few hundred quid (cost of a Smartphone compared to a twenty-quid dumb one).
The other big issue is the UI. If the "Smart" bit means that you have to work through heaps of menus before you can get to watch BBC1, that'll lose you a lot of sales to houses where one of the TV-viewers is a granny. Translation: can't read the labels on the remote control at all, has difficulty reading text on the screen, and doesn't understand the concept of navigating menus with four directions and an OK button.
In a sane world there would be an internet standard for TV control functions exported to the LAN, and then you could write or buy an applet of your own choice to steer any compliant TV, PVR etc. from your tablet or smartphone. It might even accelerate sales of tablets, if one of those apps was "Grannyvision". OK, not that name, but a UI designed for the visually-impaired who aren't too good at learning new tricks at their age. Won't ever happen.
The law is an ass?
"plaintiffs have failed to allege facts or articulate a theory on which Sony may be held liable"
Fraud? Theft? Selling you a product based on an advertised feature which they then take away is one or the other or both, in the everyday non-legal meanings of the words. But I guess their 666-page shrink-wrapped license that you can't read until after you've accepted it has served its intended purpose.
Anyway, just keep boycotting everything Sony, and spread the word.
"such as would be mandated by any government seeking to block citizens’ access to a particular class of Website, whether over concerns about decency or piracy"
Is that really what they are proposing? Just how hard would it be to pass on the IP addresses of blocked sites down other channels (including old-fashioned paper samizdat)?
I'm starting to wonder, if DNS didn't already exist would we bother to invent it? We manage to use the telephone system fine with just numeric addresses. No global distributed directory, just various un-coordinated look-up tables with various degrees of localisation, specialisation and automation.
There's also the tantalizing possibility that they are offering us a glimpse of new physics, in particular that the force of gravity may not be quite as Newton and Einstein thought.
They're off-course by a tiny but measurable and unexplained amount.
There are of course various hypotheses about why this is, other than new physics. We can't tell, because they weren't built as fundamental physics experiments. Perhaps some new probes should be sent after them, that are designed to probe the nature of gravity.
I'm wondering how long it will be before the entire virtual memory mapping of a modern desktop CPU is completely subvertible by code loaded into its GPU. Or indeed, whether that's happened already. And whether soon, those programs will be loadable by any entity that has network access to the system.
Hoping that's a black-helicopter speculation. Fearing otherwise.
How illegal is this in the EU?
I'm thinking that if it is confirmed that this software has been installed on any phones sold in the EU, then it will be curtains for Carrier IQ and serious financial damage to any network that supplied a phone with it installed. The EU is hot on privacy ... and right now it needs every last cent it can lay its hands on.
It'll run Linux and Python. In my opinion if someone doesn't take to Python like a duck to water, s/he'll never be a programmer. What more could you want?
(Java? Perl? Ruby? Occam? Fortran? C++? Not my choices for a first language, but those as well, and more).
$$$ because they are thinking globally. Outside the EC most people probably don't have the faintest idea what a £ sign denotes (assuming it even displays right!), and little more idea what the GBPUSD or GBP/local exchange rate is. Global trade works in US$. Even the BBC World Service converts prices to US$.
Why "Still ..."? They've thought to boot off a socketed SD card rather than anything soldered down or hard to reload. So download the image if you didn't make a backup, find a PC, borrow a camera-card USB thingy if you don't have one, and re-load the boot media.
Or if you mean burn out the hardware with a 13.6 volts 1000A down a digital IO line or something like that, it's part of the learning process. At £35 one can afford a mistake or two. (What do beam tetrodes cost these days ... never mind).
Point me at a better alternative
Show me a developer board with decent display capability and audio, a network port, and USB, that runs a well-known Linux distribution, out of the box and usably fast, for less than twice this price.
The plug-computing things like pogo-plug and guru-plug missed the mark by not having any AV or accessible user-hackable IO lines. (They also ran too hot).
I expect if Raspberry Pi shows any sign of taking off, there will be "me too" products soon enough!
NB a hardware hacker's board
It's a board with a collection of digital I/O lines that you can interface homebrew electronics to. A PC is notably lacking in this department. You used to be able to bodge some things onto a parallel port, but now PCs don't have a parallel port. Otherwise you choose between USB (too complex) or buying a digital I/O card (not cheap, especially if it's a notebook PC rather than a desktop).
Schools may or may not be keen to let their students do real programming, but they surely draw the line short of attacking a PC's motherboard with a soldering iron!
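For the curious, here's a minimal sketch of what driving those I/O lines looks like, assuming the RPi.GPIO Python library and an LED (plus series resistor) wired to GPIO 18 - the pin choice and wiring are just for illustration:

    import time
    import RPi.GPIO as GPIO

    LED_PIN = 18                   # hypothetical pin; any free GPIO line will do

    GPIO.setmode(GPIO.BCM)         # use Broadcom pin numbering
    GPIO.setup(LED_PIN, GPIO.OUT)

    try:
        while True:                # blink once a second, forever
            GPIO.output(LED_PIN, GPIO.HIGH)
            time.sleep(0.5)
            GPIO.output(LED_PIN, GPIO.LOW)
            time.sleep(0.5)
    finally:
        GPIO.cleanup()             # release the pins on exit

Try doing that on a modern PC without buying extra hardware.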
Hadn't thought of that. And you've got hardware I/O lines to play around with for low-latency non-packetized inter-board communications. And you won't need air-con, even if you are playing around with many tens of them costing no more than a workstation-class PC.
Fun with DIY computer architecture research? Or massively-multi-monitor VR walls?
Ever heard of a box?
I did visit the Raspberry Pi website. A box to put it in is planned, though it won't be available on day one. Then it'll be a neat little (tiny) box with a USB hub dangling from it. Ugly.
I'll also be paying more for the hub separately than it would cost integrated into the Pi. But that's all argued over above. If the added cost of unpopulated PC board real-estate really is enough to deter sales of the cheaper one-USB version, then ugly it'll have to be. (It's not much real-estate. USB hub chips are tiny and since the single/quad USB socket is either/or, they'd share most of the PCB location). Anyway - since it's less than the cost of a couple of pints, the money (unlike the aesthetics) isn't an issue for me.
I did get it ...
I understood the point about getting the cost down. In my dreams it would have boasted SATA, but I fully understand why it doesn't. I'm not an electronics engineer so I may be completely wrong about the following, but I'll fill in my thought process.
A USB hub chip is obviously a stand-alone chip plus support components connected to a computer system by four wires. Usually it's packaged in a little bit of plastic and the four wires are the PC-to-hub USB A-B cable. Therefore I would have thought it possible to design this system's circuit board so it can be populated either with a single USB connector, or with a USB hub chip and a quad USB connector. If I'm right, the extra cost of the cheaper version would be the price of the extra PCB area needed for the (missing) quad connector and USB hub chip. It's a bad idea if (a) I'm wrong and it's not possible, or (b) the extra cost of a small bit of unpopulated PCB on the cheaper version is enough to put off a significant percentage of customers. I doubt the latter.
Anyone with more detailed knowledge of board design or economics care to make a well-informed comment?
Assuming the USB is USB2 (someone confirm, please) then you could plug in a USB Ethernet adapter for the second port. Of course that does add £15 - £25.
It's also a shame that there are so few USB ports. Keyboard, mouse, disk, and memory stick or DVD drive: I'd have liked 4. One could plug in a USB(2?) hub, but that's ugly compared to integrating the hub chip and, say, 4 connectors. How much would that cost? Less, surely, than the £5 you can buy such a hub for. I'm sure it wouldn't complicate the board, just (maybe) make it a little larger.
Even so, if I can buy one as shown for under £30, I probably will. If it winds up at £100, I probably won't. I'll probably hack it into / onto a monitor, tapping the monitor's PSU which can almost certainly supply an extra watt.
Apparent equilibrium is rarely simple.
>All it can show are the limitations of the mathematical model and data collection...
There are two sorts of equilibrium conditions. The intrinsic, such as a passenger jet in straight level flight. And the dynamic, such as a modern fighter jet doing the same.
The former has swept-back wings with wingtips higher than their point of attachment to the fuselage (when the plane is level, obviously). A moment's thought shows this means that if it gets tilted down on one side, the wing on that side generates more lift than the other one and the plane levels itself. If it rotates about the vertical axis, the wing on the outside of the turn generates more drag and straight-line flight is resumed. This is intrinsic stability.
A modern fighter jet is built the other way: aerodynamically unstable (some experimental designs even sweep the wings forwards and downwards). It couldn't fly were it not dynamically stabilized by the millisecond, a computer cancelling out every random twitch before its intrinsic instability can send it tumbling out of the sky. A complex feedback system, instead of a simple one.
Why? The passenger jet's stability means that it cannot change direction so easily or quickly. A change of direction means fighting its own tendency to maintain straight level flight. A fighter's manoeuvrability is a matter of life and death, and its intrinsic instability gives it a huge edge.
I've deliberately used a simple and man-made example. Nature has evolved the same solution in many contexts and at many levels. Nature's apparent equilibrium conditions are almost never static. Instead they are chaotic with feedback, or "edge of chaos" self-optimisation to the prevailing conditions prone to avalanche-like changes if those conditions change.
Try to model a system on the edge of chaos and you won't get deterministic answers. You may get statistically significant ones if you run for long enough, or do enough runs with different small random perturbations to each. You can forecast tomorrow's weather. You may be able to forecast the climate of the next century given changes to the atmosphere (or maybe not). What you can't ever do is forecast the weather a year hence.
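A toy illustration of that sensitivity, using the logistic map (nothing to do with climate models, just the simplest chaotic system to hand):

    # Logistic map at r = 4 (chaotic regime): two runs that start one part
    # in a billion apart have diverged completely within a few dozen steps.
    r = 4.0
    a, b = 0.400000000, 0.400000001

    for step in range(1, 51):
        a, b = r * a * (1 - a), r * b * (1 - b)
        if step % 10 == 0:
            print(f"step {step:2d}: a = {a:.6f}  b = {b:.6f}  diff = {abs(a - b):.2e}")

The difference roughly doubles every step, so the billionth-part error reaches order one by about step 30.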
Model the behaviour of the component parts even slightly incorrectly and your overall results may be dramatically "wrong". But in another sense, they aren't. The real world is unstable. Fighter pilots know not to push the envelope to the limit, until there is hostile incoming. We'd do well not to push ecosystems to the (unknown!) limit of apparent stability, because we don't and can't know what happens when the apparent equilibrium is punctured. And the system starts with the biochemistry and genetics of bacteria, and mixes levels with a vengeance, like the sort of code you absolutely never want to be in charge of maintaining.
Like a brain does.
Thistledown / Eon
And then as your technology advances, you make the inside bigger than the outside. MUCH bigger.
I wonder if Greg Bear got that idea from Dr. Who? It's like a Tardis, but ... bigger.
Sub-c starships that work will be the size of coke cans (or smaller), packed with virtual reality and (maybe) hardware for bootstrapping real reality from a comet after arrival. Note: one subjective year in VR equalling tens of thousands of real years is an advantage - the journey is fast enough not to get boring.
A variant is dumb machinery that bootstraps a big computing substrate and interstellar comms upon arrival. Then you just beam your virtual self to another star system at the speed of light, while also going nowhere at all. You can build a galactic-scale civilisation this way. I like the idea that you press a "send" button and then have to look out of the (virtual) window to find out if you are the copy that stayed at home or the copy that travelled thousands of lightyears.
Then there's the generation ship the size of a moon, but I have grave doubts that one can keep its inhabitants from destroying their ship in interstellar space, given that the journey will be hundreds of generations long. And if they can invent tech that will look after itself over millennia, they'll surely hit on the VR trick before building a generation ship?
Super-c starships can be anything you care to imagine, because they're about as likely to exist as time machines (for much the same reasons).
The best story about time travel (impossibility of) used the idea that the universe has to intervene actively to prevent causality violation and its own unravelling. One side in a war has worked this out, and tempts the other side to get itself destroyed by trying to build a time machine. Too late, they find out that the universe's idea of minimal local intervention is to make the sun explode prematurely. It's a big universe.
Size isn't everything
In "Accelerando" Charlie Stross makes a good case that if we ever build a starship in a c-limited universe, it'll probably be the size of a coke tin.
The ship from the Transcend in Vinge's "A Fire Upon the Deep", which was causing the locals to be fretful and slightly fearful, was five feet long. Relay was destroyed a few weeks later, along with the Old One (a remarkable twelve years old) and his ship, by something much smaller, nastier, and older by billions of years.
(The sequel has just been published).
A better idea
Agreed. I don't think they've quite thought this through.
Suppose something completely trashes our civilisation. Might be nuclear war, a small asteroid, a bad mistake in a bio lab, blind chance / mother nature / God (take your pick) unleashing the mother of all human flu viruses. Human genetic diversity is unusually low - we went through a genetic pinch-point ~80k years ago. We're far more vulnerable to plagues than (say) chimps.
After the few survivors have climbed back to agriculture, iron and cities after 500, 5,000 or 50,000 years, they'll have lost just about everything we know today and will have to discover it all the hard way. Paper won't last that long exposed to the elements, nor will digital media (and anyway they won't have readers). How can we help, just in case it happens? At little cost, unless a very pessimistic billionaire gets involved?
Ceramics are far less destructible. We have the technology to print a lot of small detail onto plates etc. for little extra cost compared to humdrum decoration. (We're using it to print marble-effect tiles, no two alike!) What about an encyclopaedia ceramica? Sell as many as possible as novelty items, and in the far future if someone digs them up they will still be legible. Plates and tiles might even last long enough to reach the next intelligent species, if this one goes extinct.
Trouble is, they'll also have lost English as a written language, and cracking an extinct language without a Rosetta stone is a very tall order. So another interesting part of the project is to devise a code that will let them read the stuff we're trying to leave them with.
Hopefully, it'll never be needed. However, it has the makings of a fun project - what to write and how to code it, to save the future 10,000 years of pre-electrical civilisation? What to attach it to, that will be produced in large numbers and be noticeable to the naked eye after that sort of time buried? And so on.
As you say you CAN do it with XP!
"Yes, you can do this with Windows XP" and in fact I have many of my user's systems set up this way, must have been lucky not using the buggy softwares you refer to. Anyway, all MS needed to do was the necessary stuff to make it work better. Introduce 7-style escalation, if you really think that's a better way to accomplish this sort of thing than the (safer) sudo / "run as administrator" approach. Either way, no need to tear everything up and start again. Especially not at the user interface level. Maybe inside the kernel where only systems programmers tread.
The rest of your comments seem to be making my point as well. XP was indeed quite poor when it first shipped. It has been improved *incrementally*, as you acknowledge. Facilities have been added without tearing up the framework. If it's true that the XP kernel was an irredeemable mess by SP3, then by all means rewrite it. I explicitly pointed out that it's possible to ship a new kernel under the hood, without the user even having to notice the change. That's incremental improvement.
I'd have welcomed Windows 7 if it had preserved interfaces from XP, changing only the ones which had to be killed because the XP way was an irredeemable security weakness or suchlike. Instead they ripped everything up and started again. Loads of costs in lost productivity while you replace your old skill-set with a new skill-set (for everyone from secretaries to sysadmins). For example, many of the applets in the 7 control panel are recognisably the same ones that ran under XP, so why move them all around under a brand new control panel UI so you have to learn to find them all over again? WHY???
If they did the same thing to a car, they'd have put the accelerator where the brake is at present, and replaced the wheel with a joystick. It might even be a better way to design a car, if cars didn't already exist. Car manufacturers will never make such changes, because with cars that would kill people, not just cause lost productivity and unnecessary annoyance.
BTW I don't regard this specifically as a Microsoft problem. I'm equally scathing about whoever was responsible for Gnome 3 on Linux (which destroys Gnome 2 - you can install as many different window managers as you like on Linux, EXCEPT not Gnome 2 and Gnome 3 on the same system).
Algae are so easy compared to plants.
Algae are easy to grow. Any pail of water left in sunlight will turn green. Add some nutrients (sewage, say) and they'll turn it into thick green slime. Most people have created a pail of green slime at some point in their lives, and it's a shame that the usual reaction has been "Yeuuuch!"
Algal photosynthesis is much more efficient than that of multicellular plant life - there's no need for them to maintain the structural integrity and internal transportation networks which hold a plant back. Algae are the plant equivalent of bacteria: exponential growth until limited by nutrient supply.
Algae don't need fresh or clean water. They grow in the sea, in raw sewage, in toxic effluent. (They'll actually eat sewage and produce much cleaner water.) This means you can put algae farms in a desert (most sunlight) and culture them in brackish or salt or toxic groundwater that's not a valuable and depleting resource like freshwater groundwater is.
If you can bio-engineer them to go 20x faster given 20x CO2 supply, you can get to burn coal twice. Once in a power station, the second time around in a car (as algal bio-diesel). I'd also observe there are lots of natural CO2 and carbonated water sources bubbling into the atmosphere.
Mere selective breeding will get you to algae that produce pretty good diesel fuel when you squash and filter the green soup. (What's left over is nutrient for the next generation of algae).
As you can guess I'm a fan of algae, along with solar panels. Technological civilisation doesn't have to grind to a standstill when the oil runs out, and we don't have to continue raising atmospheric CO2 for the next century either (with potentially dire and irreversible consequences for the climate).
Try CentOS or Scientific Linux? Everything on the DVD for every minor release, then yum update once you have a good network connection or a local mirror set up.
Certainly an old version of Red Hat Linux (say) will carry on working as well as it did on day one as long as there is hardware that it can run on.
However, five years after the next major version is shipped the security patch stream will dry up, and you'll discover that upgrading rather than rebuilding from RHEL5 to RHEL6 isn't supported. You can try ... most folks who have tried recommend that you don't. I've occasionally longed for a major version upgrade wizard or suchlike, but I can quite see why they don't think it's economically viable to supply and support the same. Too large a phase space to test well in advance.
The real question is what did / does Vista or 7 bring us that we hadn't already got with XP?
Answer #1 - Nothing, but it gets lots more money for Microsoft.
Answer #2 - a less bug-ridden kernel. Maybe. But one can upgrade or completely replace a kernel without trashing the entire user interface. Why do Microsoft keep doing that? (Not just with the O/S, but Office and the other products as well).
Answer #3 - same as #1. They make lots of money out of books, training courses, (re)certification exams ...
Answer #4 - well, at least making all those old PCs obsolete makes sure new bigger faster ones get developed. Good for Intel, gamers, number-crunching scientists, and anyone who runs Linux on PCs that will be thrown away because they can't run Windows 7.
OK, I'm happier now. I'm a number-crunching scientist who runs Linux when he can.
You mean, researching bolts from the blue?
So go for LEDs, which also turn on instantly. They will last very many years and will save you several times the (somewhat high) purchase cost in electricity. Light quality not quite as good as halogen, much better than CFL. Buy "warm white" unless you really like a brighter-than-daylight effect.
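Back-of-envelope sums, with every figure an assumption (plug in your own bulb prices and tariff):

    # Hypothetical like-for-like swap: 50 W halogen vs 6 W LED, 3 h/day use
    halogen_w, led_w = 50, 6
    hours_per_year = 3 * 365
    price_per_kwh = 0.15       # assumed tariff, GBP
    led_cost = 10.00           # assumed purchase price, GBP

    saving_kwh = (halogen_w - led_w) * hours_per_year / 1000
    saving_gbp = saving_kwh * price_per_kwh

    print(f"saves ~{saving_kwh:.0f} kWh/year (~GBP {saving_gbp:.2f}/year)")
    print(f"pays for itself in ~{led_cost / saving_gbp:.1f} years")

On those numbers the bulb pays for itself in well under two years, and a typical 25,000-hour rating works out at over twenty years of that duty.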