2142 posts • joined 10 Jun 2009
Radio broadcast obsolete
6. Radio of a type capable of detection across interstellar distances becomes obsolete within a couple of centuries of its invention.
We know that's true because it's happening all around us. Analogue broadcasting (with easy-to-detect carrier frequencies) is being turned off. Digital signals are far more efficient, meaning far harder to detect at interstellar distances. Also for how long will radio broadcasting exist at all? The Internet is starting to replace broadcast in developed countries.
I doubt that 22nd-century Earth will be radio-detectable from tens of light-years out, let alone hundreds or thousands. (I'm assuming technological progress continues ... but it's also true if we blow ourselves up).
No hinge would be even better
It should be a two-part design. Tablet. Keyboard+mouse. Bluetooth link. Software that adjusts the user interface so that you don't have to touch the vertical screen to accomplish anything when the keyboard/mouse unit is present. In "enjoyment mode" just take the tablet and leave the keyboard behind.
OK, it's probably trivial to make it so that the two parts clip together for transport and for keyboard use on the move - you might call this "netbook mode". But at home there would be a plastic stand for mounting the tablet vertically above the level of one's desk, and put keyboard wherever the user finds most comfortable.
Perfectly good science
What's wrong with observations that fit within the current best theory? Bad science would be if nobody bothered to look.
Of course it's much more exciting on the occasions when observation shows something not accounted for by the theory, but the scientific merit of the observation is no different.
Geostationary - a bit slow.
Geostationary orbit is fine for high-latency emergency communications between meatware systems. It's also OK for transferring gigabytes of data using large buffers.
But it takes light about a quarter of a second to get up to geostationary orbit and back again (i.e. a network packet round-trip time of half a second). Skynet will need low-altitude satellites, or more probably will just hijack our fibre-optic cables here on the ground.
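The quarter-second figure is easy to check. A back-of-the-envelope sketch (the altitude is the standard geostationary figure; real links add processing and queueing delays on top):

```python
# Rough latency over a geostationary satellite link.
C = 299_792_458          # speed of light in vacuum, m/s
GEO_ALTITUDE = 35_786e3  # geostationary altitude above the equator, m

def round_trip_seconds(altitude_m: float = GEO_ALTITUDE) -> float:
    """Up to the satellite and back down: two legs of altitude_m."""
    return 2 * altitude_m / C

# A request/response pair crosses the link twice, so the network
# packet round-trip time is roughly double this again.
print(f"one bounce: {round_trip_seconds():.3f} s")         # ~0.239 s
print(f"request + reply: {2 * round_trip_seconds():.2f} s")  # ~0.48 s
```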
As for aliens, they must have FTL technology or they wouldn't be here!
Diagonal versus width
The size of a screen is its visible diagonal measurement.
Not sure whether there is a standard for the "size" of a laptop, but I've always assumed they were referring to its width.
Therefore it's quite easy for a screen to be "bigger" than the machine it's in. The maximum "size" of a screen is given by Pythagoras: sqrt( width^2 + height^2 ). Then subtract the parts of the diagonal that aren't display.
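As a sketch of that arithmetic (the 13.3in × 9in lid is a made-up example, not any particular laptop):

```python
import math

def diagonal(width: float, height: float) -> float:
    """Pythagorean diagonal of a rectangle (any consistent unit)."""
    return math.hypot(width, height)

# A lid 13.3in wide and 9in tall could hold at most a ~16in diagonal,
# before subtracting bezels and other non-display parts.
print(round(diagonal(13.3, 9.0), 1))  # 16.1
```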
> "I used to like Maxtor until seagate got their grubby hands on them".
Maxtor was the company whose drives gave me so much grief that I came to avoid them, albeit with a full understanding that the probabilities favoured the hypothesis that I'd just been unlucky. My worry was the inverse of yours!
Seriously, you'd need an absolutely huge number of drives to start drawing conclusions about manufacturers in general. What's clear is that every manufacturer has on occasion shipped bad batches, usually caused by a component supplier shipping substandard components. Also that some models turn out to age less well than the manufacturer hoped. This is inevitable, given that the tech moves so fast that by the time a drive is known to be reliable in service, it's also obsolete.
This is a marketing wheel that keeps turning. First someone else cuts the warranty to save money and everyone else follows. Then someone increases the warranty to drive sales and everyone else follows.
I wonder what fraction of warranty failures are actually returned. Most people know that time is money. I don't bother for an 18-month-old drive if a replacement costs £30 or less. Well, not for the first one. I'd stick it on a shelf for the remaining warranty time just in case it was the first of a batch of lemons. And buy the replacement from a different manufacturer because I'm not going to let anyone explicitly profit from shipping a dud.
Odd they don't seem to want people buying green drives, though. Wonder why? I'd have thought slow and cool would mean more reliable!
Maybe not so boring
Can it be gamma rays *all* the way up? At some energy you'd have a photon with more energy than the entire observable universe. Maybe the entire universe, if it's finite. This isn't pedantry at all, if the whole universe did start very small in the big bang.
Some (exotic, unproved) theories suggest that photons of sufficient energy may travel slower than low-energy light. Slower-than-light photons, sort of a counterpart to faster-than-light neutrinos? Observations of the next supernova may throw some, er, light on this.
It's named after Satyendra Nath Bose. http://en.wikipedia.org/wiki/Satyendra_Nath_Bose
Unfortunately that doesn't tell us how he pronounced his name. Anyone know the right pronunciation? I'll guess at halfway between English "s" and "z"!
Lots of unexplained observations remain.
We already know lots of stuff outside the standard model.
Gravity, for a starter. One of the most stunning numbers in physics is 10^39, the ratio of the strength of the electromagnetic force to the gravitational one. You don't really understand anything about matter until you appreciate that the electromagnetic force between an electron and a proton is that many times stronger than the gravitational force between the same.
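For what it's worth, plugging CODATA values into Coulomb's and Newton's force laws gives about 2.3 × 10^39 for the electron-proton pair. The separation cancels out, since both forces fall off as 1/r²:

```python
# Physical constants (CODATA, rounded)
E = 1.602176634e-19      # elementary charge, C
K = 8.9875517873e9       # Coulomb constant 1/(4*pi*eps0), N m^2 C^-2
G = 6.67430e-11          # gravitational constant, N m^2 kg^-2
M_E = 9.1093837015e-31   # electron mass, kg
M_P = 1.67262192369e-27  # proton mass, kg

def em_to_gravity_ratio() -> float:
    """F_coulomb / F_gravity for an electron-proton pair.
    Both forces go as 1/r^2, so the distance r cancels."""
    return (K * E**2) / (G * M_E * M_P)

print(f"{em_to_gravity_ratio():.2e}")  # ~2.27e39
```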
Then there's dark matter, inferred from the observed nature of galaxies and the impossibility of binding them gravitationally if what we see is all there is. (There are good theoretical reasons why it can't be lumps of non-luminous ordinary matter, sized somewhere between marbles and Jupiters).
And "dark energy", needed to explain why the universe appears to be not only expanding but accelerating.
We might get lucky and spot a particle or three of dark matter in the CERN detectors. (Oddly that may be more likely while the LHC is down than when it's up). Or careful observations of the things we know it can manipulate may help pin down a better theory that in turn will guide our observations. (It's easier to find a needle in a haystack if you come to suspect it may be magnetic).
And if those neutrinos really are going faster than light, it's time to tear up all the theories and start again.
That's an impressive spec. How long before the standard desktop PC drive becomes 2.5in and the 3.5in format dies out for anything smaller than a Terabyte or two? Lower power consumption, as fast or faster, saves space, intrinsically greater mechanical shock resistance, (probably) quieter. Will depend on the drive cost, but 2.5 format has an intrinsic advantage because it's lighter, less raw material and lower shipping cost. Days when 2.5in always meant smaller capacity or slower than 3.5in (or both) are clearly gone.
Yes, I know this one is an enterprise drive. I'm looking ahead ... anything in the Enterprise market today will be on the desktop of tomorrow.
The survey is meaningless unless it told the punters how much extra they'd have to pay for a "Smart TV". I imagine that if the don't-knows were left to guess, they'd think a few hundred quid (cost of a Smartphone compared to a twenty-quid dumb one).
The other big issue is the UI. If the "Smart" bit means that you have to work through heaps of menus before you can get to watch BBC1, that'll lose you a lot of sales to houses where one of the TV-viewers is a granny. Translation: can't read the labels on the remote control at all, has difficulty reading text on the screen, and doesn't understand the concept of navigating menus with four directions and an OK button.
In a sane world there would be an internet standard for TV control functions exported to the LAN, and then you could write or buy an applet of your own choice to steer any compliant TV, PVR etc. from your tablet or smartphone. It might even accelerate sales of tablets, if one of those apps was "Grannyvision". OK, not that name, but a UI designed for the visually-impaired who aren't too good at learning new tricks at their age. Won't ever happen.
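Purely as a sketch of the idea — no such standard exists, and the `/control` endpoint and JSON command shape below are invented for illustration — a compliant TV might accept commands as simple as:

```python
# Hypothetical: what the applet side of a LAN-exported TV control
# protocol could look like, assuming a compliant set accepted JSON
# commands POSTed to a well-known /control endpoint.
import json
import urllib.request

def make_command(command: str, **params) -> bytes:
    """Encode one control command as a JSON request body."""
    return json.dumps({"command": command, **params}).encode()

def send_command(tv_host: str, command: str, **params) -> None:
    """POST a command to a (hypothetical) compliant TV on the LAN."""
    req = urllib.request.Request(
        f"http://{tv_host}/control",
        data=make_command(command, **params),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# A "Grannyvision" applet would be little more than a handful of huge
# on-screen buttons, each bound to one call such as:
#   send_command("tv.local", "set_channel", name="BBC1")
```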
The law is an ass?
"plaintiffs have failed to allege facts or articulate a theory on which Sony may be held liable"
Fraud? Theft? Selling you a product based on an advertised feature which they then take away is one or the other or both, in the everyday non-legal meanings of the words. But I guess their 666-page shrink-wrapped license that you can't read until after you've accepted it has served its intended purpose.
Anyway, just keep boycotting everything Sony, and spread the word.
"such as would be mandated by any government seeking to block citizens’ access to a particular class of Website, whether over concerns about decency or piracy"
Is that really what they are proposing? Just how hard would it be to pass on ip addresses of blocked sites down other channels (including old-fashioned paper samizdat)?
I'm starting to wonder, if DNS didn't already exist would we bother to invent it? We manage to use the telephone system fine with just numeric addresses. No global distributed directory, just various un-coordinated look-up tables with various degrees of localisation, specialisation and automation.
There's also the tantalizing possibility that they are offering us a glimpse of new physics, in particular that the force of gravity may not be quite as Newton and Einstein thought.
They're off-course by a tiny but measurable and unexplained amount.
There are of course various hypotheses about why this is, other than new physics. We can't tell, because they weren't built as fundamental physics experiments. Perhaps some new probes should be sent after them, that are designed to probe the nature of gravity.
I'm wondering how long it will be before the entire virtual memory mapping of a modern desktop CPU is completely subvertable by code loaded into its GPU. Or indeed, whether that's happened already. And whether soon, those programs will be loadable by any entity that has network access to the system.
Hoping that's a black-helicopter speculation. Fearing otherwise.
How illegal is this in the EU?
I'm thinking that if it is confirmed that this software has been installed on any phones sold in the EU, then it will be curtains for carrier IQ and serious financial damage to any network that supplied a phone with it installed. The EU is hot on privacy ... and right now it needs every last cent it can lay its hands on.
It'll run Linux and Python. In my opinion if someone doesn't take to Python like a duck to water, s/he'll never be a programmer. What more could you want?
(Java? Perl? Ruby? Occam? Fortran? C++? Not my choices for a first language, but those as well, and more).
$$$ because they are thinking globally. Outside the EC most people probably don't have the faintest idea what a £ sign denotes (assuming it even displays right!), and little more idea what the GBPUSD or GBP/local exchange rate is. Global trade works in US$. Even the BBC World Service converts prices to US$.
Why "Still ..."? They've thought to boot off a socketed SD card rather than anything soldered down or hard to reload. So download the image if you didn't make a backup, find a PC, borrow a camera-card USB thingy if you don't have one, and re-load the boot media.
Or if you mean burn out the hardware with a 13.6 volts 1000A down a digital IO line or something like that, it's part of the learning process. At £35 one can afford a mistake or two. (What do beam tetrodes cost these days ... never mind).
Neat idea, but ...
Are you really telling me that if they offered, you wouldn't take the $100M cheque from Google?
Point me at a better alternative
Show me a developer board with decent display capability and audio, a network port, and USB, that runs a well-known Linux distribution, out of the box and usably fast, for less than twice this price.
The plug-computing things like pogo-plug and guru-plug missed the mark by not having any AV or accessible user-hackable IO lines. (They also ran too hot).
I expect if Raspberry Pi shows any sign of taking off, there will be "me too" products soon enough!
NB a hardware hacker's board
It's a board with a collection of digital I/O lines that you can interface homebrew electronics to. A PC is notably lacking in this department. You used to be able to bodge some things onto a parallel port, but now PCs don't have a parallel port. Otherwise you choose between USB (too complex) or buying a digital I/O card (not cheap, especially if it's a notebook PC rather than a desktop).
Schools may or may not be keen to let their students do real programming, but they surely draw the line short of attacking a PC's motherboard with a soldering iron!
Hadn't thought of that. And you've got hardware I/O lines to play around with for low-latency non-packetized inter-board communications. And you won't need air-con, even if you are playing around with many tens of them costing no more than a workstation-class PC.
Fun with DIY computer architecture research? Or massively-multi-monitor VR walls?
Ever heard of a box?
I did visit the Raspberry Pi web-site. A box to put it in is planned, though it won't be available on day one. Then it'll be a neat little (tiny) box with a USB hub dangling from it. Ugly.
I'll also be paying more for the hub separately than it would cost integrated into the Pi. But that's all argued over above. If the added cost of unpopulated PC board real-estate really is enough to deter sales of the cheaper one-USB version, then ugly it'll have to be. (It's not much real-estate. USB hub chips are tiny and since the single/quad USB socket is either/or, they'd share most of the PCB location). Anyway - since it's less than the cost of a couple of pints, the money (unlike the aesthetics) isn't an issue for me.
I did get it ...
I understood the point about getting the cost down. In my dreams it would have boasted SATA but I fully understand why it doesn't. I'm not an electronics engineer so I may be completely wrong about the following, but I'll fill in my thought process.
A USB hub chip is obviously a stand-alone chip plus support components connected to a computer system by four wires. Usually it's packaged in a little bit of plastic and the four wires are the PC-to-hub USB A-B cable. Therefore I would have thought it possible to design this system's circuit board so it can be populated either with a single USB connector, or with a USB hub chip and a quad USB connector. If I'm right, the extra cost of the cheaper version would be the price of the extra PCB area needed for the (missing) quad connector and USB hub chip. It's a bad idea if (a) I'm wrong and it's not possible, or (b) the extra cost of a small bit of unpopulated PCB on the cheaper version is enough to put off a significant percentage of customers. I doubt the latter.
Anyone with more detailed knowledge of board design or economics care to make a well-informed comment?
Assuming the USB is USB2 (someone confirm, please) then you could plug in a USB Ethernet adapter for the second port. Of course that does add £15 - £25.
It's also a shame that there are so few USB ports. Keyboard, mouse, disk, and memory stick or DVD drive: I'd have liked 4. One could plug in a USB(2?) hub, but that's ugly compared to integrating the hub chip and, say, 4 connectors. How much would that cost? Less, surely, than the £5 you can buy such a hub for. I'm sure it wouldn't complicate the board, just (maybe) make it a little larger.
Even so, if I can buy one as shown for under £30, I probably will. If it winds up at £100, I probably won't. I'll probably hack it into / onto a monitor, tapping the monitor's PSU which can almost certainly supply an extra watt.
Apparent equilibrium is rarely simple.
>All it can show are the limitations of the mathematical model and data collection...
There are two sorts of equilibrium conditions. The intrinsic, such as a passenger jet in straight level flight. And the dynamic, such as a modern fighter jet doing the same.
The former has swept-back wings with wingtips higher than their point of attachment to the fuselage (when the plane is level, obviously). A moment's thought shows this means that if it gets tilted down on one side, the wing on that side generates more lift than the other one and the plane levels itself. If it rotates about the vertical axis, the wing on the outside of the turn generates more drag and straight-line flight is resumed. This is intrinsic stability.
A modern fighter jet is designed without that intrinsic stability (some experimental designs even sweep the wings forward). It couldn't fly, were it not dynamically stabilized by the millisecond, a computer cancelling out every random twitch before its intrinsic instability can send it tumbling out of the sky. A complex feedback system, instead of a simple one.
Why? The passenger jet's stability means that it cannot change direction so easily or quickly. A change of direction means fighting its own tendency to maintain straight level flight. A fighter's manoeuvrability is a matter of life and death, and its intrinsic instability gives it a huge edge.
I've deliberately used a simple and man-made example. Nature has evolved the same solution in many contexts and at many levels. Nature's apparent equilibrium conditions are almost never static. Instead they are chaotic with feedback, or "edge of chaos" self-optimisation to the prevailing conditions prone to avalanche-like changes if those conditions change.
Try to model a system on the edge of chaos and you won't get deterministic answers. You may get statistically significant ones if you run for long enough, or do enough runs with different small random perturbations to each. You can forecast tomorrow's weather. You may be able to forecast the climate of the next century given changes to the atmosphere (or maybe not). What you can't ever do is forecast the weather a year hence.
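The sensitivity to small perturbations is easy to demonstrate with the textbook logistic map, a one-line chaotic system:

```python
def logistic_step(x: float, r: float = 4.0) -> float:
    """One step of the logistic map, fully chaotic at r = 4."""
    return r * x * (1.0 - x)

def max_divergence(x0: float, y0: float, steps: int = 50) -> float:
    """Largest gap between two orbits started a hair apart."""
    x, y, worst = x0, y0, abs(x0 - y0)
    for _ in range(steps):
        x, y = logistic_step(x), logistic_step(y)
        worst = max(worst, abs(x - y))
    return worst

# A one-in-a-million error in the starting state ends up of order 1
# (the whole range of the map) within a few dozen steps.
print(max_divergence(0.3, 0.300001))
```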
Model the behaviour of the component parts even slightly incorrectly and your overall results may be dramatically "wrong". But in another sense, they aren't. The real world is unstable. Fighter pilots know not to push the envelope to the limit, until there is hostile incoming. We'd do well not to push ecosystems to the (unknown!) limit of apparent stability, because we don't and can't know what happens when the apparent equilibrium is punctured. And the system starts with the biochemistry and genetics of bacteria, and mixes levels with a vengeance, like the sort of code you absolutely never want to be in charge of maintaining.
Like a brain does.
Thistledown / Eon
And then as your technology advances, you make the inside bigger than the outside. MUCH bigger.
I wonder if Greg Bear got that idea from Dr. Who? It's like a Tardis, but ... bigger.
Sub-c starships that work will be the size of coke cans (or smaller), packed with virtual reality and (maybe) hardware for bootstrapping real reality from a comet after arrival. Note: one subjective year in VR equalling tens of thousands of real years is an advantage - the journey is fast enough not to get boring.
A variant is dumb machinery that bootstraps a big computing substrate and interstellar comms upon arrival. Then you just beam your virtual self to another star system at the speed of light, while also going nowhere at all. You can build a galactic-scale civilisation this way. I like the idea that you press a "send" button and then have to look out of the (virtual) window to find out if you are the copy that stayed at home or the copy that travelled thousands of lightyears.
Then there's the generation ship the size of a moon, but I have grave doubts that one can keep its inhabitants from destroying their ship in interstellar space, given that the journey will be hundreds of generations long. And if they can invent tech that will look after itself over millennia, they'll surely hit on the VR trick before building a generation ship?
Super-c starships can be anything you care to imagine, because they're about as likely to exist as time machines (for much the same reasons).
The best story about time travel (impossibility of) used the idea that the universe has to intervene actively to prevent causality violation and its own unravelling. One side in a war has worked this out, and tempts the other side to get itself destroyed by trying to build a time machine. Too late, they find out that the universe's idea of minimal local intervention is to make the sun explode prematurely. It's a big universe.
Size isn't everything
In "Accelerando" Charlie Stross makes out a good case that if we ever build a starship in a c-limited universe, it'll probably be the size of a coke tin.
The ship from the Transcend in Vinge's "A fire upon the deep", which was causing the locals to be fretful and slightly fearful, was five feet long. Relay was destroyed a few weeks later, along with the Old One (a remarkable twelve years old) and his ship, by something much smaller, nastier, and older by billions of years.
(The sequel has just been published).
A better idea
Agreed. I don't think they've quite thought this through.
Suppose something completely trashes our civilisation. Might be nuclear war, a small asteroid, a bad mistake in a bio lab, blind chance / mother nature / God (take your pick) unleashing the mother of all human flu viruses. Human genetic diversity is unusually low - we went through a genetic pinch-point ~80k years ago. We're far more vulnerable to plagues than (say) chimps.
After the few survivors have climbed back to agriculture, iron and cities after 500, 5000 or 50,000 years, they'll have lost just about everything we know today and will have to discover it all the hard way. Paper won't last that long exposed to the elements, digital media neither (anyway they won't have readers). How can we help, just in case it happens? At little cost, unless a very pessimistic billionaire gets involved?
Ceramics are far less destructible. We have the technology to print a lot of small detail onto plates etc. for little extra cost compared to humdrum decoration. (We're using it to print marble-effect tiles, no two alike!) What about an encyclopaedia ceramica? Sell as many as possible as novelty items, and in the far future if someone digs them up they will still be legible. Plates and tiles might even last long enough to reach the next intelligent species, if this one goes extinct.
Trouble is, they'll also have lost English as a written language, and cracking an extinct language without a Rosetta stone has never been done. So another interesting part is to devise a code that will let them read the stuff we're trying to leave them with.
Hopefully, it'll never be needed. However, it has the makings of a fun project - what to write and how to code it, to save the future 10,000 years of pre-electrical civilisation? What to attach it to, that will be produced in large numbers and be noticeable by the naked eye after that sort of time buried? And so on.
As you say you CAN do it with XP!
"Yes, you can do this with Windows XP" and in fact I have many of my users' systems set up this way, so I must have been lucky not to use the buggy software you refer to. Anyway, all MS needed to do was the work necessary to make it function better. Introduce 7-style escalation, if you really think that's a better way to accomplish this sort of thing than the (safer) sudo / "run as administrator" approach. Either way, no need to tear everything up and start again. Especially not at the user interface level. Maybe inside the kernel where only systems programmers tread.
The rest of your comments seem to be making my point as well. XP was indeed quite poor when it first shipped. It has been improved *incrementally*, as you acknowledge. Facilities have been added without tearing up the framework. If it's true that the XP kernel was an irredeemable mess by SP3, then by all means rewrite it. I explicitly pointed out that it's possible to ship a new kernel under the hood, without the user even having to notice the change. That's incremental improvement.
I'd have welcomed Windows 7 if it had preserved interfaces from XP, changing only the ones which had to be killed because the XP way was an irredeemable security weakness or suchlike. Instead they ripped everything up and started again. Loads of costs in lost productivity while you replace your old skill-set with a new skill-set (for everyone from secretaries to sysadmins). For example, many of the applets in the 7 control panel are recognisably the same ones that ran under XP, so why move them all around under a brand new control panel UI so you have to learn to find them all over again? WHY???
If they did the same thing to a car they'd have put the accelerator where the brake is at present, and replaced the wheel with a joystick. It might even be a better way to design a car, if cars didn't already exist. Car manufacturers will never make such changes, because with cars that would kill people, not just cause lost productivity and unnecessary annoyance.
BTW I don't regard this specifically as a Microsoft problem. I'm equally scathing about whoever was responsible for Gnome 3 on Linux (which destroys Gnome 2 - you can install as many different window managers as you like on Linux, EXCEPT not Gnome 2 and Gnome 3 on the same system).
Try Centos or Scientific Linux? Everything on the DVD for every minor release, yum update once you have a good network connection or a local mirror set up.
Certainly an old version of Red Hat Linux (say) will carry on working as well as it did on day one as long as there is hardware that it can run on.
However, five years after the next major version is shipped the security patch stream will dry up, and you'll discover that upgrading rather than rebuilding from RHEL5 to RHEL6 isn't supported. You can try ... most folks who have tried recommend that you don't. I've occasionally longed for a major version upgrade wizard or suchlike, but I can quite see why they don't think it's economically viable to supply and support the same. Too large a phase space to test well in advance.
The real question is what did / does Vista or 7 bring us that we hadn't already got with XP?
Answer #1 - Nothing, but it gets lots more money for Microsoft.
Answer #2 - a less bugridden kernel. Maybe. But one can upgrade or completely replace a kernel without trashing the entire user interface. Why do Microsoft keep doing that? (Not just with the O/S, but Office and the other products as well).
Answer #3 - same as #1. They make lots of money out of books, training courses, (re)certification exams ...
Answer #4 - well, at least making all those old PCs obsolete makes sure new bigger faster ones get developed. Good for Intel, gamers, number-crunching scientists, and anyone who runs Linux on PCs that will be thrown away because they can't run Windows 7.
OK, I'm happier now. I'm a number-crunching scientist who runs Linux when he can.
Algae are so easy compared to plants.
Algae are easy to grow. Any pail of water left in sunlight will turn green. Add some nutrients (sewage, say) and they'll turn it into thick green slime. Most people have created a pail of green slime at some point in their lives, and it's a shame that the usual reaction has been "Yeuuuch!"
Algal photosynthesis is much more efficient than that of multicellular plant life - there's no need for them to maintain the structural integrity and internal transportation networks which hold a plant back. Algae are the plant equivalent of bacteria: exponential growth until limited by nutrient supply.
Algae don't need fresh or clean water. They grow in the sea, in raw sewage, in toxic effluent. (They'll actually eat sewage and produce much cleaner water.) This means you can put algae farms in a desert (most sunlight) and culture them in brackish or salt or toxic groundwater that's not a valuable and depleting resource like freshwater groundwater is.
If you can bio-engineer them to go 20x faster given 20x CO2 supply, you can get to burn coal twice. Once in a power station, the second time around in a car (as algal bio-diesel). I'd also observe there are lots of natural CO2 and carbonated water sources bubbling into the atmosphere.
Mere selective breeding will get you to algae that produce pretty good diesel fuel when you squash and filter the green soup. (What's left over is nutrient for the next generation of algae).
As you can guess I'm a fan of algae, along with solar panels. Technological civilisation doesn't have to grind to a standstill when the oil runs out, and we don't have to continue raising atmospheric CO2 for the next century either (with potentially dire and irreversible consequences for the climate).
You mean, researching bolts from the blue?
So go for LEDs, which also turn on instantly. They will last very many years and will save you several times the (somewhat high) purchase cost in electricity. Light quality not quite as good as halogen, much better than CFL. Buy "warm white" unless you really like a brighter-than-daylight effect.
If the light lasts
If the light-source lasts a good many years, there is little point in making it separable from the fitting. If the light-source is compromised by so doing, it's worse than pointless.
This is the case with LED light-bulb replacements. It's hard to effectively heat-sink a GLS bulb-shaped object that's expected to illuminate in most directions. This is why you can't get an LED equivalent to a 100W tungsten bulb. Better, I think, if the LEDs are permanently attached to a metal structure that's both an attractive light fitting and a half-decent heatsink.
What they do need to invent is a ceiling-rose replacement onto which light fittings clip for both mechanical and electrical support, which does not require any electrician or DIY skills to detach and attach a new fitting. Something like lighting track but with only one attachment point rather than a slot. Would need to be a UK or Euro standard so every manufacturer adopted it.
BTW does anyone know where I can buy an LED uplighter to replace one with a 300W linear halogen?
But there is some ...
But there is some light of that colour. It's a lumpy spectrum with too much green, but there are no black gaps in it, and even if there were, you'd see the dots of that colour as dark.
I suspect if there is a problem, it may be the converse of what you have stated: that by making all the red dots (say) look duller than they would be in natural light, and the green dots brighter, it may enable a red-green colour-blind person to read what he should not be able to. (Just like using blue light to make obvious the spy-code dots printed in yellow on white by many inkjet printers).
If you were having difficulty, I suspect you are indeed colour-blind, and maybe you were getting an insufficient amount of help from the crappy illumination, just enough to make you think that you ought to be able to read the thing if the illumination were better. If you have normal colour vision, the "hidden" letter stands out like a yellow crocus in a flowerbed of purple ones. Unless, of course, you can't distinguish yellow from purple!
Isn't that the light system that you can't scale down? Makes a wonderful replacement for sunlight in really big spaces, but you can't have less than a kilowatt source?
If so it may be something that farmers can use, but it isn't going to compete with CFLs and LEDs in environments where any more than a few tens of watts would be serious overkill.
Didn't politicians once legislate that Pi = 3 in law?
"Idiots" isn't nearly strong enough. On any rational scale, politicians would have to have negative IQs, if zero is taken to be the result of random guessing.
Hard-wired safety limiters needed?
Surely critical infrastructure ought to be designed so that there are limiter systems which cannot be over-ridden by any computer? (Or indeed, by human operators following any procedure short of using screwdrivers and wirecutters).
I once worked at a synchrotron light facility. I'll spare you most of the details, but the light is generated by relativistic electrons circulating in an ultra-hard vacuum, and the various experiments took light, X-rays, etc. out through "beam lines" which at one end were open to that ultra-hard vacuum. If any air ever got into a beam line, a series of vacuum sensors had to cause valves to slam shut faster than air could travel down the line. If that ever failed there could be expensive damage to repair and days, weeks or even months of down-time while the vacuum was re-established.
What controlled this emergency safety system? Hard-wired relay logic (with its power-fail fail-safe). No digital decision-making. This system could not and should not be overridden. If it tripped when it shouldn't have, it was an annoyance. The converse, a disaster. Relays are fast enough, fail safe on power failure, and have extremely high noise immunity so they never "glitch". The right technology for the job!
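The fail-safe principle can be sketched in a few lines (a hypothetical illustration in Python, not the actual relay schematic): the valve is held open only while the relay coil is actively energised, which requires power present AND every sensor reading OK, so any sensor trip, broken wire, or power failure defaults to "shut".

```python
# Sketch of energise-to-open, fail-safe interlock logic.
# The relay coil is energised only if power is present AND every vacuum
# sensor reads OK; a spring slams the valve shut when it is de-energised.

def valve_open(power_ok, sensors_ok):
    """True only while the coil is actively held energised."""
    return power_ok and all(sensors_ok)

print(valve_open(True,  [True, True, True]))   # normal running: open
print(valve_open(True,  [True, False, True]))  # sensor trip: shut
print(valve_open(False, [True, True, True]))   # power failure: shut (fail safe)
```

The design choice is that every failure mode, including loss of the control system itself, lands on the safe side: a spurious shutdown is an annoyance, a spurious "open" a disaster.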
One line was operated by a big computer company which refused to use relays. They had a special multiple-redundant computer system doing the job, and claimed it couldn't fail. One day it did: spectacularly. The facility was down for weeks and the big computer company was on the hook for all the bills. Hubris and Nemesis at their finest!
Abuse of monopoly
I expect that might be viewed as illegally monopolistic behaviour. A hardware manufacturer isn't competing when it supplies competitors. It's just supplying, which is its business. What it supplies is determined by what it can make and by what its customers ask for.
Yup, that icon looks like a lawyer to me.
"Absolutely no sign" that a super-eruption is looming?
I thought there was quite a lot of evidence that the magma chamber underneath Yellowstone park is currently filling, causing the ground above it to rise. When that ground cracks, all hell will break loose. Of course that's not evidence that it'll erupt next year, or in the next 100 or 10,000 years, but it's rather less reassuring than "absolutely no evidence".
The geological evidence shows a fairly regular 600,000-year cycle for Yellowstone eruptions, and the last eruption was ... about 600,000 years ago. Odds on, the next eruption will come within 50,000 years, meaning there's about a 1-in-1,000 chance it happens in my lifetime.
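The back-of-envelope arithmetic checks out, assuming the eruption is equally likely in any year of the next 50,000 and a roughly 50-year remaining lifetime (both figures are the rough assumptions above, not geology):

```python
# If the next eruption is equally likely in any year of the next 50,000,
# the chance it falls within a given 50-year span:
window_years = 50_000
lifetime_years = 50
p = lifetime_years / window_years
odds = window_years // lifetime_years
print(f"P(eruption in lifetime) = {p:.3f}, i.e. about 1 in {odds}")
```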
If the opposition has been smart, they'll not just have constructed their facility a mile underground, accessed by multiple tunnels going sideways into a mountain, with blast doors and several changes of direction. They'll not just have stocked it with enough supplies to wait out having all the entrances collapsed. They'll also have put it in a massive concrete shell, decoupled from the rock of the mountain by air bags.
Unless you can generate enough of a shockwave to move the mountain by more than the width of the air bags, they'll completely decouple the precision equipment from any shockwaves travelling through the rock. Methinks your chances of damaging such a facility with conventional explosives are infinitesimal, and your chances of managing it with half a megatonne of nuclear explosion are slim.
Is USA SAC HQ still under that mountain in Colorado? If so, they think like I do.
I've always wondered why (outside of the movies) no consideration seems to be given to a launch where the first stage of acceleration uses external electric power to drive the rocket up a ramp. I'd have thought there are considerable advantages to firing the rocket when it's already travelling at (say) 300mph up a ramp on the side of a mountain.
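For a rough sense of scale (illustrative numbers only, ignoring drag, gravity losses and altitude): 300mph is about 134 m/s, against roughly 7,800 m/s needed for low Earth orbit.

```python
# Rough numbers for a 300 mph ramp-assisted launch (illustrative only).
mph_to_ms = 0.44704
v_ramp = 300 * mph_to_ms          # about 134 m/s
v_orbit = 7_800.0                 # approx low Earth orbit velocity, m/s

speed_fraction = v_ramp / v_orbit
energy_fraction = speed_fraction ** 2

print(f"ramp speed:      {v_ramp:.0f} m/s")
print(f"speed fraction:  {speed_fraction:.1%} of orbital velocity")
print(f"energy fraction: {energy_fraction:.2%} of orbital kinetic energy")
```

The direct saving looks tiny in energy terms, though because propellant mass grows exponentially with required delta-v, even a small head start trims somewhat more launch mass than the raw energy fraction suggests; proposals along these lines tend to aim for much higher ramp speeds.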
I've got to ask - how many bits/second can they send to Voyager and how many can it send back to us?
I find it depressing that people don't insist on ECC even for low-end servers. For anything that's a repository of valuable data, it's really important that the data doesn't get corrupted in transit!
RAM doesn't often fail, and when it does it usually fails hard enough to be noticed quickly (before the back-ups have been recycled). Not always, though. Last year I saw the results of a slightly flaky piece of non-ECC RAM on a busy filesystem, and it was not at all pretty.
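For the curious, the idea behind ECC is easy to demonstrate. Below is a minimal Hamming(7,4) sketch in Python; real ECC DIMMs use a wider SECDED code over 64-bit words, but the principle is the same: redundant parity bits let the controller locate a corrupted bit and flip it back.

```python
# Minimal Hamming(7,4): 4 data bits + 3 parity bits, corrects any
# single flipped bit. Parity bits sit at positions 1, 2 and 4.

def encode(d):
    """d: list of 4 data bits -> 7-bit codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def decode(c):
    """c: 7-bit codeword -> corrected 4 data bits."""
    c = c[:]                   # don't mutate the caller's list
    syndrome = 0
    for p in (1, 2, 4):        # recompute each parity group
        parity = 0
        for pos in range(1, 8):
            if pos & p:
                parity ^= c[pos - 1]
        if parity:
            syndrome += p
    if syndrome:               # syndrome = 1-based position of the error
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
damaged = encode(word)
damaged[5] ^= 1                 # simulate a single bit flip in RAM
print(decode(damaged) == word)  # prints True: the flip is corrected
```

A plain parity bit would only detect the flip; the extra parity groups pinpoint it, which is why ECC machines can log and correct single-bit errors on the fly instead of silently serving corrupted data.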
Reactor launch not a hazard
Launching a reactor is far less of a hazard than launching a radioisotope power unit. The latter is highly radioactive at launch. The former would be non-radioactive at launch. Its fuel rods (enriched uranium) would be slightly radioactive, but vastly less so than the isotope power unit. They only become highly radioactive once the reactor is activated and has been in operation for some time, at which point we can be sure it won't be returning to Earth.