The great leader
... has taken note that should he ever decide to invade Australia, he should do so in Winter.
An orbital death spiral is exactly what will happen to every orbiting system in the universe, given enough time. "Enough" is a lot (hint: many powers of ten times the current ~14-billion-year age of the universe). Gravitational wave emission becomes humanly measurable only where nature throws stellar or greater masses around in really tight orbits at significant fractions of the speed of light.
You aren't upset about the electromagnetic death spiral that will overtake orbiting electrically charged bodies at a rate many orders of magnitude faster, are you?
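The "many powers of ten" claim above is easy to sanity-check with the standard quadrupole-radiation merger-time formula (Peters, 1964); a rough Python estimate for the Earth-Sun pair:

```python
# Gravitational-wave inspiral timescale for the Earth-Sun system:
#   t_merge = 5 c^5 a^4 / (256 G^3 m1 m2 (m1 + m2))
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
m_sun = 1.989e30     # kg
m_earth = 5.972e24   # kg
a = 1.496e11         # m, Earth-Sun distance (circular-orbit approximation)

t_merge_s = 5 * c**5 * a**4 / (256 * G**3 * m_sun * m_earth * (m_sun + m_earth))
t_merge_yr = t_merge_s / 3.156e7
print(f"{t_merge_yr:.1e} years")   # roughly 1e23 years
```

About 10^23 years, i.e. some thirteen orders of magnitude longer than the universe has existed so far.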
It always seemed obvious that a GW detector near a big particle accelerator should be picking up faint but repetitive pulses when the accelerator was running.
The key number to bear in mind is the relative strength of the Gravitational force and the Electromagnetic one. Electromagnetism is about 10^42 times stronger. This is one of the most staggering numbers in physics!
In fact, you can deduce this from thinking about the everyday world. You can dangle a few kilogrammes on a fine thread. Attracting in one direction, the entire Earth. Balancing in the other direction, electrostatic forces between the atoms in the thread. Which are themselves mostly self-cancelled within the atoms (electron charge cancelling proton charge): the attraction between atoms is the result of small asymmetries of charge distribution, which is still sufficient to hold molecules and crystals together against the pull of an entire planet.
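That 10^42 figure can be checked in a few lines; it's the ratio of the electrostatic to the gravitational force between two electrons (comparing an electron with a proton instead gives ~10^39):

```python
# Electromagnetic/gravitational strength ratio for two electrons.
k = 8.988e9       # Coulomb constant, N m^2 C^-2
G = 6.674e-11     # gravitational constant, N m^2 kg^-2
e = 1.602e-19     # electron charge, C
m_e = 9.109e-31   # electron mass, kg

# Both forces scale as 1/r^2, so the separation cancels out.
ratio = (k * e**2) / (G * m_e**2)
print(f"{ratio:.1e}")   # ~4e42
```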
The same number is the reason you aren't going to observe gravitational waves for anything less than extreme cosmological situations where there's a star or a galaxy's worth of mass moving close to the speed of light.
Not detected, therefore limited. Not enough so at this time to cast doubt on GR. Enough so to cast doubt on some models of galaxy formation (in particular, on the formation of multiple large black holes in their cores and subsequent merger dynamics of these hypothesized large black holes).
A thought I had was that the cosmic era of large black hole mergers may be over (i.e. black holes in galaxy centres interact strongly enough that they merge into one within, say, a billion years of the galaxy forming). Since all nearby galaxies formed longer ago than that, mergers won't be happening any more, bar rare events such as colliding galaxies.
One might even be able to apply the Anthropic principle to this (would a universe with frequent mergers of large black holes in our cosmological neighbourhood be compatible with mammalian life on a planet's surface? I'm thinking about cosmologically nearby gamma-ray bursts.)
It's quite possible to transmit electricity 1500 miles, with modern UHV DC technology. That's the distance from the UK to the Sahara desert.
It's also less bad than that 1500 mile figure suggests, because in practice you'd tend to displace Spanish electricity to France and French electricity to the UK.
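For scale, a rough loss estimate for a link of that length, using the commonly quoted figure of roughly 3.5% loss per 1000 km for modern UHVDC lines (an assumed round number, not any specific project's datasheet):

```python
# Ballpark transmission loss for a 1500-mile UHVDC link.
miles = 1500
km = miles * 1.609
loss_pct_per_1000km = 3.5   # assumed typical UHVDC figure
loss_pct = loss_pct_per_1000km * km / 1000
print(f"{km:.0f} km, ~{loss_pct:.1f}% line loss")
```

Under 10% lost end to end, which is why the distance itself isn't the showstopper.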
The biggest problem is the (in)stability of North African states. Huge solar farms in the Sahara would be a massive capital investment, and we just don't have confidence that the natives wouldn't hold us to ransom or just scrag it. Morocco is probably the best bet, but also probably not good enough. Especially not South of the Atlas mountains.
However, I am surprised there aren't more Spanish olive farmers grubbing up their trees and planting solar panels. Not enough grid capacity? Hello EU, where are you when we need you?
Could we cede the Olive business to the Moroccans or is it too dry there?
I think the truth is "inherited a lot of design properties".
But not from the VMS codebase, just from paying the same architect.
Anyway, it's the subsequent history that matters. Microsoft consistently put marketing ahead of security. NT 4 blew huge holes in the VMS security model. W2K blew some more. By the time they realised security slightly mattered (around XP SP2 time), it was so completely F*cked that mentioning VMS ancestry just made one feel like crying.
Indeed, and if they stuck the Win7 GUI on top of the vastly improved OS that lies underneath TIFKAM and called it Windows 7.1 it would sell like wildfire.
And if they used the XP GUI and called it XP 2.0, it might even go like a wildfire
On the ones I use: control/alt/delete, sign out, hit CR, power button bottom right of screen. That's three-finger salute on keyboard, mouse click, keyboard, mouse again. BLEUGH.
Or you can cheat and just press the soft-power button on the system unit.
If the USG spent a bit less on exterminating terrorists and a bit more on exterminating slime like this, the NSA might get better publicity. (And I do mean exterminate. How many many-years of human enterprise do these sub-humans waste in order to make a few bucks? I rest my case.)
Does this mean if we wait a million years there will be a sudden phonequake and Windows Phone will overnight become awesome?
I thought HGVs were speed-restricted to 56mph. I wish the bloody things could do 70mph, then you wouldn't get a bunch of cars in the outside lane of the motorway while an HGV at 56mph overtakes an HGV doing 55mph! (And then the gradient changes, and the one doing 55 speeds up to 56 ....)
Are you willing to try tipping your 3.5 inch disk drive onto its side while it is actively transferring data? Repeatedly? Active is worse than just spinning. A friend lost 2TB this way.
As for weight at the bottom, that won't help if something snags the cables. Indeed, even a flat USB drive on a table-top can be vulnerable to being pulled off the table by its cables if they get tangled with a vacuum cleaner or played with by a pet or a small child. But vertical is far, far more vulnerable.
Another triumph of marketing over simple mechanical safety. Give it a nudge or snarl its cables, and it'll fall over. What happens when an active spinning rust drive falls about six inches into its stable (flat) orientation?
Kiss your data goodbye.
Vertical is for paper books and cereal cartons, which aren't damaged by taking a tumble.
Where would a company that had fully committed to Gnome have been when the developers chucked away the Gnome 2 interface because they were bored with it?
This is a good example for FOSS not against it!
Firstly, in the short term there's absolutely no reason to change what you've got on any particular near-future drop-dead date. Those of us running RHEL5, RHEL6 or the Centos or Scientific Linux free derivatives still have a fully maintained Gnome 2 environment, with guaranteed support for five years after RHEL7 ships.
Secondly, within six months of Gnome 3 hitting the decks, the horde of disgruntled Gnome users had fixed the problem in two ways. They forked Gnome 2 into a new project called MATE - the reactionary route. And they developed Cinnamon, a new UI overlay on Gnome 3 that was far less unfriendly to Gnome 2 fans - the progressive route. I'm happy to move to Cinnamon if / when my platform of choice (Scientific Linux) moves to Gnome 3. I've tried MATE and it works. I've stopped grousing about Gnome devs flouting OSS conventions (i.e. you do NOT forcibly tear up your users' old way of working, you DO fork a new project and find a maintainer for the old one), because it's gone from a huge annoyance to an irrelevance in under a year.
Thirdly, there were and are other alternatives. KDE. XFCE. Many others. Compare Microsoft's one and only UI, which they tear up at a whim every few years. (Win 7 was a tear-up, Win 8 a shredder.)
Finally, you have the source code. If the above hadn't happened because you were a tiny minority, you could still have maintained your chosen interface for ever, or paid someone to do that, provided your pockets were deep enough. You can't do that with Windows XP. Microsoft has the secret sauce and intends to burn it.
One could (maybe) pre-empt them by going to a lawyer and swearing an affidavit to the effect that one had NOT received any surveillance requests, and intends to repeat this process periodically unless it becomes illegal to do so. Then post the affidavit in a public place (if that's not automatic for sworn documents).
If you do receive a surveillance request it becomes illegal to swear that affidavit (i.e. perjury).
If they order you to commit a crime ... ISTR the fifth amendment guarantees one's right not to be forced to incriminate oneself, and making an untrue sworn statement is most definitely criminal.
If they make it illegal to tell the truth on oath ... the entire legal system and rule of law collapses?
I think this is to copyleft as nuclear weapons are to bullets?
Completely agree. 15" Laptop screens should be 1680 for bog-standard-cheap, 1920 for executive / professional. Think how little a (larger) monitor or TV costs. There's no excuse.
But you couldn't resolve your pixels, because it was a cathode-ray shadow-mask colour tube. Say 0.25mm phosphor-dot spacing, 20 inches across a good one, 20 x 25 x 4 / 2 pixels = 1000 pixels. That divide by two is there because one pixel was a triangle of R,G,B dots. Yes, you got some degree of super-resolution on information encoded as luminance (a good match to your retina), so QXGA wasn't completely wasted. You can argue for /1.5 or even /1, but the display was nowhere near as spatially clear as a 1920x1080 TFT. Analogue, not digital.
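The back-of-envelope arithmetic above, spelled out (using the exact 25.4 mm per inch rather than the rounded 25):

```python
# Horizontal resolvable pixels on a 20-inch-wide shadow-mask CRT.
dot_pitch_mm = 0.25
width_in = 20
mm_per_in = 25.4     # the text above rounds this to 25

dots = width_in * mm_per_in / dot_pitch_mm   # phosphor dots across
pixels = dots / 2                            # one pixel per pair of dot triads
print(round(pixels))                         # ~1000 resolvable pixels
```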
OTOH colour quality, for reproducing photographs, peaked with the last of the Iiyama/ Sony/ Philips 25 inch vacuum-tube monitors and has declined since. On the plus side it no longer takes three people to manhandle a high-end monitor into place, or a metre-deep desk to support it and a keyboard.
And good big tube monitors cost a fortune back then, so a fair comparison is probably one of the newest 2560 or even 4K ones. These days you get 1920x1080 for well under £200.
Actually monitors grew to 1920x1200, and then technological convergence with TVs took away 120 of our pixels :-( :-( :-(
Maybe you should have applied to MI5 or GCHQ?
(Or maybe you did, and can't talk about it).
Interesting idea. How would you know that you were hiring a self-proclaimed brilliant hacker who never got caught, as opposed to a con-man with just enough technical ability to sound convincing, or an active black-hat trying to play you? You want references? Slight problem. The only references worth having are people who'll put your new recruit in jail as soon as you lead them to him.
And anyway, if he never got caught, how come he's willing to work for hire at all? If he's so very good, he's also retired on his ill-gotten gains.
BTW why would you want him *in* the corporate environment? It's his job to sit on the outside, being paid to tell you when he's able to exploit your systems, rather than exploiting them. It's *your* job to liaise.
So old it's proverbial: "a poacher turned gamekeeper". Or from even further back in time, "Quis custodiet ipsos custodes?"
I've no idea what the lawyers will do with this case. Probably, make a mess.
However, isn't the key point how long Google retains any sort of memory of what it "learned" by scanning my e-mails? (By which I mean purely statistical analysis thereof. Not forwarding them to the NSA, which is a separate issue, nor storing the actual e-mails after I delete them, which might actually be illegal under EU law).
I've no objection to Google delivering targeted ads based on a statistical profile that is forgotten over a week or two. If they continue to profile me cumulatively over months or (gods forbid) years, it starts to become highly intrusive. Most *people* can't remember in detail what I was doing this time last year, and that's including myself in "most people".
As for the ads, I never see them anyway. Because some advertizers insist on delivering visually intrusive graphics that make me feel nauseous, I block all adverts, and also all flash content.
Glad to see it's not just me. I'm positively allergic to any UI which makes things wobble, warp, or otherwise animate without VERY good reason. I find that even the applications bar on an iMac, which warps larger when you poke it with the mouse, makes me feel nauseous. (Not because it's ugly, just because my low-level visual processing doesn't get on with it).
This is also the main reason I run Adblock-plus and Flashblock: to keep moving things that I don't want to visually process, off my screen. Their content (advertizing of greater or lesser relevance) is secondary.
And possibly the reason I still haven't bought a smartphone at all.
Not a typo, surely. Be forced, or force yourself, to use Bing for a *whole* *day* with absolutely no access to any other search tool. Then you will indeed realize that you have very little to complain about with Google.
Corrosion is probably the easier part of the seawater problem.
The other part is that it's full of living organisms, and they're looking for something solid to anchor themselves to (and, often, to eat). Look at just about anything that's been sitting in the sea for a while, and it's covered with life-forms.
They still don't know how to make submarine cables that won't be destroyed by marine life within a small number of decades. Moving parts, that's harder still.
The other way to store energy from sunlight is extremely simple, and doesn't even need Solar PV cells. Concentrate the sunlight with a big mirror. Use it to melt stuff. Generate electricity using the molten stuff to generate steam and spin turbines. "Stuff" is one of a number of possible salts and metals (and maybe polymers, if a good polymer chemist sat down to think about the requirement).
For overnight electricity, pump the molten stuff into a big well-insulated tank, for use to generate steam after sunset.
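A rough sizing of that "big well-insulated tank": how much molten salt it would take to deliver 1 GWh of electricity overnight. All the numbers here are illustrative assumptions (typical nitrate "solar salt" properties and a guessed steam-cycle efficiency), not data from any real plant:

```python
# Ballpark mass of molten salt to store one overnight GWh of electricity.
cp = 1500.0        # J/(kg K), assumed specific heat of nitrate solar salt
dT = 250.0         # K, assumed hot-tank minus cold-tank temperature
eta = 0.40         # assumed steam-cycle thermal-to-electric efficiency

e_electric = 1e9 * 3600          # 1 GWh in joules
e_thermal = e_electric / eta     # heat that must be stored
mass_kg = e_thermal / (cp * dT)
print(f"~{mass_kg/1e6:.0f} thousand tonnes of salt")
```

A few tens of thousands of tonnes of cheap salt in an insulated tank: unglamorous, but it's the same order as what existing solar-thermal plants actually use.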
Solar-Thermal doesn't get the press that Solar-PV does, but I'll hazard a guess that polished aluminium mirrors will always be cheaper than high-tech PV panels.
The trouble with a Uranium fission reactor is that it inevitably breeds Plutonium, which can be chemically separated and used to make bombs.
A Thorium reactor inevitably breeds Uranium-233. I've never been able to find out whether it's possible to build U233 bombs. Anyone know for sure? Thorium reactor advocates often say this technology is safe against A-bomb proliferation, but why? U233 is quite definitely fissile.
Last time I Googled this I found no good answers, and probably got myself an elevated NSA-profile. Anyone *know*?
Tritium will be manufactured in fusion reactors by exposing Lithium to neutrons, if we ever get to the stage of having just one working fusion reactor. In other words, fusion reactors are Tritium breeder reactors if you want them to be (and it's also hard to think of any better way of dealing with the thermal flux and the fast neutrons).
If you want more parties, you need some kind of preferential voting system, so votes for the least popular candidate are distributed according to next preferences amongst the remaining candidates - repeat until one candidate has more than 50% of the vote.
Actually, not true. The simplest and probably best system (except never tried on a large scale) is approval voting. You mark every candidate who you'd be willing to see holding office. You can vote for one, or all-but-one, or anything in between. The votes for each candidate are counted. The one with the most votes is elected.
One huge advantage is that several candidates from the same party can run, representing different nuances of one party's platform. There's no such thing as "splitting the vote" under this system. If you want to vote for a party, then just vote for all members of that party.
Personally I'd make one further refinement: a "none of the above" option. If "none of the above" won, all of the candidates would be disqualified from the re-run of the election that would then take place.
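Approval tallying really is that simple; here's a minimal sketch, including the "none of the above" refinement, with each ballot just a set of approved candidates (all names invented for illustration):

```python
# Minimal approval-voting tally with a "none of the above" option.
from collections import Counter

NOTA = "none of the above"

def tally(ballots):
    counts = Counter()
    for ballot in ballots:
        counts.update(set(ballot))   # at most one vote per candidate per ballot
    winner, votes = counts.most_common(1)[0]
    if winner == NOTA:
        return None                  # all candidates disqualified; re-run
    return winner

ballots = [
    {"alice", "bob"},       # approves two candidates from the same party
    {"bob"},
    {"carol", NOTA},
    {"bob", "carol"},
]
print(tally(ballots))       # bob, with 3 approvals
```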
In the early days of PCs, the power switch sometimes protruded proud of the case. Back then it was a hard mains switch, not a soft switch. I've seen major damage caused by pushing the keyboard back against the idiot-designed case. Later, the power switch (and reset button) were always recessed. Ease-of-use is not always a good thing. Big red switches ought to come with Mollyguards. Needing three fingers for C/A/D was definitely good design not bad. It's pretty much impossible to C/A/D by any kind of mistake.
This isn't so much a walled garden as it is a thin paper ribbon.
For how long? Until the next firmware upgrade? Until someone decides to replace paper by steel and flicks a virtual switch at Samsung HQ?
Until that sticker disappears from the box, and all mention of region-locking is gone from the documentation, refuse to buy it. This is the only way to protect yourself from slav^H^H^H^Hexploitation.
Other manufacturers: this is your chance. Guarantee and advertize that your product will work globally, with any SIM, forever. Do it now. Strike while the iron is hot!
It's a stupid design - a spinning-rust drive balanced on its narrow edge with power and USB cables dangling. What could possibly go wrong?
A colleague had one of this design. The wire got snagged, the (active) drive fell over, and the data was all lost. External hard drives should sit in their most stable orientation, i.e. flat. END OF. If you have bought one of these, run it lying flat. Even if it looks silly that way. Vertical is for books and cereal packets.
A cynic would say the poor design is deliberate. Lusers knock their drive over, blame themselves (or their partner or their cat), and buy another one.
Looking good, right up to the price tag.
Digression. I once flew on a commuter flight about 200 miles down the US east coast on a plane that looked almost exactly like a Leyland single-decker bus with wings and a tailplane bolted on top. Square fuselage section. The least aerodynamic-looking aeroplane I'd ever seen, and that's including WW1 fighters. A prop-driven thing, unpressurized, with a howling gale coming in under the door and over my feet, so my toes were all but frostbitten by the time I arrived.
Even so, I remember thinking that Boston to Long Island airport by plane beats JFK and the Long Island Expressway(*) hands down. (*) Equates to the M25 on a bad day but with more potholes, and more^2 cars.
I'm inclined to think that any executive who wishes to follow Ballmer is an executive without whom Microsoft will be much better off. Conversely, those who now see an opportunity to redirect the corporation in a more fruitful direction shouldn't need any greater incentive than Ballmer's departure to stay. They've already benefitted 10% on their shares because of Wall Street's reaction. Their negotiations with the company should be all about Microsoft's future direction and their part in it, rather than about remuneration.
Don't know if there's anyone gullible enough to believe this stuff here, but the fundamental weak point in the "chemtrail" conspiracy theories is the crazy idea that any fine particulates sprayed into the atmosphere several miles up actually descend onto the ground even vaguely underneath.
In fact, anywhere (even everywhere) on the planet is not just possible but probable.
I've had my car (in London) covered in red dust courtesy of sandstorms in the Sahara. Darwin recorded the same on a boat off the coast of Brazil (and it's now known that sandstorms in the Sahara are an important source of plant nutrients in the Amazon basin). I've seen and smelled woodsmoke in Minneapolis from a forest fire a thousand miles West. People in the UK can suffer allergic reactions to plants that grow only in the USA. That Icelandic volcano with an impossible name shut down aviation in Europe. Bigger volcanoes in the past have turned sunsets spectacularly orange everywhere on the planet, and have even caused dips in global temperature for the year or three it took for the dust to finally settle. The crew of a space shuttle reported that the Earth's atmosphere was "all milky" in the weeks after Pinatubo (Philippines) erupted.
And that's all with ground-level sources of dust!! Targeted chemtrails - ROFL.
As far as I know nuclear decay is the only easy genuine random source available
Completely wrong. Others are thermal noise (in analogue electronics), and turbulence (in airflow). Your audio input jack and hardware can be a very effective random source. For best effect, connect a thermal noise source instead of a microphone: it's trivial to build one from a few discrete electronic components, and power it off a USB port.
But even a microphone listening to background noise will do. Even if the spooks have a hi-fi uncompressed bug in your office, it won't be recording exactly the same audio stream. The least significant bit per sample will be random, which is quite a reasonable source of entropy to blend into an entropy pool. (If you stick your random noise microphone to your PC's fan grille, it'll be more than one random bit per sample).
Finally, for an entropy pool you don't need random in the sense of passing all statistical tests for a random source. It just has to be non-reproducible and not remembered by anything. So the "signal" bits of the background noise in your office also qualify to a greater or lesser extent.
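A minimal sketch of folding audio least-significant bits into an entropy pool, as described above. The "samples" here are hard-coded stand-ins; in practice you'd read frames from your sound input, and you'd mix into the OS pool rather than roll your own:

```python
# Fold the LSBs of audio samples into a hash-based entropy pool.
import hashlib

def lsb_bits(samples):
    """Extract the least significant bit of each integer sample."""
    return bytes(s & 1 for s in samples)

class EntropyPool:
    def __init__(self):
        self._state = b"\x00" * 32

    def mix(self, data):
        # Hash new material together with the old state. Even partly
        # predictable input can only add entropy, never remove it.
        self._state = hashlib.sha256(self._state + data).digest()

    def read(self):
        return self._state

pool = EntropyPool()
samples = [1023, 800, 801, 799, 1024, 1025]   # stand-in audio samples
pool.mix(lsb_bits(samples))
print(pool.read().hex())
```

The hash-and-fold construction is the same idea the kernel's own entropy pool uses: attackers who can't reproduce your exact sample stream can't reproduce the pool state.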
I remember reports that it wasn't quite where Newtonian/ Einsteinian gravity says it should be. There were various explanations mooted other than new physics, but they seemed a bit strained. The longer it carries on diverging from its Newtonian trajectory, the more strained the other explanations become.
It's a *very* small deviation. But if it's real, Voyager may yet become most famous as first evidence of some new physics.
They already make hot pepper chewing gum as a medical aid for people suffering painful mouth ulcers arising from chemotherapy. The "hot" sensation blocks the pain sensation.
I've occasionally thought it might find a wider market as a sort of confectionery.
So... since every fool knows that the moderate doses of capsaicin as found in culinary chillies are not harmful, is this a form of S&M "torture" that's legal?
Just curious. About the law.
Not enough extra transistors to make any difference, compared to the 100 million plus transistors per SoC. It could be done with a standardised ROM containing a list of device IDs (64 bits, to avoid ever running out of IDs) and a base address (64 bits). Some address decode logic, and 128 bits (128 transistors?) per device.
Any patent on a ROM containing a look-up table surely expired in the 1970s.
No reason a ROM consumes any power at all, except when it's being read. But even ignoring that, a thousand transistors compared to many million is under 0.1%. That's way below manufacturing variability.
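To show how small and simple such a discovery table is, here's the idea modelled in Python; every device ID and base address below is invented for illustration:

```python
# Sketch of the proposed discovery ROM: one 128-bit entry per device,
# holding a 64-bit device ID and a 64-bit base address, that an OS
# could walk at boot instead of probing blindly.
DISCOVERY_ROM = [
    (0x0000_0001_0000_0A11, 0x0000_0000_4000_0000),  # hypothetical UART
    (0x0000_0001_0000_0B22, 0x0000_0000_4001_0000),  # hypothetical timer
    (0x0000_0001_0000_0C33, 0x0000_0000_5000_0000),  # hypothetical GPU
]

def enumerate_devices(rom):
    """Walk the ROM, yielding (device_id, base_address) pairs."""
    for device_id, base in rom:
        yield device_id, base

for dev, base in enumerate_devices(DISCOVERY_ROM):
    print(f"device {dev:#018x} at {base:#018x}")
```

Three devices cost 384 bits of ROM; even a hundred devices would still be lost in the noise of a modern SoC's transistor budget.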
Always respond to anything involving the slightest chance of legal action by letter, recorded delivery. This has several advantages. They can't turn around and deny whatever they told you in their reply. They make themselves look bad (in court, or on your official complaint) if their reply is in any way evasive, inaccurate, or never arrives. And best of all, it costs them far more to process than a phone call would.
I'd suggest the same retaliation to 0870 and 09xx numbers. Put down the phone. Write a letter. Send it recorded delivery.
Depends (broadly) on whether you're talking electronics, or new antiques. Rolexes, and the whole of the expensive "classical" mechanical watch industry, are retailing new antiques.
The classic watch "user interface" is good, and auto-winding so they never stop (if worn occasionally) is also good. Electronically, you can have the same with a Citizen eco-drive (light-powered), and better timekeeping, and a longer keepalive-time while stored in a dark drawer.
I was thinking they could call their colour Shampagne, but I'm not sure how that would play with potential French customers.
I can't remember the exact year, but I remember that when I was in the market for my first scientific calculator (as a schoolkid), the competition was between the Sinclair and the Commodore SR-36 at about twice the price. I bought the Commodore (for a vaguely-remembered £59? ), and never regretted it. Everything worked perfectly right through to about 2005(*), during which time I never felt a need to buy another calculator. I think it was the keyboard which proclaimed "quality" just as the Sinclair one proclaimed "cheap, nasty".
The Commodore SR-36 was a real class act.
(*) except the rechargeable battery, which I had to replace a couple of times (screwdriver and soldering iron required).
Outsourcing to the wrong organisation? Who chose this grocer? Sainsbury, Tesco, Asda all do free delivery on orders much smaller than £500!
I wonder who is paying for "those people"'s lunches?
It's a bus isn't it? One where the passengers are handcuffed to the seats (or something - never seen inside one!). But does that make it any less of a bus?
Isn't the growth capitalism depends on measured in money? Which is subject to inflation? I've always assumed that capitalism works just fine on somewhat illusory growth. In boom times growth is ahead of inflation, in slumps inflation is ahead of growth, and if there's a fundamental reason that this cannot continue for the foreseeable future I don't know of it.
A non-faulty computer or calculation design will always return the same answer, assuming other variables remain constant. A human brain is completely different in that it is likely to return a different answer each time, even if the variables remain constant.
Straw man! "Assuming other variables remain constant"? If it's a realtime event-driven system with unpredictable and unrepeatable inputs, that is never the case. A brain is clearly such a system. One may speculate that this is a large part of its superiority over a computer (though of course, an operating system is also of that nature).
it's the power, cooling, and interconnect on the large scale that needs work.
Very well said. When Moore's law finally runs out, the next major breakthrough will have to be in parallel coding.
A (human) brain has ~10^11 processing elements clocked at maybe 10Hz. A CPU has ~10^9 transistors clocked at ~10^9 Hz. By that measure a humble desktop CPU should be ahead of us by a few orders of magnitude. So what gives?
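That raw ops-per-second comparison, spelled out:

```python
# Crude "events per second" comparison of a brain and a desktop CPU,
# using the order-of-magnitude figures from the paragraph above.
brain_elements, brain_hz = 1e11, 10      # neurons, typical firing rate
cpu_transistors, cpu_hz = 1e9, 1e9       # one desktop CPU

brain_ops = brain_elements * brain_hz    # ~1e12 neuron firings / s
cpu_ops = cpu_transistors * cpu_hz       # ~1e18 transistor switchings / s
print(f"CPU ahead by {cpu_ops / brain_ops:.0e}x")   # ~1e6
```

So on this naive count the CPU is ahead by about a million, which is exactly the gap the next paragraph tries to account for.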
Well OK, a neuron is a much more complex device than a transistor, but a million times more complex? Unless it's in some sense a quantum computational element rather than a classical one (which cannot be ruled out), I don't think there's a difference of complexity of order 10^6. Surely not 10^9 (comparing against a 1000-CPU cluster).
Which leaves the dense-packed 3D interconnect in a brain, and even more so the way it is able to utilize its hardware in parallel for a new problem without any (or much) conscious organisation of how that hardware will be arranged or what algorithms it will deploy.
The next huge breakthrough will have to be in problem definition technology. Something that fulfils the same role as a compiler, but working across a much greater range between the statement of the problem and the hardware on which the code will run. There are some scary possibilities. Success on this front may reveal that a decent scientific computing cluster is actually many orders of magnitude superior to ourselves as a general-purpose problem solver, and might also reveal intelligence and consciousness as emergent properties.
Jill (*) or Skynet?
(*) Jill is the most benign first AI I know of in SF. Greg Bear, "Slant".