1128 posts • joined Friday 21st December 2007 15:33 GMT
it could basically make alot of current encryption trivial to solve
Care to explain how?
"Alot [sic] of current encryption" is symmetric-key block ciphering with AES and the like, or stream ciphering with RC4. What QC algorithms do you have in mind to break those at significantly better than brute force?
Asymmetric-key encryption is often RSA, based on the integer-factoring problem. Shor's algorithm is the best known QC approach to factoring, and on paper it's a superpolynomial speedup over the best classical methods - but it requires a large, fault-tolerant quantum computer for the problem in the first place (you don't have one, and neither does anyone else). The square-root speedup that's "equivalent to cutting the key length in half" is Grover's, and that applies to brute-force search - where making your keys twice as long erases the QC advantage entirely.
For other asymmetric ciphers, such as those based on discrete log, again we must ask: what QC algorithms do you think render these "trivial"?
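For those reading along at home, the brute-force arithmetic sketches easily in Python (Grover's algorithm is the relevant quantum result for raw key search; this is back-of-the-envelope counting, nothing more):

```python
# Toy work-factor arithmetic: Grover's algorithm gives a quadratic
# speedup on brute-force key search, so the effective strength of an
# n-bit symmetric key drops from 2**n to 2**(n/2) quantum operations.
def classical_work(bits):
    return 2 ** bits

def grover_work(bits):
    return 2 ** (bits // 2)

# AES-128 under Grover falls to 2**64 operations...
assert grover_work(128) == classical_work(64)
# ...and doubling the key length restores the original 2**128 margin.
assert grover_work(256) == classical_work(128)
```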
Re: Better idea for currency
We don't even need large numbers. Here in the US, everything anyone writes is automatically protected by copyright, and thanks to Berne + WCT this extends to numerous other countries automatically. So we can just assign value to small written works and then use fractional shares of them as currency.
I'll pay you five Reg upvotes for rights to your scheme. I'm good for 'em - just check my profile.
Re: Betting against the MAN
Not that I like Nixon, but by that point we were on the gold standard in name only. The dollar's value relative to gold had been repeatedly adjusted.
But there's nothing magic about gold. It has very little practical use, so it really amounts to yet another currency that has value only because we agree it does.
And a commodity-backed currency can be converted into a fiat currency by the guarantor at any time. Since the government has a monopoly on violence (that's what makes it a government), it can't be compelled to honor its guarantee to convert the currency back into the commodity that supposedly backs it. It's all a combination of good faith (for ordinary users of the money) and risk assessment (for speculators), regardless of the notional source of value.
Moneys historically have come from two sources: top-down money created by governments, and bottom-up money created by independent economic agents (generally ordinary folk). Top-down moneys often start by the government standardizing amounts of divisible, portable commodities: so the king stamps his likeness on equal-size chunks of a precious metal, for example, to say "this is really one pound of silver". Bottom-up moneys often start as tokens that represent contracts, like the Mesopotamian clay balls; what begins as a set of specific contracts gets pared down into a smaller set of standard ones that are then traded for nominal value (I give you a goat in exchange for two lamb futures, which I then hand over to my landlord for rent on my fields).
In either case, though, the value of the money ultimately rests on widespread agreement to treat it as a representation of value. Top-down and bottom-up moneys are both vulnerable to losses of confidence; governments fall, people succumb to financial panic, etc.
Sometimes a top-down money becomes a bottom-up one, or vice versa. In Somalia, the formerly government-backed currency became worthless, then started going back into circulation because people wanted some kind of token of value for regular household-economic purposes. Equivalently, there's nothing all that unusual about a government deciding to impose regulation on a bottom-up currency. That's what governments do - they govern things. These days, they mostly do that by majority consent (which actually takes the form of majority ignorance and indifference), because that's a hell of a lot cheaper than doing it through violence; but they can always whip out the violence when they think it offers a good return.
There, there. Deep breaths. It'll be OK.
Re: My two cents
The Avengers featured several racks of Exadata kit, in those garish red cabinets, in the opening scenes. Yay product placement.
It's surprisingly effective. All of my customers who are costumed superheroes are looking at Oracle for their next big IT purchase.
"Look, we have all these statistics on our supervillain battles and world-saving activities, and we're trying to analyze them to identify cost-saving synergies and unrealized opportunities. We think Red Stack can help, though we're a bit worried about the name. Are you sure the Red Skull doesn't have anything to do with this?"
"It will change the beautiful journey we call life." A bit heavy, but it's a cute sentiment.
You misspelled "cloying, trite, and vapid".
Personally, if I find the Google "search experience" significantly changes my life, I'll take that as a sign that I'm living wrong.
I can't say anything else described in this keynote interests me particularly, either. Mostly more social-networking guff. I can tell those damn kids to get off my lawn without Google's help, thanks.
men with three letters after their name poking around in your private life
Those dentists are a caution, eh?
Factoring is almost certainly not NP Hard.
Right. And for those reading along at home, Shor's algorithm, cited by Eadon as a quantum algorithm for improved performance on NP Hard problems, is an integer-factoring algorithm, and is thus almost certainly irrelevant to the claim that QC offers improved performance on NP Hard problems.
The arXiv link in Eadon's second post is a bit better - it claims sub-exponential growth in time and energy for certain AQC (adiabatic quantum computation) algorithms for NP-Complete problems (specifically TSP, but the NP-Complete problems are all polynomially inter-reducible anyhoo). It looks a lot better than the D-Wave press release (it's all about the Hamiltons, as DAM requested), but I admit I just read the abstract and skimmed a couple of pages, and I have no idea what's been done since the paper was written (2006) in this area.
But: Solving NP-Complete problems quickly is no guarantee of faster performance on NP-Hard ones, in general. Any NP-Complete problem can be reduced in PTIME to any NP-Hard one - not, in general, the other way around. NP-Hard problems are potentially "harder" than anything in NP. NP-Complete is "the worst of NP", informally; NP-Hard is "NP-Complete and worse stuff". The Halting Problem is NP-Hard but not NP-Complete.
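To make the reduction direction concrete, here's a toy sketch in Python: the NP-Complete decision version of TSP answered with one call to the NP-Hard optimisation version (brute force, three cities, purely illustrative):

```python
from itertools import permutations

def tsp_opt(dist):
    """NP-Hard optimisation problem: length of the shortest tour
    visiting every city (brute force - fine at toy scale only)."""
    n = len(dist)
    return min(
        sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))
        for t in permutations(range(n))
    )

def tsp_decision(dist, k):
    """NP-Complete decision problem, reduced in one call to the
    optimisation version: is there a tour of length <= k?"""
    return tsp_opt(dist) <= k

dist = [[0, 2, 9], [2, 0, 6], [9, 6, 0]]
# every 3-city tour uses all three edges: 2 + 6 + 9 = 17
assert tsp_decision(dist, 17) and not tsp_decision(dist, 16)
```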
Also, remember that stuff in PTIME may still take an infeasible amount of time (and ditto storage for stuff in PSPACE). Matt Skala wrote a joke once in his comic strip about a PTIME algorithm of ridiculously large order - I don't recall the exponent but it was so big that any non-trivial problem was going to be intractable - and then mentioned in the comments that it was based on a real algorithm he'd discovered, which did something useful (clustering points in n-space, I think) in theory, but not of course in practice.
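The arithmetic is sobering even for modest exponents. A sketch, with made-up but plausible numbers:

```python
# A polynomial-time bound can still be hopeless in practice.  Suppose
# an algorithm runs in n**50 steps on a machine doing 1e9 steps/second
# (both numbers invented for illustration):
n = 10
steps = n ** 50                        # 1e50 steps
seconds = steps / 1e9                  # 1e41 seconds
age_of_universe = 4.3e17               # seconds, roughly
# More than 10**20 lifetimes of the universe - for n = 10.
assert seconds / age_of_universe > 1e20
```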
Given the published research I've seen on QC, adiabatic and otherwise, I'm inclined to think DAM is correct: D-Wave have an analog computer that's optimized for certain kinds of problems. Analog machines can converge very quickly on probably-correct solutions for many sorts of problems. Often they're limited more by space requirements than time requirements (as with eg Adleman's DNA computation experiment).
Ah, yes, seamless design. Because removing tactile and visual information previously available to the user is the best way to improve usability.
Idiots. Even a passing familiarity with, say, Don Norman's work would have shown them why this is a terrible idea - and he used to work for them.
Re: XML is bloated.
ASN.1 - now there's a bloated pile of crap for you.
"Oh, the BER are ambiguous, so we'll have to use the DER. OK, should I send this string as a PrintableString? A UTF8String? A T61String? An IA5String? GeneralString? VisibleString? UniversalString?"
And that's just DER; there are 8 sets of encoding rules (9 if you distinguish between UPER and CPER). Was that really necessary? And DER, which seems to be the most widely used encoding (it's the one I run into most often, e.g. for accursed X.509), is pretty ghastly, what with its bit-level binary format; it's a pain to decode by hand and a single-bit error in the wrong place often makes it impossible even to guess at what the message was supposed to be. (It's reminiscent of SNA and similar bit-twiddling protocol families in that respect.)
Thanks, I'll take XML. At least it's often human-readable in traces, and possible to process and edit with generic text-processing tools.
Of course there's ASN.1 XER (XML Encoding Rules), which combines the worst of both worlds.
(Also, ASN.1 was hardly the first structured data format. Various CSV variants, GML, and S-expressions all preceded it, for example, and XDR is contemporaneous.)
 While SGML, the standardized version, only appeared a couple of years after the first ASN.1 standard, its predecessor GML had been around for more than a decade at that point.
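For the curious, the TLV (tag-length-value) structure that makes DER such a chore to decode by hand can be sketched in a few lines of Python (a toy reader for a single triple, nothing like a real ASN.1 implementation):

```python
def der_read_tlv(buf):
    """Parse one DER tag-length-value triple.  Handles short- and
    long-form lengths only; single-byte tags assumed (toy code)."""
    tag = buf[0]
    if buf[1] < 0x80:                      # short-form length
        length, offset = buf[1], 2
    else:                                  # long form: low 7 bits give
        n = buf[1] & 0x7F                  # the count of length bytes
        length = int.from_bytes(buf[2:2 + n], "big")
        offset = 2 + n
    return tag, buf[offset:offset + length]

# 0x02 = INTEGER, length 1, value 5
tag, value = der_read_tlv(bytes([0x02, 0x01, 0x05]))
assert tag == 0x02 and int.from_bytes(value, "big") == 5
```

Even this much ignores constructed types, multi-byte tags, and all the string-type ambiguity complained about above - which is rather the point.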
Re: All very true, but..
Presumably it can kill/injure at short range
As can knives, arrows, spears, and a tremendous variety of blunt instruments (including ones that collapse or are otherwise concealable).
The main worry seems to be invisibility to X rays & customs
And those can also be made of plastic or ceramic, to bypass metal detectors and (modulo density of material and sensitivity of detection systems) x-ray machines. (I'm really not sure what "invisibility to ... customs" might mean. The last time I traveled internationally, the customs officials seemed to have normal eyesight.)
I'm having trouble conceiving of any situation where a Liberator would be the best choice, or even a particularly good one. I can legally bring materials for making much better weapons, and in some cases the weapons themselves, onto a plane; I could illegally smuggle many others on board with little chance of detection. Ditto for most other "secure" locations where a DIY plastic gun might conceivably be taken.
Re: I think its well priced for the first practical electric saloon
Good grief, cars are expensive, aren't they?
I paid $1 for the car I'm driving these days. Of course, once you add taxes and insurance, maintenance, and so on it's a bit more.
Eventually it'll get to the point where it's not worth coaxing any more miles out of it, and I'll probably have to part with a few hundred dollars for another one. Oh well.
Of course it helps to live in a state where you're allowed to put pretty much anything on the road.
 On the "is this vehicle entirely unusable unless I fix this?" plan.
Re: CR reviewers focus on driving.
that's why their car reviews always include information about frequency and cost of repairs as well as resale value when calculating the ratings for other cars
Not to mention efficiency, comfort, safety, convenience, fit and finish... The "focus on driving" claim above is unsupportable. CR reviews have often rated cars with mediocre handling relatively highly, and the road-test portion of their reviews starts with "ride quality and comfort" before moving on to handling.
They put the Tesla in their "luxury car" category (reasonable), which means somewhat different expectations - that group should have excellent fit and finish, for example. But if anything they put less of an emphasis on handling for luxury cars than they do for, say, sedans; the Hyundai Genesis is currently their #8 luxury car, with an 87 rating, and they complain about its "unsettled" ride and say the responsiveness "isn't in the category of a sports sedan". CR is not Car and Driver.
Re: such a waste
While the corrrespondent clearly means "would have", better grammar in the chosen context would have been "had". There is in the text a god awful two-fold grammatical error.
I'm no prescriptivist, and I'll note that the use of "would have" for the subjunctive mood rather than the conditional has become very common at least in US English. For example:
"If Alice would have gotten her gun, she would have shot Bob's ass."
is now a common construction here. Traditionally, of course, "would have" was used to indicate the conditional, and "had" for the subjunctive:
"If Alice had [or 'Had Alice'] gotten her gun, she would have shot Bob's ass."
So from a purely descriptivist point of view, the "subjunctive would" has arguably become so common that it's merely dialectal, and not a nonstandard usage or grammatical misapplication.
But I agree it grates, particularly when combined with the of/have homophone error.
 Such as it is, in English. Some grammarians argue that (modern) English has no subjunctive mood proper, and that "would have" and similar constructions should really be considered something like subjunctive adverbial phrases. Whatevs.
Re: But there ARE uses for this!
And in other news, Reg readers recite all the same applications for augmented-reality technology that have been tried or mooted for the past couple of decades. Oh, wait, they seem to have left out "virtual tour guide".
In other words, yawn. This has all been done before. Yes, there are vertical markets for AR in the workplace. No, there's no reason at this point to believe Google Glass or knockoffs thereof will significantly increase demand outside those vertical markets.
Have a day.
Re: Oh Belgium...
Except that those of us who actually use Windows 8 (or 7) haven't seen a BSOD for YEARS. Literally, years.
I have a Win7 machine that crashes with a STOP or other kernel fault at least once a month. Literally.
Sure, it's probably the fault of a driver written by some idiot at some second-tier hardware supplier. (I really ought to have paid the relatively small incremental amount for the better components - Intel wireless instead of Realtek and the like.) And if it's not that, it's likely an intermittent fault in memory or some other hardware component.
But Win7 did not magically make all PCs stable, and neither did Win8.
New languages and frameworks?
New languages such as Ruby, Node.js, and Scala were created with specific frameworks, but programmers do not code in only one language
Scala is a "new" language, in the sense used by the article, but it compiles to JVM bytecode or CIL, and so uses the Java or .NET framework. That makes it interoperable with the many other languages that compile to one of those two intermediate representations.
Re: Yes and no.
think waving your arms manically around like a demented Tourette's sufferer is a piss-poor way of controlling anything
Oh? And how do you control your orchestra, pray tell?
You wouldn't want the young'uns accidentally getting an eyeful of "Angry Northern Amateur Housewives HD" during the X factor
It's better than the alternative, if the alternative is letting them watch "X Factor".
Oddly, Schmidt has some technical chops - he is the co-author of the original lex, after all. And he worked at Bell Labs and Xerox PARC, which are not generally places for the completely clueless. But somewhere along the way he seems to have decided to abstract his thinking away from technical niceties in favor of making vague pronouncements about what IT ought to do, rather than considering what it is practical and plausible to do with IT.
 That is, the lexical-analyzer-generator of that name often included with UNIX distributions. Obviously he did not invent the lexical analyzer per se. And no, lex isn't a work of genius; many CS students throw together a lexical-analyzer-generator as part of coursework, for example. (I myself wrote one from scratch in Scheme, and have done a couple of special-purpose one-off generators in C for DSLs that didn't need full-on lex-style treatment.) But it requires some understanding of moderately technical concepts.
Re: On course for UK - Oz in 30 minutes
+4g is fucking weird if you have no experience of high G.
According to Wikipedia, one common carnival ride exposes riders to 3.5g. I doubt many people would be able to distinguish between brief experiences of 3.5g and 4g, and I've never heard a carnival ride described as "fucking weird". But no doubt YMMV.
That said, a couple of online references I found with a quick search suggest acceleration for commercial aircraft is rarely higher than about 1.5g (accidents aside). I expect 3g, even for short periods, would be disconcerting for many passengers and dangerous for some in relatively poor health or otherwise less able to tolerate it. For your average business traveler, probably not an issue, but liability would be a concern if this were commercial technology.
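The kinematics are easy to sketch, under heroic assumptions (straight-line path, constant acceleration to the midpoint then constant deceleration, ignoring drag, Earth's curvature, and the actual ballistic trajectory):

```python
# Back-of-the-envelope: ~17,000 km UK-Australia in 30 minutes.
distance = 1.7e7                     # metres (rough great-circle figure)
time = 30 * 60                       # seconds
# Cover d/2 in t/2 from rest: d/2 = (a/2)(t/2)**2, so a = 4d/t**2.
accel = 4 * distance / time ** 2
g = accel / 9.81
assert 2.0 < g < 2.3                 # roughly 2.1g, 15 minutes each way
```

So the sustained load is "only" around 2g even on this crude model - uncomfortable for granny, routine for anyone who's ridden that carnival ride.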
Re: I can tell you that this stuff is obscure. Really obscure.
I think AC @22:10 captured it pretty well. It does help to have some familiarity with Hamming and Reed-Solomon codes, and/or with the application of group theory to error correction. (Groups are simpler structures than fields, and you can use them to create error-correcting codes that operate similarly to R-S, PMDS, and SD codes, so they give you the idea with a bit less mathematics to understand.) I generally dig out my copy of Gersting's textbook Mathematical Structures for Computer Science when I need a refresher on this sort of stuff, but surely there are good introductions available online.
My CS degree included a couple of discrete mathematics classes and one in linear algebra - I did a quick skim of the paper, and I didn't see anything more esoteric than the stuff we covered there, if memory serves (this was back in the '80s). Do they not teach that sort of thing these days? (I realize many, probably most, Reg readers don't hold CS degrees - IT and CS are not at all the same thing - but surely a good number do.)
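For anyone who wants a taste without digging out a textbook, a Hamming(7,4) encoder with single-bit error correction fits in a few lines (toy code, not production ECC; parity bits at the power-of-two positions, so the syndrome is just the XOR of the 1-based positions of the set bits):

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7

def hamming74_correct(c):
    """Locate and flip a single-bit error, then return the data bits."""
    c = c[:]
    s = 0
    for pos, bit in enumerate(c, start=1):
        if bit:
            s ^= pos           # nonzero syndrome = error position
    if s:
        c[s - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                   # flip one bit in transit
assert hamming74_correct(word) == [1, 0, 1, 1]
```

Reed-Solomon does the same trick over finite fields instead of single bits, which is where the heavier algebra comes in.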
Re: When are we going to get Encrypted Mail ?
When are we going to get Encrypted Mail ?
Having 'encrypted for transmission' built into email programs sure would help me feel safer.
So why not use an email client that supports encryption, digital signatures, and the like? There are plenty of options for many popular MUAs, such as Thunderbird.
Now, no doubt they can un-encrypt it with some little effort, but at least they would have to try.
Oh, there's some doubt, if you use well-tested algorithms and good security practices. While the tinfoil-hat brigade (and Hollywood) would have you believe that the government can decrypt arbitrary messages merely by frowning at the screen while hex dumps flash by and the Charismatic Leader shouts instructions, the more likely situation is that governments do not possess magical decryption techniques. And even if they do, they have better mechanisms for getting your data (or mine), since they can make our lives miserable. Chances are, they don't care about your email, and increasing the work factor even a little bit will send any curious snoops elsewhere for lower-hanging fruit.
Re: You can try already with JCT : Do calculations on encrypted messages
Yup. Unless I missed something, this new release is a better-performing alternative to Gentry & Halevi, using a similar but somewhat different approach. It's all ultimately derived from Gentry's original work at IBM and his PhD dissertation, and he's worked on these algorithms and most of the ones that have been proposed for fully-homomorphic encryption (as far as I'm aware). So the JCT would be a good way to study the ideas; HElib is useful if you want to try putting it into production.
Neat stuff, in any case. The mathematics might be a bit daunting for non-practitioners, but folks with a CS background ought to be able to puzzle out the ideas in general terms.
 Assuming both schemes stand up equally well under attack, which remains to be seen. They rely on the hardness of somewhat different problems, and of course there may always be implementation flaws that expose weaknesses not part of the underlying algorithm.
 Actually multiple approaches, according to the abstract for the BGV paper. You can use RLWE without bootstrapping, RLWE with bootstrapping as an optimization, or LWE. These are variations of the Learning With Errors problem, which involves approximating one function from a group of functions, given samples of the function's input and output, some of which are incorrect (noise).
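A toy illustration of the LWE idea in Python (invented parameters, nothing remotely cryptographic): anyone holding the secret can strip a sample down to its small noise term in one line; recovering the secret from the samples alone is the part believed to be hard.

```python
import random

random.seed(1)
q, n = 97, 4                         # toy modulus and dimension
secret = [random.randrange(q) for _ in range(n)]

def lwe_sample():
    """One LWE sample: (a, <a, s> + e mod q) with small noise e."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(-2, 3)      # small error term
    b = (sum(x * y for x, y in zip(a, secret)) + e) % q
    return a, b

a, b = lwe_sample()
# With the secret in hand, the residual is just the small noise e mod q.
residual = (b - sum(x * y for x, y in zip(a, secret))) % q
assert residual <= 2 or residual >= q - 2
```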
Hotmail was also quite unique in that it seemed to be pre-spammed. All you had to do to get your fix of penis pill offers and marriage proposals from attractive Ukrainians called Alina was sign up and log in, whereupon you'd find a selection of crap that had arrived before the welcome email.
It's possible spammers were polling the Hotmail SMTP servers for new valid user IDs, using the RCPT and/or EXPN commands, and candidate names derived from lists of existing email addresses and heuristics for generating new ones. At one time this was apparently common practice; I remember discussions of it on BUGTRAQ or one of the similar lists. (A simple countermeasure is to put a small delay in processing RCPT and EXPN, so it becomes less feasible to test candidate addresses.)
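The arithmetic behind the countermeasure is straightforward - harvesting is a volume game, so even a small per-command delay kills it. Illustrative numbers only:

```python
# Why a small delay before each RCPT/EXPN reply works as a tarpit.
candidates = 10_000_000     # guessed local parts in the dictionary
delay = 5                   # seconds added before each reply
connections = 10            # parallel probing connections
days = candidates * delay / connections / 86_400
assert days > 50            # ~58 days for a single dictionary pass
```

The delay is imperceptible to a legitimate mail exchange, which sends a handful of commands per message, but ruinous to anyone testing millions of candidate addresses.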
Re: American credibility?
There I fixed it for myself
And received three downvotes. We'll have no editing here at the Reg!
The internet was full of porn even then, you just had to download multiple files from newgroups and stitch them together
Plenty was available via FTP, too. And no doubt Gopher, and probably other Internet protocols. And gateways to BBSes, from which it could also be downloaded directly using XModem and its successors, Kermit, uucp, etc. Probably someone was serving porn via IND$FILE.
only photos though, no video.
Porn video was also available online pre-WWW (though perhaps not on Usenet binary groups, as even low-res compressed video would have been terribly large for many links of the day when uuencoded). Danged if I can remember a specific reference (it was something I was only tangentially aware of, when the occasional acquaintance felt the need to demonstrate some discovery), but I have a clear recollection of horrible, grainy, audio-free, VGA-resolution video (about as bad as VCD, but VCD apparently only appeared in 1993) of unattractive people engaged in what appeared to be stunningly unrewarding exertions, while employed at a location I left in late 1991.
Re: I don't have a problem with mp3
I don't have a problem with MP3, full stop. But that's because I care about content, not fidelity or audio quality. If it's recognizable, that's good enough for me. (And sorry, Alistair, but you're wrong about what "anyone would choose to play audio that mattered to them". Your way is not the only way in which audio can matter.)
Of course, I also think color was only a minor improvement in TV picture technology, and I have no use whatsoever for HD. I realize many consumers do value audio and picture quality, and that's fine; but it'd be nice if they realize not everyone does, and stopped belittling those for whom mobile-phone sound, or whatever, is perfectly suitable for their needs.
Re: Just how many do you need to register?
With "Gambia", that's because the country's full name is "Republic of the Gambia", so named because it's a republic that stretches along the banks of the Gambia River.
Tangential anecdote: Back around 1990 I read a student essay written by a young woman from The Gambia, who referred to it as "the sort of country where everyone knows everyone else" (or words to that effect). While TG is the smallest country on the African mainland, and the population was smaller in 1990 than it is now, that seems like a bit of an exaggeration. But with an area of about 0.5 Waleses, it is smaller (and less populous) than Connecticut. So it really needs that definite article.
Doesn't more nodes generally equal greater physical distance? I cannot imagine that it doesn't.
I'm 18 hops from systems in the msu.edu domain, less than 15 miles from me. I'm 6 hops from a machine on our corporate network that's physically 3700 miles (geodesic - a bit shorter if you tunnel through the crust) away.
I've seen corporate networks where there were more routers between machines in the same building than there were between most of those machines and some Internet sites.
On average, geographically distant sites are probably a handful of hops further away than local ones. But the relationship is far from linear, and there are a lot of exceptions. So topological distance is not well-correlated with physical distance.
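A toy illustration (invented topology, BFS standing in for routing):

```python
from collections import deque

# A building full of internal routers, plus one long-haul link.  Node
# names are made up; hop count need not track geographic distance.
links = {
    "desktop": ["bldg-rtr1"],
    "bldg-rtr1": ["bldg-rtr2", "wan-edge"],
    "bldg-rtr2": ["bldg-rtr3"],
    "bldg-rtr3": ["server-next-door"],
    "wan-edge": ["remote-dc"],        # 3700-mile leased line: one hop
    "server-next-door": [],
    "remote-dc": [],
}

def hops(src, dst):
    """Breadth-first search: fewest hops from src to dst."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for nxt in links.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))

assert hops("desktop", "server-next-door") == 4   # same building
assert hops("desktop", "remote-dc") == 3          # other continent
```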
Re: There are other ways to unlock your door.
Hide-a-key. Neighbor has a key.
This whole thing is a solution in search of a problem, as far as I'm concerned. That doesn't mean it isn't of interest to others, but I'll have a home automation system when they shove it into my cold, dead hands.
I have AT&T now. Last thing I want to do is to buy any more services from them.
Ditto. Even if I were interested in this Internet of Things nonsense, I wouldn't want AT&T involved. Just today I had to cancel yet another pointless service they'd helpfully added to my mobile account. And at least once a week I get the same offer from them, by mail, for AT&T U-verse or whatever they're calling it now. They've sent me probably fifty or sixty of those letters; even at business rates that's an absurd per-customer marketing expense.
Re: I'm just waiting
You can set osx to [have focus follow the mouse cursor]
Windows too. In XP, "focus-follows-mouse", aka the "implicit focus policy", could be enabled with the misnamed "X Mouse" feature of TweakUI. In Vista, Win7, and the past few releases of Server, it's been an option in the Ease of Access control panel. I don't know whether it's still an option in Win8.
Unfortunately some Windows applications misbehave under the implicit focus policy. Visual Studio 2008 (which was a showcase of poor UI ideas) was one; it had various windows, such as the Properties dialog, that would automatically unmap themselves when they lost focus, unless they were "pinned" in the manner of some older Sun GUIs. When the dialog was first mapped, it wasn't pinned; and under implicit focus, it immediately lost the focus, and unmapped itself before you could do anything. Impressively stupid. It was possible to work around this by setting a focus-change delay; that made applications like VS2008 usable, but it was an annoying compromise, since sometimes the focus would lag behind where you expected it to be.
 As the Motif window manager had it, back in the day. I don't remember what older WMs made this configurable, or what they called it if they did.
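For what it's worth, TweakUI's "X Mouse" checkboxes are, as far as I know, just SystemParametersInfo calls under the hood. A Python/ctypes sketch (Windows-only, deliberately a no-op elsewhere; constants from winuser.h):

```python
import ctypes
import sys

# Constants from winuser.h.
SPI_SETACTIVEWINDOWTRACKING = 0x1001  # focus follows mouse
SPI_SETACTIVEWNDTRKZORDER = 0x100D    # also raise the window on focus
SPI_SETACTIVEWNDTRKTIMEOUT = 0x2003   # focus-change delay, milliseconds
SPIF_SENDCHANGE = 0x02

def set_focus_follows_mouse(enable=True, delay_ms=0):
    """Sketch: toggle implicit focus on Windows; a no-op elsewhere."""
    if sys.platform != "win32":
        return False
    user32 = ctypes.windll.user32
    user32.SystemParametersInfoW(
        SPI_SETACTIVEWINDOWTRACKING, 0, ctypes.c_void_p(int(enable)),
        SPIF_SENDCHANGE)
    user32.SystemParametersInfoW(
        SPI_SETACTIVEWNDTRKTIMEOUT, 0, ctypes.c_void_p(delay_ms),
        SPIF_SENDCHANGE)
    return True
```

The timeout is the focus-change delay mentioned above: set it to a couple of hundred milliseconds and misbehaving applications like VS2008 become tolerable.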
Back to probability class, Andrew
He also uses a technique called objective Bayesian analysis. Conventional subjective Bayesian analysis relies on highly subjective uniform priors, aka deliberatively-informative "expert" priors, as parameters.
Well, that's a glorious oversimplification and misrepresentation. Hell, various Wikipedia articles have better two-sentence treatments of the distinction.
First, there's nothing more "conventional" about subjectivist interpretations of Bayesian priors than about objectivist ones. That adjective is pure pathos - it's an attempt to bolster your argument by presenting Lewis' as somehow daring or groundbreaking merely because he adopts an objectivist stance. Weak.
Second, it's completely untrue that a subjectivist interpretation of priors "relies on ... subjective ... priors", much less ones that are "highly subjective" (more weaseling), and particularly not on uniform ones. Indeed, Jaynes' classic three-card monte example of an objectivist prior uses a uniform prior; Jaynes' point is that the use of the uniform prior is justified merely by the restricted information available to the observer, and so is not subjective. (That is, any rational observer would conclude that the prior probability of the winning token being in any one of the three slots is 1/3. That's not a belief, he argues; it's the only sensible choice.)
The difference between objectivist Bayesians and subjectivist Bayesians is that the latter say: hey, when you choose a prior probability, you're doing it based on what you believe is the appropriate value. There's no formal way to arrive at a "correct" prior. The objectivists, on the other hand, say, no, often there is a formally-correct way to arrive at a valid prior, based only on the structure of the problem and the limited factual information available to the observer.
It's a philosophical distinction that affects how the model is set up, so it's certainly appropriate for Lewis to include it in his title and describe how he arrives at his priors. But the article does its readers a disservice by treating the distinction as 1) supporting Lewis' argument or granting him any special authority, and 2) some sort of failing on the part of subjectivists. And, of course, if you're not going to explain a technical point correctly, it would really be better to avoid trying to explain it at all.
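The three-slot example runs mechanically, whatever your philosophy of where the prior came from. A toy Bayes update in Python (deliberately ignoring the Monty Hall host-behaviour subtlety; the likelihoods are illustrative):

```python
# Uniform prior over three slots - per Jaynes, forced by the symmetry
# of the observer's information, not by anyone's "belief".
prior = {1: 1/3, 2: 1/3, 3: 1/3}

# Observation: slot 3 is revealed to be empty.  P(observation | winner
# is in slot h) for each hypothesis h:
likelihood = {1: 1.0, 2: 1.0, 3: 0.0}

unnorm = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}

assert abs(posterior[1] - 0.5) < 1e-9 and posterior[3] == 0.0
```

The philosophical fight is entirely about where `prior` comes from; the update itself is the same arithmetic for everyone.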
Re: @Steve Knox - Do they realize what they just said?
Oh, by way, there is no compromise possible on freedom.
Every single person who doesn't reject the social contract disagrees with you. Compromising on freedom is the essence of law, and of any form of government other than complete anarchy.
Really, that has to be one of the dumbest slogans I've seen in years. And perching on a pile of corpses in an attempt to claim the moral high ground (especially in the name of "end-user digital freedoms", which in the context of this article would appear to constitute mostly cheap entertainment for the privileged) is not only unpersuasive but ugly.
Re: Anything is still possible
Feel free to adopt that motto if it makes you happy, but it's not consequentially different from simple subjective solipsism - believing that nothing outside the conscious self can be known. Of course in that case anything is always possible; all of your perceptions could be hallucinations.
But there's little point in taking solipsism as a substantive foundation - of trying to act based on it. And there's equally little point in trying to act on the belief that physics is different in parts of the universe we can't perceive. Maybe so, but so what?
Formal and empirical results that challenge our models are interesting, because they suggest there's something new to know, and perhaps to do. Theories that challenge our models are sometimes interesting, because investigating them might lead to new formal or empirical results. Saying "maybe our models don't hold under some conditions that we have no empirical access to, and no formal mechanisms to describe" is chit-chat.
if I understand it right, you would have to get near the collapsing neutron star first, in order to make use of the warped space/time and slingshot yourself to an equivalent point on the other side?
I admit I was thinking the same thing, but I've never gotten around to reading Haldeman's Forever War (I really should some day; I know it's a classic), so I don't know if he addresses this point.
According to one article, the closest known neutron star is 250-1000 ly away, so we'd still be needing the big suitcase.
Personally, my favorite fantasy space-travel mechanism is still Larry Niven's proposal to turn the entire solar system into a vehicle, by wrapping the sun with a big ol' ring of electromagnets (possibly in concert with building a full-on ringworld, because hey, while we're dreaming...), and then turning it into a really big ramjet aimed orthogonal to the ecliptic. Niven suggests that by the time you've thrown out an appreciable amount of solar mass (as reaction mass), you're scooping up enough interstellar hydrogen to make up for it. Now you have a "generation ship" that has the best life-support system anyone's ever seen. Just maneuver it into the neighborhood of an interesting stellar system, then use conventional rockets to visit. Space caravan!
Re: Stupid question
that amount of metal will have some compressibility
Any rod of metal has "some compressibility". The rigidity of a solid is the result of electromagnetic interactions, which are quite willing to redistribute mechanical force orthogonally to the axis of compression. So unless your "rod" is a single layer of atoms, it can always be compressed - it will just expand to the side as necessary.
And since mechanical force travels by propagating EM interactions, it can't travel down the rod faster than EM can propagate along it. There's no need to invoke compression for this thought experiment; the OP's assumption that the far end of the rod moves immediately is simply wrong. Atoms at the near end move, which decreases their distance to their nearest neighbors, which increases the electromagnetic force between them and their neighbors. Once that force propagates to the neighbors, the neighbors are accelerated by it and move in the direction that the rod is being pushed. Nothing happens immediately.
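The chain-of-atoms picture above is easy to sketch numerically. Here's a toy 1D mass-and-spring model (all the constants are invented for illustration; the lattice spacing, stiffness, and masses are not meant to match any real material) - push one end continuously and watch the other end sit still until the disturbance, travelling at the chain's own wave speed, has had time to arrive:

```python
# Toy model: a "rod" as N point masses joined by stiff springs.
# Push the near end steadily; the far end cannot know about it until
# the compression wave (speed ~ sqrt(k/m) lattice spacings per unit
# time) has propagated down the chain. Nothing happens immediately.

N = 200          # number of atoms in the chain
k = 1.0e4        # spring stiffness between neighbours (arbitrary units)
m = 1.0          # mass of each atom
dt = 1.0e-4      # integration time step
push = 1.0       # speed at which the near end is shoved

x = [0.0] * N    # displacement of each mass from its rest position
v = [0.0] * N    # velocity of each mass

steps = 2000     # total simulated time: 0.2 units; wave front reaches
                 # only ~sqrt(k/m) * 0.2 = 20 atoms down a 200-atom chain
for _ in range(steps):
    x[0] += push * dt                      # keep shoving the near end
    for i in range(1, N - 1):
        # net spring force from the two neighbours
        f = k * (x[i - 1] - x[i]) + k * (x[i + 1] - x[i])
        v[i] += (f / m) * dt
    v[N - 1] += (k * (x[N - 2] - x[N - 1]) / m) * dt   # free far end
    for i in range(1, N):
        x[i] += v[i] * dt

print(f"near end moved {x[0]:.3f}, far end moved {abs(x[-1]):.2e}")
```

The near end has moved a visible distance while the far end's displacement is effectively zero - exactly the OP's "rigid rod" intuition failing, in about thirty lines.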
Viable GSM competition?
While this may not be good news for many MetroPCS customers - typically people who want a low-price contract that's mostly good in their local area, because they don't travel much - it might help T-Mobile improve coverage and build out their network, and finally provide some decent competition in the US for GSM phones.
I'd like to drop AT&T, which doesn't offer a plan that comes close to matching my usage (so I pay far more than I should), but if I want a GSM phone (and I do, since I travel to Europe on occasion) and I want non-roaming service in my home (which I do), they're the only choice.
I live in a city less than 20 miles from the state capital, but I'm not in range of a T-Mobile tower. Apparently Catherine Zeta-Jones riding a motorcycle around the country has not, in fact, done much to improve their network.
And now I'm thinking of The Darling Buds of May, of blessed memory. Ah, youth.
Re: If only...
I think you've just described the "telephone"... :-)
I think you've just explained the joke.
I'm sure the budget would stretch to a torx screwdriver and a sheet of emery cloth.
I'm sure that's not "appropriately certified". It doesn't matter whether it's effective; the point of the original post is that it has to be certified.
Rather than dismantling the drives and hand-sanding the platters, it'd be a lot faster to throw a drive in a vise and cut a slot through it with an angle grinder. That doesn't make the entire surface of each platter unreadable, if you have your magnetic-force microscope handy, but it clearly raises the cost high enough that an attacker is extremely unlikely to be motivated to try to recover any information. But again, not certified. Hell, these are drives that have suffered hardware failure; I doubt there's any information on them that's sufficiently valuable to even justify trying to swap the controller board and get the drive going again.
This isn't an information-protection problem. It's an auditing problem.
Re: Internet of Fail?
I like the idea of everything connected
Why? That's a sincere question - like many of the other commentators, I can't imagine having any use for most of what's being touted here.
I don't want it to become a problem because the logic circuit in my dryer failed or a previously unknown programming error 'bricks' my dryer
Already an issue with many appliances, even without Internet connection. I've replaced logic boards in my washer three times (all under warranty, so the vendor's more than lost their profit on the unit). It's a lousy design, but it's also a common one - the manufacturer of this washer OEMs to several of the major US brands, and apparently to several European ones as well.
(If I lived alone I'd have bought a cheap washer, but I'm not the one who does the laundry around here.)
The washer has three logic boards with custom ASICs. The boards cost around $200-$300 each if you buy them through the appliance vendor's parts division; you can get them directly from the manufacturer for about half that. Dispensing with the unnecessary electronic controls would get rid of one board. Alternatively, they could use an off-the-shelf fanless Linux box connected to a single board with a USB adapter, some A/D logic, and a handful of relays to control everything.
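To show how little logic that single board actually needs, here's a sketch of the controller side. Everything in it is invented for illustration - the RelayBoard class, the relay names, and the cycle timings stand in for a real USB relay driver and a real wash programme:

```python
# Hypothetical sketch: a wash cycle is just a sequence of
# (relay, duration) pairs. A real controller would talk to a USB
# relay adapter; this stand-in merely records what it was told,
# which is enough to show the control logic fits in a dozen lines.

WASH_CYCLE = [
    # (relay to energise, seconds to hold it) - made-up timings
    ("fill_valve", 120),
    ("agitator",   600),
    ("drain_pump",  90),
    ("spin_motor", 300),
]

class RelayBoard:
    """Stand-in for a USB relay adapter; records commands issued."""
    def __init__(self):
        self.log = []

    def energise(self, name, seconds):
        self.log.append((name, seconds))

def run_cycle(board, cycle):
    for relay, seconds in cycle:
        board.energise(relay, seconds)

board = RelayBoard()
run_cycle(board, WASH_CYCLE)
print(board.log)
```

Swap the stand-in for a driver that actually toggles relay pins and waits out each duration, and the three ASIC-laden boards reduce to a table of timings.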
"Exciting"? I suppose it is. I'm so excited I'd like to punch whoever came up with the damned thing.
Touchscreen controls are the worst idea for driver ergonomics yet, narrowly beating out "unified" control systems like BMW's idiotic iDrive. Making it more difficult and attention-intensive to operate secondary functions is precisely the wrong thing to do.
I'm going to have to buy a new car in a year or two, and I am really not looking forward to it. (I wouldn't buy a new car at all, if left to my own devices, but marriage is a series of compromises...) It's going to be difficult to find anything that suits our needs and doesn't have one of these moronic "infotainment" systems.
Re: If you can't create tech, criticize it
It doesn't help that Chirgwin misrepresents the situation by failing to distinguish between the browser and standalone environments. This vulnerability does apply to both, but the "click yes" bit is only relevant to in-browser execution, obviously.
This is a serious vulnerability (and it's very similar to a number of the others found by Gowdiak and Security Explorations, so there is little excuse for Oracle engineers to have failed to find it already, unless Oracle is understaffing the effort - which is entirely possible). Once again, it's due to the combination of privileged classes that can violate the security model, and the ability to reflect into those classes. That's a deadly combination. But it's inherently no worse than anything an attacker can do by persuading a user to run native code, or by leveraging any exploit that permits arbitrary code execution.
Getting rid of Java is committing exactly the same error Oracle is making: attacking the symptoms rather than addressing the underlying problem. Attackers have myriad ways to get uninformed and incautious users to give them access to their machines. Stick your finger in the Java hole and watch as the water comes over the top of the dike.
Another data point
... to support the belief that whatever the advantages of Open Software may be, marketing is not among them.
Really, the jokes nearly write themselves. Ah, yes, the wild fly! Free and agile! Friend of dross and corruption! Annoying! Short-lived!
Not to mention what we might do with some of the other meanings of "fly".
Re: The high courts have ruled...
YouTube ... should eliminate all posts containing copyright protected works
Which is precisely what they do, when they're informed of such via a DMCA takedown notice. That's precisely what they're required to do by the law, you ninny.
Re: Re "famous Neil Gaiman"
In future, I will do as newspapers do and write an accreditation before the name - such as "Harry Potter star Richard Griffiths" or "Transformers The Movie voice actor Orson Welles".
A bold move by noted accreditationist Alistair Dabbs!
Even if your results say so, many of the studies being relied on report correlation, not causation. How can you be certain it's not just statistical noise that flagged gene X for disease Y?
Not to mention epigenetic effects, which this sort of testing completely ignores. The more we learn about gene expression, the more we realize just how hugely overstated the discourse around genetic destiny is.
And then there's the possibility that the lab made an error in testing or reporting.
Didn't Orlowski write an article some time back bashing this sort of genetic testing as modern snake oil? A bit overstated (as usual), perhaps, but not so very far off. Certainly before I'd take any significant action on any information obtained this way, I'd want 1) a second test from a second lab, and 2) substantial consultations with a geneticist who's up on current research (including epigenetics and environmental factors) and a specialist in the relevant area.
Of course, people routinely make life-changing (and -ending) decisions based on all sorts of misinformation, superstition, and outright stupidity, so the genetic-testing route is hardly the worst one they could take.