Re: Strange how
In Yahoo's case, anti-relevant might be appropriate.
It is a pity, since Yahoo was founded on some interesting ideas, including a general information model that was quite clever.
Congratulations - it's not trivial to satisfy Poe's Law so precisely.
including anything relating to: employment eligibility, promotion, or retention; credit eligibility
"It's a Fair Isaac drone! Look rich, everyone!"
Though actually using drones would probably make FICO more transparent. At least you can see drones.
at some point, we have to recognize that war and bombs aren't very precise things
Or to put it more succinctly: Complicated things are complicated.
Ethical positions that fit on a bumper sticker may be comforting, but they don't solve many problems in the real world.
Literal SLC can only store zeroes.
It's not a problem. You just socket them in pairs, with the other one wired backward so it only stores ones. Then you use one chip for your zeroes and the other for your ones.
It's those pesky No Level Cell chips that are hard to use.
(I think browsers are better about this than they were ten years ago.)
They may be better, but the CRL and OCSP mechanisms are still fundamentally broken. They help, but they don't help much.
the attacker must install their own root cert on the victim's computer (corporate PC, or via malware, or via dumb PC manufacturers) - unless they've obtained the private key for a "real" root cert...
It's enough simply to compromise a CA that's trusted by the user agent. You don't need the private key for one of the CA's roots or intermediaries (though that does the job). Get the CA to issue you a certificate for a well-known site, signed by a root/intermediate that's trusted by browsers, and you're home free.
And CAs have been compromised many times - that we know of. And those are just the major ones. Of all those little regional CAs in the browser trust list, how many even have auditing practices sufficient to have a decent chance of knowing whether they've been attacked?
I'm pretty sure the poster was talking about the underlying protocols (TCP/IP, UDP) whose development indeed was funded by DARPA. Actually ARPA at the time.
The agency was renamed in 1972, shortly before the first TCP/IP specification was published. So if you want to be completely correct, TCP/IP development was funded by ARPA and then by DARPA.
The great achievement of the "one guy at CERN" was making the data on the internet approachable by the average guy, and in a way that scaled up without central control
Actually, we already pretty much had that. For the first couple of years HTTP and HTML didn't do much that Gopher / Veronica / WAIS / etc didn't also offer. HTTP and HTML succeeded for a few reasons: hypertext had better usability than separate documents and menu-style links; even with character-mode user agents HTML offered some basic presentation markup; HTTP (prior to the barbarism that is HTTP/2) is easy to drive by hand for experimentation and debugging.
Most importantly, though, the time was right. Graphical workstations were becoming common enough (helped by academic efforts like the Andrew Project and Project Athena) that it made sense to create graphical user agents. Not many people had NEXTstations, but quite a few had some sort of X11 box, so NCSA Mosaic (and to a lesser extent other early GUI browsers like Erwise, Spyglass Mosaic, and Viola) became a showpiece for the web. It wasn't much more functional than Gopher+WAIS, but it was prettier.
It extends to other vulnerabilities. Anything that gets you QSEE access can be leveraged to take over the kernel.
And "a bug that Google patched" means "a bug that still exists in most Android devices, which will never be updated", of course.
Simples - because you are not actually root or admin, just what MS decided to allow you to touch and nothing else.
With administrative access on a Windows system, you can elevate to ring 0 (modulo hypervisor intervention on a virtualized system), so you can do anything the hardware will let you do.
Even without elevating to ring 0, stock Windows administrative privileges include debugging any process, including down into the kernel. Kernel debugging on Windows is standard practice for device driver development, security research, etc.
Well, yes. On the other hand, it also describes his problem accurately. He likes to make promises he can't keep. Whether he ever intended to keep them is, from the "poor messaging" perspective, irrelevant. Keeping his mouth shut, or at least speaking in the usual corporate vagaries, would be much wiser.
Those who forget rhetoric are doomed to annoy us with their complaints.
I'm disappointed no-one recognised the photo
I knew it was familiar, for what that's worth, but I didn't bother reverse-searching it to pin it down. And I have heard the story of Chung Ling Soo. In any case, props for a fine cultural reference.
A fine piece by Ms Stob as usual, but ow, that link to that Mashable piece about Philippe Kahn... There are some impressive howlers there, considering they had to be packed into 700 words.
Like, oh, "In 1997 — when the Internet was just four years old". Try 14, kid - and that's assuming we're only talking about the TCP/IP Internet; double that figure if you include the NCP ARPANET (which is arguably more accurate, since it was an internet and the direct ancestor of the Internet).
I almost feel bad for complaining about technical errors in other Reg articles after reading that Mashable piece. Almost.
Borland spun off the only interesting and useful part of their company as Code Gear so that the part still called Borland could concentrate on navel-gazing.
That "navel-gazing" included Visibroker, a product that still brings in millions of dollars in annual revenue. The Caliber and Silk lines also still do quite well. I'm not sure what revenues are like for StarTeam, but I know it's still under active development and has an active customer base.
But I suppose if something's not interesting to you then it's not important.
(Also, the CodeGear division was sold only a year before Micro Focus announced we were purchasing Borland. That's not a lot of time for Borland to have done anything after selling CodeGear.)
Delphi's sets are more powerful than C#'s enums.
No, they aren't. They're more expressive, but equally powerful.
In 25+ years of coding, I have never cared about whether a string is empty or nil. Never!
And no one else has different requirements, eh?
Bah. The manliest plane is the XP-82 "Twin Mustang". Perfect for rugged individuals who come in pairs.
LTE antennas capable of covering around 80 kilometers on the ground
Eh? Covering around 80 square kilometers? a radius (or diameter) of 80 km? A herd of 80 or so wild kilometers on the hoof? What is this claim supposed to mean?
That would be greedy bankers, and the ever-widening gulf between the rich and the poor.
Deplorable as both of those factors are, the financial system (that is, some form of capitalism with various financial markets) has weathered worse in both categories. They're a long way yet from being a threat to the financial system.
Don't confuse dangers of the financial system with dangers to it. Those are very different categories. Capitalism is hegemonic, and hegemonic power systems are very resilient.
Cryptographic algorithms attempt to keep information secret by transforming it into a stream of random numbers, making it unintelligible. These numbers, however, are not truly random since the algorithm uses a predetermined set of mathematical formulae to generate these numbers, so the information can be breached.
Wow. There is almost nothing correct in that entire paragraph.
While this paper mentions cryptographic applications of multiple-source extractors, those applications do not include "transforming [plaintext] into a stream of random numbers", except in the most tenuous sense - and that's if we gloss "random" as "pseudorandom". And "a predetermined set of mathematical formulae" is just an awkward definition for an algorithm, so that whole clause is tautological. And "the information can be breached", insofar as it means anything at all, is obvious; encryption is defined by the possibility of decryption.
Bit-wise XOR might work.
Not for cryptographic purposes. Linear combiners are not cryptographically secure. At best you'll get an output that's no weaker than the stronger of your inputs; in the degenerate case where one source is constant (all zeroes or all ones), XOR simply passes the other source through.
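A quick simulation shows both sides of this. (Toy biased sources with made-up probabilities, purely for illustration.) XOR does shrink the bias of two independent biased streams, per the piling-up lemma, but it can't manufacture uniformity, and it does nothing at all about correlation between the sources:

```python
import random

def biased_bits(p_one, n, seed):
    """Generate n bits that are 1 with probability p_one (a toy biased source)."""
    rng = random.Random(seed)
    return [1 if rng.random() < p_one else 0 for _ in range(n)]

n = 100_000
a = biased_bits(0.7, n, seed=1)   # 0.2 away from fair
b = biased_bits(0.6, n, seed=2)   # 0.1 away from fair
xored = [x ^ y for x, y in zip(a, b)]

# Piling-up lemma: P(xor = 1) = p + q - 2pq = 0.7 + 0.6 - 2*0.42 = 0.46.
# The bias shrinks, but the output is still not uniform, and XOR with a
# constant stream would leave the other stream completely untouched.
print(sum(xored) / n)
```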
Prior may have been based on noticing that the results are more random but was it based on sound math?
Not "more random". That's completely wrong. The goal of an extractor is to provide an output stream that meets various statistical criteria, preserves as much entropy of its inputs as possible, is feasible to implement and use, and is amenable to analysis proving these attributes. What an extractor cannot do is produce an output stream that is "more random" than its inputs. You can't create information entropy using a deterministic process; you can only move it around.
There is a vast amount of research in this area - the paper has four pages of citations, nearly all to relatively recent work. The approach described in the paper is significantly better than the best previously published results.
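The "move it around" point is easy to see with the classic von Neumann debiasing trick. (A deliberately simple sketch, nothing like the paper's construction.) It turns an independent biased source into exactly fair bits, but only by throwing most of the input away:

```python
import random

def von_neumann(bits):
    """Von Neumann debiasing: read bits in pairs, emit 1 for '10', 0 for '01',
    and discard '00'/'11'. For independent biased bits, P(10) == P(01), so the
    output is exactly fair -- but most of the input is consumed per output bit."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

rng = random.Random(42)
biased = [1 if rng.random() < 0.8 else 0 for _ in range(100_000)]
fair = von_neumann(biased)
# Output is much shorter than the input: entropy got moved, not created.
print(len(fair), sum(fair) / len(fair))
```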
But I agree with you, there is definite prior art on this.
Care to cite it? Who has previously published a feasible approach for generating a two-source extractor with good properties, using sources with min-entropy lower than 0.499n (or the other previously-best approaches, such as Raz's where one source only needs min-entropy ~ log n, but the other needs better than n/2)? I'm sure we'd be glad to hear about this lost treasure.
The sad fact is that this result is not new at all.
The sad fact is that you're posting complete bullshit, and clearly did not look at the paper. Nor did the eight idiots who upvoted you.
So what's wrong with my method?
It was used in the original Netscape SSL implementation, and broken shortly thereafter. It's not suitable for cryptography.
it might be possible to craft two chunks of pseudo-random and get something much closer to an actual random number
No, that is not possible, assuming you're talking about a deterministic procedure and causality is still in force.
What is possible is combining two low-quality sources into a less-biased, less-predictable one. The goal is to do so with poor (low min-entropy) sources, in a practical fashion (i.e. while preserving as much entropy as possible, minimizing bias, using reasonable resources, etc). This paper represents a significant advance in that area.
Why don't you look at the paper and find out?
In the time it took you to write that post, you could have glanced at the abstract and answered your own question.
Argh argh argh.
The paper provides the mechanism for combining the two sources. That is what it is about.
It is not claiming that combining two weak entropy sources is a novel idea. In fact, as anyone who ever reads such things would expect, it goes into the extant research at considerable length.
I wonder how they do this
The link to the paper's right there in the article. Section 1.2 in particular describes most of the theory behind it. It's not trivial.
There are a few key ideas, such as Ramsey graphs, which we might say intuitively are graphs that are "sufficiently messy"; and resilient functions. A resilient function is a Boolean function of many inputs (bits), such that even if several of the inputs are fixed (influenced by an attacker), the output is still unpredictable with high probability. The key to their approach is a feasible method of generating a suitable resilient function.
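As a toy illustration only (the paper's actual construction is far more sophisticated): majority over many inputs gives the flavor of resilience. An adversary who fixes a handful of bits barely shifts the output distribution:

```python
import random

def majority_with_fixed(n, k, rng):
    """Majority of n bits where an adversary has fixed k bits to 1
    and the remaining n - k bits are fair coin flips."""
    free_ones = bin(rng.getrandbits(n - k)).count("1")  # popcount of fair bits
    return 1 if (k + free_ones) * 2 > n else 0

# With n large relative to k, the 20 adversarial bits move the output
# probability only slightly away from 0.5. (Numbers here are my own toy
# choices; real resilient functions have much stronger guarantees.)
rng = random.Random(0)
n, k, trials = 100_001, 20, 10_000
ones = sum(majority_with_fixed(n, k, rng) for _ in range(trials))
print(ones / trials)   # stays near 0.5 despite the fixed bits
```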
I'd imagine that it is well known that as you add more and more streams of random numbers together the closer the resulting series becomes to being random drawn from a normal distribution.
That's not well known, because it's not true.
First: If you actually have "streams of random numbers", then combining them in some fashion (let's take "add" to mean any mechanism for combining them, and not just the arithmetical operation) could reduce entropy and produce a less statistically random output. That could be because the streams are correlated in some way, or because some of the streams have lower entropy, or because the combining mechanism is lossy.
Second, if in fact you mean pseudorandom streams, then some combining mechanisms can create output that has better statistical randomness; but no deterministic mechanism can create true randomness, and some combiners will, again, produce worse output, not better.
Compute two 64-bit numbers seeded by different values, and now you have a random 128-bit number.
No. First, what you have is pseudorandom; and second, linear combinations of LCPRNGs and other common PRNGs are not suitable for cryptographic purposes. That was demonstrated decades ago.
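To illustrate why (a sketch using the classic "minstd" Lehmer LCG parameters, chosen here purely for illustration): a full-state LCG leaks its entire state in a single output, so an observer can predict every subsequent value. Gluing two such streams together just means recovering two states:

```python
# Lehmer/minstd LCG: x' = (a * x) mod m, with full state exposed per output.
M, A = 2**31 - 1, 16807

def lcg(seed):
    x = seed
    while True:
        x = (A * x) % M
        yield x

gen = lcg(123456789)
observed = next(gen)             # attacker sees one output...
predicted = (A * observed) % M   # ...and can now compute the next one
assert predicted == next(gen)
print("predicted the stream from a single observed value")
```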
Algorithm M is OK for some purposes, but you're leaking significant chunks of output from source A on each iteration, and 2**16 is not a lot of confusion for cryptographic purposes, particularly if an attacker can recover a sufficiently large stream. And in general it requires the two sources have decent statistical pseudorandomness.
The algorithm M construction is also worryingly similar to that of RC4, which makes me wonder if some of the correlations that break RC4 might also show up in M.
In any case, it's not terribly relevant to the paper cited in the article, unless and until someone does a similar analysis and shows that it has competitive entropy recovery, error (bias), resilience, etc.
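For reference, a rough Python sketch of the Algorithm M idea as I recall it from Knuth: a table filled from source X is consumed in an order chosen by source Y. (Table size and the toy sources are my own illustrative choices, not Knuth's exact presentation and certainly not the paper's construction.)

```python
import random

def algorithm_m(x_src, y_src, k=64, m=2**32):
    """Sketch of Knuth's Algorithm M: keep a table of k values drawn from
    source X; on each step, source Y selects which slot to emit and the
    emitted slot is refilled from X."""
    table = [next(x_src) for _ in range(k)]
    while True:
        j = (k * next(y_src)) // m   # Y picks a slot in [0, k)
        yield table[j]               # emit the shuffled X value
        table[j] = next(x_src)       # refill from X

def toy_source(seed, m=2**32):
    """Stand-in source; any stream of values in [0, m) would do."""
    rng = random.Random(seed)
    while True:
        yield rng.randrange(m)

combined = algorithm_m(toy_source(1), toy_source(2))
sample = [next(combined) for _ in range(5)]
print(sample)
```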
Lots of people had already had the same idea,
Obviously, since we have many widely-used procedures for combining both random sources and PRNG outputs. (It's not clear which you're referring to.) Assuming by "the same idea" you mean what the OP meant - the general, and rather obvious, idea of combining random or pseudorandom sources. In computing, that goes back at least to Von Neumann ("Various techniques used in connection with random digits", 1951).
Of course the specific mechanism described in the paper is not entirely novel either; we've known about Ramsey graphs since, er, Ramsey (so sometime in the 1920s). This particular application does appear to be new, and worthy of respect, despite what the Reg brain trust believe.
but the assumption from the intelligentsia was that there was no way to mathematically generate random numbers
This remains true, and irrelevant, except as background, to the discussion at hand. And it's not an "assumption from the intelligentsia"; it's a fundamental consequence of deterministic processes, at least as long as causality remains intact.
so combining two non-random streams was merely going to give you a different, but still non-random, stream.
The paper that the article refers to describes a mechanism for extracting a high-entropy bitstream from two sources, each of which have lower entropy. (It compresses the source streams, consuming multiple input bits to produce each output bit, obviously, since you can't create information entropy using a deterministic process.)
All you people had to do was click on the link and glance over the abstract. Even if you don't have the mathematical background to understand the details, phrases like "the best previous extractor" and "the best previous construction" would have clued you in: this is a new, significantly better solution to a problem that has seen considerable research. That's the sense in which it is a "breakthrough".
used the very same idea of combining two random number generators together
The very same idea? You used the combining mechanism Zuckerman and Chattopadhyay describe in their paper?
Or do you mean "I combined two [PRNGs, or more likely two instances of the same PRNG with different seeds] using some trivial mechanism"?
Because, shockingly, over the past several decades many researchers have examined various ways of combining PRNG output. We've known for a long time, for example, that any linear combination of LCRNGs is breakable.
I'm no genius and definitely no crypto expert
Clearly. Apparently you don't know enough about cryptography to understand the difference between what's being described (albeit poorly) in the article, and any half-assed algorithm for combining two PRNGs.
But then from the other comments we can see you're in good company.
the amount of up-front technological and scientific investment to start a successful biotech startup is of orders of magnitude larger than for CS
Sure, but that doesn't contradict asdf's actual claim, which is that some useful technology has been created by people who do not hold degrees.
I'm a fan of university degrees myself - I have three of the things, and nearly finished a fourth (and who knows, if I ever get the time...). And there is clearly much ignorance (some probably willful), and quite possibly some fraud, at work in the whole Theranos debacle. But asdf is correct that KA1AXY's jab at people who do not hold degrees is a faulty generalization.
What the hell are people doing with their phones that they can't get through a day on battery?
Carrying it in an area with weak signals and many dead spots does it for mine. At my (main) home, after a typical day's use the battery is still at 80-90%. Here in the Fortress of Altitude, it sometimes runs down by mid-day.
If you want easy navigation, install "mouse gestures" or a similar plugin, or get a keyboard with forward and back buttons.
And take my hands off the home row? That cure is worse than the disease.
(Alt + Left-Arrow is a lousy accelerator for the same reason, but at least those are keys I use frequently, so they're easy to touch-type. Random non-standard single-purpose only-on-some-keyboards keys like Forward and Back buttons are much more disruptive.)
Or better still, when responding to backspace, look at the cursor:
Is it in a form field? Do a backspace, otherwise use it for page navigation.
That's what most browsers already try to do.
It fails when idiot web designers create their own pseudo-form page elements using scripting. And that happens all too often.
For example, many Firefox users have complained about not being able to type a slash in a "form" on some web page, because the input areas are actually just scripted div elements, and Firefox by default uses the slash key to activate its (largely useless) quick-search feature. Firefox doesn't know the pseudo-form-element is an input element, so it intercepts the slash.
Page-element focus can be tricky anyway, particularly for users who have configured implicit focus changes (e.g. with the misnamed "X Mouse" setting on Windows, or the implicit focus policy long supported by many X window managers). And having the behavior of the backspace key change based on focus violates the Principle of Least Surprise.
I've occasionally used Backspace for navigation, but precisely because it was element-focus-sensitive I've switched to using Alt + Left-Arrow. While I'm generally not a fan of UI changes, and particularly not changes to accelerators, removing the backspace accelerator probably makes sense.
Much of what makes us Human will never translate to a machine, ever.
On what grounds are you arguing that humans are not machines?
Yes, well, we can't expect people writing for a tech site to familiarize themselves with technology, can we.
The modern DL structure uses components - layered ANNs initially trained as unsupervised RBMs, and then tuned with supervised backpropagation - that were invented at various points from the 60s through the 90s. But that modern structure, particularly the Hinton model and its refinements and the combination of ANNs and HMMs / MEMMs, has only been around for the past ten years. Then there are other newer varieties of deep ANNs like WWNs and CNNs.
Then there are other relatively new approaches that aren't DL-based, such as Bayesian Program Learning (which models the process that creates the object of interest), or new work in Distance Metric Learning approaches, meta-algorithms like AdaBoost... There's a ton of innovative and evolutionary work being done in Machine Learning and AI in general.
Of course any human can distinguish between sarcasm and irony.
It's an apt point. There are any number of studies showing how human judges disagree on tasks that people complain AIs fail at. This is certainly true in Natural Language Processing, where it turns out that you can't get even a panel of expert judges to agree on assigning semantic parses (using e.g. Rhetorical Structure Theory) to complex statements.
Andrew's test is just another example of the whole failed category of Imitation Game tests. The point of the IG, and Turing's entire essay, is not that we should use IG contests or other ill-defined "sure, humans can do X" acid tests to evaluate the state of AIs. It's a philosophical argument, essentially staking out a position congruent with American pragmatism (and thus rejecting metaphysics and the chancy bits of epistemology[1]): intelligence is what intelligence does.
The other problem with Andrew's test is that it makes AI into some singular, monolithic, all-or-nothing quality: either the machine is equivalent to some (again ill-defined) ideal person, or it's nothing at all. While it's useful to point out the many, many ways in which Google's big-data-and-deep-learning hammer fails to hit all the nails, much less deal with the screws, of human language, this business of "it can't do X so it means nothing" is not productive.
And to claim, as Andrew does, that there haven't been "any serious breakthroughs" in AI "in recent years" is just stupid. Maybe not as stupid as "smart" chat clients, but stupid nonetheless.
[1] I.e., all of epistemology.
C-like punctuation-heavy programming-language syntax isn't any more "old fashioned" than VB's word-based syntax is.
From the very beginning, higher-level programming language design involved choosing among these and other alternatives for syntax. And those choices were made by analogy with other programming languages and with other systems of representation.
Some early programming languages, such as FORTRAN and ALGOL, were influenced by mathematical and formal notations. Their designers also had to contend with the storage limitations of the machines they ran on, the line length imposed by input mechanisms such as punched cards, and other constraints. Those constraints favored punctuation-heavy syntax. The C family falls in this category.
Other languages, such as COBOL and (to some extent) LISP, took their cues from other sources and generally preferred words over punctuation. This family came to include BASIC and Pascal and their descendants.
Then there are languages which aren't consistently one way or the other, such as various scripting languages - the Bourne shell and descendants (e.g. "case ... esac" but also "$((...))"), for example, or Perl - a mishmash of every idea Larry Wall could think of at the time.
These days, the languages I use most often are punctuation-heavy C and OO COBOL, which largely avoids the stuff. I don't think either has a clear advantage in syntax under any obvious metric, such as readability or maintainability. (OO COBOL probably does have a small readability advantage if you're reading code in some environment that doesn't do syntax highlighting and brace matching, but most developers seem to use syntax-parsing editors these days.)
Some languages, such as OCaml, arguably use punctuation constructs that are too obscure - they're hard for infrequent users to remember, and completely opaque to those who don't know the language, because they don't have analogues in, say, English or common mathematical notation. That, I think, is a design failure. But it's also relatively rare. C# has some constructs (the "verbatim string", for example) which are probably a mistake, but for the most part it sticks to things that should be recognizable to practitioners.
Tuples are prominent features of a number of programming languages, such as the ML family (including OCaml, Haskell, and F#). For the most part, they don't cause any particular maintenance problems.
Poor programmers (which appear to constitute the majority of programmers) can write unmaintainable code in any language. And will.
No worries, Death will arrive eventually, regardless of brain regeneration. If you live past the next big extinction event, there's always the expansion of the sun to look forward to. And even if you're no longer in this solar system, everything's still moving toward equilibrium.
What's certain is thermodynamics.
Then he remembered doing something stupid which had led to his accidental death.
Not accidental. Corbell had terminal cancer. This comes up a few times in the first few chapters (I don't recall if it's in the short story that Niven adapted into the novel), for example when he's receiving mRNA injections and notes that he's not afraid of needles because they were used to deliver analgesics to his original body.
I don't recall Corbell worrying too much about whether he's still the same person, to be honest. Certainly there's more existential panic in, say, Pohl's Man Plus, and that's just your standard cyborg-a-dude-up-for-Mars story.
Are you happy with the idea of murdering people in order to harvest their organs for transplant?
Hey, if you're going to murder people, you could at least recycle.
Could we have a full list of the domain suffixes owned by these domain registrars?
Why not just make a list of gTLDs that are worth using?
com, net, org
Maybe gov and mil if you're in those domains.
What was intended, sure. But only a naive intentionalist would think that's what the utterance means.
On the other hand, only someone who really doesn't understand how language works would write something like "get this stuff right X per cent of the time".
One of the biggest problems with Google's NLP work is that they labor under a model of language use that's hilariously oversimplified. It excludes the vast majority of actual human language use. Of course that's true of a lot of NLP research, but certainly not of the entire field, at least since computational discourse analysis became an area of study in the '70s.
$X Mc$X$Y predates "Boaty McBoatface" by at least my entire lifetime, so I'm pretty sure it is not going away any time soon.
As does the "-face" suffix for forming nicknames. I'm curious about the etymology of that, but a quick online search didn't turn up anything useful.
(On the other hand, I did run across the etymology of "nickname": corruption of "eke name", with "eke" in its original sense meaning "supplemental". These days "eke" is most often seen in the newer sense of "scraped together", which is a misapprehension of the original usage, viz "he was a butcher but eked out his living murdering strangers".)
And that sort of choice of priorities just makes me wonder if they really know they are the police, rather than, say, a military occupation force.
While the militarization of the police in the US (and elsewhere) is definitely a huge problem, this particular case may not be a "choice of priorities". In many cases the military hardware is being foisted on police departments through "grants" as a way of shuffling unwanted Department of Defense property around and cashing out stuff they don't want. It's a shadow budget for the DoD and a way to prop up the defense industry boondoggles.
So, a few years ago, Congresscritter X calls up the local Chief of Police and says, hey, fill out this form and we'll give you this armored vehicle plus some, er, "training" budget. Hard to pass that up - it's some extra cash for the always-tight budget, you don't want to piss off the 'critter, and you don't want to look "soft on crime", and if in some crazy situation the local populace thinks you could have used an armored vehicle you don't want to be the guy who turned it down.
But that's the offer on the table. It's not "hey, spend these fungible resources on body-cams or ridiculous military hardware". That is, for some departments it may well be, but often that's not the case.
JFTR, police departments in my neck of the woods have been evaluating body-cams from various suppliers, looking to equip all patrol officers with them.