One thing the article might have mentioned (although I'm really not sure what to make of it), is that Hiroshi Ishiguro, due to the effects of ageing, had cosmetic surgery to make him look more like his android. He apparently claims it was more cost-effective than updating the android.
213 posts • joined 9 Jul 2009
Re: So it's Core War played with "real" virtual processors between machines
The joker of course is: have you developed a system perfectly adapted for finding only the malware that the attacking ML system produces?
That's an excellent point, and one which you can be sure is not lost on the designers of this system (or of adversarial ML in general). I could imagine ways of getting around it, though. First of all, you would have to ensure that the malware detector does not "forget" earlier attempts at evasion. This could be done, for example, by continually bombarding it with all thus-far generated malware attacks. That's the easy part. Getting the malware generator to diversify wildly is likely to be much harder. It probably needs to be "seeded" with exploits from the real world, not to mention the designer's imagination in full black-hat mode.
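For the "don't forget" part, something like a replay archive would do the job. Here's a toy Python sketch (class name and numbers entirely made up for illustration) of mixing previously generated malware samples back into every fresh training batch:

```python
import random

class ReplayArchive:
    # Toy sketch: keep every adversarial sample ever generated, and mix
    # a slice of the archive into each fresh training batch so that the
    # detector never "forgets" earlier evasion attempts.
    def __init__(self, replay_fraction=0.5, seed=0):
        self.archive = []
        self.replay_fraction = replay_fraction
        self.rng = random.Random(seed)

    def add(self, samples):
        self.archive.extend(samples)

    def make_batch(self, fresh_samples):
        # Pad the fresh samples with random draws from the archive.
        n_replay = min(int(len(fresh_samples) * self.replay_fraction),
                       len(self.archive))
        return list(fresh_samples) + self.rng.sample(self.archive, n_replay)

# One training round: today's evasions plus a sample of yesterday's.
buf = ReplayArchive()
buf.add(["evasion_v1", "evasion_v2"])
batch = buf.make_batch(["evasion_v3", "evasion_v4"])
```

The reinforcement-learning crowd call this sort of thing experience replay; here it just stops the detector's training set from drifting away from old evasion tricks.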
Re: Pattern matching is dumb, thus anomaly detection, with history and rollback.
If I was writing malware, I'd probably use random salted compressed and encrypted launch/payload sections, including deceptive "buggy" code/data and resource access, to defeat easy binary-pattern and behaviour detection.
So perhaps the malware generator could discover and deploy this strategy (with a bit of nudging, perhaps) - and the malware detector could then attempt to mitigate against it.
Sounds like the equivalent of loading a bacterium on a petri dish with increasing doses of antibiotic.
More like loading mutating bacteria on a petri dish with increasing doses of "mutating antibiotics"; you get an arms race - kind of what's happening in the real world with antibiotic-resistant bacteria (cf. the Red Queen effect).
Just don't call it "AI" until it can design a game like Go by itself. From scratch.
You can do that? Hats off, sir/madam.
Re: And how much hardware it takes....
To be fair, human brains throw massively more hardware than any computer system in existence at doing ... just about anything. Plus they have had the benefit of aeons of evolutionary time to hone their algorithms and heuristics.
Looked at that way, it hardly seems like a fair contest.
Re: Cracked and good PR
However it's not AI
According to ... what/whose definition of AI? (not a rhetorical question).
Can the system play any other game not programmed in?
I play a fair game of chess, but am absolutely rubbish at Go. Never had the time or motivation to program it in.
Did the computer "learn" the previous matches? No, they were loaded into a database.
Correction: it learned from previous matches. Perhaps those matches were "loaded from a database" during the training phase. I used to load matches from databases for training during my chess days - we called them "chess books" back then.
Hint: why not find out how AlphaGo really works.
74 countries hit by NSA-powered WannaCrypt ransomware backdoor: Emergency fixes emitted by Microsoft for WinXP+
Re: Risk Management
Of the >140,000 million NHS yearly budget, only about 40,000 million is available for things like buying drugs, new hospitals, MRI scanners and desktop refreshes. The rest goes on wages. That's a political failure.
Yeah right, why should we pay people to do this stuff?
Re: Old joke
It was named after a German botanist, so no, it would be closer to "Fook-sia", or "Fooch-sia" with the ch similar to that in the Scottish "loch".
Sorry, no "spoilsport" icon.
Re: windows manager choice
FWIW, Fluxbox (which has been my WM of choice for a decade) is still under development - albeit at a somewhat leisurely pace. It knows what it is, and is comfortable to stay that way - which suits me fine.
Re: "the grammar is relatively simple"
What would you consider a language with complicated grammar then?
Basque, Finnish, Navajo, Adyghe, Abkhaz, Korean, Icelandic, Thai, ...
English grammar is in fact pretty simple compared even with its Latinate and Germanic progenitors. I recall, when learning Spanish, that the trickiest things to get to grips with were the imperfect past tense (which doesn't exist in English) and the subjunctive mood (which English has almost lost).
Now Afrikaans - there's a really, really simple grammar.
Re: "I would trust Mark on this," de Alfaro said in an email to The Register.
More specifically, he called them dumb f**ks for trusting him with their data - jokingly, perhaps, but in the context you have to say he had a point.
(Beer icon because it's 5.00 ... somewhere. Here, in fact. Now.)
Re: FB wants to go the same way as the MSM
Now, I only look at the MSM to learn what the latest lie is that they are propagating, or which piece of information they are trying not to pass on to the general public.
Out of interest, having rejected mainstream media as a source of reliable information, what are your alternative sources of information, and how sure can you be that they are any more reliable than the mainstream media?
Re: WTF is this crap
A group of cells in any brain learns to manage all its systems in the body and learns to balance itself so as not to destroy itself as things change from conception. We can do this too guys!
How? The Nobel committee is waiting to hear from you.
Re: Oh alright...
Hasta la Vista
You do realise that that translates roughly as "until we meet again"?
Big: adjective. Verb forms: embiggen, bignify. Adverbial form: bigly. Abstract noun: bignation.
Re: @LionelB Vista Capable
Yes, agreed - and apologies (and an upvote): it wasn't clear to me what you were getting at.
Re: They solved nothing. Just fairy tales.
I expect you're also one of those people who refuses to believe that any historical event took place unless you were there to see it with your own eyes.
... while trees fall silently in deserted forests ...
(Couldn't be arsed to go full haiku.)
Re: They solved nothing. Just fairy tales.
Are you sure your arse is different from your elbow? Better send a probe up there.
You don't really get science, do you?
Re: Is it a bug?
LionelB wrote earlier:
That's what general (i.e., non-domain-specific) AI is up against - and yes, it's hard, and we're nowhere near yet.
IOW, I don't entirely* disagree with you. I just thought your analogy was crap.
*OTOH, I don't think "real" AI (whatever that means) is unattainable - always a duff idea to second-guess the future (cf. previous unattainables, like heavier-than-air flight, or putting humans on the moon). Basically, I don't believe that there are magic pixies involved in natural (human) intelligence.
Re: Is it a bug?
No, you have to go back, create test cases for every imaginable scenario, ...
Sorry, no. You seem to have a total misconception as to how machine learning in general, and "deep-learning" (a.k.a. multi-layer, usually feed-forward) networks in particular, function. You appear to have latched onto the bogus idea that a machine-learning algorithm needs to have "seen" every conceivable input it might encounter in order to be able to classify it.
In reality, the entire point of a machine-learning algorithm is to be able to generalise to inputs it hasn't encountered in training. The art (and it's not quite a science, although some aspects of the process are well understood) of good machine-learning design is to tread the line between poor generalisation (a.k.a. "overfitting" the training data) and poor classification ability (a.k.a. "underfitting" the training data).
It's a hard problem - and while the more (and more varied) the training data and time/computing resources available, the better performance you can expect, I'd be the last person to claim that deep-learning is going to crack general AI. Far from it. But it can be rather good at domain-specific problems, and as such I suspect will become a useful building-block of more sophisticated and multi-faceted systems of the future.
After all, a rather striking (if comparatively minor) and highly domain-specific aspect of human intelligence is our astounding facial recognition abilities. But then we have the benefit of millions of years of evolutionary "algorithm design" behind those abilities.
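The overfitting/underfitting trade-off is easy to see in a toy curve-fitting experiment. A minimal sketch (all numbers arbitrary): fit polynomials of increasing degree to noisy samples of a sine wave.

```python
import numpy as np

rng = np.random.default_rng(42)

# Noisy samples of a smooth "ground truth" function.
x_train = np.sort(rng.uniform(0, 1, 20))
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 20)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

def fit_error(degree):
    # Least-squares polynomial fit; returns (training MSE, test MSE).
    p = np.polynomial.Polynomial.fit(x_train, y_train, degree)
    train_err = float(np.mean((p(x_train) - y_train) ** 2))
    test_err = float(np.mean((p(x_test) - y_test) ** 2))
    return train_err, test_err

# Degree 1 underfits (poor everywhere); degree 15 overfits (tiny
# training error, much larger test error); degree 3 treads the line.
for degree in (1, 3, 15):
    print(degree, fit_error(degree))
```

Training error always falls as you add capacity; it's the gap between training and test error that tells you the high-degree fit has merely memorised the noise.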
Re: "why the algorithm..such a heavy weighting on..only 2% of the footage."
@John Smith 19
Yes, deep-learning networks (usually) are just multi-layer networks - but that doesn't imply that "people could actually work out how they work". It's notoriously hard to figure out the logic (in a form comprehensible to human intuition) of how multi-layer networks arrive at an output. I believe the so-called "deep-dreaming" networks were originally devised as an aid to understanding how multi-layer convolutional networks classify images, roughly by "running them in reverse" (yes, I know it's not quite as straightforward as that).
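For the curious, the "running in reverse" trick can be caricatured in a few lines: freeze the weights and do gradient ascent on the *input* instead. This sketch uses a made-up two-layer toy net and numerical gradients; real deep-dream code backpropagates through a trained conv net and regularises the image, but the principle is similar.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed random "network" (entirely invented for illustration):
# hidden = relu(W1 @ x + 1), output unit = w2 . hidden
W1 = rng.normal(size=(8, 4))
w2 = rng.normal(size=8)

def activation(x):
    return float(w2 @ np.maximum(W1 @ x + 1.0, 0.0))

def num_grad(x, eps=1e-5):
    # Numerical gradient of the output unit w.r.t. the input.
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (activation(x + e) - activation(x - e)) / (2 * eps)
    return g

# Gradient ascent on the input, weights frozen: find an input that
# strongly excites the chosen unit - the crude essence of
# activation maximisation / "deep dreaming".
x = np.zeros(4)
before = activation(x)
for _ in range(50):
    x = x + 0.05 * num_grad(x)
after = activation(x)
```

Inputs found this way give a (rough, often surreal) picture of what a unit has learned to respond to - hence its use as an interpretability aid.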
Re: Is it a bug?
So your reply to "homoeopathy is not medicine" is "write a new treatise on it and make it better!" Yup, got it.
Sorry, but that's a fantastically lame "analogy".
Re: Is it a bug?
If you have to insert an explicit rule, it's not AI. It's a human-written heuristic.
You might well argue, though, that natural (human) intelligence is a massive mish-mash of heuristics, learning algorithms and expedient hacks assembled and honed over evolutionary time scales.
That's what general (i.e., non-domain-specific) AI is up against - and yes, it's hard, and we're nowhere near yet. And of course it's hyped - what isn't? Get over it, and maybe even appreciate the incremental advances. Or better still, get involved and make it better. Sneering never got anything done.
Re: Is it a bug?
If it was a tiny bug it could be fixed.
What makes you so sure it can't be fixed? FWIW, I suspect it is probably not a "tiny" bug, but may not actually be that hard to fix (off the top of my head I can imagine, for example, a training regime which omits random frames, or perhaps a sub-system which recognises highly uncharacteristic frames, which might mitigate the problem).
This research may well turn out to be rather useful to Google (although I'd also be slightly surprised if they weren't aware of something similar already).
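For what it's worth, the frame-omission idea is just standard data augmentation. A toy sketch (function name and numbers invented for illustration - emphatically not Google's actual pipeline):

```python
import random

def drop_random_frames(frames, drop_prob=0.1, rng=None):
    # Toy augmentation: randomly omit frames from a clip so a classifier
    # trained on such clips can't hang all its weight on any one frame.
    rng = rng or random.Random()
    kept = [f for f in frames if rng.random() >= drop_prob]
    return kept if kept else frames[:1]  # never return an empty clip

clip = [f"frame_{i:03d}" for i in range(100)]
augmented = drop_random_frames(clip, drop_prob=0.2, rng=random.Random(1))
```

Train on many such corrupted copies and the network is pushed towards using evidence spread across the whole clip rather than a single adversarial frame.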
It's just taking a statistic generated from the data and finding the nearest data point to that statistic in its database and then returning it.
No, it's not doing anything like that. Please find out how deep-learning systems actually work before posting fatuous comments.
At best, it's an "algorithm".
Yes, of course it is. Computers run algorithms - that's what they do. Whether you call it an "AI algorithm" (I wouldn't) or a "pattern recognition algorithm" (I might) is a matter of what you think those terms ought to mean.
Re: why the search isn't using artificial intelligence.
There is no "general" AI ...
True, and likely to remain so for the foreseeable.
... and successful specific AI is large data sets with human curated rules
Not so true: successful domain-specific AI these days tends to be human-designed learning algorithms trained on large data sets - really not the same thing. With human-curated rule-based systems you can generally trace precisely how the system arrives at an output. With modern machine learning systems (particularly neural network-based ones) you generally cannot - the internal logic of the trained system (as distinct from the learning algorithm underlying training) is inscrutable. Call it pattern-recognition, if you like, but these are not hard-coded rule-based systems. They tend to be better at generalisation (within their domain).
Human-curated rule-based systems are still around to some extent (very 1980s), but machine learning (I won't call it AI, if it pleases) has moved on.
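The traceability difference is easy to demonstrate on a toy problem (everything below invented for illustration). A hand-curated rule and a trained perceptron solve the same classification task; the rule's logic is legible by construction, while the perceptron's "logic" is whatever weight vector training happened to land on.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy task: classify points by whether x + y > 1.
X = rng.uniform(0, 1, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 1).astype(float)

# Hand-curated rule: you can read off exactly why any point
# is classified the way it is.
def rule_based(p):
    return 1.0 if p[0] + p[1] > 1 else 0.0

# Learned alternative: a perceptron trained on examples.
w = np.zeros(3)  # [bias, w_x, w_y]
for _ in range(50):
    for xi, yi in zip(X, y):
        pred = 1.0 if w[0] + w[1] * xi[0] + w[2] * xi[1] > 0 else 0.0
        w += 0.1 * (yi - pred) * np.array([1.0, xi[0], xi[1]])

learned_acc = float(np.mean([
    (1.0 if w[0] + w[1] * a + w[2] * b > 0 else 0.0) == t
    for (a, b), t in zip(X, y)
]))
```

On a problem this simple the learned weights happen to be interpretable; scale up to millions of weights across many layers and that interpretability evaporates, which is the point made above.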
Re: "...you'll have to find someone else to name it after"
I kind of like "Kevin". No reason.
Re: Vegemite - FTW!
Vegemite > Marmite > Bovril
where ">" = "better than".
Note: I am of South African origin, and therefore strictly neutral on this issue. I am less neutral (but no less correct) on:
biltong >>>>>>> beef jerky
Re: “Inbreeding leads to harmful mutations” is a lay understanding of genetics.
OTOH, those mutations may be beneficial, in which case inbreeding can be good.
Yes... but it may be hard to avoid (or even detect) other potentially harmful mutations hitching a ride.
In the long run, repeated inbreeding has the effect of reducing the set of available alleles in the inbred population for any particular gene, which is generally not a good thing. Hence, e.g., the robustness of mongrels in comparison with "pure-bred" dogs.
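Allele loss under inbreeding is basically genetic drift in a small closed population, and a toy Wright-Fisher-style simulation shows it directly (population size and allele count plucked out of the air):

```python
import random

def alleles_after(generations, pop_size=20, n_alleles=10, seed=3):
    # Toy Wright-Fisher drift: a small closed population loses allele
    # diversity over generations as it breeds only within itself.
    rng = random.Random(seed)
    # Start with maximal diversity: alleles 0..9 spread over 2N gene copies.
    pool = [i % n_alleles for i in range(2 * pop_size)]
    for _ in range(generations):
        # Each gene copy in the next generation is a random draw
        # from the current pool (random mating, no immigration).
        pool = [rng.choice(pool) for _ in range(2 * pop_size)]
    return len(set(pool))

start = alleles_after(0)
later = alleles_after(100)
```

No selection needed: pure sampling noise is enough to whittle the allele set down, which is why an outbreeding population (or a mongrel) retains diversity that a closed pedigree loses.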
Re: Am I a Bad Person?
the organisation is using this discovery to try to obtain more funding?
Well, duh, of course they are. You need funding to do science.
(And they are almost certainly right - not many unique objects in the universe, because the universe is rather big and full of stuff.)
Re: It's quite a small object
Well, the ether and phlogiston were, in their day, about the most plausible theories going for explaining the physical evidence as it stood. Sure, they turned out to be wrong, but establishing how a theory is wrong can be an excellent way of homing in on a better theory. To take an example, the Michelson-Morley experiment - which finally did for the ether - forced physicists (like Lorentz and Einstein) to develop new theories to account for the perplexing new evidence.
Getting stuff wrong in science is both commonplace and highly underrated. Better bad theory than no theory at all.
This is no mere listicle: it weighs in at 435 pages
Re: Today's "AI" is brute force stupidity
Your conception about how systems like AlphaGo learn is way, way off the mark.
Famously, Google's DeepMind learned how to play a range of Atari games from an input of raw pixels; in other words, it figured out how to play games just by "watching" them. Link to original paper here.
Re: Hiding in Plain Sight and Sharing NEUKlearer Plans is an Advanced IntelAIgent Movement?
In other news: prolific Reg commentard invents new language to natter away behind humans' backs.
Re: I hope "the answer" isn't EVEN MORE gummint...
As for the IoT debacle: I won't have any in my home. Even my smart TV is never connected to my network.
But but but I like my smart telly, and my mobile, they improve the quality of my life. And I've installed seatbelts on my sofa.
So, how does this play with "non-algorithmic" stuff like neural networks?
Damn good question. Strictly, neural network-type systems are not "non-algorithmic" - the network is certainly running an algorithm - but an inscrutable one. The design of the network learning system will of course be a known algorithm, and the data used to train the network will be known, but the actual decision-making details may well not be. Put simply, it may well be impossible to analyse how/why a network has reached a decision. I find that pretty scary:
Recruiter: I'm afraid your application was rejected.
Recruitee: Oh. Why is that?
Recruiter: Our sophisticated interview application analysis software has rejected you as a suitable candidate.
Recruitee: Oh... I see. On what grounds?
Recruiter: We can't say for sure. It's very sophisticated though, based on cutting-edge Deep-Learning techniques.
Re: The most obvious one for me...
Trying to predict all those volatile markets based on news reports ...
That's actually been around for quite a while in financial (algorithmic) trading circles, under the name "sentiment analysis". Whether it actually works is another issue. In the early 2000s I worked for a hedge fund on design of algorithmic trading systems. We investigated sentiment analysis (as it was at the time) and quickly rejected it. It didn't work. Then again, that was pre-Twitter, etc., so maybe there's better mileage in it now.
To expand slightly on "didn't work" - the basic conundrum with financial (or in fact any) prediction is this: you might think that adding more streams of information ("variables") as input to a predictive model will inevitably increase predictive power. In fact the opposite is more often than not the case. More variables means more model parameters to fit, usually resulting in poorer overall model fit and poorer prediction. To be useful in a predictive sense, an information stream has to overcome this effect; if it doesn't, you are basically just throwing noise into your model. But how to tell whether an information stream will turn out to be useful or just noise - especially when the "rules of the game" are continuously changing, so that you may only base prediction on limited historical data? The answer to this is... voodoo... or "suck it and see" (which can be costly).
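The "more variables can mean worse prediction" point is simple to demonstrate: fit ordinary least squares with one genuinely predictive variable plus a pile of pure-noise ones, and watch out-of-sample error climb. A toy sketch (all numbers arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)

def oos_mse(n_noise_vars, n_train=40, n_test=2000):
    # One genuinely predictive regressor plus n_noise_vars pure-noise
    # ones; the target only ever depends on the first column.
    def make(n):
        X = rng.normal(size=(n, 1 + n_noise_vars))
        y = X[:, 0] + rng.normal(0.0, 1.0, n)
        return X, y
    X_train, y_train = make(n_train)
    X_test, y_test = make(n_test)
    beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    return float(np.mean((X_test @ beta - y_test) ** 2))

lean = oos_mse(1)      # one junk regressor: harmless
bloated = oos_mse(35)  # thirty-five junk regressors: error balloons
```

With 35 junk regressors and only 40 training points, the model happily fits noise, and out-of-sample error balloons - exactly the effect described above.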
... and quarterly data and the way other brokers are buying and selling
That's standard algorithmic trading and is ubiquitous, because it does work (or at least it's easier to make it work).
Re: I wonder why the Christian Right don't want to hunt for life eleswhere
You 'aving a turkish, my son?
Much of the data I've read about takes that big yellow ball we orbit out of the maths.
Absolute nonsense. I don't know what you've been reading, but you may want to broaden your sources considerably.
Just because we're skeptical of the cause doesn't mean we don't believe there is a change.
I should hope not. Nobody (sane) these days - not even most climate change "deniers" - seriously disputes that climate change is happening. The debate centres entirely on whether human intervention is a driving factor. You're flogging a dead horse.
Re: NASA back to space
Right, because other planets are so much more interesting and crucial to humanity than the one we live on. (Hint: it is actually quite useful and informative to study the earth from space.)
Re: Occam's Razor with fractally serrated edges..
The Niels Bohr model of the atom worked up to a point, then it was totally replaced by something else that explained all the previous experiments... You couldn't say it was incorrect until Quantum Mechanics showed up and proved it wrong.
Pedantic correction: You couldn't say it was incorrect until experimental evidence showed up and proved it wrong; then quantum mechanics provided an explanation for that experimental evidence (although that may not have been the exact historical sequence of events).
Apart from that, agreed: scientific theories are continually revised/refined/replaced to explain new "edge-case" evidence, frequently subsuming older theories in the process.
@Destroy All Monsters
Fascinating article, but I came away with the impression that MOND theories are at least as speculative as (if not more so than) Dark Matter/Energy theories. Nor, judging by that article, does MOND come across as more data-driven than DM/DE. There is, of course, bound to be resistance to MOND on the grounds that it breaks General Relativity (but then again perhaps General Relativity needs to be broken).
Guess we'll have to wait and see if either approach has legs.
Also, is there really that much grant-related mileage in multiverse theories? They seem to represent, perhaps, more a philosophical than a scientific standpoint. From the point of view of science they may well be, as you say, "sterile crap", in the sense that they are not "useful" - they don't seem to make verifiable predictions beyond standard quantum theory. From a philosophical viewpoint, though, they do seem to furnish infuriatingly consistent interpretations for the crazee world of quantum phenomena - where even more traditional interpretations stretch intuition beyond breaking-point.
Re: So,Tesla was right after all?
Today's scientists have substituted mathematics for experiments, and they wander off through equation after equation, and eventually build a structure which has no relation to reality.
Which is pretty much what Albert Einstein (a contemporary of Tesla) did in his development of general relativity. Except that his "structure" turned out to have a pretty damn strong relation to reality.
You may well turn out to be correct, but the fact that neither dark matter nor dark energy can be (currently) observed does not necessarily point to that conclusion. Many, many phenomena in physics were predicted long before they could be detected (gravitational waves, as predicted by general relativity, being a recent example).
Scientists get their maths wrong and so, rather than admit it, they make up dark matter.
Meanwhile back in the real world, scientists get their models wrong (because they're mortals working with limited information), admit it, and amend their models.
Relax - you don't have to do that.
Re: Anon for reasons - Basically to avoid the SJW'ers
If you put a little effort into searching you will have no trouble finding something a bit more acceptable.
No doubt; but I don't see my 15-year-old son doing that (should I urge him only to access tasteful, non-exploitative porn?). Anyway, my point is that an overwhelming majority of porn is degrading and humiliating towards women. If that's demand-driven - and I imagine it is - I guess that says something depressing about the male of the species.
The approach I take with my son is to stress that sex on the internet is (in general) nothing like sex in the real world - or certainly shouldn't be - insofar as it totally lacks a few ... um ... crucial aspects like love, joy, passion, fun and mutual respect. I might even suggest that he may want to get out there and talk to some real-life girls.