Re: Every day's a school day
One thing I don't get about this: do these people have five thumbs and one opposable finger?
I could go on but then I would barf out my lunch.
I think you just did...
It seems that the best way to know what leg to move forward next uses a simulation of a simple model of the mechanisms involved to help predict what to do next in the current environment.
Having worked a little in robotics, I can tell you that's a really, really bad way to "know what leg to move forward next", and almost certainly not the way you (or any other walking organism) do it. The idea that to interact successfully with the world an organism maintains an explicit "internal model" of its own mechanisms (and of the external environment) is 1980s thinking, which hit a brick wall decades ago - think of those clunky old robots that clomp one foot in front of the other and then fall over, and compare how you actually walk.
In biological systems, interaction of an organism with its environment is far more inscrutable than that (that's why robotics is so hard), involving tightly-coupled feedback loops between highly-integrated neural, sensory and motor systems.
... and since chaotic systems have so far defeated mathematical modelling ...
Errm, no they haven't. Here's one I made earlier:
x → 4x(1–x)
That's the "logistic map". Here's another:
x' = s(y-x)
y' = x(r-z)-y
z' = xy-bz
That's the famous Lorenz system, which has chaotic solutions for some parameter values. Chaotic systems are really easy to model. In fact, for continuous systems, as soon as you have enough variables and some nonlinearity you tend to get chaos.
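To see just how little code it takes to model chaos, here's a minimal Python sketch iterating the logistic map from two starting points a hair apart (the 1e-10 offset and the 60-step horizon are arbitrary illustrative choices):

```python
# Two trajectories of the logistic map x -> 4x(1-x), started 1e-10 apart.
x, y = 0.2, 0.2 + 1e-10
gap = 0.0
for n in range(60):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    gap = max(gap, abs(x - y))  # track the largest divergence seen so far
print(f"largest divergence over 60 steps: {gap:.3f}")
```

The tiny initial difference roughly doubles each step, so within a few dozen iterations the two trajectories are completely decorrelated - sensitive dependence on initial conditions, in three lines of arithmetic.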
Because in any reasonable definition of AI ...
Well what is a reasonable definition of AI? Genuine question: I get the impression that most commentators here equate "real" AI with "human-like intelligence" - under which definition we are, of course, light-years away. But does the "I" in AI have to be human-like? Or, for that matter, dog-like, or octopus-like or starling-like, or rat-like?
Perhaps we need to broaden our conception of what "intelligence" might mean; my suspicion is that "real" AI may emerge as something rather alien - I don't mean that in the sinister sci-fi sense, but just as something distinctly non-human.
Jihads, Crusades, Intifadas - they're all the same.
Not quite: Intifada was, in its original meaning, a political term with connotations of "rebellion against oppression" (the first Intifada was a socialist protest against the monarchy in Iraq). Of course it is now more strongly associated with the Palestinian struggle against Israeli occupation - which may or may not (depending on who you are talking to, and when) have been hijacked by religious extremists.
Agreed on the others, though.
The particular sort of barbarism practised by Daesh doesn't respect even those rules, even if they were often evident more in the breach than the observance.
This is an entirely deliberate strategy. You need to appreciate their motives: they are a doomsday sect. They believe that the global Caliphate will arise only after an apocalyptic showdown between Islam and the non-believers. Their avowed intention is to evoke the highest levels of disgust and abhorrence in order to hasten that showdown.
Why? Because Drs and nurses say, hmm I have seen something like this before, and short circuit the differential diagnosis. Try encoding that in an expert system !
That would be easy to encode in an expert system - if the system designers were able to pin down what "something like this" actually meant. And that's the Achilles' heel of expert systems: identifying (and encoding) the explosion of edge cases and hard-to-articulate intuitions that constitute the deep knowledge of an experienced human expert. This is why knowledge-based systems hit a brick wall. We've known about this for a long time.
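To make the point concrete, here's a toy rule-engine sketch (the rules and findings are entirely made up for illustration). Notice that the engine itself is trivial - the intractable part is authoring rules that capture "I have seen something like this before":

```python
# A toy rule-based "diagnostic" system: each rule maps a set of explicit
# findings to a conclusion. (Rules here are illustrative only, not medicine.)
rules = [
    ({"fever", "stiff_neck", "photophobia"}, "suspect meningitis"),
    ({"fever", "productive_cough"}, "suspect chest infection"),
]

def diagnose(findings):
    """Fire every rule whose conditions are all present in the findings."""
    return [conclusion for conditions, conclusion in rules
            if conditions <= findings]

print(diagnose({"fever", "productive_cough", "fatigue"}))
```

Ten minutes to write the engine; careers have been spent (and lost) trying to write the rules.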
My personal favourite IndEng word is "prepone", meaning "to bring forward in time", by analogy with "postpone".
Mine is "doubt" to express a misunderstanding. As an academic I am sometimes contacted by Indian students/researchers expressing a "doubt" about some aspect of my published work. The first few times this happened I thought they were being a bit cheeky, until I twigged that they were just seeking clarification.
... Thomas Edison is known for having electrocuted elephants ...
How is that not useful?
Babel fish are worth a punt. If everyone understood each other it would improve not only research but a lot of other aspects of commerce.
Actually, in research and academia language is hardly an issue; English is already de facto the lingua franca of science.
Not sure about fruit flies, but I have a colleague who studies genetically-modified zebra fish (same technology - calcium imaging). Young zebra fish are almost completely transparent, so you can image their entire brain/neural system in one shot. It's pretty impressive watching screeds of individual neurons (~ 10,000) flickering away in real time.
Turns out that there are more neurons in the zebra fish visual system than in the rest of the brain/nervous system in its entirety. Seeing well is pretty damn important to those critters - wouldn't surprise me if fruit flies were similar in that respect (although their visual system is very different).
The problem is that the models are continually getting it wrong.
"All models are wrong, but some are useful" - George Box
"The best material model for a cat is another cat, or preferably the same cat" - Arturo Rosenblueth
Non-scientists routinely misunderstand the purpose and utility of models in science. Here's a famous example of an exceptionally useful - but completely "wrong" - model: the Ising model for ferromagnetism.

When a ferromagnet is heated up to a specific temperature (the Curie point), it abruptly de-magnetises. This is a classical phase transition (like the boiling of water, etc.). The Ising model was proposed by Wilhelm Lenz in 1920 and first analysed by his student Ernst Ising in 1925, in an attempt to understand the ferromagnetic phase transition (phase transitions were poorly understood at the time). It is elegant, abstract, and - as a model for ferromagnetism - completely wrong. It's absolute rubbish. It's childishly simplistic. Real ferromagnets are nothing like the Ising model - they're way more complex in structure and (quantum) electrodynamics.

But here's the strange thing: the Ising model completely nails the ferromagnetic phase transition. It describes the behaviour of the relevant physical quantities near the Curie point astoundingly well. The model (finally solved analytically in two dimensions by Lars Onsager in 1944) subsequently became the "fruit fly" of the physics of phase transitions. It's probably not far off the mark to say that almost everything we know about phase transitions (and we now know a lot) is rooted in studying the Ising model. It is one of the most elegant, successful and influential models in the history of science.
It's instructive to consider just why the Ising model is in fact so successful. It turns out that, in general, phase transitions fall into distinct "universality classes": that is, many apparently completely different physical phenomena which demonstrate phase transitions turn out to behave in identical, stereotyped ways near their critical point - they may be described, not just qualitatively but quantitatively, by the same mathematics. (This is a rather deep discovery, which stems from studying - you guessed it - the Ising model.)
So the Ising model didn't have to be "correct", or even "accurate" (it's not). It just had to nail the one phenomenon it was intended to model. It abstracts the problem. That is what useful models do - that's what they're for.
In climate science, as in any other science, that is how we should view models: not as "right" or "wrong" ("another cat"), but as useful in abstracting and pinpointing the crucial aspects of the phenomenon we wish to understand.
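For the curious: the Ising model is simple enough to simulate in a handful of lines. A minimal sketch (single-spin-flip Metropolis updates on a small lattice; the lattice size, temperatures and step counts are arbitrary illustrative choices):

```python
import math
import random

def ising_metropolis(T, L=8, steps=20000, seed=7):
    """Mean |magnetisation| per spin of an L x L Ising lattice at temperature T,
    estimated by single-spin-flip Metropolis updates (J = 1, k_B = 1)."""
    rng = random.Random(seed)
    # Start fully ordered (all spins up) to avoid getting stuck in striped
    # metastable states on such a tiny lattice; burn-in lets it disorder
    # above the critical temperature.
    s = [[1] * L for _ in range(L)]
    total = count = 0
    for step in range(steps):
        i, j = rng.randrange(L), rng.randrange(L)
        # Sum of the four nearest neighbours, with periodic boundaries.
        nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
              + s[i][(j + 1) % L] + s[i][(j - 1) % L])
        dE = 2 * s[i][j] * nb  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            s[i][j] = -s[i][j]
        if step >= steps // 2:  # discard the first half as burn-in
            m = sum(map(sum, s)) / (L * L)
            total += abs(m)
            count += 1
    return total / count

# The 2D Curie point is T_c = 2/ln(1 + sqrt(2)) ~ 2.269:
# ordered (|m| near 1) below it, disordered (|m| near 0) above it.
print("mean |m| at T = 1.5:", round(ising_metropolis(1.5), 2))
print("mean |m| at T = 4.0:", round(ising_metropolis(4.0), 2))
```

"Childishly simplistic" is right there in the code - and yet this is the model that taught us most of what we know about critical phenomena.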
BB, your misunderstanding of science is quite extraordinary.
One thing the article might have mentioned (although I'm really not sure what to make of it), is that Hiroshi Ishiguro, due to the effects of ageing, had cosmetic surgery to make him look more like his android. He apparently claims it was more cost-effective than updating the android.
The joker of course is have you developed a system perfectly adapted for finding only the malware that the attacking ML system produces.
That's an excellent point, and one which you can be sure is not lost on the designers of this system (or of adversarial ML in general). I could imagine ways of getting around this, though. First of all, you would have to ensure that the malware detector does not "forget" earlier attempts at evasion. This could be done, for example, by continually bombarding it with all thus-far generated malware attacks. That's the easy part. Getting the malware generator to diversify wildly is likely to be much harder. It probably needs to be "seeded" with exploits from the real world, not to mention the designer's imagination in full black-hat mode.
If I was writing malware, I'd probably use random salted compressed and encrypted launch/payload sections, including deceptive "buggy" code/data and resource access, to defeat easy binary-pattern and behaviour detection.
So perhaps the malware generator could discover and deploy this strategy (with a bit of nudging, perhaps) - and the malware detector could then attempt to mitigate against it.
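A toy sketch of that arms race, with the "never forget" idea made explicit (the 4-byte "signatures" and one-byte mutations are, of course, cartoonish stand-ins for real detectors and generators):

```python
import random

rng = random.Random(0)

def mutate(sample):
    """Stand-in for the malware generator: flip one random byte."""
    i = rng.randrange(len(sample))
    return sample[:i] + bytes([rng.randrange(256)]) + sample[i + 1:]

# Toy "detector": flags any sample whose 4-byte prefix is in its archive.
archive = {b"\xde\xad\xbe\xef"}

def detected(sample):
    return sample[:4] in archive

current = b"\xde\xad\xbe\xefPAYLOAD"
for round_no in range(1, 6):
    # Generator: mutate until the current detector is evaded.
    while detected(current):
        current = mutate(current)
    # Detector: "retrain" on the WHOLE archive plus the new evasion, so
    # earlier attempts are never forgotten.
    archive.add(current[:4])
    print(f"round {round_no}: {len(archive)} signatures in the archive")
```

Each round the generator finds a gap and the detector closes it without reopening old ones - exactly the replay-the-whole-history discipline described above.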
Sounds like the equivalent of loading a bacterium on a petri dish with increasing doses of antibiotic.
More like loading mutating bacteria on a petri dish with increasing doses of "mutating antibiotics"; you get an arms race - kind of what's happening in the real world with antibiotic-resistant bacteria (cf. the Red Queen effect).
Just don't call it "AI" until it can design a game like Go by itself. From scratch.
You can do that? Hats off, sir/madam.
To be fair, human brains throw massively more hardware than any computer system in existence at doing ... just about anything. Plus they have had the benefit of aeons of evolutionary time to hone their algorithms and heuristics.
Looked at that way, it hardly seems like a fair contest.
However it's not AI
According to ... what/whose definition of AI? (not a rhetorical question).
Can the system play any other game not programmed in?
I play a fair game of chess, but am absolutely rubbish at Go. Never had the time or motivation to program it in.
Did the computer "learn" the previous matches? No, they were loaded into a database.
Correction: it learned from previous matches. Perhaps those matches were "loaded from a database" during the training phase. I used to load matches from databases for training during my chess days - we called them "chess books" back then.
Hint: why not find out how AlphaGo really works.
Of the NHS's yearly budget of more than £140 billion, only about £40 billion is available for things like buying drugs, new hospitals, MRI scanners and desktop refreshes. The rest goes on wages. That's a political failure.
Yeah right, why should we pay people to do this stuff?
It was named after a German botanist, so no, it would be closer to "Fook-sia", or "Fooch-sia" with the ch similar to that in the Scottish "loch".
Sorry, no "spoilsport" icon.
FWIW, Fluxbox (which has been my WM of choice for a decade) is still under development - albeit at a somewhat leisurely pace. It knows what it is, and is comfortable to stay that way - which suits me fine.
What would you consider a language with complicated grammar then?
Basque, Finnish, Navajo, Adyghe, Abkhaz, Korean, Icelandic, Thai, ...
English grammar is in fact pretty simple compared even with its Latinate and Germanic progenitors. I recall, from learning Spanish, that the trickiest things to get to grips with were the imperfect past tense (which doesn't exist in English) and the subjunctive mood (which English has almost lost).
Now Afrikaans - there's a really, really simple grammar.
More specifically, he called them dumb f**ks for trusting him with their data - jokingly, perhaps, but in the context you have to say he had a point.
(Beer icon because it's 5.00 ... somewhere. Here, in fact. Now.)
Now, I only look at the MSM to learn what is the latest lie that they are propagating, or which piece of information they are trying not to pass on to the general public.
Out of interest, having rejected mainstream media as a source of reliable information, what are your alternative sources of information, and how sure can you be that they are any more reliable than the mainstream media?
A group of cells in any brain learns to manage all its systems in the body and learns to balance itself so as not to destroy itself as things change from conception. We can do this too guys!
How? The Nobel committee is waiting to hear from you.
Hasta la Vista
You do realise that that translates roughly as "until we meet again"?
Big: adjective. Verb forms: embiggen, bignify. Adverbial form: bigly. Abstract noun: bignation.
Yes, agreed - and apologies (and an upvote): it wasn't clear to me what you were getting at.
I expect you're also one of those people who refuses to believe that any historical event took place unless you were there to see it with your own eyes.
... while trees fall silently in deserted forests ...
(Couldn't be arsed to go full haiku.)
Are you sure your arse is different from your elbow? Better send a probe up there.
You don't really get science, do you?
LionelB wrote earlier:
That's what general (i.e., non-domain-specific) AI is up against - and yes, it's hard, and we're nowhere near yet.
IOW, I don't entirely* disagree with you. I just thought your analogy was crap.
*OTOH, I don't think "real" AI (whatever that means) is unattainable - always a duff idea to second-guess the future (cf. previous unattainables, like heavier-than-air flight, or putting humans on the moon). Basically, I don't believe that there are magic pixies involved in natural (human) intelligence.
No, you have to go back, create test cases for every imaginable scenario, ...
Sorry, no. You seem to have a total misconception as to how machine-learning in general, and "deep-learning" (a.k.a. multi-layer, usually feed-forward) networks in particular, function. You seem to have latched onto the bogus idea that a machine learning algorithm needs to have "seen" every conceivable input it might encounter in order to be able to classify it.
In reality, the entire point of a machine-learning algorithm is to be able to generalise to inputs it hasn't encountered in training. The art (and it's not quite a science, although some aspects of the process are well understood) of good machine-learning design is to tread the line between poor generalisation (a.k.a. "overfitting" the training data) and poor classification ability (a.k.a. "underfitting" the training data).
It's a hard problem - and while the more (and more varied) the training data and time/computing resources available, the better performance you can expect, I'd be the last person to claim that deep-learning is going to crack general AI. Far from it. But it can be rather good at domain-specific problems, and as such I suspect will become a useful building-block of more sophisticated and multi-faceted systems of the future.
After all, a rather striking (if comparatively minor) and highly domain-specific aspect of human intelligence is our astounding facial recognition abilities. But then we have the benefit of millions of years of evolutionary "algorithm design" behind those abilities.
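The overfitting point above is easy to demonstrate with a toy experiment (synthetic data with a made-up 20% label-noise rate): a 1-nearest-neighbour "memoriser" scores perfectly on its own noisy training set, yet generalises worse than a dumb threshold rule that never saw the noise at all:

```python
import random

rng = random.Random(1)

def noisy_sample(n, flip=0.2):
    """Points on [0,1]; true label is (x > 0.5), flipped with probability `flip`."""
    data = []
    for _ in range(n):
        x = rng.random()
        y = (x > 0.5) != (rng.random() < flip)
        data.append((x, y))
    return data

train = noisy_sample(200)
test = noisy_sample(1000)

# The "overfitter": 1-nearest-neighbour memorises every training point, noise and all.
def nn_predict(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]

# The simple rule: a single threshold, which cannot memorise the label noise.
def threshold_predict(x):
    return x > 0.5

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

print("1-NN on its own training set:", accuracy(nn_predict, train))   # perfect
print("1-NN on fresh test data:     ", accuracy(nn_predict, test))
print("threshold on fresh test data:", accuracy(threshold_predict, test))
```

Zero training error, worse test performance: the memoriser has faithfully learned the noise. Treading that line is the whole game.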
@John Smith 19
Yes, deep-learning networks (usually) are just multi-layer networks - but that doesn't imply that "people could actually work out how they work". It's notoriously hard to figure out the logic (in a form comprehensible to human intuition) of how multi-layer networks arrive at an output. I believe the so-called "deep-dreaming" networks were originally devised as an aid to understanding how multi-layer convolutional networks classify images, roughly by "running them in reverse" (yes, I know it's not quite as straightforward as that).
So your reply to "homoeopathy is not medicine" is "write a new treatise on it and make it better!" Yup, got it.
Sorry, but that's a fantastically lame "analogy".
If you have to insert an explicit rule, it's not AI. It's a human-written heuristic.
You might well argue, though, that natural (human) intelligence is a massive mish-mash of heuristics, learning algorithms and expedient hacks assembled and honed over evolutionary time scales.
That's what general (i.e., non-domain-specific) AI is up against - and yes, it's hard, and we're nowhere near yet. And of course it's hyped - what isn't? Get over it, and maybe even appreciate the incremental advances. Or better still, get involved and make it better. Sneering never got anything done.
If it was a tiny bug it could be fixed.
What makes you so sure it can't be fixed? FWIW, I suspect it is probably not a "tiny" bug, but may not actually be that hard to fix (off the top of my head I can imagine, for example, a training regime which omits random frames, or perhaps a sub-system which recognises highly uncharacteristic frames, which might mitigate the problem).
This research may well turn out to be rather useful to Google (although I'd also be slightly surprised if they weren't aware of something similar already).
It's just taking a statistic generated from the data and finding the nearest data point to that statistic in its database and then returning it.
No, it's not doing anything like that. Please find out how deep-learning systems actually work before posting fatuous comments.
At best, it's an "algorithm".
Yes, of course it is. Computers run algorithms - that's what they do. Whether you call it an "AI algorithm" (I wouldn't) or a "pattern recognition algorithm" (I might) is a matter of what you think those terms ought to mean.
There is no "general" AI ...
True, and likely to remain so for the foreseeable.
... and successful specific AI is large data sets with human curated rules
Not so true: successful domain-specific AI these days tends to be human-designed learning algorithms trained on large data sets - really not the same thing. With human-curated rule-based systems you can generally trace precisely how the system arrives at an output. With modern machine learning systems (particularly neural network-based ones) you generally cannot - the internal logic of the trained system (as distinct from the learning algorithm underlying training) is inscrutable. Call it pattern-recognition, if you like, but these are not hard-coded rule-based systems. They tend to be better at generalisation (within their domain).
Human-curated rule-based systems are still around to some extent (very 1980s), but machine learning (I won't call it AI, if it pleases) has moved on.
I kind of like "Kevin". No reason.
Vegemite > Marmite > Bovril
where ">" = "better than".
Note: I am of South African origin, and therefore strictly neutral on this issue. I am less neutral (but no less correct) on:
biltong >>>>>>> beef jerky
OTOH, those mutations may be beneficial, in which case inbreeding can be good.
Yes... but it may be hard to avoid (or even detect) other potentially harmful mutations hitching a ride.
In the long run, repeated inbreeding has the effect of reducing the set of available alleles in the inbred population for any particular gene, which is generally not a good thing. Hence, e.g., the robustness of mongrels in comparison with "pure-bred" dogs.
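Allele loss under drift is easy to demonstrate with a toy Wright-Fisher simulation (the population sizes and generation counts here are arbitrary): the smaller the breeding population, the faster genetic variation disappears.

```python
import random

rng = random.Random(3)

def surviving_alleles(pop_size, generations):
    """Wright-Fisher drift: each generation, every individual's allele is drawn
    uniformly (with replacement) from the previous generation."""
    pop = list(range(pop_size))  # everyone starts with a distinct allele
    for _ in range(generations):
        pop = [rng.choice(pop) for _ in range(pop_size)]
    return len(set(pop))

small = surviving_alleles(10, 200)   # tiny population, many generations
large = surviving_alleles(500, 50)   # larger population, fewer generations
print(f"alleles left, population 10 after 200 generations: {small}")
print(f"alleles left, population 500 after 50 generations: {large}")
```

In the small population a single allele typically goes to fixation - drift alone, no selection needed - which is the mongrel's advantage in a nutshell.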
the organisation is using this discovery to try to obtain more funding?
Well, duh, of course they are. You need funding to do science.
(And they are almost certainly right - not many unique objects in the universe, because the universe is rather big and full of stuff.)
Well, the ether and phlogiston were, in their day, about the most plausible theories going for explaining the physical evidence as it stood. Sure, they turned out to be wrong, but establishing how a theory is wrong can be an excellent way of homing in on a better theory. To take an example, the Michelson-Morley experiment - which finally did for the ether - forced physicists (like Lorentz and Einstein) to develop new theories to account for the perplexing new evidence.
Getting stuff wrong in science is both commonplace and highly underrated. Better bad theory than no theory at all.
Your conception about how systems like AlphaGo learn is way, way off the mark.
Famously, Google's DeepMind learned how to play a range of Atari games from an input of raw pixels; in other words, it figured out how to play games just by "watching" them. Link to original paper here.