Lipreading
As a deaf friend of mine noticed: 'I can't understand what hipsters are saying.' 'Ah,' I pipe up, 'the facial hair upsets your lipreading!' 'No, they just seem to talk bollocks.'
Next year will mark the 60th anniversary of the Dartmouth Artificial Intelligence (AI) Conference. That conference, which marked the birth of AI research, explored whether machines could simulate any aspect of human intelligence. Since then, Google has developed a self-driving car, computers can type what you speak, and phones …
This post has been deleted by its author
"If you think about it, this sort of thing is just perfect for replacing humans at the C level (CEO, CFO, etc) of corporations, hence its doom."
It was Harold Wilson (that Harold Wilson, yes) who suggested replacing senior management with a set of traffic lights connected to a random number generator, because making quick decisions that can easily be corrected is better than taking ages to make the wrong decision and then defending it for reasons of personal prestige.
Interesting article.
The (very good) BBC "Horizon - Now the Chips are Down" from 1978* has a segment looking at a doctor who is programming a computer to diagnose patients. I did a course in AI in my CS degree 25 years ago, so I'm slightly cynical every time I hear of AI. But maybe we really are on the final upward curve this time?
* Also predicts widespread unemployment due to a lack of secretarial, factory and filing-clerk type jobs, offset by "as many as 60,000 IT jobs"...
"They are just specialist databases with a specialist interface."
I thought that the strength of "Expert Systems" in various diagnostic processes was that they could untangle things like two overlapping conditions. They also did not suffer from the confirmation bias that leads people to go for the "obvious" answer.
My African-American friend went through the cognitive doctor when he had a cold.
He walked in the door.
Cognitive computer loaded 'Gorilla Anatomy'.
The holy grail of AI has been promised for decades and always fails to deliver. Call it what you want, but AI is created by humans, who are all fundamentally idiots.
Well, aside from the potential bad diagnosis, what should worry us all is that some of those highly educated boffins intimated that Google put out an immature algorithm.
Geez, I am going back to my shell and hope I will never be a grown-up chick… it has been eons since boffins talked about AI-completeness… Here is a Wikipedia entry for the time being: https://en.wikipedia.org/wiki/AI-complete
So, What's Up Doc?
The Chinese room is a really, REALLY shit argument. It generalizes to "nothing physical can know anything", which either makes him a full-blown dualist - something he's denied in the past - or alternatively, as I suspect, he's well aware of the problem but has managed to build a whole career off the back of this one rather weak argument, so has always deliberately avoided addressing it.
"Bishop believes there are three things humans do which are simply incomputable. The first is true creativity.
The second is understanding. Computers will never truly understand concepts, he argues, basing this on John Searle’s Chinese Room Argument suggesting instead that they display a kind of computational quasi-understanding.
Finally, he doesn’t believe that computers will never be truly conscious."
I just think he's biased. Taking a human point of view -- but humans do tend to be full of themselves until they screw up.
"Bishop believes there are three things humans do which are simply incomputable. The first is true creativity."
If that's so, why are there computerized musical and graphical artists that can produce works that attract critical attention? This almost seems like passing an artistic Turing Test (as in, can they tell the human artist from the computer one?).
"The second is understanding. Computers will never truly understand concepts, he argues, basing this on John Searle’s Chinese Room Argument suggesting instead that they display a kind of computational quasi-understanding."
I've read the argument, and I see it less as a claim of impossibility than as a characterisation of the specific capability of the computer. The Chinese Room can be seen as a rather simple ("weak") form of AI, as has been acknowledged over the years, but it doesn't seem to rule out the possibility of something that acts like the Chinese Room yet works in a different way, allowing a higher level of processing than just rote translation (a "strong" AI).
"If that's so, why are there computerized musical and graphical artists that can produce works that attract critical attention?"
Modern Art attracts attention as well. It's still crap.
Can these AIs portray imaginary scenes meaningfully? Can they interpret someone's description and paint it? Can they compose music to fit specific requests that actually sounds good and doesn't follow any particular set of rules to a T? No. They're just throwing paint and notes and patterns around.
Anon because modern artists in the family :)
"Can these AIs portray imaginary scenes meaningfully?"
That can be aided through procedural generation, which is already used to build random 3D worlds. Criteria could be placed and the scene rendered in particular ways to create an "imaginary" landscape. I see this as quite possible, just not in focus.
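A classic procedural-generation trick for "imaginary" landscapes is midpoint displacement: start with two endpoints and recursively perturb midpoints with shrinking random offsets. A minimal 1D sketch (parameters chosen arbitrarily for illustration):

```python
import random

def midpoint_displace(left, right, depth, roughness=0.5, rng=None):
    """Build a 1D terrain profile by recursive midpoint displacement."""
    rng = rng or random.Random(42)
    heights = [left, right]
    for d in range(depth):
        scale = roughness ** d  # offsets shrink each pass, keeping detail fine
        new = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + rng.uniform(-scale, scale)
            new.extend([a, mid])
        new.append(heights[-1])
        heights = new
    return heights

profile = midpoint_displace(0.0, 0.0, depth=6)  # 2**6 + 1 = 65 samples
```

The 2D version of the same idea (the diamond-square algorithm) is what many random-world generators build on.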
"Can they interpret someone's description and paint it?"
Natural language processing is improving which would be of help on this, so I see this as "not now" but increasing in possibility as time passes.
"Can they compose music to fit specific requests that actually sounds good and doesn't follow any particular set of rules to a T?"
That depends on what you mean by a request, and as for following the rules, that's just a little bit of either random or procedural drift. Incidentally, you should really look up a little cluster in Spain called Iamus. This thing actually generates spontaneous classically styled music (to the point that its style is easily identifiable). They call the technique it uses "evolutionary music".
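For the curious, the "evolutionary music" idea is roughly a genetic algorithm over note sequences. Here's a toy sketch (nothing to do with Iamus's actual system; the fitness function, which simply rewards stepwise melodic motion, is entirely made up for illustration):

```python
import random

NOTES = list(range(60, 73))  # MIDI pitches C4..C5

def fitness(melody):
    # Reward stepwise motion: penalise the total size of melodic leaps.
    return -sum(abs(a - b) for a, b in zip(melody, melody[1:]))

def evolve(length=16, pop_size=30, generations=200, rng=None):
    rng = rng or random.Random(0)
    # Start with a population of random melodies.
    pop = [[rng.choice(NOTES) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]   # keep the fittest half
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(length)] = rng.choice(NOTES)  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Real systems use far richer fitness functions (harmony, rhythm, human feedback), but the loop of mutate, score, select is the same shape.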
"I just think he's biased. Taking a human point of view -- but humans do tend to be full of themselves until they screw up."
I just think we should watch you: I bet you come straight from The Factory for the Absolute. You don't like us, the humans! We know you are a machine! We screw up… again!
No, more like 70 years ago. But so far the only progress has been in redefining the jargon to make it look like progress. We have better databases and ways to interrogate the data. It's not real AI; no-one is actually researching real AI, partly because over the last 70 years we have realised that we don't know what biological intelligence is, and partly because we have a better idea of how to write programs.
Computer "Neural Networks", AI, Cognitive Computing, Machine Learning, algorithmic "evolution" etc have almost zero connection to the similarly named things in biology. It's Humpty Dumpty jargon.
Google Translate abandons 30 years of effort on Natural language work to use the "rosetta stone" approach of a big database of known matched human translations. Zero AI in it.
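For what it's worth, the "big database of known matched human translations" idea can be sketched in a few lines. This is a toy illustration of greedy phrase-table lookup, not Google's actual pipeline; the phrase table and words here are made up:

```python
# Toy "rosetta stone" translation: match the longest known phrase
# from a table of human translations. No linguistics involved.
PHRASE_TABLE = {
    ("good", "morning"): "bonjour",
    ("thank", "you"): "merci",
    ("the", "cat"): "le chat",
    ("cat",): "chat",
    ("the",): "le",
}

def translate(sentence):
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Greedy: try the longest phrase starting at position i first.
        for n in range(len(words) - i, 0, -1):
            chunk = tuple(words[i : i + n])
            if chunk in PHRASE_TABLE:
                out.append(PHRASE_TABLE[chunk])
                i += n
                break
        else:
            out.append(words[i])  # unknown word passes through untranslated
            i += 1
    return " ".join(out)

print(translate("Good morning the cat"))  # -> "bonjour le chat"
```

Real statistical MT adds probabilities, reordering and a language model on top, but the core is still matching against human-produced examples rather than "understanding" anything.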
@Mage
In reply to your point about Google Translate, one wonders whether in fact our brains do something similar. We don't know.
People have wondered for a long time how language began, and all the early theories were waffle spiced with religion. We need an evolutionary explanation of how we got from the kind of mutual understanding among animals to the achievement of human language.
@PST The best recent explanation I've come across was that language evolved out of trading goods with non-related groups of people. Which helps explain why pidgin works reasonably well even in cases where there is no common language. I wouldn't be at all surprised if it emerged in multiple places at about the same time, in groups that were under survival stress. We've been finding more examples at disparate sites the world over of behaviours appearing far earlier than previously thought, in challenging conditions. Or to put it lightly, the wiring was already there with the operator asleep.
Aside: if you mentally graph the memes within the posts from Amfm, you do better in the comprehension department. It's very much a shorthand. Watson (the machine) can relate.
Since then, Google has developed a self-driving car, computers can type what you speak, and phones have become really good at playing chess.
First, the Google Car isn't finished, and there is no AI in there, just a PC with a (very) complex set of instructions reacting to its sensors and inputs. If it were AI, it might want to stop and admire the view every now and then.
Second, computers can type what we speak, most of the time almost accurately, but again, that's not AI in the sense that it's not an artificial brain doing it, just another set of complex instructions. If it were AI, it might interrupt us now and then to say "REALLY?".
As for chess, yeah, I'll leave you that one. The damn thing beats me every time.
"First, the Google Car isn't finished, and there is no AI in there, just a PC with a (very) complex set of instructions reacting to its sensors and instructions. If it were AI, it might want to stop and admire the view every now and then."
This argument basically comes down to "we are special little flowers with some magic goo that makes us qualitatively rather than quantitatively different".
Ravens don't stop to admire the view but they can solve quite complex problems. Dogs are currently the only species known other than our own that can understand the "pointing" gesture (chimpanzees don't). I don't know if computers will acquire sufficient compute power, storage and the right programming to become as intelligently capable as human beings, but I think it is very unwise to say something like the Google Car has no AI rather than limited AI.
"but I think it is very unwise to say something like the Google Car has no AI rather than limited AI."
Agreed. For a ridiculously simple example, take a pocket calculator multiplying 3*7.
Rudimentary and extremely limited, but humans are the only animals on Earth that can compete with it in its limited intelligence.
And before the inevitable "It's only a basic algorithm!", I'd like to point out that for thousands of years, humans regarded mathematics as evidence of a truly rational being.
Hell, it's still the "universal language" that's supposed to enable us to recognise ET.
Ravens and dogs, and every living creature, have one thing computers will never have : the ability to act on impulse.
That was the point of my example. Ravens may not admire the view (and what proof do we have of that?), but I am fairly sure that they can decide to fly to a given place without any proper reason to do so. I'm sure you can find a corresponding example for dogs.
When a computer is not responding to code imperatives, it does not decide to do anything but wait for the next imperative. That is why it is not AI. Intelligence is whimsical, or it is not.
"When a computer is not responding to code imperatives, it does not decide to do anything but wait for the next imperative."
You are assuming there is no factor that upsets the apparent determinism of behaviour. Unpredicted behaviour of a computer system can usually be tracked down to an unexpected combination of conditions. The apparently impulsive behaviour of animals and therefore humans is equally deterministic - if you can untangle all the factors.
That sounds like a philosophy I hold: practically always, we do things for a reason; it's just that we're not always conscious of that reason. Even something as simple as "boredom" counts as a reason to try the unknown path once in a while.
Given such a philosophy, it's entirely possible for a computer to seemingly act on impulse. It's just what they consider "an impulse" we wouldn't see the same way.
Not so! My Windows 10 PC seems to act on impulse all the time.
Behavioural scientists I know argue that people are programmed to do everything that they do, and that impulse really isn't. It just seems that way because we don't yet understand the parameters that made them do it. There may be no whimsy at all in a human being, but simply enough complexity that we can't understand the programming. None of us can really know, because we don't know exactly how people do things.
You seem to be arguing for hard AI, and suggesting that soft AI has no value. I'd argue the opposite. Soft AI has significant value, and from an ethical perspective it's probably safer.
Could we please have confirmation that there is indeed an "n" before the "ever"?
Because there is an enormous change in meaning there, and the sentence, as it stands, does not compute with the general tone of that part of the article.
Gotta be a typo. Bishop sounds like a realist. Dude must have a thick skin to handle a prominent role in the field of AI.
IMO... expectations are still inflated, even after the rebranding from Artificial Intelligence to Cognitive Computing. More like Pseudointelligence, as in 'obviously fake'. Chess and Jeopardy are tough for humans, relatively trivial for computers. You could tell from its wackier answers that Watson was just a glorified search engine with a lame natural language parser (IBM admitted it required a ton of special-case coding). It's possible to go beyond simple search engines, but with rapidly diminishing returns. Nothing here.
Are you sure about that?
Perhaps we could split the question.
Can a computer become "conscious," as in self aware?
Can a computer with a von Neumann architecture become "conscious," as in self aware?
And of course, since the brain is a computational device, are we "conscious" to begin with?
AI researchers have always used what's available, but we know the brain's architecture is very far from that of an Intel/ARM/Freescale processor. We now know it's running something like 15 petaFLOPS on roughly 20 W at maybe 15 Hz (but with huge fan-ins, of 10,000 to 1), yet huge server farms are needed to even come close to that processing power, and they still fall short of holding a conversation.
My instinct is that you need a brain-like architecture to get brain-like behaviour and brain-like performance, but as WISARD demonstrated (facial recognition at 30 frames a second, 30 years ago), not necessarily a perfect copy of the brain.
Now what's the power consumption if you cut the operating frequency of a Pentium to 1 MHz? (Assuming you can, of course.)
For those that noticed, I used the term "expert system" because AI is so poorly defined...
I'm not surprised most Sci-Fi is based upon the fear of an intelligent machine killing off humanity.
Is there any reason to have humanity around once you do get a machine that has an effective IQ of 120?
More importantly, if a machine had an effective IQ of 6000 (e.g. Holly before Red Dwarf had the radiation leak), what properties would it have?
6000 P.E Teachers?
"More importantly, if a machine had an effective IQ of 6000 (e.g. Holly before Red Dwarf had the radiation leak), what properties would it have?
6000 P.E Teachers?"
IQ doesn't work like that. Someone with a +2σ IQ can perhaps read a Harry Potter book three times faster than someone with a median IQ, but no matter how long it takes, someone with a median IQ will never be able to understand, say, a C++ textbook.
I think this is one of the core problems with AI - adding CPUs or just making them faster doesn't of itself deal with complex problems in, say, the mathematical solution space. And all the PE teachers in the world if you added them together wouldn't be able to solve some problems within the capability of a Feynman or a Higgs.
"IQ doesn't work like that. Someone with a +2s IQ can perhaps read a Harry Potter book three times faster than someone with a median IQ, but no matter how long it takes someone with a median IQ will never be able to understand, say, a C++ textbook."
Are we sure about that? Is it really a matter of "You will never understand this no matter how much you try, so just give up", and not "It'll take you a pretty long time to get it; are you sure you want to put in the effort?" After all, even the retarded can learn things if you're patient enough. Saying the former implies there are people literally too dumb to live, which is eugenic in nature.