Researchers at Facebook have attempted to build a machine capable of reasoning from text – but their latest paper shows true machine intelligence still has a long way to go. The idea that one day AI will dominate Earth and bring humans to their knees as it becomes super-intelligent is a genuine concern right now. Not only is …
I agree that this sounds like a retrograde approach - like 80s "GOFAI" (Good Old Fashioned AI), which pretty much hit a brick wall back then.
Much more promising is the Deep Learning approach, where systems are designed to learn to make their own rules from interaction with the environment.
'Basically this isn't AI at all, but no different to 1980s "Expert Systems"'
Pretty much what I was going to say.
Expert Systems (ES) work by following set rules, or a model. An ES parses input and calculates the probable result, and is consistent in this approach. That model, however, is created for the ES: it does not build the model itself. Advanced ES can adjust the model within parameters if results show the predictions made are inaccurate, but they still have to fit within the bounds of the supplied model and rules.
For Artificial Intelligence (AI), that model would be adjusted as the AI learns: it would process the data as per the model, then compare the predicted result with the actual result and start shifting the weightings in the probabilities. In medical terms, this would be the process of taking symptoms and calculating the cause. The more cases presented, the better the model will be, but the AI could scrap the model entirely and build a new one from the raw data if needed: something an ES can't do.
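The distinction above can be sketched in a few lines of Python. This is a toy illustration only: the symptom names, weights, and threshold are invented, and the learner is a bare perceptron-style update rather than anything a real diagnostic system would use.

```python
# Toy contrast: a fixed-rule Expert System vs a learner that shifts
# its own weightings. All names and numbers are invented for illustration.

def expert_system(symptoms, rules):
    """Fixed rules: the weights never change, whatever the outcomes."""
    score = sum(rules.get(s, 0.0) for s in symptoms)
    return score > 0.5  # predict a positive diagnosis if score clears threshold

def learner_update(weights, symptoms, actual, lr=0.1):
    """Perceptron-style update: shift weightings toward the actual result."""
    predicted = sum(weights.get(s, 0.0) for s in symptoms) > 0.5
    error = int(actual) - int(predicted)
    for s in symptoms:
        weights[s] = weights.get(s, 0.0) + lr * error
    return weights

rules = {"fever": 0.4, "cough": 0.3}     # supplied model: fixed
weights = {"fever": 0.4, "cough": 0.3}   # learned model: mutable

# A case with fever alone, where the actual diagnosis is positive.
case, actual = ["fever"], True
print(expert_system(case, rules))        # False - and always will be
weights = learner_update(weights, case, actual)
print(weights["fever"] > 0.4)            # True - the weighting shifted
```

The ES gives the same (wrong) answer forever; the learner nudges its model after every case, which is the "shifting weightings" behaviour described above.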
For humans: we cheat. We are likely to miss details and skip steps in processing information. This is both a strength and a weakness of the human brain, and it's why we fail to realise things at first glance; it can take several moments to realise (there is a bicycle approaching; that is a man in a dress; that car isn't going to stop; that is someone I know). As a result, we can react quickly to the unusual, but we can miss things along the way. It's wired into us thanks to evolution: if you can't react quickly to a potential danger, you don't survive; but if it's safe, then take all the time you need to check and double-check and realise you were wrong in your initial assumption and that root vegetable really doesn't look like someone's face.
So there has been a choice: to develop AI to be consistent in accuracy, or to mimic the human brain and accept it will make mistakes. The last I heard, the aim was to remain accurate: we've enough natural stupidity without introducing more artificially.
For me it was early '90s Expert Systems, but much the same...
We were doing this sort of thing in Prolog when I was at university. On the hardware side, the parallel gated logic looks like a PAL from the same era, not even up to FPGA complexity where at least you could get some interesting interconnects and feedback for training.
We are struggling at the moment to get machines to learn some knowledge without forgetting it when it learns something new. When (if) machines actually understand that knowledge, then things will get really interesting.
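The "forgetting when it learns something new" problem can be shown with a deliberately minimal sketch: one weight, two made-up tasks, plain gradient descent. The numbers are invented purely to illustrate the effect.

```python
# Minimal illustration of a model forgetting old knowledge when
# trained on new data. One weight, squared-error gradient descent.

def train(w, pairs, lr=0.1, steps=200):
    """Fit y = w * x to the given (x, y) pairs by gradient descent."""
    for _ in range(steps):
        for x, y in pairs:
            w -= lr * 2 * (w * x - y) * x   # gradient of (w*x - y)^2
    return w

task_a = [(1.0, 2.0)]    # first learn y = 2x
task_b = [(1.0, -2.0)]   # then learn y = -2x

w = train(0.0, task_a)
print(round(w, 2))       # ~2.0: task A learned
w = train(w, task_b)
print(round(w, 2))       # ~-2.0: task B learned, task A overwritten
```

After training on the second task, the weight that encoded the first task is simply gone; scaled-up versions of exactly this are what the "learning without forgetting" research is fighting.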
Also, in pedant mode, it's "forward model", not "forward mode".
> When (if) machines actually understand that knowledge, then things will get really interesting.
What would it mean for a machine to "understand" something? How would you know?
I'm not sure intelligence, artificial or biological, is really about "knowledge" and/or "understanding". Maybe it's more about how to interact with a rich environment (including other agents, intelligent or not).
This is where a lot of people in AI research go wrong in my view. They are treating the problems of intelligence as technical, when the underlying questions that we need to answer are overwhelmingly philosophical.
That is bad news in terms of getting reliable answers, because philosophers seem to be pretty bad at that; but until we have a much clearer idea of what consciousness and understanding are, how can we imagine we could simulate them? Even if one relies on the concept of consciousness as an emergent property of the system, relying on something mysterious appearing in a system of sufficient complexity seems little different from superstition.
Shame that all those boffins* are studying the risks of a fantasy technology (a dead horse thoroughly beaten by sci-fi authors) while dumb automation steadily turns the world to shit.
Easy exercise: think about some other "fantasy technologies" that look ... a little less fantastical with hindsight of, say, a few decades - or some other horses flogged to death by sci-fi authors (moon landings, atomic weapons, robotics, the internet, mobile communications, virtual reality, bio-prosthetics, machine translation, machine face recognition, genetic engineering, ...)
*boffins = people who know more about shit than you do
The "garden" example reminds me of an A Level Mathematics text book that used a reading comprehension exercise to show the difference in the rigour of mathematics compared to normal language.
The answer that the AI bot should have given would depend on whether you're building an introvert or extrovert bot. The extrovert bot will say "garden", but the introvert bot will say "garden, assuming that Mary took the ball with her into the garden and no-one has removed it."
It's interesting that many intelligent humans struggle to create intelligent machines over many years, yet many of them 'believe' their own intelligence was due to random undirected changes over millions of years...
Perhaps "I don't have enough faith to be an atheist" by Geisler, Turek, Limbaugh should include a discussion on AI...?
My take on the subject is that intelligence is the result of natural selection, good or bad luck in choosing one's parents, good or bad luck as a foetus, nurture, environment and choice.
"Random undirected changes over millions of years" is an intentionally misleading phrase.
Do some people not understand the difference between "intelligence" and "artificial intelligence", or AI?
Intelligence refers to something which is not only self-aware, with the ability to constantly learn and adapt, but also understands how each new experience relates to itself and to others in the world (plus all the other knock-on effects on things which are not directly related). The emphasis on "who am I?" cannot be overstated here.
Artificial intelligence is something which can appear to be intelligent (even though by definition it is not) - for example chat bots. In much the same way that artificial leather looks like leather but is not leather.
When will the press learn the difference and stop publishing nonsense AI stories that fear monger "Terminator" scenarios?
> When will the press learn the difference
Never? I absolutely agree with your distinction, but we're the minority. Most people have fallen into the mindset that Weak AI is useful today, and is rapidly approaching full intelligence.
On second thought, AI Winter II should arrive any year now. Young kids today, growing up with the technology, instinctively scoff at AI. Adults can see the writing on the wall: companies are trying to use weak-ass AI for jobs that demand real intelligence, like driving. The true believers of the cult of academic-industrial AI (lol boffins) will be the last to wake up to reality. When they're working at Starbucks you'll know the AI bubble is over.
AI Boffins, the new English Majors :)
“Mary picked up the ball. Mary went to the garden. Where is the ball?” It should reply, “garden.”
Actually, the most accurate answer to "where is the ball?" should be "in Mary's possession". There is no guarantee that the phrase "went to the garden" means the ball is 100% in the garden; there is the possibility that the ball is still outside the garden at that millisecond (e.g. one hand holding the ball, with that hand slightly outside the garden). If you've done animation, you'll know what I mean when it doesn't return true for being in the garden.
This example just tells us that the reason isn't that the AI still has a long way to go to true intelligence, but that people "are still pretty dumb": they fail to see that their own reasoning lacks true intelligence, and they end up implementing their dumbness in the AI.
Next time, try asking the AI, "I want hot water". Just beware of the surprise pouring of 100-degree hot water when it hasn't implemented an assumed temperature for "hot", or a definition of service or container for "want".
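For what it's worth, the bare-bones version of the task the article quotes can be sketched as a toy state tracker. This is hard-coded parsing, the opposite of learned reasoning, and the "she took the ball with her" line is exactly the buried assumption the comments above are arguing about.

```python
# Toy location tracker for: "Mary picked up the ball. Mary went to
# the garden. Where is the ball?" Hand-written rules, illustration only.

def answer(story, question):
    location = {}   # entity -> place
    holding = {}    # person -> object held
    for sentence in story:
        words = sentence.rstrip(".").lower().split()
        actor = words[0]
        if "picked" in words:
            holding[actor] = words[-1]        # "mary picked up the ball"
        elif "went" in words:
            location[actor] = words[-1]       # "mary went to the garden"
            if actor in holding:
                # Assumption: she took the held object with her.
                location[holding[actor]] = words[-1]
    thing = question.rstrip("?").lower().split()[-1]
    return location.get(thing, "unknown")

story = ["Mary picked up the ball.", "Mary went to the garden."]
print(answer(story, "Where is the ball?"))   # garden
```

Delete the "took it with her" branch and the tracker answers "unknown": the "correct" answer only falls out once a human has wired their assumptions in.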
Biting the hand that feeds IT © 1998–2019