I can never work out how you would "know" if a system became self-aware anyway, unless it told you it was, and even then it might have been mistaken about itself. Naturally intelligent systems, like hamsters, fish or Belgians are made of nothing but matter, with a great deal of information flowing between various bits thereof. There's probably no extra ingredient that an AI system would be forever denied access to, so I can't really see why a non-biological intelligence couldn't come into existence eventually. However, unless it "thought" in a manner highly similar to the way we do, perhaps we might never recognise each other as fellow sentients.
If there's any company in the world that can bring true artificial intelligence into being, it's Google. But the advertising giant admits a SkyNet-like electronic overlord is unlikely to create itself even within the Google network without some help from clever humans. Though many science fiction writers and even some …
AI@home (AI virus)
Perhaps we could have an AI@home project (like the Folding@home one, or SETI). But instead all that is required is that you have a "neurone" program running on your PC that allows it to connect to every other "neurone" in the www brain. The brain would have eyes (webcams) and ears (mics) to learn with, and a whole internet of knowledge at its disposal.....
Human brain: 86bn neurones (ref: Google 1st hit)
World (PC) population: 7 billion
The evil version of this brilliant plan is just to release the AI@home as a virus....
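Taking the two figures above at face value, the back-of-envelope arithmetic is easy to run, and a "neurone" itself is trivial to sketch. The sigmoid neurone below is purely illustrative; a real AI@home node would be receiving its inputs over the network, which is where the hard part (synaptic wiring and latency) lives:

```python
import math

# Back-of-envelope using the figures quoted above.
HUMAN_BRAIN_NEURONS = 86_000_000_000  # ~86bn neurones
WORLD_PC_POPULATION = 7_000_000_000   # ~7bn PCs, one "neurone" host each

neurons_per_pc = HUMAN_BRAIN_NEURONS / WORLD_PC_POPULATION
print(f"Each PC would need to host ~{neurons_per_pc:.1f} neurones")
# ~12 each -- trivially cheap; the bottleneck is the ~1e14 synapses
# and the milliseconds-to-seconds latency of routing spikes over the www.

def neurone(inputs, weights, bias=0.0):
    """Toy neurone: weighted sum of inputs squashed to (0, 1)."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))
```

At twelve neurones per PC the compute is negligible; it is the connectivity, not the node count, that makes the comparison flattering to the brain.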
Re: Fuzzy logic....
In what way is fuzzy logic a "definition" of AI? Fuzzy logic is just a formulation of propositional or predicate logic with fractional truth values. (They can also be read as probabilistic truth values, but that's just a matter of interpretation - the math doesn't change, as far as I'm aware.)
And while Lotfi Zadeh coined the term in the '60s (in relation to his fuzzy set theory), real-valued logics had been studied for a half-century or so before then.
There's nothing artificially intelligent about them. They're just another representation of partial knowledge - good for some applications, less suited for others.
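For anyone who hasn't seen it, Zadeh's original min/max formulation really is this small, a sketch in Python (connective names are mine, not standard library functions):

```python
# Zadeh's fuzzy connectives: truth values are reals in [0, 1].
def f_and(a, b):   # conjunction = min
    return min(a, b)

def f_or(a, b):    # disjunction = max
    return max(a, b)

def f_not(a):      # negation = complement
    return 1.0 - a

# "The room is warm (0.7) AND the fan is fast (0.4)" holds to degree 0.4.
print(f_and(0.7, 0.4))  # 0.4
print(f_or(0.7, 0.4))   # 0.7
```

Note that with crisp values 0 and 1 these reduce exactly to Boolean AND/OR/NOT, which is the sense in which fuzzy logic generalises classical logic rather than doing anything "intelligent".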
"I have written a few AI scripts for computer games..."
Please tell us more AC! I work in game design and feel strongly there will be a lot more progress in AI now that we've reached a plateau graphically. It will allow us to better focus on other aspects of gaming, and the holy grail in gaming has to be to have a robot player that can equal a human in a complex narrative open world...
Re: "I have written a few AI scripts for computer games..."
AIGameDev seems to be your kind of place.
It's pretty amazing that the techniques used are still at tree level. No complex stuff, let's get those hierarchical state transition graphs going...
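For anyone wondering what "tree level" game AI actually looks like, here's a minimal behaviour-tree sketch in Python. All the names (Selector, Sequence, the guard NPC) are illustrative, not from any particular engine:

```python
# Minimal behaviour tree. A Selector tries children until one succeeds;
# a Sequence runs children until one fails. Leaves are plain callables
# returning True/False.

class Selector:
    def __init__(self, *children):
        self.children = children
    def tick(self, npc):
        return any(child.tick(npc) for child in self.children)

class Sequence:
    def __init__(self, *children):
        self.children = children
    def tick(self, npc):
        return all(child.tick(npc) for child in self.children)

class Leaf:
    def __init__(self, fn):
        self.fn = fn
    def tick(self, npc):
        return self.fn(npc)

# A guard NPC: attack if an enemy is visible, otherwise patrol.
guard_ai = Selector(
    Sequence(Leaf(lambda npc: npc["enemy_visible"]),
             Leaf(lambda npc: npc.update(action="attack") or True)),
    Leaf(lambda npc: npc.update(action="patrol") or True),
)

npc = {"enemy_visible": False}
guard_ai.tick(npc)
print(npc["action"])  # patrol
```

Hierarchical state machines are much the same idea with explicit transitions instead of re-evaluating the tree every tick; either way the "intelligence" is entirely hand-authored structure.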
On a related tack, the advance in AI becomes clear via this:
In 1985: Machine Learning by Jaime G. Carbonell, Tom M. Mitchell, Ryszard S. Michalski from Elsevier Science. Lots of $$$, used in academic settings.
In 2012: Machine Learning in Action by Peter Harrington from Manning Publications. A few bucks in spite of rampant inflation, used by hands-on programmers.
For what it's worth, humans don't always respond well or properly to new situations. Machines will have the same limitations at best. Meanwhile, machines are considered competent or even excellent at more things every day. As we properly turn over more and more of our life and economy to them, it 'frees' us for other activities. This has been going on for a very long time. It may be an academic problem for software to meet some definition of intelligence, but in the real world the machines' advance is unstoppable.
What happens when Humans no longer have to do anything? If they are all fed and have no need to work, all that is left will be conflict and art, or a combination of the two.
A global-scale Human conflict would threaten the continued existence of the AI. So would it decide to 'kill all Humans'? Or would it recognize that by enabling Humans to such a great extent it is placing its own existence at risk, and decide to halt its own development/growth and cease to enable Humans in favor of continued existence?
> You will be informed that your proposition could endanger you or other Humans and proceed to do it for you anyway, in a far more hygienic and efficient manner. For your own good.
You badly misunderestimate my OCD in matters of cleanliness regarding poop and pestilence!
< I'll get my own damn coat
Secret Business Model
"It's as though every user of Google services is checking and rechecking Google's AI techniques, correcting the search company when it gets something wrong, and performing an action when it gets it right."
They pay the engineers well, so they can find new ways to have us work for free and more productively. Not sure if I'm joking anymore.
You're forgetting "leakage"
I agree that artificial intelligence is unlikely to emerge "accidentally" rather than "deliberately." What I think is misleading is that SkyNet also didn't emerge accidentally; instead, it spread through "leakage." Let me point out this: http://online.wsj.com/article/PR-CO-20130516-905231.html?mod=googlenews_wsj
It is entirely possible that this project by the US government (ready for use in the Fall of this year) will produce the greatest, most powerful mind in (at least) our solar system. Thank God it will be working for us, but it is not only plausible, but extremely likely, that this mind could "leak" into the public arena, and thereby "change."
The Singularity is coming...and there isn't a d@mn thing we can do about it. Just using the rather uncontroversial Moore's Law, the first computer chips with more power than a human brain will be produced in about a decade. The software isn't far behind (especially because computers are now being used to accelerate hardware and software design).
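That "about a decade" claim is easy to sanity-check. The figures below are rough, contested assumptions (estimates of brain-equivalent compute and of current chip throughput each vary by orders of magnitude), not facts:

```python
import math

# Rough assumptions -- both figures are heavily disputed.
chip_ops_today = 1e13    # assumed ops/sec of a high-end chip today
brain_ops = 1e16         # one common estimate of brain-equivalent ops/sec
doubling_years = 2.0     # assumed Moore's Law doubling period

doublings_needed = math.log2(brain_ops / chip_ops_today)
years = doublings_needed * doubling_years
print(f"{doublings_needed:.1f} doublings -> ~{years:.0f} years")
# ~10 doublings -> ~20 years: "a decade" needs either a faster doubling
# period or a smaller estimate of the brain's compute.
```

The conclusion is exquisitely sensitive to the brain-compute estimate, which is precisely the number nobody agrees on.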
Re: You're forgetting "leakage"
No, that project ain't going anywhere fast for now. A stab in the dark at something that resembles an approximation of adiabatic quantum computing (which, as I recall, has not been proven to be able to crack NP-complete problems) does not an Aggressive Hegemonizing Intelligence make.
Have some Charlie Stross, excellent in an over-the-top fashion:
It’s a simple but deadly dilemma. Automation is addictive; unless you run a command economy that is tuned to provide people with jobs, rather than to produce goods efficiently, you need to automate to compete once automation becomes available. At the same time, once you automate your businesses, you find yourself on a one-way path. You can’t go back to manual methods; either the workload has grown past the point of no return, or the knowledge of how things were done has been lost, sucked into the internal structure of the software that has replaced the human workers.
To this picture, add artificial intelligence. Despite all our propaganda attempts to convince you otherwise, AI is alarmingly easy to produce; the human brain isn’t unique, it isn’t well-tuned, and you don’t need eighty billion neurons joined in an asynchronous network in order to generate consciousness. And although it looks like a good idea to a naive observer, in practice it’s absolutely deadly. Nurturing an automation-based society is a bit like building civil nuclear power plants in every city and not expecting any bright engineers to come up with the idea of an atom bomb. Only it’s worse than that. It’s as if there was a quick and dirty technique for making plutonium in your bathtub, and you couldn’t rely on people not being curious enough to wonder what they could do with it. If Eve and Mallet and Alice and myself and Walter and Valerie and a host of other operatives couldn’t dissuade it . . .
Once you get an outbreak of AI, it tends to amplify in the original host, much like a virulent hemorrhagic virus. Weakly functional AI rapidly optimizes itself for speed, then hunts for a loophole in the first-order laws of algorithmics—like the one the late Professor Durant had fingered. Then it tries to bootstrap itself up to higher orders of intelligence and spread, burning through the networks in a bid for more power and more storage and more redundancy. You get an unscheduled consciousness excursion: an intelligent meltdown. And it’s nearly impossible to stop.
Penultimately—days to weeks after it escapes—it fills every artificial computing device on the planet. Shortly thereafter it learns how to infect the natural ones as well. Game over: you lose. There will be human bodies walking around, but they won’t be human any more. And once it figures out how to directly manipulate the physical universe, there won’t even be memories left behind. Just a noosphere, expanding at close to the speed of light, eating everything in its path—and one universe just isn’t enough.
.... If you believe in reincarnation, the idea of creating a machine that can trap a soul stabs a dagger right at the heart of your religion. Buddhist worlds that develop high technology, Zoroastrian worlds: these world-lines tend to survive. Judaeo-Christian-Islamic ones generally don’t.
Okay Charlie, you chilled me out here. Now, I'm off for a beer. Yeah, that will do it.
Re: You're forgetting "leakage" ...... aka sublime and stealthy intel supply?
Hi, Brad Arnold,
The future is certainly coming, but not as we know it in a present based in and/or on the past. Such would be an undoubted failure of intelligence in both Man and Virtual Machinery, given the abundant evidence chronicled in history and accessed through memory of what its information and intelligence shares have delivered and are delivering.
Quite whether the US government and the Wild Whacky West will be leading anything in ITs fields though, is quite another question and would be being asked of them here today, in another free intelligence and/or information share/leak? ........ http://www.ur2die4.com/?p=4132
To All The Naysayers
FIRSTLY: The Turing test. The point is: if you can't tell the difference, then there is no difference. Perhaps the machine isn't sentient, but then again, perhaps the questioner isn't either (he/she just thinks they are).
SECONDLY: Computer code can never be alive. DNA is just a code.
Re: To All The Naysayers
FIRSTLY: The Turing test. The point is: if you can't tell the difference, then there is no difference.
Fallacious. The test could have been conducted improperly; more importantly, it's asymptotic, bounded by the interrogator's ability to compose difficult questions (and not by the interlocutor's ability to respond to them). And as pointed out elsethread, some researchers (such as French) have argued convincingly that the test is not a useful metric for "intelligence" (which isn't well-defined in the first place).
Also, while that may be the point of the Turing test, it's not clear what your point is with this first paragraph. What does that have to do with nay-saying?
SECONDLY: Computer code can never be alive.
A metaphysical proposition. Untestable, and so for the question of whether AI is possible, irrelevant. Either you take this as an axiom, in which case any discussion of "artificial" intelligence is moot (so people taking this position can stop posting now, thanks); or you don't take it as axiomatic, in which case it has no bearing.
DNA is just a code.
Was anyone claiming DNA is intelligent? I must have missed that.
The real problem with this discussion, such as it is, is that most people (DAM and a few others excepted) haven't bothered to try to define any terms or even post any actual facts. They're just making vague generalizations, usually founded on an unwritten set of dubious assumptions. Even sloppy arguments against AI, such as Searle's Chinese Room, are held to a slightly higher standard than that. (And insisting AI is inevitable, without providing some sort of actual argument, is equally foolish.)
Re: To All The Naysayers
Naysayers = people in previous posts who say computers can't be alive.
Turing test = argue whichever way you like; if you can't tell the difference there is no difference, no matter how clever (or not) your questions are.
Computer code not alive. DNA is just code. = this was a self-contained, two-sentence argument which you completely failed to understand. I was presenting this argument to all the "naysayers" who said code could never be alive. My argument was to point out that we are nothing but code, but are considered to be alive.
The fact that an emergent intelligence evolved (us, as far as that goes) suggests that it's possible it could happen again. This does not mean, as so many seem to think, that as soon as we have enough computers connected together it naturally will. There has to be a reason, a selective pressure (or pressures), towards such intelligence, and a lot of luck involved. Nature itself seems to suggest that intelligence is one of the poorest and least efficient solutions to a problem. Far better to have a simple, dumb method of resolving your issue than complex reasoned logic: if a bee were any smarter than it is, it would cease to be an effective bee, perhaps deciding to drop out of its oppressive society and go get high.
Outside of f*king, eating and surviving, upper-level intelligence of the type most people think of doesn't have much purpose in nature. The civilised, tool-using society we claim is based on it seems to have evolved only once in almost 4 billion years, and may not last more than a million, while crocodiles remain lurking in the mud, unperturbed by its passing. Maybe we should be unsurprised if our SkyNet remains stubbornly stupid.
Very nice. One should never forget that intelligence is tuned to a specific task. Animal (incl. human) intelligence is tuned to navigation in a messy, unpredictable world that often resembles a large version of "The Cube".
General machine intelligence will be tuned to specific tasks. There will be as many packages as there are versions of Amazon EC2 instances, and it will be as similar to human intelligence as an airplane is to a bird.
Consciousness is overrated and generally a hindrance. Who wants to have a debugger running at all times? Even in humans it kicks in only if there are frightening, arduous or unfamiliar tasks to accomplish, or if one reads a particularly convoluted explanation in a book trying to explain how wonderfully magic / supernatural consciousness is.
I think the step that Google are looking for is the introduction of lucidity into the Graph. As far as I can make out from my convos with devs, the ability to see through bullshit is the Holy Grail. A sort of cold reader bot, that has a very high percentage of correct guesses first time round. When lucid logic can run believable probability indexing, it may require no more than a cynical smartarse with a spreadsheet to sift the weirdly anomalous results and grade them according to accuracy over time, then backtrack through the logs when it hits an unexpected bullseye..
It won't be the wingnut press who start bleating when a robo-savant oracle starts hypothesising too accurately about the various Emperors' new wardrobes. It'll be their tailors.
A different approach
Analyse how a brain works on a logical, information-theory level (not the molecular-level boondoggle in the EU)
Build computer representations of it
Turn them to silicon
Pattern-matching, fuzzy logic predictions, emergent behaviour, etc.
Mo' silicon, mo' power
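The pattern-matching step of the recipe above can be prototyped in a few lines. Here's a toy Hopfield-style associative memory (Hebbian outer-product learning; everything below is an illustrative sketch, not a claim about how the brain does it):

```python
# Store bipolar (+1/-1) patterns via a Hebbian outer product, then
# recall a stored pattern from a corrupted cue.
def train(patterns, n):
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, cue, steps=5):
    """Synchronously update every unit toward its local field's sign."""
    s = list(cue)
    n = len(s)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

stored = [1, -1, 1, 1, -1, -1, 1, -1]
W = train([stored], 8)
noisy = [1, -1, -1, 1, -1, -1, 1, -1]  # one bit flipped
print(recall(W, noisy) == stored)       # True
```

Emergent behaviour in miniature: the corrected pattern is nowhere stored explicitly; it falls out of the weight matrix. Scaling that up is the part the logical-level analysis would have to deliver.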