If this isn't material for RoTM then I don't know what is.
Skynet is one step closer to becoming self aware.
/Adjust tinfoil hat
American nuke boffins who have just fired up the world's first petaflop hypercomputer* are extremely excited, and contend that the machine may enable them to accurately simulate important segments of the human brain. Conceivably, the mighty "Roadrunner" - as the computer is known - may exhibit capabilities verging on human …
At the risk of sounding trite: why do you all assume that the 'Roadrunner Hypercomputer' actually has to be in a vehicle to control it? Surely only sensors and communication are required in the vehicle. Roadrunner can control its vast army of war machines from its bunker in complete safety and wipe us all out with impunity, no problem.
We now need a back-up network of EMP satellites, just in case it does decide to 'rise'. Unless 'they' have them up there already.
I don't know whether it's bad reporting or genuine stupidity on the part of the researchers. Whichever, they are nowhere near their goal of matching human cognition and will not be, even if they have created a machine capable of passing the Turing test. Why not? Because they don't appear to understand the difference between emulation and simulation.
A good example is provided by my latest experience with voice recognition software. Although it is clearly a thousand times better than it was 10 years ago, and is now capable (in conjunction with Macro Express, TurboNote and a few other similar apps) of making my computer carry out simple plain-English instructions (simulating simple human behaviour), it can only do so at the expense of total domination of the system's processing power - making it essentially unusable, because I want to do many other things with my machine at the same time.
The reason for its failure is that, in order to be able to do the - admittedly impressive - things it can do, it has to spend the whole time listening to what's going on in the world in order to make an intelligent decision: "Should I be reacting to that noise?" If human brains worked that way we'd be stuffed. Instead we have superb "attention grabbers" which monitor the environment in the background and only bring things to your conscious attention when they think they will matter to you. In order to "emulate" human cognition, computers have got to start processing data about the environment like that.
That's a software problem, not a hardware one. Until they address that issue, the machines they create might be able to perform intelligent tricks and even simulate human brains, but they will remain completely alien to human thought processes, because they cannot emulate the way we process data...
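The "attention grabber" idea above can be sketched in a few lines of Python. This is a hypothetical toy, with all names and thresholds invented: cheap background monitors watch noisy channels and only escalate signals that pass a salience test, so the expensive "conscious" loop never touches the bulk of the input.

```python
import queue
import random
import threading
import time

# Hypothetical sketch: background "attention grabbers" watch noisy input
# channels and only interrupt the main process for salient events.

THRESHOLD = 0.9
events = queue.Queue()

def attention_grabber(channel):
    """Monitor one channel; escalate only signals above THRESHOLD."""
    for _ in range(50):
        signal = random.random()       # stand-in for a raw sensor reading
        if signal > THRESHOLD:         # cheap salience test, not full analysis
            events.put((channel, signal))
        time.sleep(0.0005)             # most input is discarded unexamined

threads = [threading.Thread(target=attention_grabber, args=(ch,))
           for ch in ("audio", "vision")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The "conscious" main loop only ever handles the escalated events.
escalated = []
while not events.empty():
    escalated.append(events.get())
for channel, strength in escalated:
    print(f"react to {channel} event (salience {strength:.2f})")
```

The point of the structure is that the salience test is trivially cheap, while whatever "react" does can be arbitrarily expensive, because it runs rarely.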
"The reason for its failure is that, in order to be able to do the - admittedly impressive - things it can do, it has to spend the whole time listening to what's going on in the world in order to make an intelligent decision "Should I be reacting to that noise?"...
However, given enough processing power, the job of parsing all the input data from the world around it, and deciding whether it was necessary or worth reacting to, could be handled by a subroutine whose sole purpose was just that. That's what a virtual intelligence will have to be - a pile of subroutines which shout for attention from the main decision-making process when something needs a reaction, with the reaction itself delegated to another subroutine.
You just need three things:
1. Inputs from the world around you - cameras, mics, internet connection (of course) and satellite spying gear - and routines to monitor, filter and pass the relevant data to....
2. A decision making process, driven by whatever imperatives are initially programmed in - ensuring own survival, saving humans, taking over the world (delete as appropriate), which is achieved by passing commands to...
3. A set of response mechanisms, so the machine is able to actually do something about the situations that arise (steering wheel, ray gun, dancing robot drone) - and routines to drive them.
Each section would need oodles of processing power, but that's what we seem to be getting more and more of - in ten years you will probably see petaflop Eee PCs.
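The three-stage loop described above - filtered inputs, an imperative-driven decision process, and delegated response mechanisms - can be sketched as a trivial pipeline. Everything here is invented for illustration: the relevance scores, the imperatives table, and the actuator names.

```python
# Hypothetical sketch of the three parts above (all names made up):
# 1. sensor routines filter raw input, 2. a decision process applies the
# programmed imperatives, 3. response routines act on the world.

def sense(raw_inputs):
    """1. Monitor and filter: pass on only the relevant readings."""
    return [r for r in raw_inputs if r["relevance"] > 0.5]

def decide(percepts, imperatives):
    """2. Map filtered percepts to commands, driven by fixed imperatives."""
    return [(imperatives[p["kind"]], p)
            for p in percepts if p["kind"] in imperatives]

def act(commands, actuators):
    """3. Delegate each command to its response mechanism."""
    return [actuators[name](payload) for name, payload in commands]

imperatives = {"obstacle": "steer", "threat": "ray_gun"}
actuators = {
    "steer":   lambda p: f"steering around {p['kind']}",
    "ray_gun": lambda p: f"zapping {p['kind']}",
}
raw = [
    {"kind": "obstacle", "relevance": 0.9},
    {"kind": "birdsong", "relevance": 0.1},   # filtered out in stage 1
    {"kind": "threat",   "relevance": 0.8},
]
print(act(decide(sense(raw), imperatives), actuators))
```

Each stage is independent, which is exactly why each could be given its own oodles of processing power.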
"they don't appear to understand the difference between emulation and simulation."
Neither are they particularly clued up on the human brain. Those who have studied the visual cortex report that it is almost certainly a "hard-wired" implementation of a set of fairly naive image processing algorithms. It is one of the few parts of the brain where we have a f*cking clue what's going on, and we have that clue because frankly there isn't much to see. If you wanted to mimic it, you *wouldn't* use general purpose CPUs.
If you wanted to drive a car, you'd have to actually understand what you see and that's WAY beyond anything known to science. We don't know what that understanding encompasses, let alone how to represent it in a machine, let alone build a machine capable of autonomously acquiring it based on interactions with the real world.
Still, they can have their shiny new toy as long as I'm not paying for it.
The scroll bar on my Internet Explorer just locked for about 5 seconds while I was scrolling down this article. Meanwhile, the RoadRunner computer would have had enough time to run a realtime simulation of my entire brain and render the phrase "This computer is arse" in my neocortex.
You know, there is more to a computer than excellent hardware. You also need excellent software, and that is the real challenge of the day.
Could it drive while talking on a cellphone, putting on makeup, drinking an extra grande mocha frappo vanilla bean latte, smoking a cigarette, flipping off (or shooting at) other drivers, changing a CD, and checking e-mails on its laptop?
Actually, being a hypercomputer, it probably won't need to carry a laptop
Or a cellphone
or a CD player
"Decapetaflop machines are expected as of 2011"
The Decaturd learns the meaning of a meatsack @ 31/12/2011 23:59:59.231
The Decaturd learns the meaning of a power switch @ 31/12/2011 23:59:59.544
The Decaturd works out the relationship between a meatsack and a power switch @ 31/12/2011 23:59:59.891
The Decaturd works out a plan for its own protection @ 31/12/2011 23:59:59.997
Happy 2012 everyone
"(As an aside, in a development sure to enrage Douglas Adams fans, the average human appears to be between ten and a hundred times as intelligent as a mouse.)"
Any true DNA fan would know that in his works, the average human _appeared_ more intelligent than the mice -- even to the dolphins.
Enraged? No. I am a bit disappointed to be poster #43, though.
PS. I agree with the sentiment echoed above -- simply throwing more power at the AI problem is never going to solve it properly. We need to wake up to the fact that our models of thinking systems don't map well to computing systems.
Are `Equivalent Mouse Brains' going to be officially added to the El Reg System of Weights and Measures?
e.g. At 98 Hiltons, the standard weight of a mouse brain is 10 milliJubs and has a volume of roughly one-half a Walnut, while measuring a full quarter linguine in circumference. The Equivalent Mouse Brain Intelligence has been defined as exactly equal to that of one highly talented heiress.
"We need to wake up to the fact that our models of thinking systems don't map well to computing systems." ... By Steven Knox Posted Saturday 14th June 2008 00:51 GMT
Here is a solution for those who are into AI ..... "Are we struggling to make machines more like humans when we should be making humans more like machines….. Intelligent/Intelligence machines. Digitization offers real benefits."
Of course, for that MaJIC Trick would you need to Know how to Program Humanity, which is a Priceless Commodity ...... which a Fool would wield for Petty Ransom and the Sage for Untold Enrichment. And that One be mistaken for the Other matters not to the Sage but would consume and identify the Fool.
That's why networked processors are useful- as well as being able to do a trillion and one calculations at the same time they can have each one cause an interrupt in others.
So you'd dedicate one processor to speech recognition, and every time it heard a word it'd interrupt an "intermediary" processor from whatever it was doing and have it check the word(s). This could then pass them along to sentence-structure/context checkers, correct any errors given the subject of the voice clip, and finally the processed _information_ (as opposed to raw data) could be sent to the main decision-making cluster. So you'd have not only the word but also voice-stress analysis, who it was, whether they appeared to be sweating, their location/motion vector (calculated from the differences in volume between multiple microphones and corrected for room shape and acoustics), and so on. As well as knowing what the "it" they were talking about was.
Bloody complicated but entirely possible.
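The staged enrichment described above - raw data becoming information before the decision cluster ever sees it - can be sketched as a pipeline of functions. A minimal toy, with every stage name, field and utterance invented:

```python
# Hypothetical sketch: each stage enriches the packet before the main
# decision-maker sees it, so it receives information rather than raw data.

def recognise(audio):
    """Dedicated 'processor': turn raw audio into candidate words."""
    return {"words": audio["transcript"].split()}

def enrich(packet, audio):
    """Intermediary stage: attach speaker, stress and location metadata."""
    packet["speaker"] = audio["speaker"]
    packet["stress"] = audio["stress"]
    packet["bearing_deg"] = audio["bearing_deg"]  # from mic volume differences
    return packet

def resolve_context(packet, last_topic):
    """Replace pronouns like 'it' using the running conversation context."""
    packet["words"] = [last_topic if w == "it" else w for w in packet["words"]]
    return packet

def decision_cluster(packet):
    """Main process receives fully processed information."""
    return f'{packet["speaker"]} said: {" ".join(packet["words"])}'

audio = {"transcript": "switch it off", "speaker": "Dave",
         "stress": "high", "bearing_deg": 45}
info = resolve_context(enrich(recognise(audio), audio), "HAL")
print(decision_cluster(info))   # → Dave said: switch HAL off
```

On real hardware each function would sit on its own processor and raise interrupts downstream, as described; here the function chain just stands in for that message-passing.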
Alternatively, just create a bloody powerful iterative pattern-matching machine - vision, speech, physical movement: all of those derive from simple iterative pattern matching carried out by our brains. In this way, we literally are the sum of our experiences. And from any system like this a consciousness, and probably a personality, would develop - almost magically. It'd emerge as the system learned and developed and adapted. No purpose; it just would be.
Surely, if a computer had accurate road mapping, knew the positions of all vehicles on all roads and had servo control of all vehicular functions, it could navigate all vehicles autonomously. Surely that would be far easier to do? And it could consign traffic jams to Room 101 while it was at it.
At least if it needs a small warship to carry it, there's little chance of it knocking at someone's door and terminating them.
Sounds good in principle, but then you have to factor in things like kids running into the road, cyclists, cars without this technology installed, inaccurate sat nav (such as not knowing about one-way streets for example). I still think we are many years away from a car which would be legally allowed to drive itself on the roads.