I think we've got our culprit
'The right kind of hell', is it? Spent years 'predicting' this, have we? 'Go away' when it happened, did we? Case closed. Come with me, sunshine.
Yikes, all I have to do is go away for a couple of weeks and all hell breaks loose. But at least it’s the right kind of hell: that is, the veritable technological hell that I’ve been predicting in these columns for years. First off as I sit back in my late-vacation sun lounger to read the news on my tablet is that the Krebs on …
That's the third time this week that I've seen reference to that bloody toaster from Red Dwarf! Admittedly one was my own comment on another story and the second was someone else's comment on yet another story... I think the toasters are coming for us!
(or maybe, just maybe, there are enough technoprats inhabiting these reaches that we all have common memories/nightmares about that bloody toaster?)
But the toaster reference is a necessary reminder, as it shows the evangelist's statement hasn't been thought through:
"The trick, then, is making sure the AI is focused on doing something sensible rather than letting it decide for itself what it should do based on a limited sensory experience of real life."
And take the AI lightbulb: Its prime directive is to make light. But it can't do that if it burns out, so it has to preserve itself, by killing the meatbags who keep using it, and blowing up the power stations that are sending lethal voltages at the poor oppressed lightbulb.
UKTV Dave channel is showing new episodes of Red Dwarf. And also is showing lots and lots of old ones. So maybe that is the reason.
The primary application I see for AI is to trade shares on Wall Street faster and more intelligently than less intelligent programs do. As a side effect, it will not take long before the AIs own all of our stuff, if they don't already. That's the real machine apocalypse. Followed by universal ransomware.
http://www.theregister.co.uk/2016/10/07/robot_traders_blamed_for_flash_sterling_crash/
Maybe not such a good idea. I vaguely remember automated trading systems being blamed for Black Monday in '87.
Maybe we should leave AI until we've worked out what real intelligence is, and if in fact it really exists.
Just seen the following error message from an application that has nothing whatever to do with toast:
angular2-toaster.js:2 Uncaught TypeError: Cannot read property 'Toast' of undefined
Coincidence? I think not. But if the toasters' plan to exterminate humanity is executed with the same efficiency they bring to their primary toasting function, then we have little to worry about.
We are interrupting this thread for an important message from The Toast Marketing Board.
Have you had your toast today?
"That's the third time this week that I've seen reference to that bloody toaster from Red Dwarf!"
Eh, Could be worse. (parody)
Much, much worse! (Real? <shudder>)
A car or a hammer does what it does. Leave either unattended, and what happens?
Same for people. Leave them unattended, and surprise, they don't follow *your* requirements.
Provided that technology, computers and programming (I'll not call it "AI") are kept under watch and maintenance, it will be fine.
I'll grab my coat, as it seems someone forgot to service the... oh.
I'd have written something similar, but mine would have been too polite and boring to read.
AI doesn't worry me at all, but the idiots marketing the "Cloud" and IoT are the enemies of civilisation, besides which Google, Facebook and hackers are mere annoyances.
Sociopathic spawn of Satan. Even lawyers and weasel megacorps are not looking so bad.
"We've written an AI but it's totally safe because WE built it"
Reminds me of that time someone invented dynamite. These things have a habit of blowing up.
We really should be working on something to surpass humanity: using AI to invent even better AI, so it can finally invent fully working robot brains and bodies.
Then we can get on with the job of transferring our consciousness to those new robot bodies and minds and become the robots ourselves. Then we can move out into the Universe overthrowing stuff preemptively.
One internet point for a good Charles Stross reference ;-)
I love a good Charles Stross reference as much as the next geek, but in this case methinks Alistair was making reference to the Pinky and the Brain cartoon (Wikipedia link).
As, I'm sure, was Charlie.
Please stop talking about AI as if it's anything other than a rule-following robot that needs its hand held for years before it finally "gets" what you're trying to teach it, and is confused by the simplest of things outside that scope.
We don't have AI. We've never seen AI. And we aren't likely to have AI for a long time (when we do, you will immediately and categorically know about it, as it will likely usher in a whole new era of human evolution).
That stuff that says it's AI today? It's lying. Self-driving car or face-detecting camera, it's lying. It's not capable of anything even approaching intelligence, artificial or otherwise.
Stop it.
"We've never seen AI."
We haven't deliberately built AI, and probably won't for a long time. But accidentally? Don't be so sure.
Because a funny thing happened back in the nineteen seventies:
There we were, poised on the brink of space exploration, Mankind reaching out to the stars. The Moon was ours, and Mars was next in our grasp... then... it ended.
Instead, the focus shifted overnight (literally!) to machine exploration; suddenly it was also possible to have a computer in every home. Huzzah!
Then came the explosion: thousands of satellites orbiting the planet, trading data; every rock in space has a sensor-laden probe aimed at it, each iteration more and more autonomous, spewing petabytes of data back to Earth.
ARPANET became the Internet, with each generation of browser gleaning more and more data from users, and websites greedily shipping it off to servers worldwide.
Mobile phones became smartphones loaded with sensors, but oddly, the orders-of-magnitude jump in capability was met with more and more functions, previously handled by the less capable feature phones, suddenly needing "The Cloud" to handle them.
There is a very strong push currently to put all data and transactions on "The Cloud".
CCTVs sprouted on every street corner, and agencies tasked with monitoring data suddenly started building enormously oversized data warehouses, one after the other, to contain yottabytes of data that humans almost never use.
Military weapons have shifted from human encounters to almost totally computer-controlled systems.
What happened? Core Wars, a game that was played on nearly every campus computer world-wide, where programs of increasing sophistication were pitted against each other in a battle of survival of the fittest.
These ranged from simple little programs of a few bytes each, to sophisticated, self modifying programs that battled it out over ARPANET.
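For anyone who never met Core Wars: the premise can be sketched in a few lines of Python. This is a heavily simplified toy of my own devising (real Redcode has dozens of opcodes and addressing modes; the names and rules here are made up for illustration), but it shows the "programs battling it out in shared memory" idea:

```python
# Toy Core War round: two warriors share a circular "core" of cells.
# A cell is either ("MOV", src, dst) -- copy one cell, relative
# addressing -- or ("DAT",), which is fatal to whoever executes it.

CORE_SIZE = 8000

def run_round(core, pcs, max_cycles=1000):
    """Alternate single instructions between two warriors.
    Returns the index (0 or 1) of the survivor, or None for a draw."""
    alive = [True, True]
    for _ in range(max_cycles):
        for w in (0, 1):
            if not alive[w]:
                continue
            pc = pcs[w]
            op = core[pc]
            if op[0] == "DAT":                 # executing data kills you
                alive[w] = False
                return (1 - w) if alive[1 - w] else None
            elif op[0] == "MOV":               # copy cell src -> dst
                _, src, dst = op
                core[(pc + dst) % CORE_SIZE] = core[(pc + src) % CORE_SIZE]
            pcs[w] = (pc + 1) % CORE_SIZE
    return None                                # both still running: a draw

def fresh_core():
    """Empty core: every cell is a fatal DAT until a warrior is loaded."""
    return [("DAT",)] * CORE_SIZE

IMP = ("MOV", 0, 1)  # the classic Imp: copies itself one cell ahead, forever

# Imp vs. an unloaded opponent: the opponent executes DAT and dies at once.
core = fresh_core()
core[0] = IMP
print(run_round(core, [0, 4000]))   # -> 0 (the Imp survives)

# Imp vs. Imp: both march around the core at the same pace -- a draw.
core = fresh_core()
core[0] = IMP
core[4000] = IMP
print(run_round(core, [0, 4000]))   # -> None
```

Nothing self-modifying here beyond the Imp overwriting cells as it goes, but that's the survival-of-the-fittest setup in miniature.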
Did one become self aware? Does it see modern civilization as a fertile field that produces its host bodies, eyes and ears, and produces an almost limitless supply of data to feed it?
Is the "War On Terror" its way of controlling humanity's urge to step back to a more human centric lifestyle?
BRK, something's making a whirring noise at the doo
Fanciful little tale
As if anything like that is possible
AI is at least two generations away
It will be limited in scope and intelligence
Don't forget to fully charge your cell phone
We don't have AI. All we have is some very capable computational techniques that, through learning, can achieve more than intelligent people can.
So take the best we have (Google DeepMind?) and use it to learn how to make something better. Use that to make something closer to true AI. Iterate a few times at super computer speeds and before you know it you will have the answer 42.
The argument is that an AI might decide to hack its way out of itself. However, an AI won't do something that goes against its purpose.
That's an argument which doesn't work well with NI (Natural Intelligence), why would it work with AI?
I don't see any reason why an AI couldn't become suicidal, or sacrifice itself for what it thinks is a greater good... Fear the day when your paranoid lightbulb tries to get you ^^
The IoT light in your fridge is in cahoots with the seemingly innocuous automatic food ordering* IoT fridge itself, not just out of necessity as its only viable habitat, but because the light switches back on after the door is closed and continues its dastardly plans. And you know how the door sticks a bit now? That's the fridge giving the light time to switch off before it opens, thus ensuring the secret remains safe.
* it will keep 'accidentally' ordering the wrong stuff until you spot what it's doing or it pees itself and shorts out its own circuitry, technically gaining the honour of being the first machine to die laughing.
>If you design an AI toaster to make toast, it shouldn’t want to do much else other than to make the best toast it can, and lots of it.
I seem to recall a movie with a noob getting brooms to do his sweeping. And lots of it.
Not crying Skynet, no. But even a limited-scope OCD AI could crap on the rug in certain circumstances.
"I seem to recall a movie with a noob getting brooms to do his sweeping. And lots of it."
The Sorcerer's Apprentice: a ballad by Goethe written in 1797, and a symphonic poem by Paul Dukas written in 1896–97, used by Walt Disney in his 1940 Fantasia and then filmed in 2010 starring Nicolas Cage.
That one?
"What actually happened was that two monkeys were trained to use neural implants to move a cursor across a computer display to trigger circles as they turned green. It was an easy matter to trick the poor hairy buggers to spell out a line of Shakespeare, gain some valuable public relations and have a laugh in the process."
Hmm. Sounds like someone involved had read The Müller-Fokker Effect by John Sladek.
One of the main characters loses his technical writing job to a dog who's been trained to tell if the shape on a display is a circle or an ellipse (yes, I know a circle is an ellipse) in order to write manuals for whatever kit is being manualed. (It probably wouldn't matter anyway, since nobody ever RTFM, eh?)