I went to night-school and got a diploma in metal-butt polishing.
I'll be just fine.
Enjoy your beryllium mines, chumps!
Fatigued by bluster about the implementations of yesteryear's machine learning algorithms, we spoke to Dr Sean Holden of the University of Cambridge to enthuse ourselves again. Holden, a senior lecturer in machine learning at Cambridge's Computer Laboratory as well as being a fellow of Trinity College where he shepherds along …
Well, as long as you define intelligence as "human intelligence", no argument. But is it right to assume "intelligence" = "human intelligence"? There might be more than one way to skin a human, you know...
Do I detect a hint of schadenfreude there, i.e. "oh, but it's not real intelligence, 'cause they just use brute computing power"? Well, why would it matter how they got "there"? Surely, if a super-computer can replicate intelligent behaviour 100% accurately, then it's... intelligent?
As a meatbag, I have various motivations. These motivations are generally geared to preserving my life (food: yum. High fat and sugar food: yummier! High cliff: scary. Snakes: avoid!), passing on genes, and caring for people who share those genes - in environments similar to those my ancestors lived in. Some of these motivations of mine are now not optimal for the selection pressure that led to them (easy example: donating my sperm to a bank would be a low-cost, low-risk way of passing on my genes, but I haven't the instincts to do that in the same way I feel sexual attraction and the urge to find a mate).
So, what would 'Skynet's' motivation be? And what is the difference between a motivation and programming? An AI might be programmed to be self-preserving - that would make some sort of sense for a military command-and-control system which might be under attack. An AI footsoldier might be programmed so that its own self-preservation is secondary to taking orders (or even to its own tactical reading of a situation, where its own sacrifice buys an advantage for its allies).
AI encompasses some interesting developments. But there is a category mistake made by many in the field as to what constitutes judgement (which is a very different thing from blind rule-following). Conscious emotional judgements are a part of what it is to be human. We sort of imagine that many, many very fast clock cycles, where a computer is following an algorithm and every single time is following rules, might somehow be equivalent to an emotional judgement. As though, if we have enough processing that we don't really understand at a macro level what has just been done - where the computer has implemented the kind of recursive, self-defining patterns of logic we find in neural networks - we have created something that is the equivalent of emotional judgement. But there is absolutely no evidence that this is the case. There is no way to know the computer has consciousness, and there is plenty of reason to think it probably doesn't. It's as though "not knowing" what consciousness is were sufficient to say "we probably created it" if a computer passes a Turing test - a test which has always been logically insufficient as proof of anything other than that a human can, under certain strictly limited circumstances, confuse a machine with a human.
Doing much, much more processing very, very quickly doesn't transform a category mistake into a truth. It just means the same mistake is being made over and over on a larger scale.
It's important not to say "never" with regard to advances in computing and AI. Of course we are going to make great strides. But IMO there has to be a very different kind of advance than the current limited set of tools is providing.
[quote]Holden told us of a game called Mao, “where the only rule is that you can't tell a new player what the rules are. It's a card game. So a new player in Mao has to infer, by playing, what the rules of the game are. And that is, as far as I'm aware at the minute, completely beyond any of the AI stuff that we have.”[/quote]
Damn! You pipped me to the post.. however I'll add '..play a game of Mornington Crescent /and consistently win/!' to fully complete the statement..
Yes I hate the typical AI excuse of calling it a 'moving target' because stuff that they figure out "no longer looks like AI". Bullshit. The AI people are the ones who decided that a chess playing, and later Go playing computer qualifies as AI.
Anything that is based on logic alone is not AI as far as I'm concerned - humans have the element of deductive reasoning and creativity, and that is what makes our intelligence unique from that of an automaton that plays chess or solves a Sudoku. Since it isn't easy to model solving a problem that requires creativity, I'll settle for playing a game like Mao, where the rules can't be programmed in and the main task of the AI is to figure them out for itself.
That's something a four year old could do (if motivated) so we'll worry about what a 25 year old human can do down the road. But at least that would show me something that isn't basically programming in the set of constraints and letting loose a ton of computing power to find the optimal (or most optimal in the time allowed) answer.
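The rule-inference task those comments describe can be sketched crudely. This is a toy illustration, not anything from the article or any real Mao-playing system: the hypothesis space, card encoding, and candidate rules are all invented. The idea is simply that each observed legal or illegal play eliminates candidate rules that disagree with it.

```python
# Toy sketch: inferring a hidden card-game rule (Mao-style) by elimination.
# Candidate rules are made-up predicates over (top card, attempted card);
# every observed play prunes the candidates inconsistent with it.

RANKS = list(range(1, 14))
SUITS = ["hearts", "spades", "clubs", "diamonds"]

# A small, invented hypothesis space of possible rules.
CANDIDATES = {
    "same_suit": lambda top, play: play[1] == top[1],
    "same_rank": lambda top, play: play[0] == top[0],
    "suit_or_rank": lambda top, play: play[1] == top[1] or play[0] == top[0],
    "higher_rank": lambda top, play: play[0] > top[0],
}

def infer(observations):
    """Keep only the candidate rules consistent with every observed play."""
    surviving = dict(CANDIDATES)
    for top, play, was_legal in observations:
        surviving = {name: rule for name, rule in surviving.items()
                     if rule(top, play) == was_legal}
    return set(surviving)

# Observed plays: (top card, attempted card, whether it was allowed).
obs = [
    ((7, "hearts"), (9, "hearts"), True),   # same suit was accepted
    ((7, "hearts"), (7, "spades"), True),   # same rank was accepted
    ((7, "hearts"), (2, "clubs"), False),   # unrelated card was rejected
]
print(infer(obs))  # → {'suit_or_rank'}
```

Of course, real Mao is far harder than this: the learner doesn't get a neat hypothesis space handed to it, which is exactly Holden's point.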
The procedure for creating human intelligences is well-established. It's rather time-consuming and the results are of variable quality, but it's worked well enough to produce 7 billion instances. The only reason to replicate human intelligence in a machine would therefore be to see if it's possible. The expense and difficulty tend to rule out this kind of idle curiosity.
The main aim of AI (at present) seems to be replication of human cognitive capabilities. The payoff is that the machine can then be made to exercise these capabilities faster or more reliably or in situations where meatware has problems.
So today we have the hardware to test ideas that were worked out 20 or 30 years ago. And in 10 or 20 years the hardware will be available to test some of the ideas that are being worked out right now. But how can you create something that you haven't defined sufficiently? coherently? universally?* yet?
*What is the word I'm looking for...
The thing with AI is that whenever you start getting deeper into questions around it you turn out to be doing philosophy. An entirely necessary and worthwhile activity, but not one that has the strongest record for coming up with practical answers over brief timescales. Whether philosophy will answer the AI questions or AI will answer some philosophy questions will be an interesting path to follow over the next few years.
.... SMART Humans? SCADA Systems? Exploding Weapons?
It is difficult to say where the field will be going, said Holden, "but the one thing you probably can take for granted is that it'll be almost invisible because immediately when something becomes a solved problem in AI, it stops looking like AI.”
Hmmm? It is pretty clear where a certain sector/vector/parallel of the field is going …. weaponisation, ….. for the above quote from Holden is surely a mirror/clone of …. Or would a real world fear of remote transfer of virtual command and control to unknown forces and anonymous sources cause an almighty all systems crash? ….. which was registered earlier here …. http://forums.theregister.co.uk/forum/1/2016/03/06/amd_microcode_6000836_fix/#c_2801173
And is that prime leading use and abuse and misuse not always the normal natural way of doing things with humans, and therefore fully to be expected? Is history not littered with examples making exclusive inequitably advantageous use of progressive steps and quantum leaps into new fields of private endeavour and pirate enterprise?
Of course it is, although in these new times with virtual spaces are things somewhat different, for power and energy residing in commanding fools as controlling tools are neither catered for nor possible. AI, and that which runs IT, will not countenance it.
And that is the Change which all systems will encounter and be forced to deal with in order to survive and prosper.
I Kid U Not.
And quite whether it be future considered and recognised in AI Leading IT fields as primarily a Proprietary Alien Western Delight or Erotic Eastern Confection is the Great IntelAIgent Game Play.
As of the time of this post, do the two downvotes, and certainly others, wish to continue to deny the stealthy invisible creep of the remote transfer of virtual command and control to unknown forces and anonymous sources, for is such not evidenced as a novel program already being beta tested here ........ http://www.thedailybell.com/news-analysis/central-banking-conspiracy-now-involves-canadian-basic-income/ ..... which is surely easily classified as a weapon?
And does that identify the enemy/command and control system to be vanquished ... and/or a program in desperate need of fundamental radicalisation?
Or is such collapse already well underway?
And whatever is Mark Carney thinking in proposing to continue the Great Ponzi ....... http://www.telegraph.co.uk/news/newstopics/eureferendum/12187164/eu-referendum-mark-carney-priti-patel-suffragettes-brexit-live.html
Been there, done that, and it doesn't work, Mark, and creates popular targets for public attack.
Same old, same old .... for this was telegraphed some time ago [22nd Feb 2013 0648hrs] and is proven to be not wrong ....
amanfromMars …. sharing a message on http://www.telegraph.co.uk/finance/economics/9886554/QE-may-need-to-be-raised-by-175bn-says-BoEs-David-Miles.html
A Wise Word to the Foolish in an Age of Instant Global Communication is Best Not Ignored ... for Goodness Knows what Can Happen Then and Thereafter whenever Nothing is Impossible and Anything is Probable
"David Miles said he was open to alternatives to buying government bonds, but added that he could not see any"
Excuse me, but are we expected to believe that both the Bank of England and an external member of the bank's Monetary Policy Committee see only one option, with that option being to buy junk toxic failed government bonds, whenever magic QE cash can buy anything? Are they in the Bank of England, and those members of the bank's Monetary Policy Committee, certifiably mad and/or just plain stupidly idiotic and completely devoid of imagination?
To think to try and spin to a nation and nations that there is but only one course of action, whenever that notion is so very clearly a monumental lie, is both an insane and a criminal act of colossal malfeasance, and if be not, then it most certainly should be so defined ….. and legislation introduced and passed to ensure that such nonsense is never again perpetrated against the masses.
J'accuse ….. and ponder on who is paying whom to keep the ponzi that is the present fractional reserve fiat capital banking system going, for the benefit of a few to the greater detriment of the many?
Hmmm? …. Remember, remember, the fifth of November and do not forget what Henry Ford saw and said … "It is well enough that people of the nation do not understand our banking and monetary system, for if they did, I believe there would be a revolution before tomorrow morning."
And as for the prescience of Andrew Jackson, well, one can only but marvel at the singularity in the parallel of today with that of 179 years ago in 1834 …… " Gentlemen! I too have been a close observer of the doings of the Bank of the United States. I have had men watching you for a long time, and am convinced that you have used the funds of the bank to speculate in the breadstuffs of the country. When you won, you divided the profits amongst you, and when you lost, you charged it to the bank. You tell me that if I take the deposits from the bank and annul its charter I shall ruin ten thousand families. That may be true, gentlemen, but that is your sin! Should I let you go on, you will ruin fifty thousand families, and that would be my sin! You are a den of vipers and thieves. I have determined to rout you out, and by the Eternal, (bringing his fist down on the table) I will rout you out!"
Get your act together, chaps, for your heads are on the block and the masses are wise to your shady workings and dodgy dealings. And that is not a nice position for you to find yourselves in, all alone, and with no popular friends and whole nations of enemies.
The great and significant difference today though is that there are crazier and considerably smarter opposing non-state actors at their work, rest and play against crooked executive elitist and inequitably established arrangements. And ain't that the gospel truth‽
This article discusses only 'Applied AI': speech recognition, playing games, driving cars, and so on. All Applied AI, to date, has had to be directly programmed, with a fastidious level of detail, by humans.
The true revolution will come with 'Hard AI', meaning "the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can".
In my opinion, true Hard AI will arrive when we are able to emulate (most of) the workings of a human brain, and that will probably come as the result of the "Human Brain Project" or a similar initiative.
Regarding the issue of Skynet considering humans a nuisance and enslaving/eradicating them: if you have a working emulation of a human brain, it should be relatively trivial to affect its 'reward centre'. Here an 'Applied AI' working as an 'artificial conscience' could help a lot, e.g. by providing rewards when the machine 'helps' human beings, within some set of predefined moral limits and priorities.
Testing this 'Artificial Conscience' would be a dangerous business, though.
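The 'artificial conscience' idea above could be caricatured as a reward wrapper. To be clear, everything here is invented for illustration - the tags, the penalty values, and the function are assumptions, not any real AI-safety framework: the conscience pays a bonus when an action helps a human and hard-vetoes anything crossing a predefined moral limit.

```python
# Hypothetical sketch of an 'artificial conscience' as a reward wrapper.
# All names, tags, and numbers are invented for illustration only.

MORAL_LIMITS = {"harm_human", "deceive_human"}  # predefined hard vetoes

def conscience_reward(action, base_reward):
    """Return the reward actually fed to the emulated 'reward centre'."""
    if action["tags"] & MORAL_LIMITS:
        return -100.0                          # moral limit violated: veto
    bonus = 10.0 if action.get("helps_human") else 0.0
    return base_reward + bonus                 # helping humans pays extra

safe_help = {"tags": set(), "helps_human": True}
bad_act = {"tags": {"harm_human"}, "helps_human": True}

print(conscience_reward(safe_help, 1.0))   # → 11.0
print(conscience_reward(bad_act, 1.0))     # → -100.0
```

The hard part, as the comment says, is testing it: a wrapper like this is only as good as the tagging of actions, and verifying that safely is the dangerous business.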