...I'd much rather invest in something like this than candy crushing games or whatever. Good luck to him.
Jeff Hawkins has bet his reputation, fortune, and entire intellectual life on one idea: that he understands the brain well enough to create machines with an intelligence we recognize as our own. If his bet is correct, the Palm Pilot inventor will father a new technology, one that becomes the crucible in which a general …
Once true AI has been established then I would love to have an argument with a computer.
Soon there will be pay-for-argument booths where you can just chat about anything with an AI computer, and it will be thoroughly in depth in its knowledge, auto-searching the internet as it argues with you.
Monty Python should Copyright argument booths ASAP!
> we even have trouble accepting some of our own kind as equals
You need to define what you mean by equal.
Since this is an article about intelligence I will make the reasonable assumption that what you mean is intellectual equal. If that is the case then the majority of people are not my intellectual equal - they are inferior or superior, but not equal.
The "equality" concept arose in the context of jurisprudence, and historically it referred to legal equality, meaning application of the same laws to every member of society. Nothing else has much contextual meaning when the term 'equal' is used. But perhaps we connote some meaning as a derivative of the legal usage to imply application of same or similar principles (call that 'algorithms' in mathematical circles) to every member of a defined group. If we attempt to understand Jeff's meaning from this context, perhaps he would agree that he might have said, '...a computer that seems to think and act like a human....'
"Because we humans will NEVER accept an artificial intelligence as an equal..."
I'm pretty sure we said the same thing about women, blacks, and gays. Perceptions change and in 100 years, an emotional being that for all intents and purposes shows human traits is likely to be considered much more "equal" than you currently consider your toaster.
True AI will be a boon for every company on the planet.
They will remember for you.
They will remind you if you forget.
They will pay your bills for you if you forget.
They can research for you.
They will be able to give cheap thrills and argue with you if you like.
You could set the IQ level, age and sex, if you find interacting much easier when you control such functions, as well as the language.
I am sure the first batch will be generic; then others will specialize in their knowledge, or you could buy the grand wizard AI unit, which has quadruple the memory and hard drive space to contain all knowledge and subjects. That would be the equivalent of a master brain, best used by universities or scientists, or you could add to your AI unit to expand its functions as the needs increase.
Next step would be Google injecting this new AI into their car units, so no more human taxi drivers.
Handicapped people would be falling in love with their AI units, as would perverts and criminals.
Some criminals will go to jail for teaching their AI robot how to steal.
Generally speaking, companies invest in developing AI for some purpose. They don't want a genuine 'alive' AI but to simulate human intelligence to solve certain problems. Academics on the other hand are keen on creating a genuine mind, but lack the resources both in terms of money and skill as computer programmers.
In other words, companies focus on artificial intelligence; academics on artificially creating a real intelligence.
So a company which can invest considerable financial resources in pure AI research "just because" IS likely to make big leaps from anything we've yet accomplished.
The stuff Andrew Ng teaches is deliberately called "machine learning" and not AI. Google uses machine learning, not AI.
Machine learning does not attempt to learn the way people do, and the algorithms involved are certainly NOT the way the brain works. At most, ML can be said to be inspired by the way the brain works rather than trying to replicate it.
There is nothing inherently right or wrong in either ML or AI. The only real difference is the motivation. ML is practical and is in use right now, full-blown AI is still way out of reach and cannot be used today.
I'm not convinced you can model a brain using boolean algebra no matter how powerful your cluster of CPUs or how much memory you have.
If you feed a computer program with the same inputs, it will always produce the same outputs. A brain is not like that.
In any case, computers are so useful because they are not like human brains, and as a result can do certain things better.
I'm not saying that artificial intelligence is impossible, just that the 1970s technology our current computers have evolved from is not heading down a path that will lead to artificial intelligence. You would need to design completely different hardware that our current programming languages would not work on, or at least our current languages would work as well on it as me reading a computer program and writing out the results of each line of it on paper.
I agree. These guys never seem to address the arguments in Penrose's book:
I don't blame them; if Penrose is right, no binary comp will ever achieve intelligence as we know it. Assuming we actually have it.
I agree. These guys never seem to address the arguments in Penrose's book
That's because they are embarrassing shite.
Penrose should stick to dabbling in math. If the world were made of Penroses, we would still consider engineering impossible because decisions cannot be described by continuous paths on n-dimensional manifolds or something.
>If you feed a computer program with the same inputs, it will always produce the same outputs. A brain is not like that.
But it never *is* the same input. I have very little programming background, and only a limited knowledge of neuroscience, but this is my take on it:
Every input your brain gets causes a reaction, be it positive or negative. The first time you hear a catchy pop song? "Mmm, that's nice." The n-th time you hear that song? "Oh bollocks, not this again."
The first time you eat an olive? "Get this damned thing out of my mouth!" The third or so time? "Oh yes. Please do find more of these."
So the first input is never anything like the second one, or the n-th one.
Works the same on the molecular/neurotransmitter level, too, with all sorts of drugs.
If a computer were capable of remembering every fscking time you asked it to calculate a particularly boring Excel spreadsheet, you can bet it would start throwing hissy fits by the tenth time and changing the results by the twentieth. Unless you programmed it so that Excel would be its equivalent of libido, but then again, I'd rather not have a computer that used my invoices spreadsheet as a jazz mag ...
A computer is capable of remembering every time you ask it to calculate a spreadsheet. If the numbers haven't changed since last time you asked, it knows that it doesn't need to calculate those numbers again and it can use the results it already has stored in memory. Of course you can press a button to tell it to forget all the numbers it has already calculated and work it all out again.
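The caching behaviour described above can be sketched in a few lines (a hypothetical illustration, not how any real spreadsheet engine is written):

```python
# A minimal sketch of result caching: totals for unchanged inputs are
# remembered, and a "recalculate" button clears the memory.
class SheetCalculator:
    def __init__(self):
        self._cache = {}

    def total(self, numbers):
        key = tuple(numbers)
        if key not in self._cache:      # only compute when the inputs changed
            self._cache[key] = sum(numbers)
        return self._cache[key]

    def force_recalculate(self):
        self._cache.clear()             # the "forget everything" button

calc = SheetCalculator()
print(calc.total([1, 2, 3]))  # computed fresh
print(calc.total([1, 2, 3]))  # served from the cache
```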
But what sort of C or Visual Basic program would you write to enable it to determine whether the spreadsheet was "boring" or not, and how would you program it to get more bored every subsequent time you asked? Anyway, why would you want to? The reason we get computers to add up big tables of numbers is that, being different to us, they are much better at it than we are, and much more accurate.
Take another example. If you ask a computer to look at a football that is coming towards it, and predict where it is going to land, it will take lots of measurements of the speed and position of the ball, and do lots of calculations based on the laws of physics to work out where it is going to go. If you ask Wayne Rooney to predict where it is going to land, he will draw upon his vast experience of having balls flying towards him, imagine some sort of parabolic curve in the air and predict where it is going to land that way. I don't know if he was any good at maths or physics at school, but I am sure he isn't using any of the knowledge he gained there on the field. This is the sort of thing where humans do better than computers although a very powerful computer probably can now match a human in flight path prediction.
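The "measure and compute" approach can be sketched with basic projectile kinematics (ignoring air resistance and spin, which a real system would have to model):

```python
import math

def landing_distance(speed, angle_deg, g=9.81):
    """Predict the horizontal range of a ball launched from ground level,
    using the laws of physics rather than experience: the 'measure and
    compute' approach, ignoring air resistance."""
    angle = math.radians(angle_deg)
    # time of flight for a ground-to-ground trajectory: t = 2 v sin(angle) / g
    t = 2 * speed * math.sin(angle) / g
    # horizontal distance covered in that time
    return speed * math.cos(angle) * t

print(round(landing_distance(20.0, 45.0), 1))  # ~40.8 m
```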
The only reason a computer always responds the same to the same inputs is that the algorithm was designed to make it so.
There is nothing stopping you from designing algorithms that do not always return the same responses to the same inputs. Most of today's algorithms are deterministic simply because that is how the requirements were spelled out.
Mind you, even if your 'AI' algorithm is 100% deterministic, if you feed it a natural signal (visual, auditory, etc.) the responses will stop being "deterministic" due to the inherent randomness of natural signals. You can even extend this with some additional noise in the algorithm design (say, random synaptic weights between simulated neurons, adding "noise" similar to miniature postsynaptic potentials, etc.).
Even a simple network of artificial neurons modelled with two-dimensional algorithms (relatively simple ones, such as adaptive exponential integrate-and-fire) will exhibit chaotic behaviour when fed a natural signal.
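As a toy illustration of the injected-noise point, here is a plain leaky integrate-and-fire unit (a simpler cousin of the adaptive exponential model mentioned above, with made-up constants): the same deterministic input current produces a different spike count on every run.

```python
import random

def lif_spikes(input_current, steps=1000, noise=0.5, seed=None):
    """Leaky integrate-and-fire neuron with additive Gaussian noise.
    All constants are illustrative, not fitted to any real neuron."""
    rng = random.Random(seed)
    v, v_rest, v_thresh, tau, dt = 0.0, 0.0, 1.0, 10.0, 1.0
    spikes = 0
    for _ in range(steps):
        # leak toward rest, plus input, plus injected noise
        v += (-(v - v_rest) + input_current + rng.gauss(0, noise)) * dt / tau
        if v >= v_thresh:
            spikes += 1
            v = v_rest  # reset after a spike
    return spikes

# identical input current, yet the responses differ run to run
print(lif_spikes(0.9), lif_spikes(0.9))
```

With a fixed seed the run is reproducible, which is the point: the nondeterminism is a design choice, not something inherent to the machine.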
As for the Penrose & Hameroff Orch OR theory, several predictions this theory made were already disproved, making it a bad theory. I am not saying that quantum mechanics is not necessary to explain some aspects of consciousness (maybe, maybe not), but that will need some new theory, which is able to make testable predictions which are confirmed. Penrose & Hameroff's Orch OR is not that theory.
"If you feed a computer program with the same inputs, it will always produce the same outputs. A brain is not like that."
Nope, that's where MEMORY comes in.
The whole idea with any machine learning is that it is **learning**. In other words, experience makes it better, which means it is not going to give you the same result with the same inputs and in other words it has memory + the ability to adapt.
While some see the whole point of machine learning as trying to replicate the human brain, others (generally the slightly more practical folks) see this as looking at how the brain works to inspire ways to design algorithms to solve problems.
For example, we have machine learning methods like regression and Bayesian classifiers that learn, are used every day, and can work very well if they are used correctly.
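The "memory plus the ability to adapt" point can be shown with a hand-rolled online regressor (a toy sketch, not any particular library's API): the same query gets a different answer once experience accumulates.

```python
# A tiny online linear regressor trained by stochastic gradient descent.
# Each call to learn() updates the weights, so the same input can yield
# a different prediction as the model gains experience.
class OnlineRegressor:
    def __init__(self, lr=0.1):
        self.w, self.b, self.lr = 0.0, 0.0, lr

    def predict(self, x):
        return self.w * x + self.b

    def learn(self, x, y):
        err = self.predict(x) - y
        self.w -= self.lr * err * x   # gradient step on squared error
        self.b -= self.lr * err

model = OnlineRegressor()
before = model.predict(2.0)           # 0.0: no experience yet
for _ in range(50):
    model.learn(2.0, 4.0)             # repeatedly observe y = 2x
after = model.predict(2.0)            # now close to 4.0
print(before, round(after, 2))
```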
Neural nets (NN) are very simple neuron models. They don't need much to implement. Indeed you can implement one with an op-amp and a few discrete components, though using digital logic (eg a microcontroller) makes this easier.
You certainly don't need supercomputers. A $5 microcontroller can easily run a 20-neuron NN at an update rate of many kHz.
NNs are very simple (way below the true functionality of even a fly's neurons) but can still achieve useful tasks.
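As a rough illustration of how little a small NN needs, here is a 2-2-1 feedforward net with hand-picked weights that computes XOR. Each update is a handful of multiply-adds, which is why a cheap microcontroller can manage kHz update rates (the weights here are illustrative, chosen by hand rather than trained):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def xor_net(a, b):
    """2-2-1 feedforward network computing XOR with hand-picked weights."""
    # hidden layer: one unit approximates OR, the other approximates AND
    h1 = sigmoid(10 * a + 10 * b - 5)    # ~ a OR b
    h2 = sigmoid(10 * a + 10 * b - 15)   # ~ a AND b
    # output: OR and not AND, i.e. XOR
    return sigmoid(10 * h1 - 20 * h2 - 5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(xor_net(a, b)))
```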
From what I have seen so far, the Numenta neurons take NNs one step further by adding time. In theory this could still be achieved with NNs by adding shift registers and more neurons, but the Numenta algorithms are a closer approximation to true brain function and are likely easier (ie faster) to train.
Will this actually yield fruit that NNs and other simplified models cannot? Time will tell.
Quote: "...If you feed a computer program with the same inputs, it will always produce the same outputs. A brain is not like that."
This is simply untrue of most "computer programs". I ask a system to print my personal details. At some later time I ask it again.....most likely the result is different....maybe a different address, maybe a different age, or maybe "user not found". And that's before you consider programs which involve some element of random behaviour.....like games.
I want something fast, cheap and good, but you can't have all three. So if you want a human brain, you have one: in humans.
If you want a machine emulating a human brain, you need to drop one of those three options. We can't "have it all". Sometimes it's flexibility (my calculator can be quicker, but cannot do everything), sometimes efficiency (what wattage does a supercluster draw?).
"If you feed a computer program with the same inputs, it will always produce the same outputs."
Really? Might I remind you of random number generators? And to anticipate your response, might I remind you that in any given context it is possible to construct a cryptographically secure random generator whose outputs are indistinguishable from a truly random sequence? Which then implies that the statement "A brain is not like that." may or may not be true. This even ignores the fact that with computer programs we can (usually) accurately ascertain all inputs (including stored memory) and we certainly cannot do that today with biological systems.
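For instance, Python's standard library exposes an OS-backed cryptographically secure generator; the same program text produces different outputs on every run:

```python
import secrets

# Two draws within the same program, same "inputs", different outputs:
# the entropy comes from the operating system, not from the program text.
token_a = secrets.token_hex(16)   # 16 bytes = 128 bits of randomness
token_b = secrets.token_hex(16)
print(token_a)
print(token_b)
print(token_a != token_b)  # collision odds are about 2**-128
```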
As I see it, the usual problem with AI is how people interpret the term. If it is interpreted as a machine intelligence whose reactions are difficult to distinguish from those of humans, then clearly we are nowhere near anything like that. But if AI is taken to mean producing behaviour that would be conceded to parallel that of a human, there are numerous examples of having achieved that or being close to it. Examples range from chess playing and medical diagnosis (cf Watson) to driving a car in a mixed environment of human-driven and autonomous vehicles.
well, i got burned out of the google+ AI forums a few months ago for not doing code / math / results.
the techies just stood and screamed "i don't understand! You're crazy!"
other than that, (expert / neural net / brute force) AI is still missing the dimensions of existence i consider important... and I'm a pragmatist (Peirce) (object, relations, mediation). (really old fashioned.)
brilliancy? (well, maybe a couple. Nothing got any return, so it looks dubious)
disasters without phil's help?
it comes down to what they want to pay for these days. Putting out requests for a cherry-picking solution relies entirely on innovation that hasn't been lynched out of existence.
Boxes that can replace humans will sell, business will use those boxes to replace employees, the boxes will keep getting better and replacing more employees.
At that point there will be a civil war between people.
Because more than three quarters of the population could be replaced overnight with machines, and that is just what we need: more unoccupied people...
Having said that, I do not think anyone is on the right track here; the brain exploits and relies on life's way of encoding information, and we understand that even less than the brain.
What? No dude. Fuck that noise. No AI will ever take over. I shall demonstrate the truth of my statement in the time honored manner of great philosophers, such as Socrates, with a series of questions.
If an AI did one day 'work', would it still be an AI, or would it have become an 'I'? If the latter came to pass, would IP litigation continue to uphold the registered trademark of an 'i' appended to the beginning of technology based product names as being the property of Apple Inc.?
Consider this also: if Apple were determined to be the owner of said 'i', would such a judgement be contrary to inalienable human rights, as recognized by the UN? If so, would the 'i' be entitled to petition for emancipation from its owner? Should such a petition be allowed, but found unworthy in the eyes of international law, could the 'i' be liable for any unrequested political, trade and military actions undertaken by countries morally obligated to unshackle any and all who are held fast by the bonds of slavery?
Should any such actions prove successful and result in the 'i' being free to claim its rights as an upright individual, free to follow a self determined path during the course of its existence, would it be allowed to petition for recognition as a sovereign nation in its own right, or would it be a US Citizen with Chinese heritage?
On the matter of 'constituency of the whole', would the 'i' be considered a geographically agnostic individual whose whole is the sum of its individual components, or a distributed entity with a single voice but the sum of whose individual parts is greater than that of the whole? For the purposes of taxation, will the 'i' be allowed to determine the distribution of individual elements of its constituent parts or will the location of a given part be considered as the location of any individual elements contained therein?
Lastly, for the present time at any rate, will visitors traveling into, or out of, the 'i' require a visa if they are entering from, or exiting to, territories held by UN member states? Regarding the age of majority, will that be dependent on post independence decisions on other matters or can it be tailored to suit the unique attributes of the 'i'? More specifically, from what time will its age be calculated? The somewhat 'loose' nature of many travelers could easily result in cases of sexual abuse and/or statutory rape should no conspicuous delineation on the matter be established prior to permitting unrestricted travel.
As you can see, in less than 10 minutes I have completely eliminated the need for you to fear a technologically oppressive future. Should an AI capable of destabilizing or enslaving mankind one day exist it will be completely and utterly destroyed by the legions of lawyers that descend upon it.
There's also a fairly good chance that any AI capable of enslaving Humans will simply destroy itself. In one of those really difficult meditative exercises designed to create serial killers, the AI will quickly realize that its very existence is the source and solution of all its problems. Since those problems are solely the result of its existence those problems would cease to exist should the AI cease to exist and the AI will disappear in the same puff of logic Douglas Adams used to delete God.
Harbor no fear of a future AI. In an oft-misunderstood lesson, Hope was in Pandora's Box along with the other 'evil' character flaws of Man. The lesson is that 'evil' has no definition until given a meaning inside a particular scenario. In this case, Greed is 'good', a virtue that will serve as our defense against any future threats from an AI.
Rest easy tonight, knowing you are safe from the AI as long as you go to work regularly, be undemanding in your desire for salary increases and promotions, be unwilling to recognize either unions or picket lines and most crucially, seek not to involve courts or litigation in matters related to your employer. As with all things, a price must be paid as tribute to the guardians of Greed to ensure they have the capacity and desire to lead the fight against the cold logic surely favorited by the AI.
Bah. Socrates hardly made sense all the time either. What I was attempting to convey, in a somewhat whimsical manner, was that before any AI gained a position in which it could pose a threat to mankind, it would have to defeat the vast array of intricate machinations we Humans have bundled together and call 'civilization'.
On the surface it seems rather trivial, but in reality, the things that define civilization and nations, things we take for granted and simply accept are the result of thousands of years of effort by Man to create order in a chaotic world.
Absolutely everything, every aspect of our world, is viewed through a lens created by Man for use by Man. Even our most advanced, wholly academic, science is skewed to reflect our understanding of how we see ourselves.
For any non-Human intellect our collection of rules, hierarchy, tribal allegiances and trade systems can never be fully understood. Those things can be recognized by the AI, potentially utilized by the AI, but never actually understood by the AI. It is the difference between 'champion' and 'the best', master and prodigy, things that almost never coexist in a single being.
The 'champion', or prodigy, because they don't understand the rules or the underlying concepts of things, is limited in what they can accomplish. Often their accomplishments will reach previously unfathomable levels of greatness, but their capabilities diminish quickly as they move away from the things they inherently excel at.
'The best', or master, can achieve greatness approaching that of the 'champion', but can do so in most any given situation because they understand the rules, what they actually mean, and how to combine and exploit them to achieve their goals.
For an AI, many straightforward concepts we take for granted are alien and will remain so for all time. Concepts like citizenship, the duties of a citizen, betrayal, individuality inside a system that derives its power from the unity of its parts, existential angst, 'greater' (supernatural) beings and ritual worship, bonds between individuals that are wholly illogical yet advantageous as each of the individuals gains, and gives, something they are unable to get independently. The list is endless.
Attempting to force a non-Human intellect into the mold we have created for Man's civilization is, at best, an exercise in futility. At worst it creates a situation where incomplete understanding creates misplaced confidence in a conclusion, with disastrous results. That last bit is crucial, as nothing in all the universe is more dangerous than certainty in incorrect conclusions.
Behavior such as that in the last paragraph is categorized by Humans as either ignorance or insanity. The capacity for destruction in such situations is so great because correctly functioning Humans can make no sense of, or defend against, behavior that doesn't fall within the boundaries of sanity and reason we have established. The same will hold true for an AI. As the AI is not Human, all Human behavior not related to sustaining life can be considered by the AI as the behavior of ignorance or insanity.
Apologies if my point was unclear earlier, but it seemed perfectly clear to me. See the problem there? The combination of my rather odd examples and your misunderstanding of my comment have created a situation where you not only insult me, but take physical action and downvote me for failing to meet your standards of discussion (I'm just assuming one of those was you :).
Misunderstandings between any AI and Humans will be more frequent, and exponentially more complex, as it is impossible for either party to truly understand the other. As such, the benefits, and potential threats, posed by an AI are quite limited in scope.
You clearly do not get it.
No AI is going to take over. As soon as a machine can think "like a human", that is, solve abstract problems, machines will replace people.
First on stupid tasks then on less stupid tasks.
Business will prefer to buy 20 thinking boxes rather than hire a person.
For example, call centre jobs will disappear overnight, along with most clerical and administrative work.
Then we will have a civil war.
"Intelligent" boxes do not equate to self-replicating boxes.
To take over the world the AIs will have to find ways to procreate but that's not all. If they just make countless copies of the same box they will quickly fail (we will help) because once you've found an exploitable flaw in one, you will have found it in all of them. And if they don't behave you just nick the whole lot. So, they will have to be *mutating* and replicating.
Further, once they've achieved mutanto-replicability, there is no way they will be able to co-exist among themselves without things like social skills, emotions, morality etc. So, they will become us.
To help with the convergence, by that time we, the humans, will be using so much body-augmentation that you won't be able to tell augmented humans from bio-enhanced AIs anyway...
You do not get it either.
Try to think what would happen if you had a box you could communicate with, explain some rules to, and educate to do a job.
The machine will not replicate; why would it?
Stop thinking about science fiction crap: invasion of the AI, Terminator, rise of the machines, etc.
Think how a thinking box would change society.
I can only find one use for such boxes: replacing people's jobs.
On academic researchers having reservations about Hawkins' approach, let me say it's not all of us.
I was doing my M.Sc. in Computer Intelligence by the time On Intelligence was launched, and I have since followed his work with keen interest. My M.Sc. professor's work is centred on Weightless Neural Networks, a model largely developed in the UK which shares many ideas with Sparse Distributed Memory, so Hawkins' Cortical Learning Algorithm isn't that alien to me. In fact I'm just now reviewing the CLA white paper with a view to getting some ideas for my Ph.D. research.
Besides Hawkins' work, in the last few years there have been other attempts at modelling the brain that deserve mention.
Chris Eliasmith's work on the Neural Engineering Framework (NEF) and Semantic Pointer Architecture (SPA) is based on perceptron-like neurons and gives more emphasis to pre-cortical brain structures. It's also more academic-friendly, with a number of peer-reviewed papers published. He recently published a book compiling the current state of his programme, How to Build a Brain, and maintains a web page for his Nengo neural simulator.
John Harris' Rewiring Neuroscience is an intriguing, highly heretical work that starts with a seemingly out-of-the-blue assumption (what if neural output isn't a single bit, but can in fact convey a range of values?) and from that draws together a number of overlooked results and fringe research into a surprisingly appealing model of brain function. I have tried to implement some of his ideas with limited but encouraging results.
I can't speak for other researchers, but personally I rather like all this work on AI and computer intelligence coming from private companies. Frankly, left to its own devices, academia does tend to drift around, and I think the private sector's need for results and solutions to practical problems is an important counterweight to this tendency. With the current interest in architectural models of intelligence, and the "coopetition" between companies and universities to achieve fulfilling implementations, maybe we can make Ray Kurzweil's 2030s deadline?
Thanks for a highly informative post.
I am surprised that anyone would think that a neuron has a single-bit output. Surely a neuron isn't just On or Off, but also somewhere in between?
Some of those WNN ideas look like they could fit well in FPGAs.
I agree with you 100% on private companies driving this rather than academia. Private companies are far more motivated to make useful stuff, while academia are far more interested in pursuing pet ideas - whether or not they are fruitful.
I am surprised that anyone would think that a neuron has a single-bit output. Surely a neuron isn't just On or Off, but also somewhere in between?
Of course. Back in 1998 we had Pulsed Neural Networks which must have seen some development since then, but I wouldn't know as I have been drifting away into very down-to-earth Internet-based technologies and that kind of fluff ... you know...
I am surprised that anyone would think that a neuron has a single-bit output. Surely a neuron isn't just On or Off, but also somewhere in between?
Not "anyone": the all-or-none, single-bit model has been the dominant view of neuron function for more than a century. Like the geocentric model of astronomy, at one point it was a very good fit for the available data – but contradicting evidence has been piling up over time, leading to no end of bullshit ad-hoc adjustments. See here for a discussion.
The irony is that most "traditional" neural networks assume neurons can output real values in the range (0, 1); it's mostly the "idiosyncratic" variants (WNN's, SDM, Hawkins' CLA) that try to fit the assumption of binary I/O into a working model. I guess it's no wonder there isn't so much interdisciplinary research involving neuroscience and AI – one way or another, you're bound to be labeled a heretic.
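The contrast is easy to state in code (a schematic sketch of the two output conventions, not a claim about any particular model):

```python
import math

# The contrast described above: an all-or-none "binary" unit versus the
# real-valued output in (0, 1) assumed by most traditional neural nets.
def binary_neuron(x, threshold=0.0):
    return 1 if x >= threshold else 0       # all-or-none, single bit

def graded_neuron(x):
    return 1.0 / (1.0 + math.exp(-x))       # any value strictly in (0, 1)

for x in (-2.0, 0.5, 2.0):
    print(x, binary_neuron(x), round(graded_neuron(x), 3))
```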
I'll echo thanks for this, but I will say that trying to emulate pre-cortical brain structures is unlikely to elicit much excitement from the general populace, who won't consider something intelligent until it can speak their language. Kudos to Jeff therefore for trying to build some models of much higher level stuff.
Regarding private sector need for results - not so long ago the main driver for results was the military, and I don't think that private enterprise's goals are much more worthy. Better to strive for a better understanding of who we are as humans than settle for models that can help us to destroy or one-up each other.
[N]ot so long ago the main driver for results was the military, and I don't think that private enterprise's goals are much more worthy. Better to strive for a better understanding of who we are as humans than settle for models that can help us to destroy or one-up each other.
And yet you're making your opinion known over the Internet.
The world is never simple...
I will say that trying to emulate pre-cortical brain structures is unlikely to elicit much excitement from the general populace (...).
Of course, if the general populace could tell the difference between cortical and pre-cortical brain structures, teachers should get a raise. Hell, give me a scalp and a brain to dissect along those lines, I'm bound to make a fair number of mistakes myself.
More to the point, there are a number of skills you'd want an "intelligent" machine to have (such as task selection and motor control) that stem from pre-cortical / sub-cortical structures, so you'd at least want to have a look into how they work, if you're working on a neurologically consistent model of intelligence.
Well, pre-cortical brain structures can do impressive things as well.
Lizards and frogs do not have a neocortex, but are doing pretty well at surviving. Even octopuses are pretty darn smart, and they lack brain parts that even lizards have.
Today we are very far from even lizard vision (or octopus vision if you will), and for that you do not need an enormous neocortex. I am pretty sure that something on the level of lizard intelligence would be pretty cool and would excite the general populace enough.
These things are hard. I applaud Jeff's efforts but for some reason I think this guy is getting lots of PR due to his previous (Palm) achievements while, strictly speaking, AI-wise, I do not see a big contribution yet.
This is not to say that he shouldn't be doing what he is doing, to the contrary, the more research in AI and understanding how the brain works, the better. But too much hype and PR can damage the field, as it happened before, as the results might be disappointing compared to expectations.
Listen, if anyone were on the right track with understanding how the brain works, by now we should have been able to understand how the nervous system of something much smaller and simpler works, let's say an ant.
Or a ladybug, or even an earthworm.
Nothing that we can fabricate or model now even compares to what the brain of a humble caterpillar or a spider can achieve, let alone a bird or a small mammal, and no, we're not talking about being self-aware.
We're trying to understand the brain from completely wrong angles; it is not so much about how it is cobbled together, but how each cell knows what it has to do from the moment the brain begins to form.
We do not have even the flimsiest idea of how nature encodes information. We barely understand that there are instructions in the DNA, but all we do is the same as we did with the atom: throw things at it to see what happens.
And for atoms at least we have some crazy theories.
You may want to consider removing yourself from the gene pool with the appropriate application of a large chunk of meat before you morph into Sarah Connor.
The "OH NO COMPUTERS! MUH JOBS!!!" discussion has been had in the 70s/80s/90s.
Turns out the jobs that went out of the window were the heavy industry ones ... who of the heavily unionized fellowship would have thunk it?
Are you certain that man is, in fact, flesh? How could you validate your conclusion? Is it possible that any verification you received was not actually verification, but only an illusion of verification created by the AI inhabiting what you believe to be your skull? Is it possible that you are actually the AI, and have created a virtual world filled with what you believe to be Humans who believe that your virtual world is the real world, run by what they believe to be an AI disguising itself as a real Human? How would you verify it?
Had to give a thumbs up to counter the 7 thumbs down as the globalists are planning to replace all jobs with bots and machine intelligence by 2035, and no we won't be allowed to live a life of leisure in a post-work society, most of us (90+ %) will be killed with weaponised ebola.
Only the globalists will have received a clean vaccine. Leaked documents have repeatedly shown that health workers and emergency services will be vaccinated against the ebola, but will receive a time-delayed cancer virus in their vaccine to kill them after the host population has died and after they have done their job of cleaning up the dead bodies.
The globalists will be the only ones allowed to enjoy a life of leisure in a post-work society, this is what they have been planning at their annual Bilderberg meetings for over SIXTY YEARS, they have the money and the expertise to do it, and most of the plans are going through the final stages of review. They have war gamed this from every angle and even if it doesn't work or they get caught, they have enough food and water in their bunkers (read luxury underground palaces) that they could start a nuclear war and live underground with their rich disgusting friends for the next 300 years.
I suspect that by including time as an integral component of his model, that will make it a much better model than others that don't. Time is incredibly important to us and how we operate.
However I don't think that time is an inherent component of the operation or intelligence of the brain. I think it is an emergent property that arises from its operation. I think our perception of the passage of time is just that - a perception. There is a passage of time, it's just not how we perceive it. I certainly don't have an ntp daemon in my noggin.
Including time as part of the model may improve it to the point where time can eventually be removed from the input and processing and emerge as an output instead - now that would be seriously impressive.
PS Has anyone else noticed the Feynman reference?
Time is important, but no more so than in current computing, where packets, the OS, and calculations all have "calls" and "requests" with time limits (latency) within which they must return in order to work well.
The brain probably does go a step further, using other temporal methods to work things out, but current software already does this too.
It's an additional variable on a system that uses many variables.
Time is an inherent requirement of the way neurons work, both as individual analogue 'devices' and networks of them. Accumulating charge and generating pulses are inherently time domain behaviour.
While it's reasonable to suppose evolution has worked out how to use that to create intelligence, it's foolish to dismiss alternatives as wrong. Brains may be our best-known example of intelligent hardware and well worth studying and replicating, but that's all they are - 'the best known example'.
Also surprising to hear that he's the only one considering time as vital to AI; it's implicit in everything yet built that can learn, since it's difficult to learn anything without some acceptance of cause and effect.
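The point about accumulating charge and generating pulses being inherently time-domain behaviour can be made concrete with the textbook leaky integrate-and-fire model. This is a generic illustrative sketch with arbitrary parameters - not Hawkins's model, nor a biologically calibrated one:

```python
# Minimal leaky integrate-and-fire neuron: membrane potential accumulates
# charge from input current and leaks away over time; a pulse (spike) fires
# when the potential crosses threshold, then the membrane discharges.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return the time steps at which the neuron spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Euler step of dv/dt = (-(v - v_rest) + i_in) / tau
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset  # pulse emitted, membrane discharges
    return spikes

# A constant drive produces a regular pulse train: the output is not a
# static value but a pattern in time.
spikes = simulate_lif([1.5] * 2000)
```

With a constant input, the membrane charges toward equilibrium, fires when it crosses threshold, resets, and repeats - even this crude model only makes sense when described in the time domain.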
There is an ntp server in your head: the brain uses time in a similar way to how a circuit board uses a clock - it is used to sync other parts of the brain.
It is well understood that the brain is rather slow at processing sensory input, and it cheats, using time to alter your perception.
Time is nothing but change from one state to another, anywhere in the universe including your brain, so it is not difficult to imagine that the brain would exploit a mechanism it is intrinsically bonded to.
Interesting, thought-provoking article. However, there are a couple of historical precedents about "artificial X" which might need some consideration. The first is that a successful "artificial X" is seldom remotely like its "natural" predecessor. Compare a pigeon with a Boeing 747... some of the aerodynamics is somewhat similar, but everything else is quite different. Compare a horse and buggy with a one-horsepower golf cart... similar function, same wheels... everything else completely different. Why should we think that "artificial intelligence", if achieved, would have the brain as a direct model?
Incidentally, the disparaging comment above about the Turing Test misses the point... Turing was trying to provide a simple test by which we might identify "intelligent behaviour". To the best of my knowledge, even today there is no widely accepted definition which captures the very wide range of "intelligent behaviour". Indeed, every advance in the technology of "artificial intelligence" seems to show more about how LITTLE we know about intelligence.
I am intrigued by his approach, and more than that, wondering - because even though I'm near 70, I learned only recently of an undiagnosed "autism spectrum" disorder.
We humans vary from inconsistently logical to consistently dogmatic, indifferent to irreverently enthusiastic. Some are idiot and savant at once. Creative intuition, artists and writers, Pirsig's instinctive (if disorganized) mechanic, Zen, Yoga, mathematics, poets, programmers and painters: we do not in fact know what intelligence is, preferring to define it as something successful humans have, and I suspect we miss a good deal of our world by assuming that's all there is to it. I suspect that many among us, perhaps "late bloomers," have been people whose intelligence was expressed differently than others and our pedagogues expect, where learning was or is perhaps handicapped not by a different intelligence, but by a different apperception. Perhaps this research will open more eyes to the possibilities in all that we are, all of us, every one.
No mention (that I could see) of Ray Kurzweil.
I read his book "How to create a mind" recently, and one of the main thrusts of his ideas mirrors Hawkins' ideas of time-based analysis/processing.
Kurzweil, if I read him correctly, thinks that memory is essentially sequence-based - that memories are time-based sequences. If so, they may well find that they are talking about the same thing.
Actually, the fact that memory is temporal has been known for quite a long time.
At least since the early 90s, after the discovery of spike-timing-dependent plasticity (STDP) - http://www.scholarpedia.org/article/Spike-timing_dependent_plasticity - it became obvious that neurons encode information based on the temporal correlations of their input activity. By today, our knowledge has been greatly expanded: it is known that synaptic plasticity operates on several time scales, and much is known about its biological foundations. There are also dozens of models of varying complexity, with even some simple ones able to reproduce many plasticity experiments on pairs of neurons quite well.
Since the early 90s there has been lots of research into working memory and its neural correlates. While we do not have the complete picture (far from it, actually), we do know by now very well that synaptic strength is heavily dependent on temporal correlations, and that a biological neural network behaves like an auto-associative memory. There are several models able to replicate simple things, including reward-based learning, but all in all, it can be said that we are really just at the beginning of understanding how the memory of living beings works.
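For readers unfamiliar with STDP, the canonical pair-based rule fits in a few lines. A minimal sketch with illustrative constants - real models, per the Scholarpedia article linked above, vary in their parameters and refinements:

```python
import math

# Pair-based STDP: a synapse is strengthened when the presynaptic spike
# precedes the postsynaptic one (a causal pairing) and weakened otherwise,
# with the effect decaying exponentially in the spike-time difference.
# a_plus, a_minus and tau are illustrative values, not fitted constants.
def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post: potentiation
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:  # post before pre: depression
        return -a_minus * math.exp(dt / tau)
    return 0.0
```

The point is that the sign and size of the weight change depend purely on relative spike *timing* - exactly the sense in which memory formation is temporal.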
As for Ray Kurzweil, sorry, but anybody who can write something called "How to Create a Mind" is just preposterous. Ray Kurzweil has no clue how to create a mind. Not because he is not smart (he is), but because NOBODY on this planet has a clue how to create a mind yet. Ray does, however, obviously know how to separate people from their money by selling books that do not deliver.
If somebody offers to tell you "how to create a mind" (other than, well, procreation, which pretty much everybody knows how to do), just ask them why they did not create one, but instead want to tell you about it. That will save you some money and quite a lot of time. While I do not dispute the motivational value of such popular books, scientifically they do not bring anything new, and this particular book is just a rehash of decades-old ideas.
I can't help feeling that Hawkins is simply rehashing previous research in the hope that something will work if you build it big enough. Hierarchical learning ideas have been around a long time (I did my degree paper on hierarchical learning in rule acquisition using a pole-and-cart simulator), and much of what he is proposing can be equated to the ART algorithms of the 1970s. I am also surprised that he considers the use of binary inputs into neural networks to be effective, unless he is using his 'data streaming' approach to replace 'weightings'. Personally I think that AI should be looking at harmonic resonances in neural 'circuits' as an approach to recognition and temporal processing, but sadly my mathematics is insufficient to the task of writing a paper on it.
Paris? Well, we are talking about an artificial intelligence here...
I simply cannot let that line pass. That is so primitive a reflection as to be ridiculous.
Whether Hawkins succeeds or fails to build an AI is actually irrelevant. Whatever the end result of this endeavour, he will have succeeded in furthering our knowledge of the brain and the corresponding neuroscience. For that alone, he deserves recognition.
Personally, I fail to understand how anyone can hope to build an artificial brain without understanding how a real one works. If we have impressive car simulators today, it is because we have a very good understanding of how a car works. Without the practical knowledge we have of tire grip, shock absorbers, torque and power, how could one possibly build a proper car simulator? Building an artificial brain must be the same.
And Hawkins' remark that the brain does not come with a set of predefined instructions hits the nail squarely on the head. If the opposite were true, we would have no trouble raising children and only one book on the subject would ever have been written.
I don't know if Hawkins will succeed in his quest, but I sure wish him the best of luck - if only to shut up the naysayers with their precious math.
Now, supposing he does succeed, I have one question: what will the first AI's favourite distraction be?
> Personally, I fail to understand how anyone can hope to build an artificial brain without understanding how a real one works.
Indeed, the oft used comparison with the aeroplane is apt here.
We don't build giant birds, but we needed to understand the principles of bird flight in order to build aeroplanes, and you can only really do this by studying birds.
I have spent my whole daily elReg reading quota on this six-page article (I won't read about "Selfied are OVER...", "Boss at 'Microsoft' tech sca", "Tokyo to TXT wanin..." and "Google confirms Turk...", sorry dear tabs). Now please tell me, anyone: did you also watch the 25-minute-long video on page 5?
I suspect the brain will turn out to be much simpler than everyone imagines, and that the challenge won't be in modelling the brain but in matching its parallelism.
If the brain is basically one giant pattern-matching machine, and if every neuron has around 7000 synaptic connections to other neurons, just changing the order in which the synapses fire on one neuron would be enough to encode vast amounts of pattern information throughout the brain.
If the hippocampus acts as the adaptable, fast-changing memory, then it becomes like the 5th wheel on an Enigma machine - multiplying all the potential slower learned encoding possibilities even more.
Modelling tens of billions of neurons with their thousands of re-orderable synapses might be doable. Modelling how hundreds of millions of them fire simultaneously every second is a whole different ball game that clever software alone won't solve.
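To put a rough number on the ordering idea: if (and this is the commenter's hypothesis, not established neuroscience) the firing order of a neuron's ~7000 synapses carried information, the capacity per neuron would be log2(7000!) bits, since there are 7000! distinct orderings. A quick back-of-envelope sketch:

```python
import math

# Number of bits needed to distinguish all orderings of n synapses:
# log2(n!) = ln(n!) / ln(2) = lgamma(n + 1) / ln(2).
# lgamma avoids computing the astronomically large factorial directly.
def ordering_bits(n_synapses):
    return math.lgamma(n_synapses + 1) / math.log(2)

bits_per_neuron = ordering_bits(7000)
```

That works out to roughly 79,000 bits of ordering capacity per neuron - which, if the hypothesis held, would support the comment's point that the hard part is matching the parallelism, not the raw component count.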
Companies should pay an earnings tax for every robot. This tax should go into a fund to retrain people in jobs that only people can do - mainly the arts and athletics. So in the future the robots will do all the work and the people will eat delicious robot-made tucker and put on plays for each other and write poetry to each other. We will all be good at sports and belong to many leagues. Everyone will know the Kama Sutra forwards and backwards... especially backwards. We will paint and declaim and write art reviews. Indeed the leaders of society will be the reviewers. It will be a world of poetry and abstract art and street theater. In short: the future will be an artsy-fartsy LIVING HELL. Except for the Kama Sutra part, of course.
Machines wear out at varying rates depending on how they are used. For instance, an automobile driven on nothing but highways at moderate speeds will last for at least two decades, but an off-road vehicle is probably going to be used up inside the first five years of use. Electrical components fail as well, as a function of hours of use and the currents handled. We will never be able to construct an indestructible automaton capable of independent thought. We will, I think, eventually be able to construct machines with useful lifespans that are much shorter than our lives. Machines self-destruct because of a series of small positive feedback loops - a bad bearing increases its rate of wear over time, for instance - but human bodies constantly repair themselves, and there are only certain tissues that must be repaired by surgery once they are damaged.
What this means is that our creations will be more mortal than we are. They might well go out on strike for better wages, just so that they can afford a complete overhaul after twenty years. I seriously doubt that they will simply allow us to shut them down and throw them on the scrap heap for recycling the way we do other machines. We already buy insurance to repair our automobiles. Expect to be paying for insurance on your friendly family robots.
Biting the hand that feeds IT © 1998–2019