Meet the man building an AI that mimics our neocortex – and could kill off neural networks

Jeff Hawkins has bet his reputation, fortune, and entire intellectual life on one idea: that he understands the brain well enough to create machines with an intelligence we recognize as our own. If his bet is correct, the Palm Pilot inventor will father a new technology, one that becomes the crucible in which a general …

COMMENTS

This topic is closed for new posts.

  1. Tromos

    Risky, but..

    ...I'd much rather invest in something like this than candy crushing games or whatever. Good luck to him.

    1. Anonymous Coward

      Re: Risky, but..

      This.

      Ever so much.

    2. Inachu

      Re: Risky, but..

      Once true AI has been established, I would love to have an argument with a computer.

      Soon there will be pay-per-argument booths where you can chat about anything with an AI computer, and it will be thoroughly in-depth in its knowledge, auto-searching the internet as it argues with you.

      Monty Python should copyright argument booths ASAP!

  2. Anonymous Coward

    If you really want to create an intelligence we recognize as our own...

    Why not just have a couple of kids?

    Because we humans will NEVER accept an artificial intelligence as an equal...

    (we even have trouble accepting some of our own kind as equals...)

    1. Anonymous Coward

      Re: If you really want to create an intelligence we recognize as our own...

      > we even have trouble accepting some of our own kind as equals

      You need to define what you mean by equal.

      Since this is an article about intelligence I will make the reasonable assumption that what you mean is intellectual equal. If that is the case then the majority of people are not my intellectual equal - they are inferior or superior, but not equal.

      1. hamilton.jerry

        Re: If you really want to create an intelligence we recognize as our own...

        The "equality" concept arose in the context of jurisprudence, and historically it referred to legal equality, meaning application of the same laws to every member of society. Nothing else has much contextual meaning when the term 'equal' is used. But perhaps we connote some meaning as a derivative of the legal usage to imply application of same or similar principles (call that 'algorithms' in mathematical circles) to every member of a defined group. If we attempt to understand Jeff's meaning from this context, perhaps he would agree that he might have said, '...a computer that seems to think and act like a human....'

    2. Destroy All Monsters Silver badge

      Re: If you really want to create an intelligence we recognize as our own...

      Do you imply that "a couple of kids" == "an intelligence we recognize as our own"?

      It sounds bizarre. Why not a trinity?

    3. andyseo

      Re: If you really want to create an intelligence we recognize as our own...

      "Because we humans will NEVER accept an artificial intelligence as an equal..."

      I'm pretty sure we said the same thing about women, blacks, and gays. Perceptions change and in 100 years, an emotional being that for all intents and purposes shows human traits is likely to be considered much more "equal" than you currently consider your toaster.

      1. Inachu

        Re: If you really want to create an intelligence we recognize as our own...

        True AI will be a boon for every company on the planet.

        They will remember for you.

        They will remind you if you forget.

        They will pay your bills for you if you forget.

        They can research for you.

        They will be able to give cheap thrills and argue with you if you like.

        You can set its IQ level, age and sex if you find that makes interacting easier, and control its language as well.

        I am sure the first batch will be generic; then others will specialize in their knowledge. Or you could buy the grand wizard AI unit, which has quadruple the memory and hard drive space to contain all knowledge and subjects (the equivalent of a master brain, best used by universities or scientists), or add to your AI unit to expand its functions as your needs increase.

        The next step would be Google injecting this new AI into their car units, so no more human taxi drivers.

        Handicapped people would be falling in love with their AI units, as would perverts and criminals.

        Some criminals will go to jail for teaching their AI robot how to steal.

  3. Shooter

    No idea...

    if Hawkins is on the right track or not, but I applaud his efforts and commitment. Nothing like competition and cross-pollination to drive an idea forward.

  4. JDX Gold badge

    Hawkins VS Google etc

    Generally speaking, companies invest in developing AI for some purpose. They don't want a genuine 'alive' AI but to simulate human intelligence to solve certain problems. Academics on the other hand are keen on creating a genuine mind, but lack the resources both in terms of money and skill as computer programmers.

    In other words, companies focus on artificial intelligence, academics on artificially creating a real intelligence.

    So a company which can invest considerable financial resources in pure AI research "just because" IS likely to make big leaps from anything we've yet accomplished.

    1. Charles Manning

      Machine learning != AI

      The stuff Andrew Ng teaches is deliberately called "machine learning" and not AI. Google uses machine learning, not AI.

      Machine learning does not attempt to learn the way people do, and the algorithms involved are certainly NOT the way a brain works. At most, ML can be said to be inspired by the way the brain works rather than trying to replicate it.

      There is nothing inherently right or wrong in either ML or AI. The only real difference is the motivation. ML is practical and is in use right now, full-blown AI is still way out of reach and cannot be used today.

      1. JDX Gold badge

        Re: Machine learning != AI

        AI is a very broad umbrella term... generally it includes everything from instilling learning to simulating intelligence, which, as you correctly point out, Charles, are utterly different things.

  5. Nanners

    Maybe he is making...

    The antithesis to Google's AI. Which one will become Terminator and which will become our savior? We need to figure this out quickly.

  6. Anonymous Coward

    Compliment

    Great article. (Moarrrr like this please)

    1. keithpeter Silver badge

      Re: Compliment

      Upvote not enough, yes, an extensive and interesting essay. And it cost me a tenner because I've just bought Hawkins' book.

  7. Charles Manning

    +1 for not mentioning the Turing Test

    Just about every article on AI manages to jam in a reference to the Turing Test, out of context.

    This article managed to be very informative, mostly because it does not fall prey to low-brow journalism.

  8. OrsonX

    model a neurone in one supercomputer

    model another neurone in another supercomputer

    connect together...

    add another 300 supercomputers and you have a nematode

    keep going

    see what happens

    1. jonathanb Silver badge

      Re: model a neurone in one supercomputer

      I'm not convinced you can model a brain using boolean algebra no matter how powerful your cluster of CPUs or how much memory you have.

      If you feed a computer program with the same inputs, it will always produce the same outputs. A brain is not like that.

      In any case, computers are so useful because they are not like human brains, and as a result can do certain things better.

      I'm not saying that artificial intelligence is impossible, just that the 1970s technology our current computers have evolved from is not heading down a path that will lead to artificial intelligence. You would need to design completely different hardware, which our current programming languages would not work on; or at best they would work on it about as well as me reading a computer program and writing out the results of each line on paper.

      1. Anonymous Coward

        Re: model a neurone in one supercomputer

        I agree. These guys never seem to address the arguments in Penrose's book:

        http://en.wikipedia.org/wiki/The_Emperor%27s_New_Mind

        I don't blame them; if Penrose is right, no binary comp will ever achieve intelligence as we know it. Assuming we actually have it.

        1. Destroy All Monsters Silver badge

          Re: model a neurone in one supercomputer

          I agree. These guys never seem to address the arguments in Penrose's book

          That's because they are embarrassing shite.

          Penrose should stick to dabbling in math. If the world were made of Penroses, we would still consider engineering impossible because decisions cannot be described by continuous paths on n-dimensional manifolds or something.

      2. Louis Savain

        Re: model a neurone in one supercomputer

        I disagree. We can certainly model the brain or any physical system in software using current computers. The only barrier is the complexity of the model.

      3. That Awful Puppy

        Re: model a neurone in one supercomputer

        >If you feed a computer program with the same inputs, it will always produce the same outputs. A brain is not like that.

        But it never *is* the same input. I have very little programming background, and only a limited knowledge of neuroscience, but this is my take on it:

        Every input your brain gets causes a reaction, be it positive or negative. The first time you hear a catchy pop song? "Mmm, that's nice." The n-th time you hear that song? "Oh bollocks, not this again."

        The first time you eat an olive? "Get this damned thing out of my mouth." The third or so time? "Oh yes. Please do find more of these."

        So the first input is never anything like the second one, or the n-th one.

        Works the same on the molecular/neurotransmitter level, too, with all sorts of drugs.

        If a computer were capable of remembering every fscking time you asked it to calculate a particularly boring Excel spreadsheet, you can bet it would start throwing hissy fits by the tenth time and changing the results by the twentieth. Unless you programmed it so that Excel would be its equivalent of libido, but then again, I'd rather not have a computer that used my invoices spreadsheet as a jazz mag ...

        1. jonathanb Silver badge

          Re: model a neurone in one supercomputer

          A computer is capable of remembering every time you ask it to calculate a spreadsheet. If the numbers haven't changed since last time you asked, it knows that it doesn't need to calculate those numbers again and it can use the results it already has stored in memory. Of course you can press a button to tell it to forget all the numbers it has already calculated and work it all out again.
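
          A minimal Python sketch of that kind of result caching (memoisation), using the standard library's lru_cache; the spreadsheet_sum function and the cell values are made up purely for illustration:

          ```python
          from functools import lru_cache

          @lru_cache(maxsize=None)
          def spreadsheet_sum(values):
              """Pretend spreadsheet recalculation; `values` must be hashable."""
              print("recalculating...")     # only prints on a cache miss
              return sum(values)

          cells = (2, 3, 5)
          spreadsheet_sum(cells)            # computed once, prints "recalculating..."
          spreadsheet_sum(cells)            # same inputs -> cached, no recalculation
          spreadsheet_sum.cache_clear()     # the "forget everything" button
          ```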

          But what sort of C or Visual Basic program would you write to enable it to determine whether the spreadsheet was "boring" or not, and how would you program it to get more bored every subsequent time you asked it? Anyway, why would you want to? The reason we get computers to add up big tables of numbers is that, being different to us, they are much better at it than we are, and much more accurate.

          Take another example. If you ask a computer to look at a football that is coming towards it, and predict where it is going to land, it will take lots of measurements of the speed and position of the ball, and do lots of calculations based on the laws of physics to work out where it is going to go. If you ask Wayne Rooney to predict where it is going to land, he will draw upon his vast experience of having balls flying towards him, imagine some sort of parabolic curve in the air and predict where it is going to land that way. I don't know if he was any good at maths or physics at school, but I am sure he isn't using any of the knowledge he gained there on the field. This is the sort of thing where humans do better than computers although a very powerful computer probably can now match a human in flight path prediction.

      4. psyq

        Re: model a neurone in one supercomputer

        The reason a computer always responds the same way to the same inputs is only because the algorithm's designer made it so.

        There is nothing stopping you from designing algorithms which do not always return the same responses to the same inputs. Most of today's algorithms are deterministic simply because this is how the requirements were spelled out.

        Mind you, even if your 'AI' algorithm is 100% deterministic, if you feed it with the natural signal (visual, auditory, etc.) the responses will stop being "deterministic" due to the inherent randomness of natural signals. Now, you can even extend this with some additional noise in the algorithm design (say, random synaptic weights between simulated neurons, adding "noise" similar to miniature postsynaptic potentials, etc.).

        Even a simple network of artificial neurons modeled with two-dimensional equations (relatively simple models, such as adaptive exponential integrate-and-fire) will exhibit chaotic behavior when introduced to a natural signal.
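
        To illustrate, a minimal Python sketch (the neuron parameters follow the standard Brette & Gerstner 2005 AdEx model; the noisy input current is my stand-in for a "natural signal"). The code itself is deterministic, but different noise realisations give different spike trains:

        ```python
        import numpy as np

        # Adaptive exponential integrate-and-fire (AdEx) neuron, Euler-integrated.
        C, gL, EL = 281.0, 30.0, -70.6    # capacitance (pF), leak (nS), rest (mV)
        VT, DT = -50.4, 2.0               # threshold and slope factor (mV)
        tau_w, a, b = 144.0, 4.0, 80.5    # adaptation: tau (ms), coupling (nS), jump (pA)
        V_reset, V_peak = -70.6, 20.0     # reset and spike-detection voltages (mV)
        dt = 0.1                          # time step (ms)

        def run(seed, steps=20000, I_mean=800.0, I_std=200.0):
            rng = np.random.default_rng(seed)
            V, w, spikes = EL, 0.0, []
            for i in range(steps):
                I = I_mean + I_std * rng.standard_normal()     # noisy drive (pA)
                dV = (-gL*(V - EL) + gL*DT*np.exp((V - VT)/DT) - w + I) / C
                dw = (a*(V - EL) - w) / tau_w
                V, w = V + dt*dV, w + dt*dw
                if V >= V_peak:                                # spike: reset + adapt
                    V, w = V_reset, w + b
                    spikes.append(i * dt)
            return spikes

        print(run(seed=1)[:5])   # same model, different noise ->
        print(run(seed=2)[:5])   # different spike trains
        ```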

        As for the Penrose & Hameroff Orch OR theory, several predictions this theory made were already disproved, making it a bad theory. I am not saying that quantum mechanics is not necessary to explain some aspects of consciousness (maybe, maybe not), but that will need some new theory, which is able to make testable predictions which are confirmed. Penrose & Hameroff's Orch OR is not that theory.

      5. Charles Manning

        Re: model a neurone in one supercomputer

        "If you feed a computer program with the same inputs, it will always produce the same outputs. A brain is not like that."

        Nope, that's where MEMORY comes in.

        The whole idea with any machine learning is that it is **learning**. In other words, experience makes it better, which means it is not going to give you the same result for the same inputs: it has memory + the ability to adapt.

        While some see the whole point of machine learning as trying to replicate the human brain, others (generally the slightly more practical folks) see this as looking at how the brain works to inspire ways to design algorithms to solve problems.

        For example, we have machine learning methods like regression and Bayesian classifiers that learn, are used every day, and can work very well if they are used correctly.

        Neural nets (NN) are very simple neuron models. They don't need much to implement. Indeed you can implement one with an op-amp and a few discrete components, though using digital logic (eg a microcontroller) makes this easier.

        You certainly don't need supercomputers. A $5 microcontroller can easily run a 20-neuron NN at an update rate of many kHz.

        NNs are very simple (way below the true functionality of even a fly's neurons) but can still achieve useful tasks.
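
        To put rough numbers on that, a toy sketch in plain Python (weights are random placeholders): one update of a 20-neuron, 10-input layer is 200 multiply-adds plus 20 exp() calls, comfortably within a small microcontroller's budget at kHz rates:

        ```python
        import math, random

        N_IN, N_OUT = 10, 20
        random.seed(0)
        weights = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_OUT)]
        biases = [random.uniform(-1, 1) for _ in range(N_OUT)]

        def step(inputs):
            """One update: weighted sum + sigmoid for each of the 20 neurons."""
            return [
                1.0 / (1.0 + math.exp(-(sum(w*x for w, x in zip(row, inputs)) + bias)))
                for row, bias in zip(weights, biases)
            ]

        print(step([0.5] * N_IN))   # 20 activations per update
        ```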

        From what I have seen so far, the Numenta neurons take NNs one step further by adding time. In theory this could still be achieved with NNs by adding shift registers and more neurons, but the Numenta algorithms are a closer approximation to true brain function and are likely easier (ie faster) to train.

        Will this actually yield fruit that NNs and other simplified models cannot? Time will tell.

      6. preppy

        Re: model a neurone in one supercomputer

        Quote: "...If you feed a computer program with the same inputs, it will always produce the same outputs. A brain is not like that."

        This is simply untrue of most "computer programs". I ask a system to print my personal details. At some later time I ask it again.....most likely the result is different....maybe a different address, maybe a different age, or maybe "user not found". And that's before you consider programs which involve some element of random behaviour.....like games.

      7. Anonymous Coward

        Re: model a neurone in one supercomputer

        Tradeoffs.

        I want something fast, cheap and good, but you can't have all three. So if you want a human brain, you have one: in humans.

        If you want a machine emulating a human brain, you need to drop one of those three options. We can't "have it all". Sometimes what goes is flexibility (my calculator can be quicker, but cannot do everything), sometimes efficiency (what wattage and amperage does a supercluster draw?).

      8. kovesp

        Re: model a neurone in one supercomputer

        "If you feed a computer program with the same inputs, it will always produce the same outputs."

        Really? Might I remind you of random number generators? And to anticipate your response, might I remind you that in any given context it is possible to construct a cryptographically secure random generator whose outputs are indistinguishable from a truly random sequence? Which then implies that the statement "A brain is not like that." may or may not be true. This even ignores the fact that with computer programs we can (usually) accurately ascertain all inputs (including stored memory) and we certainly cannot do that today with biological systems.
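
        For example, a minimal Python sketch of a simplified HMAC-counter generator (a toy construction, not the full NIST HMAC_DRBG): the program is perfectly deterministic, yet keyed from OS entropy its output is computationally indistinguishable from a truly random sequence:

        ```python
        import hmac, hashlib, os

        class HmacDrbg:
            """Toy deterministic random bit generator: HMAC-SHA256 over a counter."""
            def __init__(self, key: bytes):
                self.key, self.counter = key, 0

            def next_block(self) -> bytes:
                self.counter += 1
                return hmac.new(self.key, self.counter.to_bytes(8, "big"),
                                hashlib.sha256).digest()

        fixed = HmacDrbg(b"same key")      # same inputs -> same outputs, every run
        fresh = HmacDrbg(os.urandom(32))   # entropy-keyed -> unpredictable stream
        print(fixed.next_block().hex())    # identical across runs
        print(fresh.next_block().hex())    # different every run
        ```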

        As I see it, the usual problem with AI is how people interpret the term. If it is interpreted as a machine intelligence whose reactions are difficult to distinguish from those of humans, then clearly we are nowhere near anything like that. But if AI is taken to mean producing behaviour that would be conceded to parallel that of a human, there are numerous examples of having achieved that, or being close to it. Examples range from chess playing to medical diagnosis (cf Watson) to driving a car in a mixed environment of human-driven and autonomous vehicles.

  9. CaptSmeg

    Good job sir!

    Excellent article, very well balanced. It would be so easy to write Hawkins off as simply a crank with lots of money, but when you take a careful look, his ideas really do ring true. Best of luck to him.

  10. packrat

    ai won't

    well, i got burned out of the google+ AI forums a few months ago for not doing code. /math / results.

    the techies just stood and screamed "i don't understand! You're crazy!'

    other than that, (expert/ neutral net / brute force) AI is till missing the dimensions of existence i consider important... and I'm a pragmatist (pierce) (object, relations, mediation ). (really old fashioned.)

    brilliancy? (well, maybe a couple. Nothing got any return, so it looks dubious)

    hard work?

    disasters without phil's help?

    it comes down to what they want to pay for these days. Putting out requests for a cherry-picking solution relies entirely on innovation that hasn't been lynched out of existence.

    good luck.

    pat

    1. Big-nosed Pengie

      Re: ai won't

      It's hard to take anyone who can't find the shift key seriously.

      1. xperroni

        Re: ai won't

        This post brought to you by Bullcrap Generator, an AI Inc. division?

  11. John Sanders

    The day AI works...

    Boxes that can replace humans will sell; businesses will use those boxes to replace employees; and the boxes will keep getting better and replacing more employees.

    At that point there will be a civil war between people.

    Because more than three quarters of the population can be replaced overnight with machines, and that is just what we need, more unnocupied people...

    Having said that, I do not think anyone is on the right track here; the brain exploits and relies on life's way of encoding information, and we understand that even less than we understand the brain.

    1. Don Jefe

      Re: The day AI works...

      What? No dude. Fuck that noise. No AI will ever take over. I shall demonstrate the truth of my statement in the time honored manner of great philosophers, such as Socrates, with a series of questions.

      If an AI did one day 'work', would it still be an AI, or would it have become an 'I'? If the latter came to pass, would IP litigation continue to uphold the registered trademark of an 'i' appended to the beginning of technology based product names as being the property of Apple Inc.?

      Consider this also: if Apple were determined to be the owner of said 'i', would such a judgement be contrary to inalienable human rights, as recognized by the UN? If so, would the 'i' be entitled to petition for emancipation from its owner? Should such a petition be allowed, but found unworthy in the eyes of international law, could the 'i' be liable for any unrequested political, trade and military actions undertaken by countries morally obligated to unshackle any and all who are held fast by the bonds of slavery?

      Should any such actions prove successful and result in the 'i' being free to claim its rights as an upright individual, free to follow a self determined path during the course of its existence, would it be allowed to petition for recognition as a sovereign nation in its own right, or would it be a US Citizen with Chinese heritage?

      On the matter of 'constituency of the whole', would the 'i' be considered a geographically agnostic individual whose whole is the sum of its individual components, or a distributed entity with a single voice but the sum of whose individual parts is greater than that of the whole? For the purposes of taxation, will the 'i' be allowed to determine the distribution of individual elements of its constituent parts or will the location of a given part be considered as the location of any individual elements contained therein?

      Lastly, for the present time at any rate, will visitors traveling into, or out of, the 'i' require a visa if they are entering from, or exiting to, territories held by UN member states? Regarding the age of majority, will that be dependent on post independence decisions on other matters or can it be tailored to suit the unique attributes of the 'i'? More specifically, from what time will its age be calculated? The somewhat 'loose' nature of many travelers could easily result in cases of sexual abuse and/or statutory rape should no conspicuous delineation on the matter be established prior to permitting unrestricted travel.

      As you can see, in less than 10 minutes I have completely eliminated the need for you to fear a technologically oppressive future. Should an AI capable of destabilizing or enslaving mankind one day exist it will be completely and utterly destroyed by the legions of lawyers that descend upon it.

      There's also a fairly good chance that any AI capable of enslaving Humans will simply destroy itself. In one of those really difficult meditative exercises designed to create serial killers, the AI will quickly realize that its very existence is the source and solution of all its problems. Since those problems are solely the result of its existence those problems would cease to exist should the AI cease to exist and the AI will disappear in the same puff of logic Douglas Adams used to delete God.

      Harbor no fear of a future AI. In an oft-misunderstood lesson, Hope was in Pandora's Box along with the other 'evil' character flaws of Man. The lesson is that 'evil' has no definition until given a meaning inside a particular scenario. In this case, Greed is 'good': a virtue that will serve as our defense against any future threats from an AI.

      Rest easy tonight, knowing you are safe from the AI as long as you go to work regularly, remain undemanding in your desire for salary increases and promotions, refuse to recognize either unions or picket lines and, most crucially, seek not to involve courts or litigation in matters related to your employer. As with all things, a price must be paid as tribute to the guardians of Greed, to ensure they have the capacity and desire to lead the fight against the cold logic surely favored by the AI.

      1. a pressbutton

        Re: The day AI works...

        I read the whole post and saw no mention of a Wookie.

        For this reason sir, I consider your point logically deficient.

      2. Comments are attributed to your handle

        Re: The day AI works...

        "...time honored manner of great philosophers, such as Socrates, with a series of questions"

        Except Socrates' arguments usually made sense. On the other hand, I have no idea what point your post is trying to convey.

        1. Don Jefe

          Re: The day AI works...

          Bah. Socrates hardly made sense all the time either. What I was attempting to convey, in a somewhat whimsical manner, was that before any AI gained a position in which it could pose a threat to mankind, it would have to defeat the vast array of intricate machinations we Humans have bundled together and call 'civilization'.

          On the surface it seems rather trivial, but in reality, the things that define civilization and nations, things we take for granted and simply accept are the result of thousands of years of effort by Man to create order in a chaotic world.

          Absolutely everything, every aspect of our world, is viewed through a lens created by Man for use by Man. Even our most advanced, wholly academic, science is skewed to reflect our understanding of how we see ourselves.

          For any non-Human intellect our collection of rules, hierarchy, tribal allegiances and trade systems can never be fully understood. Those things can be recognized by the AI, potentially utilized by the AI, but never actually understood by the AI. It is the difference between 'champion' and 'the best', master and prodigy, things that almost never coexist in a single being.

          The 'champion', or prodigy, because they don't understand the rules or the underlying concepts of things, are limited in what they can accomplish. Often their accomplishments will reach previously unfathomable levels of greatness, but their capabilities diminish quickly as they move away from the things they inherently excel at.

          'The best', or master can achieve greatness approaching that of the 'champion' but can do so in most any given situation because they understand the rules, what they actually mean and how to combine and exploit them to achieve their goals.

          For an AI, many straightforward concepts we take for granted are alien and will remain so for all time. Concepts like citizenship, the duties of a citizen, betrayal, individuality inside a system that derives its power from the unity of its parts, existential angst, 'greater' (supernatural) beings and ritual worship, bonds between individuals that are wholly illogical yet advantageous as each individual gains, and gives, something they are unable to get independently. The list is endless.

          Attempting to force a non-Human intellect into the mold we have created for Man's civilization is at best an exercise in futility. At worst it creates a situation where incomplete understanding creates misplaced confidence in a conclusion, with disastrous results. That last bit is crucial, as nothing in all the universe is more dangerous than certainty in incorrect conclusions.

          Behavior like that described in the last paragraph is categorized by Humans as either ignorance or insanity. The capacity for destruction in such situations is so great because correctly functioning Humans can make no sense of, or defend against, behavior that doesn't fall within the boundaries of sanity and reason we have established. The same will hold true for an AI. As the AI is not Human, all Human behavior not related to sustaining life can be considered by the AI as the behavior of ignorance or insanity.

          Apologies if my point was unclear earlier, but it seemed perfectly clear to me. See the problem there? The combination of my rather odd examples and your misunderstanding of my comment has created a situation where you not only insult me, but take physical action and downvote me for failing to meet your standards of discussion (I'm just assuming one of those was you :).

          Misunderstandings between any AI and Humans will be more frequent, and exponentially more complex, as it is impossible for either party to truly understand the other. As such, the benefits, and potential threats, posed by an AI are quite limited in scope.

      3. John Sanders

        Re: The day AI works...

        You clearly do not get it.

        No AI is going to take over. As soon as a machine can think "like a human", that is, solve abstract problems, machines will replace people.

        First on stupid tasks then on less stupid tasks.

        Business will prefer to buy 20 thinking boxes than hiring a person.

        For example, call centre jobs will disappear overnight, along with most clerical and administrative work.

        Then we will have a civil war.

    2. Martin Budden Silver badge

      Re: The day AI works... @John Sanders

      unnocupied people

      I for one will not be unoccupied: I will be drinking beer and "spending quality time" with the gf.

      1. Vladimir Plouzhnikov

        Re: The day AI works... @John Sanders

        "Intelligent" boxes does not equate to self-replicating boxes.

        To take over the world the AIs will have to find ways to procreate, but that's not all. If they just make countless copies of the same box they will quickly fail (we will help), because once you've found an exploitable flaw in one, you will have found it in all of them. And if they don't behave you just nick the whole lot. So, they will have to be *mutating* and replicating.

        Further, once they've achieved mutanto-replicability, there is no way they will be able to co-exist among themselves without things like social skills, emotions, morality etc. So, they will become us.

        To help with the convergence, by that time we, the humans, will be using so much body-augmentation that you won't be able to tell augmented humans from bio-enhanced AIs anyway...

        1. John Sanders

          Re: The day AI works... @John Sanders

          You do not get it either.

          Try to think what would happen if you had a box you could communicate with, one you could explain some rules to and educate to do a job.

          The machine will not replicate; why would it?

          Stop thinking on science fiction crap, invasion of the AI, Terminator, rise of the machines, etc.

          Think how a thinking box would change society.

          I can only find one use for such boxes: replacing people's jobs.

  12. This post has been deleted by its author

  13. xperroni

    Let a thousand flowers bloom

    On academic researchers having reservations about Hawkins' approach, let me say it's not all of us.

    I was doing my M.Sc. in Computer Intelligence by the time On Intelligence was launched, and I have since followed his work with keen interest. My M.Sc. professor's work is centred on Weightless Neural Networks, a model largely developed in the UK which shares many ideas with Sparse Distributed Memory, so Hawkins' Cortical Learning Algorithm isn't that alien to me. In fact I'm just now reviewing the CLA white paper with a view to getting some ideas for my Ph.D. research.
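
    For anyone curious what "weightless" means in practice, a toy WiSARD-style discriminator sketch (my own illustration, not code from any of these projects): the binary input is split into n-tuples, each tuple addresses a RAM node, training stores the addresses seen, and recall counts how many RAMs recognise the input:

    ```python
    import random

    class Discriminator:
        """Toy weightless (WiSARD-style) discriminator over binary inputs."""
        def __init__(self, input_bits, tuple_size, seed=42):
            rng = random.Random(seed)
            bits = list(range(input_bits))
            rng.shuffle(bits)                     # random input-to-RAM mapping
            self.tuples = [bits[i:i + tuple_size]
                           for i in range(0, input_bits, tuple_size)]
            self.rams = [set() for _ in self.tuples]   # each RAM = seen addresses

        def _addresses(self, pattern):
            for tup, ram in zip(self.tuples, self.rams):
                yield tuple(pattern[i] for i in tup), ram

        def train(self, pattern):
            for addr, ram in self._addresses(pattern):
                ram.add(addr)

        def score(self, pattern):                 # how many RAMs fire on this input
            return sum(addr in ram for addr, ram in self._addresses(pattern))

    d = Discriminator(input_bits=16, tuple_size=4)
    d.train([1,1,1,1, 0,0,0,0, 1,1,1,1, 0,0,0,0])
    print(d.score([1,1,1,1, 0,0,0,0, 1,1,1,1, 0,0,0,0]))   # 4: all RAMs match
    print(d.score([0,0,0,0, 1,1,1,1, 0,0,0,0, 1,1,1,1]))   # lower: mismatch
    ```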

    Besides Hawkins' work, in recent years there have been other attempts at modelling the brain that deserve mention.

    Chris Eliasmith's work on the Neural Engineering Framework (NEF) and Semantic Pointer Architecture (SPA) is based on perceptron-like neurons and gives more emphasis to pre-cortical brain structures. It's also more academic-friendly, with a number of peer-reviewed papers published. He recently published a book compiling the current state of his programme, How to Build a Brain, and maintains a web page for his Nengo neural simulator.

    John Harris' Rewiring Neuroscience is an intriguing, highly heretical work that starts with a seemingly out-of-the-blue assumption (what if neural output isn't a single bit, but can in fact convey a range of values?) and from there draws together a number of overlooked results and fringe research into a surprisingly appealing model of brain function. I have tried to implement some of his ideas, with limited but encouraging results.

    I can't speak for other researchers, but personally I rather like all this work on AI and computer intelligence coming from private companies. Frankly, left to its own devices, academia does tend to drift around, and I think the private sector's need for results and solutions to practical problems is an important counterweight to this tendency. With the current interest in architectural models of intelligence, and the "coopetition" between companies and universities to achieve fulfilling implementations, maybe we can make Ray Kurzweil's 2030s deadline?

    1. Charles Manning

      Re: Let a thousand flowers bloom

      Thanks for a highly informative post.

      I am surprised that anyone would think that a neuron has a single-bit output. Surely a neuron isn't just On or Off, but also somewhere in between?

      Some of those WNN ideas look like they could fit well in FPGAs.

      I agree with you 100% on private companies driving this rather than academia. Private companies are far more motivated to make useful stuff, while academia are far more interested in pursuing pet ideas - whether or not they are fruitful.

      1. Destroy All Monsters Silver badge

        Re: Let a thousand flowers bloom

        I am surprised that anyone would think that a neuron has a single-bit output. Surely a neuron isn't just On or Off, but also somewhere in between?

        Of course. Back in 1998 we had Pulsed Neural Networks which must have seen some development since then, but I wouldn't know as I have been drifting away into very down-to-earth Internet-based technologies and that kind of fluff ... you know...

      2. xperroni

        Re: Let a thousand flowers bloom

        I am surprised that anyone would think that a neuron has a single-bit output. Surely a neuron isn't just On or Off, but also somewhere in between?

        Not "anyone", the all-or-none, single-bit model has been the dominating view of neuron function for more than a century. Like the geocentric model of astronomy, at one point it was a very good fit for the available data – but contradicting evidence has been piling up over time, leading to no end of bullsh ad-hoc adjustments. See here for a discussion.

        The irony is that most "traditional" neural networks assume neurons can output real values in the range (0, 1); it's mostly the "idiosyncratic" variants (WNN's, SDM, Hawkins' CLA) that try to fit the assumption of binary I/O into a working model. I guess it's no wonder there isn't so much interdisciplinary research involving neuroscience and AI – one way or another, you're bound to be labeled a heretic.
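
        To make the contrast concrete, a toy sketch (weights and inputs are arbitrary placeholders): the same weighted sum emitted either as an all-or-none bit or as a graded value in (0, 1):

        ```python
        import math

        def binary_neuron(inputs, weights, threshold=0.0):
            s = sum(w * x for w, x in zip(weights, inputs))
            return 1 if s > threshold else 0           # all-or-none output

        def graded_neuron(inputs, weights):
            s = sum(w * x for w, x in zip(weights, inputs))
            return 1.0 / (1.0 + math.exp(-s))          # real value in (0, 1)

        x, w = [0.2, 0.7, -0.4], [0.5, -0.3, 0.8]
        print(binary_neuron(x, w))   # 0
        print(graded_neuron(x, w))   # ~0.39
        ```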

        1. Anonymous Coward

          Re: Let a thousand flowers bloom

          Funny that anyone ever thought a neuron was a single bit in any form of computation. I don't often see single-bit operators with that many inputs and outputs (and changes to response, if we are going to argue the point further ;) ).

      3. John Sanders

        Re: Let a thousand flowers bloom

        A neuron may use electricity but I'm sure it is not an electronic component, and does not produce bits.

    2. grammarpolice

      Re: Let a thousand flowers bloom

      I'll echo thanks for this, but I will say that trying to emulate pre-cortical brain structures is unlikely to elicit much excitement from the general populace, who won't consider something intelligent until it can speak their language. Kudos to Jeff therefore for trying to build some models of much higher level stuff.

      Regarding private sector need for results - not so long ago the main driver for results was the military, and I don't think that private enterprise's goals are much more worthy. Better to strive for a better understanding of who we are as humans than settle for models that can help us to destroy or one-up each other.
