Swiss researchers demo ‘Avatar-like’ robot control

A partially tetraplegic patient in Switzerland has controlled a robot 100 km away via a head cap sensing electrical signals from his brain. According to the Associated Press, the experiment is the first time a “mind control” interface has been used by a partially paralyzed person without invasive implants. From his …

COMMENTS

This topic is closed for new posts.
  1. jubtastic1
    Meh

    This is encouraging, but...

    I honestly thought that by now we'd be able to hook directly into the nervous system, to send and receive signals for natural feedback and control of prosthetic limbs. Seeing that skull cap leads me to think this is still a way off, even though cochlear implants are basically doing this already to restore hearing.

    <tangent>

    It's weird how some of the things we assume will be sorted during our lifetimes remain forever on the brink of being realised (AI, fusion, a cure for male pattern baldness), while others come storming in straight out of left field, like cloning - never saw that coming. Wonder what happened to Dolly? I guess I could ask the Internet, one of the things that did pan out, but on reflection I think I'd rather remember her as she was.

    </tangent>

    1. Why Not?

      Re: This is encouraging, but...

      No reason why this shouldn't be implanted long term: put the electrodes under the skin and put an inductive pickup at the base of the skull / somewhere convenient.

      However, as it's still in development, I think the subject would prefer that a visit to the operating theatre isn't necessary to tweak an electrode.

      That is, if it's truly reading the movement thoughts, not reading a 'must concentrate - think about thinking about lifting your arm' mnemonic.

  2. Neoc

    This is not "mind-controlled".

    This is no different from reading the movement of an eyelid (or an eye) to control said robot.

    The day I can think "left", or imagine the robot turning left, and the robot does it, *that* will be mind-controlled. Until then, it's just a glorified joystick.

    Yes, there may be training involved so that my commands are nice and "crisp" for the receiver, but that is no different to learning to drive - you get better with time.

    1. James Micallef Silver badge
      Happy

      Re: This is not "mind-controlled".

      "The day I can think "left", or imagine the robot turning left, and the robot does it, *that* will be mind-controlled"

      erm.... according to the article, that's EXACTLY what the device is doing; nothing to do with using his eyes. Still a bit crude as yet, but it's a good start, I reckon. Of course there needs to be some training involved so the computer can pick out which brain patterns mean 'turn left', which ones mean 'go forward'.....

      ....and which ones mean 'that undergraduate student is HOT, I love how she unwittingly reveals her cleavage when she bends over to adjust my headset'

      1. Neoc

        Re: This is not "mind-controlled".

        I *have* read it. Here's the relevant section: "was able to control the robot at Lausanne by imagining acts like lifting his fingers"

        So in other words, the scientists decided in advance which section(s) of the brain they were going to look at and told the subject that (for example) "if you want the robot to move forward, think about moving your finger." This is not proper mind control. This is not imagining what you want the robot to do and the robot doing it. This is making the controller fit the controls, whereas it should be the other way around.

        As for my bit about the eyes, kindly re-read *my* post properly - I was making a comparison.

        If you're going to down-vote me, kindly do it for what I actually said.

        1. James Micallef Silver badge
          Happy

          Re: This is not "mind-controlled".

          Let me start off by saying you're right about the bit about the eyes, I misread that part of your post, my bad. Now to get to the meat of the question....

          You seem to be proposing that the only acceptable control method should be a third-person visual image ("imagining what you want the robot to do and the robot doing it"). I don't see why a specific visual frame of reference should be the only acceptable method of control. Why not a first-person visual reference (imagining oneself turning left, which is actually the "Avatar" method)? Why not an internally voiced audio command (internally thinking the words "turn left, you damn robot!")? And indeed, why not imagine oneself controlling a joystick that is in turn controlling the robot? These are all valid internal mental representations of the desired outcome.

          Indeed, given the way the mind works, it's highly likely that it will automatically be correlating different representations and firing them simultaneously. For example, as someone who has used a joystick to play video games, my knowledge that the (real or imagined) joystick is controlling a robot will automatically correlate with a mental image of the robot turning left if I am using the joystick to direct it left. Quite possibly I'm at the same time issuing verbal instructions (surely I'm not the only one who has ever talked to their video game characters even though I know it won't have any effect).

          In my book, as long as the computer is scanning mental patterns, identifying a recognisable pattern and correctly performing an action based on that pattern, the researchers can rightly call it mind control. It might be mind control v1.0, or maybe even v0.1, but it's still mind control.
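
          For what it's worth, that scan-recognise-act loop is easy to sketch. Here's a toy version in Python - the command set, the "features", and the nearest-centroid classifier are all invented for illustration, nothing to do with the Lausanne team's actual pipeline:

          import numpy as np

          rng = np.random.default_rng(0)
          COMMANDS = ["forward", "left", "right"]   # hypothetical command set

          # Calibration: the user imagines each movement while we record trials.
          # Each trial is a pretend feature vector (e.g. band power per channel).
          def synth_trials(class_idx, n=40, dim=8):
              centre = np.zeros(dim)
              centre[class_idx] = 2.0               # each class biases one feature
              return centre + rng.normal(0.0, 1.0, size=(n, dim))

          train_X = np.vstack([synth_trials(i) for i in range(len(COMMANDS))])
          train_y = np.repeat(np.arange(len(COMMANDS)), 40)

          # The "recognisable pattern" per command: the mean training vector.
          centroids = np.array([train_X[train_y == i].mean(axis=0)
                                for i in range(len(COMMANDS))])

          def classify(features):
              """Map a new pattern to the command whose centroid is closest."""
              dists = np.linalg.norm(centroids - features, axis=1)
              return COMMANDS[int(np.argmin(dists))]

          # Online use: a fresh imagined movement becomes a robot command.
          new_trial = synth_trials(1, n=1)[0]       # user imagines "left"
          print("robot command:", classify(new_trial))   # usually prints "left"

          The point of the calibration step is exactly the "training" mentioned above: the system learns whatever pattern the user reliably produces for each command, rather than the user learning the machine's preferred pattern from scratch.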

