Big question: Who gets the blame if a cyborg drops a kid on its head?

Who is responsible if a robot controlled by a human brain drops, say, a baby? It's a bizarre question, but one worth asking, according to scientists who raise a host of ethical and social questions around brain-machine interfaces (BMI) in a policy forum piece in Science. Research in this area began in the 1970s, when the US …

Hmmm, from past experience it's either

1) N00b!

2) Opps im a newbie :)

3) It wasn't me, it was my brother on my account

4) I was afk

5) LAAAAAG!!

Get your own acronym. This one's already taken

Everyone knows BMI = Body Mass Index

Why not come up with a better unique one?

Having said that, I just tried and failed miserably!

Re: Get your own acronym. This one's already taken

I've always heard it referred to as "Brain-Computer Interface" or BCI, since at least the late 1970s at Stanford.

STOP

Stop is ambiguous. Stop dropping the baby, or stop gripping the baby?

Good luck training your machine learning for subtly nuanced baby juggling.

Re: STOP

Actually

given enough babies, I am sure that your machine will eventually learn....

Re: STOP

For the success condition to apply, does the baby have to be in one piece?

Why would anybody be daft enough to endanger the child?

I think one correct answer would be mu.

If you put a helpless human in a machine

You are responsible for what happens. That includes prams, pushchairs, baths, car seats, roller-coasters, hot tubs, robot nannies, rocket sleds, etc. Generally, the simpler a machine is, the lower the chance of something bad happening, so robot nannies are out for me.

Re: If you put a helpless human in a machine

Exactly: the user takes responsibility for their actions. Caveat luser. It's not frigging rocket science.

For fully autonomous machines, hold the manufacturer responsible for clear mechanical failures and require a WORM audit log for when the luser does something daft (see the sketch below).

Idiots are always going to find a way to auto-darwinate themselves or others. No special laws needed.

Just like that muppet who T-boned the truck in his Tesla after ignoring six Autopilot reminders.
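
For what it's worth, the WORM (write-once, read-many) log is the easy bit to sketch. Here's a minimal hash-chained append-only log in Python; the WormLog class, verify helper and JSON-lines layout are my own invention for illustration, and real WORM needs write-once storage underneath, since hash chaining only makes tampering detectable, not impossible:

```python
import hashlib
import json
import time


class WormLog:
    """Append-only audit log: each record stores the hash of the
    previous record, so any after-the-fact edit breaks the chain."""

    GENESIS = "0" * 64

    def __init__(self, path: str):
        self.path = path
        self.prev_hash = self.GENESIS

    def append(self, event: dict) -> None:
        record = {"ts": time.time(), "event": event, "prev": self.prev_hash}
        line = json.dumps(record, sort_keys=True)
        self.prev_hash = hashlib.sha256(line.encode()).hexdigest()
        with open(self.path, "a") as f:   # append only, never rewrite
            f.write(line + "\n")


def verify(path: str) -> bool:
    """Recompute the chain; False means some record was altered."""
    prev = WormLog.GENESIS
    with open(path) as f:
        for line in f:
            if json.loads(line)["prev"] != prev:
                return False
            prev = hashlib.sha256(line.rstrip("\n").encode()).hexdigest()
    return True


# e.g. WormLog("/var/log/robonanny.audit").append(
#          {"source": "operator", "command": "release_grip"})
```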

Re: If you put a helpless human in a machine

"Fully autonomous machines need to hold the manufacturer responsible for clear mechanical failures and to provide a WORM audit log when the Luser does something daft."

But since that's going to be a while away, what about SEMI-autonomous stuff, where the line between machine-controlled and human-controlled blurs? What if an accident occurs in that gray area? Is it the user's fault for giving an unintentionally "stupid" instruction, is it the machine's fault for not recognizing it as a "stupid" instruction, or did something unrelated intervene to turn an otherwise "sane" instruction "stupid"? We're heading into Book of Questions territory here, if you ask me, where a clear answer is going to be elusive.

We'd first need to sort out more general problems with our economy

We have already seen what our current way of solving the economic problem does to mobile devices. They have turned from their humble brain-extending beginnings (calculators, etc.) into devices which actually inhibit your cognitive abilities by constantly drawing your attention to them so you can be served ads.

If we extrapolate that to brain-machine interfaces, we'd get a rather dystopian future.

So one problem is that there are people/companies who believe they have the right to turn other people's attention into money. Another is people/companies/governments applying force to other people.

Germany's constitutional court has wisely recognised a right to the "confidentiality and integrity of information technology systems". This is one of the essential rights we'd need for a cyborg future.

Facebook is aiming to create a system that "can type 100 words per minute straight from your brain."

But fake news has already got into everybody's heads, judging by recent election results.

It's not like that's a limitation.

Yeah, and few people, if any, are even capable of forming that many words without getting really verbose. Unless you have an on-screen keyboard and opaque fingers, your writing speed is typically not constrained by your typing speed.

We're overthinking it, folks ....

There is a reason Turing is considered a genius. He gave us *all* the tools we need to deal with AI, from self-driving cars to robotic nannies.

The Turing test.

If the robot (or car, or whatever) can pass whatever exams and tests are required for a *human* to achieve licensing/certification, then it's passed.

Re: We're overthinking it, folks ....

What you describe, passing exams or even always getting full marks in them, is easily achievable by computers. Computers and robots can even outperform humans at many tasks (be it chess, Go, building cars, or simple mathematics). But that's not the Turing test. To pass the Turing test, a human judge must be unable to tell whether they are interacting with a machine or another human.
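
To put the difference in code: a toy skeleton of the imitation game, where the judge, human and machine interfaces are placeholders I've made up (a real judge is a person, not an object):

```python
import random


def imitation_game(judge, human, machine, rounds: int = 20) -> float:
    """Skeleton of Turing's imitation game. Each round, the judge poses
    a question, receives two *blinded* answers, and guesses which one
    came from the machine. Returns the judge's accuracy: the machine
    'passes' when the score hovers around 0.5, i.e. chance level.

    Assumed (hypothetical) interfaces:
      judge.ask() -> str, judge.pick_machine(list[str]) -> int
      human(str) -> str, machine(str) -> str
    """
    correct = 0
    for _ in range(rounds):
        question = judge.ask()
        answers = [("human", human(question)), ("machine", machine(question))]
        random.shuffle(answers)  # the judge must not know which is which
        guess = judge.pick_machine([text for _, text in answers])
        if answers[guess][0] == "machine":
            correct += 1
    return correct / rounds
```

Note what the exam analogy misses: scoring suspiciously *well* can give the machine away just as easily as scoring badly.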

Re: easily achievable by computers

Bollocks. We haven't even achieved the level of language skills needed, let alone the fine grasp of semantics that everyday speech throws up.

That's "a few years off".

Still.

Am I the only one

Who suspects my users would fail the Turing Test?

Can you trust this tech?

Serious question?

I mean, governments already want backdoors into your computers, Google already throws as many ads (and targeted ads) at you as it can, and Facebook, which wants to know every tiny detail about you, your friends, your love life and your newly born poop machine, is seemingly involved... so who's to say the tech won't be influenced by these factors and turn into giant spyware / ad-serving tech?

It'd be perfect for government surveillance: easily monitor how people are going to vote, know what they are thinking at all times, etc.

Not to sound paranoid, but the whole thing would have to be 100% open source, with no proprietary software or hardware in sight, to be trusted.

Re: Can you trust this tech?

It very definitely IS a serious question. My own attempt at a serious answer is here, but in short: mind-reading technology can only be prevented from becoming a totalitarian wet dream if we force governments to accept some of the protections it also makes possible (principally the ability to block authentication when "coercion" is detected), so that no one can be forced to disclose anything without their informed consent. Please see also my comments here.
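
For the curious, the "block authentication when coercion is detected" idea is just a fail-closed check at the top of the authentication path. A minimal sketch, assuming hypothetical classifier callables (reliably detecting coercion from neural signals is nowhere near a solved problem):

```python
from dataclasses import dataclass


@dataclass
class Decision:
    granted: bool
    reason: str = ""


def authenticate(brain_signal, detect_coercion, matches_enrolled_user) -> Decision:
    """Fail closed under duress: refuse authentication outright when the
    coercion classifier fires, so nobody can be forced to unlock anything.
    Both classifier callables are hypothetical placeholders."""
    if detect_coercion(brain_signal):          # checked first, always
        return Decision(False, "coercion suspected; lockout engaged")
    if not matches_enrolled_user(brain_signal):
        return Decision(False, "signal does not match enrolled user")
    return Decision(True)
```

The ordering is the whole point: the coercion check runs before, and independently of, the identity check, so a correct "credential" extracted under duress is still useless.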

Re: Can you trust this tech?

And then the plods will just resort to psychology, applying a subtler form of mind control that doesn't trip the "coercion" detectors, etc.

The problem with sieges is that the attacker usually has the advantage, typically in the form of a broader scope. Unlike the defenders, the attackers aren't bottled up.

Mandatory brain reader chip for newborns

Just like baptism, people will install the darn thing on kids or, even worse, it will be made mandatory by law! Because terrorism, of course.

Naw, only when they grow up

https://youtu.be/42Xi9peYYHU?t=2m11s

Answer: the parents

You've always got to blame the parents ;)
