Regulate, says Musk – OK, but who writes the New Robot Rules?

When the Knightscope K5 surveillance bot fell into the pond at an office complex in Washington, DC, last month, it wasn’t the first time the company’s Future of Security machines had come a cropper. In April, a K5 got on the wrong side of a drunken punch but still managed to call it in, reinforcing its maker’s belief that the …

Bronze badge

Working out what AI is thinking and why

“It should always be possible to find out why an AI made an autonomous decision,” says Winfield, referring to it as “the principle of transparency”.

I'm not sure that this is possible with some aspects of AI/ML, since the algorithm is honed by the AI/ML itself and may not be transparent/apparent to its developers at all even if logged in a black box log.

I recall reading something similar recently in relation to this - it may have been to do with machine translation, where the AI/ML was able to translate directly between two languages it had not been specifically trained on, by triangulating via a third.

6
0

Re: Working out what AI is thinking and why

Yup, neural networks in particular train themselves and developers may not understand how the specific neural pathways have been trained. Neural nets (and other "trained" AIs) are some of the most powerful computing resources available, but transparency isn't their strong point...
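Even a toy network makes the point. The sketch below is a minimal NumPy example (architecture, seed and learning rate chosen arbitrarily for illustration): it trains a tiny net on XOR with hand-rolled backpropagation, after which the weight matrices drive correct answers yet read as nothing but arrays of numbers - no rule a developer could point to.

```python
import numpy as np

# A toy feed-forward net trained on XOR: even at this scale, the learned
# weights are just numbers with no human-readable "reason" attached.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 1))   # hidden -> output weights

sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1)              # hidden activations
    out = sigmoid(h @ W2)            # network output
    # Backpropagate the squared error by hand.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out
    W1 -= X.T @ d_h

# The trained behaviour lives entirely in W1/W2, which explain nothing.
print(np.round(sigmoid(sigmoid(X @ W1) @ W2)).ravel())
print(W1.round(2))
```

Inspecting `W1` afterwards is the whole problem in miniature: the numbers are the "specific neural pathways", and they carry no explanation.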

3
0
Silver badge

Re: Working out what AI is thinking and why

"I'm not sure that this is possible with some aspects of AI/ML, since the algorithm is honed by the AI/ML itself and may not be transparent/apparent to its developers at all even if logged in a black box log."

If liability falls on the manufacturer - and I think it should - it's then up to that manufacturer to decide whether they want to ship something whose workings they don't and, more importantly, can't understand. It seems to me that they really shouldn't want to. Should they even be allowed to?

4
0
Silver badge

Re: Working out what AI is thinking and why

"ship something whose workings they don't and, more importantly, can't understand... Should they even be allowed to?"

Then you are going to have a hard job doing most machine vision without deep learning type nets.

Writing procedural code along the lines of "if this horizontal line at the top of the image is n pixels long AND ... AND ... AND ... then Peckham High St" is going to be a bit limited.
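For what it's worth, here is what that hand-coded approach looks like in practice - a deliberately silly Python sketch (the rule, the threshold and the "scenes" are all invented for illustration) showing how a procedural pixel rule breaks the moment the scene shifts by a single row:

```python
# A hypothetical hand-coded "street recogniser": classify a tiny binary
# image as a match if the top row contains a long horizontal run.

def looks_like_high_street(image):
    """image: list of rows of 0/1 pixels."""
    top = image[0]
    run = longest = 0
    for px in top:
        run = run + 1 if px else 0
        longest = max(longest, run)
    return longest >= 5  # the magic "n pixels long" threshold

scene = [[1, 1, 1, 1, 1, 0],
         [0, 0, 1, 0, 0, 0]]
shifted = [[0, 0, 0, 0, 0, 0]] + scene  # same scene, one row lower

print(looks_like_high_street(scene))    # the rule fires
print(looks_like_high_street(shifted))  # same street, rule now misses
```

Every real-world variation (lighting, angle, occlusion) needs another hand-written AND clause, which is exactly why this approach lost out to learned nets.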

3
0
Silver badge

Re: Working out what AI is thinking and why

Writing procedural code along the lines of "if this horizontal line at the top of the image is n pixels long AND ... AND ... AND ... then Peckham High St" is going to be a bit limited.

And if you don't know how it claims to be able to recognise Peckham High St - and that that "how" makes sense - then you've no assurance that it will recognise it correctly nor that it won't categorise other streets as being Peckham High St. Indeed you don't even know whether the system that recognises it correctly today will do so tomorrow after being provided with additional training data.

3
0
Silver badge

Re: Working out what AI is thinking and why

@John Riddoch

Your argument is valid. Only one minor correction: it's not just about AI. It's also true of humans. "The most powerful computing resources available" (your words) but it can be a bit of a bugger figuring out why they did what they did. Even when you ask them, they may not know.

According to one school of thought, we don't know why we do things. We do something because one part of our neural net decides to and then our consciousness comes up with a reason why we did it. This is particularly true of children: remember when you did something wrong, adults asked you why and your answer was that you didn't know? The adults refused to believe you (they must have forgotten similar events in their own childhoods), so kept pressing until you invented a reason. As you grew older, you invented reasons more or less automatically (because you had become used to having to explain your actions) and eventually adopted the delusion that those invented stories were why you did something. Neurologists have shown that the bits of the brain involved in conscious thought come into play after the bits responsible for performing actions.

So yeah, to quote you again, "transparency isn't their strong point." Or ours. Unlike AIs, we are capable of inventing explanations, but invention is all that it is. Informed speculation about our own actions, with more knowledge of internal state to go on, but it's still speculation not fact.

The best we'll be able to do with AI is keep a record of all the inputs. That will at least tell us if the AI is at fault in a given situation and then we'll have to find training that eliminates the error.
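Recording every input so that decisions can be re-run later is easy to sketch. This hypothetical Python wrapper (class, method and field names are invented for illustration) logs each input/decision pair and can replay the whole log against the current model - which is exactly the check you would want after retraining:

```python
import time

# A minimal "record all the inputs" sketch: wrap the decision function so
# every input, and the decision it produced, lands in a replayable log.

class BlackBoxRecorder:
    def __init__(self, decide):
        self.decide = decide
        self.log = []

    def __call__(self, **inputs):
        decision = self.decide(**inputs)
        self.log.append({"t": time.time(), "inputs": inputs,
                         "decision": decision})
        return decision

    def replay(self):
        # Re-run every logged input against the current model to see
        # whether retraining has changed any past decision.
        return [self.decide(**entry["inputs"]) for entry in self.log]

recorder = BlackBoxRecorder(
    lambda speed, obstacle: "brake" if obstacle else "cruise")
recorder(speed=30, obstacle=True)
recorder(speed=50, obstacle=False)
print(recorder.log[0]["decision"])  # the decision as originally made
print(recorder.replay())            # the same inputs, decided again today
```

It doesn't explain *why* the model decided anything, but it does tell you exactly what it was given and whether it would do the same thing again.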

1
0
Silver badge

Re: Working out what AI is thinking and why

"According to one school of thought, we don't know why we do things."

I believe they call it "intuition": responding to something SUBconsciously, without any thought as to WHY we did it. We just do it: practically reflex. This is one reason AI research can't even begin to look into the problem of intuition: because, on a fundamental level, WE don't know how intuition works, and by definition we can't teach something we don't know.

1
0

Re: Working out what AI is thinking and why

There's nothing magic or sentient about AI (and that includes neural networks). If a manufacturer can't release a product with sufficient log/trace mechanisms to track the logical sequence of actions in an AI unit they're essentially releasing a sub-standard untested product and it should be treated as such!

0
0

Re: Working out what AI is thinking and why

Well said, and it goes far deeper than that when you bring emotions and feelings etc. into it. Amazingly, there are far too many "intelligent" leading AI advocates out there (Musk being a prime example) who either don't understand that or simply choose to ignore it (classic self-delusion).

0
0
Silver badge

will the rise of the Robots

also cause an uprising of those displaced by them?

There is a potential for well over 50% of the workforce to be replaced by Robots and A.I. over the next 10-15 years. As Robots and A.I. Systems don't pay any taxes, the government coffers will be even more up shit creek then than they are today.

Poverty will rise to levels not seen for at least 200 years.

"Let them eat cake" contributed to a lot of people losing their head. Will these pesky non tax paying immigrant robots suffer the same fate?

I find it slightly ironic that Elon Musk is talking about this. He has stated previously that he wants to make cars with as little human involvement as possible. I wonder if he has had a change of thought?

Interesting times we live in...

4
0
Bronze badge

Re: will the rise of the Robots

Good point.

There may also be a level of self-regulation of robot numbers here, since at some point there will not be enough people earning wages to buy whatever it is that the robots are making - therefore fewer new robots will be needed, i.e. peak robot.

The taxation argument can be applied to other new technologies (e.g. electric vehicles) since if fuel taxes/duties diminish due to lower consumption then they have to be raised elsewhere. However we know that governments frequently "innovate" creatively on the taxation front...

0
0
Anonymous Coward

Re: will the rise of the Robots

A good point, and one long discussed down the decades of industrial relations. It takes far fewer people now to run a car assembly line than it did decades ago. Nobody likes being replaced by a pay-less machine, and nobody ever has.

The stock answer to this is that the people no longer employed, say, assembling cars are now employed doing something else instead that machines cannot do. The idea being that the economic productivity of the nation is higher for the same number of man hours, and we're all better off as a result.

The trouble with that idea is that it's actually very difficult to come up with an idea for what all those people are going to do instead. Just because one industry no longer needs thousands of people, there's no reason why another is going to magically spring up in its place. Couple that with the fact that, after being encouraged to go to university etc., the younger generation seem less keen on doing things like manual labour no matter what the remuneration is (and who can blame them?), and there is even less incentive for anyone with a suitably expansive industrial idea requiring a lot of manual labour to set up in an area which used to depend on such things.

Thus the politics of the UK over the past 40 years or so (if not longer when one looks back to the closure of the cotton mills, etc). Personally speaking I think it could have been a whole lot worse than it actually was. Just think what it'd be like without the inward investment from the Japanese car manufacturers.

Jobs that are robot proof? Plumbing. Electrician. Undertaker. Tax man. That's probably about it.

I sincerely doubt that AI will advance to the point where it can replace delivery drivers (or any drivers), but with the impending legal clamp-down on the gig economy the likes of Amazon etc. will decide they have less of a need for delivery drivers (it'll become cheaper to buy it in a shop as Amazon are forced to actually employ their drivers as staff, and Amazon's business will shrink).

0
0
Silver badge

Re: will the rise of the Robots

Tax man? That seems ripe for automation. Jobs like plumber and electrician can be automated for new construction, but for repairs/retrofit those jobs will last until we get robots with human-equivalent intelligence, which I doubt any of us will live to see. Once that happens all of us are replaceable except those in creative fields.

0
0

Re: will the rise of the Robots

Exactly; wherefore common-sense governing, if it ever existed, would mandate limits on childbirths and planned population portals to receive tubal ligations and vasectomies at puberty, and recognise that "the cup runneth over, be fruitful and multiply" just doesn't cut it in a society which strives and yearns for convenience in every aspect of life.

0
0
Silver badge

Re: will the rise of the Robots

"Once that happens all of us are replaceable except those in creative fields."

Want to bet on that last bit? (NOTE: just ONE example. I think I've read of some research that managed to "Turing Test" professional musicians.)

0
0
Silver badge

Re: will the rise of the Robots

"Exactly; wherefore common-sense governing, if it ever existed, would mandate limits on childbirths and planned population portals..."

IOW, Overpopulation, which seems to be about as politically suicidal as being caught with kiddie porn.

0
0

Re: will the rise of the Robots

Relax! There is no "rise of the robots" happening any time in our lifetimes. AI engineers have enough trouble getting machines to do what they want with the code they've written, let alone anything outside that. There will always be an evolution of machine/robot/AI technology and environments to make our lives easier, but there is always going to be a need for people to create and maintain them, plus they can't do anything that needs a human touch... and that covers a lot more of what we do in life than you would think!

0
0

Re: will the rise of the Robots

No-one has produced a machine that passes the Turing Test - at least not against an intelligent audience and without ridiculous boundaries.

0
0
Silver badge

Re: will the rise of the Robots

That's why the quotes. The idea was that these professional musicians couldn't tell the compositions were created by a computer instead of a human. It's not exactly a Turing Test, but it is in the spirit of its purpose.

0
0

Maybot to the rescue

In the UK we are uniquely blessed in writing such regulations, since our Prime Minister elect (hah!) May-bot is well known to be an advanced android who only recently narrowly failed the Turing test (failing when forced to show empathy to non-silicon-based life forms).

If anyone can write AI regulations, she/it can... Obviously they may not be in the best interest of the humans around, but then again this goes a long way to explaining some of her/its recent behaviour.

1
3
Silver badge

Re: Maybot to the rescue

"our Prime Minister elect(hah!)"

I take your ironic point that we don't directly elect Prime Ministers. However to style someone as '$OFFICE elect' indicates that although they have been elected to that office they haven't yet taken it up. It would have been correct to refer to Trump as POTUS elect between his election and inauguration, subsequently he became simply POTUS. By contrast, as May is PM your use of "elect", even ironically, is inappropriate.

"Simply POTUS" is left on the table for those of you who want to play with it.

1
0

Asimov

Who else?

3
0
Silver badge

Re: Asimov

My thought too - I was scrolling through the comments looking for someone else who remembered Asimov. Seems like the younger generation have never heard of him.

2
0

Re: Asimov

Don't we actually need Susan Calvin? She can then interrogate any dodgy car in question and get to the bottom of things?

1
0
Silver badge

1) AI just isn't that clever. It's snake-oil and statistics, which is why these things roll into ponds.

2) The liability for anything - until it's literally self-aware - is still with the manufacturer.

3) "I, Robot"-esque philosophy aside, if the product takes action that harms, it was in the wrong or designed badly.

4) Though the "through inaction" Asimov sub-clause is a well-thought-out literary device, in sensible terms it's stupid, impractical, impossible to implement and leads to only one logical conclusion - protecting humans from themselves (hence the 0th law of robotics!).

Sorry, guys, but this discussion is 50 years too early. At least. And you can't escape liability while you're selling a product that injures someone. You don't even escape liability if you put a real human in a school, say, who then hurts a child. Though product manufacturers would love to throw all liability out the window, when they do you are quite literally into "corporate manslaughter as a service".

The question is moot even with prototype technology.

If you hurt someone, you're responsible. Whether you're human or not. While the devices themselves are not self-aware and declared legally independent entities, they cannot take responsibility, so they are just devices produced by a corporation - and that corporation has full liability for anything they do while being operated correctly (as judged by a court).

You can't get away with "Well, the lorry shouldn't have pulled out in front of our car, we don't guarantee that we can avoid every collision, even with no driver's hand on the wheel", so the law is currently correct.

5
3
Silver badge

"3) "I, Robot"-esque philosophy aside, if the product takes action that harms, it was in the wrong or designed badly."

Even if it was forced into a situation where it had NO CHOICE but to cause harm? Look up "Trolley Problem" and other scenarios where there simply is no right answer.

2
0
Silver badge

Then it should TAKE NO ACTION.

Until it's something capable of reasoned thought, such that it could explain its reasoning in a court of law (i.e. decades away from happening).

In your thought-experiment example, the machine has no concept of whether the 5 people who die if it does nothing are terrorists chasing the one innocent person who would die if it pulled the lever.

Whichever way around you put the lever (i.e. to squish or not squish either party/parties in the absence of further command), it cannot make that decision in a reasonable manner without contextual understanding of the implications.

Until it's capable of that reasoning, and it's proven in a court to be that capable, the MACHINE should not be left in any position where inaction will cause more harm than ANY SPECIFIC ACTION. This is why industrial controls are "fail-safe", etc.

Even then, it's a horribly contrived situation with no right answer (i.e. even a human would struggle, having to make a very, very quick split-second decision and get the right answer, e.g. squishing the cop chasing the group of muggers instead of the muggers because it's "fewer people dead" - and a court would recognise that and hold them pretty blameless).

It's either responsible for all its own actions (in which case it gets brought before a court as an independent entity and has to find its own representation, etc. and the manufacturer won't defend it or take responsibility for it) or it's not (in which case it's a machine made by a company which gave it poor defaults and put it into a situation where it was required to think when it wasn't capable of that).
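The fail-safe default argued for above can be sketched in a few lines. This hypothetical action selector (the threshold and action names are invented for illustration) refuses to pick any harm-causing option unless the model's confidence clears a bar, and otherwise falls back to stopping - the same principle as fail-safe industrial controls:

```python
# Sketch of a fail-safe default: unless the system is confident about a
# specific action, it takes no action other than the safe one.

SAFE_ACTION = "stop"

def choose_action(scored_actions, confidence_threshold=0.9):
    """scored_actions: dict of action -> model confidence in [0, 1]."""
    best_action, best_score = max(scored_actions.items(),
                                  key=lambda kv: kv[1])
    if best_score < confidence_threshold:
        # Not sure enough to choose who gets hurt: fail safe instead.
        return SAFE_ACTION
    return best_action

print(choose_action({"swerve_left": 0.55, "swerve_right": 0.40}))  # stop
print(choose_action({"brake": 0.97, "swerve_left": 0.02}))         # brake
```

The design choice is the point: the machine never "resolves" a trolley problem; below the confidence bar it simply refuses to play.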

0
0

Look up "Trolley Problem" and other scenarios where there simply is no right answer.

The Trolley Problem is easy, just pull the lever part way so that the trolley derails - therefore there are no casualties on either fork of the track.

2
0
Silver badge

Then the passengers ON the trolley die. No third option.

1
0
Silver badge

"Then it should TAKE NO ACTION."

Except INACTION is actually an action in and of itself, just as it is still a choice not to make a choice. Sometimes, fate throws a curveball, and there is no happy ending. All you can choose is who dies. Doing nothing simply makes a choice of who dies.

1
0
FAIL

Yep

The actual recipe for AI is 10% statistics, 20% snake oil and 970% media hype.

Also, trying to apply 3 or 4 ridiculously simple laws (wishes, more accurately) to autonomous systems won't work, whether they're a robot, a corporation or any other kind of complex cybernetic entity - "Hey, petrochemical corporation - don't harm humans or by your inactions allow harm to come to humans".

Yeah, that'll work.

1
0
Silver badge

"Then the passengers ON the trolley die."

No, they can jump off. Didn't anybody tell you they were wearing crash helmets & protective clothing?

0
0
Silver badge

Which won't help much since they'll just be jumping into the path of OTHER vehicles.

0
0

That's not a difficult one either - if you pick up your hammer, walk up to your neighbour's cat and start hitting it, who's responsible for that, you or the manufacturer of the hammer?! It's exactly the same scenario in your example.

0
0
Silver badge

A closer analogy would be that you accidentally kick a hammer hidden in the tall grass (no foreknowledge), and it flies up and hits the cat. Now it gets murky. Are you at fault for not being perceptive enough? Is the owner of the hammer at fault for not keeping track of it (since he/she may not have made the move that hid it in the grass)? Is the manufacturer at fault for not making the hammer easier to see? There's enough wiggle room that any of those three liabilities can apply.

0
0
Silver badge

Liabilty? No difference!

> “If an autonomous system acts to avoid a group of school children but then kills a single adult, did the system fail or perform well?”

This is a pretty simplistic situation as the answer is the same as it is for human driver/operators today: vehicles should not travel so fast that they cannot stop safely. If that means an AV needs to regulate its speed down to a crawl, then so be it. Since that is what a responsible human-driver would do.
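The "never travel so fast that you cannot stop safely" rule is just arithmetic: total stopping distance is reaction distance plus braking distance v²/(2a). A quick sketch - the reaction time and deceleration figures are illustrative assumptions, not regulatory values:

```python
# Back-of-envelope stopping distance: reaction distance (v * t_react)
# plus braking distance (v^2 / 2a). Assumed figures: 1.5 s reaction,
# 6.5 m/s^2 deceleration on dry tarmac.

def stopping_distance_m(speed_kmh, reaction_s=1.5, decel_ms2=6.5):
    v = speed_kmh / 3.6                    # convert km/h to m/s
    return v * reaction_s + v * v / (2 * decel_ms2)

for speed in (20, 30, 50):
    print(speed, "km/h ->", round(stopping_distance_m(speed), 1), "m")
```

Since braking distance grows with the square of speed, an AV that must guarantee it can stop within its sensing range around school children really does end up at a crawl - which is the responsible human driver's answer too.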

And the same judgements regarding liability pertain to when someone runs out in front of a fast moving vehicle. If the act was unforeseeable then there can be no blame.

But AVs offer the possibility of having much more forensic-quality information available to back up their case. Rather than "he said - she said" type disputes, there will be the ability to re-run all the recorded events leading up to an incident. There should therefore be far fewer cases of disputed liability, though I am sure that to start with there will be many more cases of people trying to claim compo fraudulently.

3
0
Silver badge

Re: Liabilty? No difference!

"Though I am sure that to start with there will be many more cases of people trying to claim compo, fraudulently."

Especially when a market forms to figure out how to TRICK the sensors...

0
0
Silver badge

Re: Liabilty? No difference!

If you take the phrase:

“If an autonomous system acts to avoid a group of school children but then kills a single adult, did the system fail or perform well?”

and replace "autonomous system" with "human driver", what you almost certainly have is a case of Driving Without Due Care and Attention, or Causing Death by Reckless Driving. It's an open-and-shut case of driving too fast for the conditions. You're supposed to be able to anticipate that the group of school kids might move so as to be in your way. If they already were in the way (say, crossing the road on a blind corner), the driver has no defence whatsoever.

So if an autonomous system does it, the manufacturer of that system has failed to build in enough anticipation into the machine's abilities. If it happens just once, all our self driving cars will then start crawling through town at a snail's pace, just in case.

==EDIT==

Er yes, what Pete 2 said.

The difference is that I think humans are far better at anticipating what other humans will do than any machine will ever be. A group of well behaved school kids walking tidily down the pavement emits a completely different set of warning signs to a bunch of kids mucking about. You're wary of the former, you're down right paranoid about the latter.

1
0
Silver badge

Re: Liabilty? No difference!

> all our self driving cars will then start crawling through town at a snail's pace, just in case.

Which brings us nicely to the second part of the unfolding AV saga (part 3 would deal with: what crime has been committed, and where should the offence be prosecuted?).

But let's not get ahead of ourselves. Part 2. The ability of individuals to hold up traffic indefinitely would soon need to be addressed - within minutes of the first herd of AVs hitting the streets. We clearly cannot have pedestrians simply walking out to cross the road at will, safe in the knowledge that whatever they do, the AVs will be forced to bend to their will. So what will happen is that people will become subject to much more stringent "jay walking" regulations. Just to keep the traffic flowing.

So it will become illegal to walk in the road just because you feel like it. And just as illegal to cross except when the traffic lights permit. The AVs will enforce this law with their 360° cameras and the police's facial recognition systems.

2
0
Silver badge

Re: Liabilty? No difference!

"This is a pretty simplistic situation as the answer is the same as it is for human driver/operators today: vehicles should not travel so fast that they cannot stop safely."

But sometimes there's NO safe speed, because you're FORCED into a no-win situation. Suppose, for example, something extreme occurs, like a car suddenly crossing over and "ghost driving" straight towards you. You really can't account for everything because, even if you do nothing out of indecision, you could get hit from behind. This is Book of Questions territory here (a book full of situations with no right answer), but we're expecting AI to come up with an answer where WE can't (by attaching legal/criminal liability).

0
0
Silver badge

It never rains but it pours ...

She suggests there is a precedent too, referring to when communications regulator Ofcom regulated for the convergence of telecoms, audiovisual and radio before it happened.

Command and Control a platform converging telecoms, audiovisual and radio …. [and RT/BBC/Rupert Murdoch type enterprises spring immediately to mind] … and you can easily create and present to clamouring masses belief in any number of different virtual realities being manufactured.

And whenever you are not very good at it, are you left behind to try and counter progress with conflicting stories and fake tales leading one down false trails.

And that appears to be the sub-prime course the West and her allies and servants have taken, and recently reinforced with this attraction/distraction …… https://euvsdisinfo.eu/about/ ….. although many will realise it is a systemic weakness shared.

Such then presents further vulnerabilities for deep base coded exploitation.

0
0

But who writes the New Robot Rules?

Well, that should be me, using my Enemies List as a guide.

And I'll also try to do a better job on not confusing the Enemies and Christmas Card lists.

2
0
Silver badge

Alien Supplies to Earthed Assets .... via Open Cast MetadataMining Operations

Care to Offer Immaculate Virtual Guidance with NEUKlearer HyperRadioProACTive IT, Etatdame?

An ICO for Global Operating Devices IntelAIgently Designed to Hack Crack Systems of Enjoyment would be a Grand Beginning and Open Opening Movement.

Register interest here for Quantum Communications Know-How.

..............................................

1
0
Silver badge

Re: Alien Supplies to Earthed Assets .... via Open Cast MetadataMining Operations

amanfromMars, proving that AI still isn't that smart.

0
0

Been there, Done that, in a limited sense

Consider this case: you have a state engine to implement, and use a genetic algorithm (quite 80's really).

Software fails in the field with an important customer, and the bosses come to you asking when it will be fixed. It just doesn't sit well to say "when the competition for best solution wins in the genetic lottery, sometime or other... don't call me, I'll call you when the danged thing gets a winner". No, this just won't do, so I implemented the state engine the old-fashioned sweaty way, grinding out code.
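For readers who never met one, a genetic algorithm of the sort described really is a lottery: candidates compete on fitness, the fittest reproduce with mutation, and the number of generations to a perfect solution is not knowable up front. A bare-bones sketch (the target bitstring and all parameters are invented for illustration):

```python
import random

# Minimal genetic algorithm: evolve a population of bitstrings toward a
# target. When it finishes depends on the "genetic lottery".

random.seed(42)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(candidate):
    # Number of bits matching the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Flip each bit with probability `rate`.
    return [1 - bit if random.random() < rate else bit for bit in candidate]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
generation = 0
while max(fitness(c) for c in population) < len(TARGET):
    generation += 1
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                       # selection
    population = [mutate(random.choice(parents))   # reproduction + mutation
                  for _ in range(20)]

print("perfect solution after generation", generation)
```

You can't tell the boss in advance what that final number will be - which is exactly why the commenter ground out the state engine by hand instead.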

Fast forward. The shiny ALV just cleaned off a whole sidewalk of pedestrians. How do you patch that pile of AI ware that "learned on its own", before another sidewalk is cleared of meat bags?

1
0
