Robo-callers, robo-cops, robo-runners, robo-car crashes, and more

Here's a summary of this week's AI news, beyond what we've already covered. Oh no, not another robo-caller: The internet has been flooded with people raising questions about Google Duplex, an AI system that can supposedly make customer service calls on behalf of its human user. CEO Sundar Pichai announced the new feature on …

  1. tfewster
    Facepalm

    Google Assistant - Why would anyone use this?

    - To make a doctor's or dentist's appointment, where they insist you ring on the day. Current procedure is 1) Phone at 08:30 2) Receive busy signal 3) Hang up 4) Hit "Redial" and repeat 20-40 times. That just needs automating until a human picks up.
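The redial routine described above is a plain retry loop. A minimal sketch, assuming a hypothetical `place_call()` callable standing in for whatever telephony API you actually have (it returns True once the line stops being busy):

```python
import time

def redial(place_call, max_attempts=40, wait_seconds=2.0):
    """Keep redialling until a human picks up or we give up."""
    for attempt in range(1, max_attempts + 1):
        if place_call():
            return attempt          # connected: hand over to the human
        time.sleep(wait_seconds)    # busy signal: hang up, wait, try again
    return None                     # gave up; try again tomorrow at 08:30

# Example with a fake line that is busy for the first 19 attempts:
calls = iter([False] * 19 + [True])
print(redial(lambda: next(calls), wait_seconds=0.0))  # → 20
```

The 20-40 attempts in the comment map directly onto `max_attempts`; everything else is glue.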

    Uber: The sensitivity was dialed down to reduce “false positives,” aka objects that the car shouldn’t stop for...

    That's not AI - "Can I classify the object as non-dangerous?". It's not even sensible - A car will go over trash, anything big enough to hit needs action.

    1. Mark 85

      Those seem to be good uses. What is troubling is that Google is an ad slinger. There's probably more cash in this for them by selling the service for spam calls. I just hope it never gets a southern Asian accent and says it's from "Microsoft" and that you have a virus.

  2. frank ly
    Flame

    "... so dialed down the AI to ignore them."

    Did they test it, in real world lighting conditions with plastic bags and life sized mannequins, etc? Did they inform the driver that the software had been 'dialed down' in such a way?

    If not then it sounds like criminal negligence.

    1. Waseem Alkurdi

      Re: "... so dialed down the AI to ignore them."

      <scribble, scribble, scribble>

      Yes, we have. See? We really have.

      On another note, why do we assume that megacorps actually care about human lives? Remember that GM memo which put a price on human life (the Ivey memo)?

      1. Anonymous Coward
        Anonymous Coward

        Re: "... so dialed down the AI to ignore them."

        I would assume individually a person cares about life. But the corporate machine is one of moving tasks about... and it probably moves the task of caring about life to someone else... until such task falls off the production line.

      2. Davidcrockett

        Re: "... so dialed down the AI to ignore them."

        Government policy analysis does this all the time via cost benefit analysis. Statistical value of a life was about 1.6 million last time I checked.

        1. Anonymous Coward
          Anonymous Coward

          Cost of a Life

          Fire and Rescue Services do it too. They have software which works out the coverage of fire stations and appliances, estimates response times, statistically models the likelihood of fires and, when new housing developments go up, decides if they need to build a new fire station, buy another appliance, train more firefighters, etc.

          At the end of the day, it comes down to cost and as you say the statistical value of a life. Hint: it's about £1M.

          Source: I worked on an FRS project and saw the software in use (hence anonymous posting), though sadly didn't get to play with it myself.

    2. JimC

      Re: "... so dialed down the AI to ignore them."

      Well an AI that slams on the anchors every time a plastic bag blows across the motorway is going to cause some horrendous rear end pileups, bearing in mind how all the meatsacks drive dangerously close. So you can see the issue. There's definitely a very big risk in false positives. That being said this should still never have happened. The problem, more than anything else, seems to be with the monitoring driver. Project doesn't seem ready for minimum wagers who spend more time on the phone than watching the road, but that's not the Uber way...

      1. Paul Crawford Silver badge

        Re: "The problem, more than anything else, seems to be with the monitoring driver"

        The inability of a meatsack to respond to a sudden failure of an automated machine is NO SURPRISE at all to the folks here.

        No, the real problem is they seem not to have fsking tested the recognition system on a range of typical targets before going out on a drive. That is the criminally negligent part.

        Which also raises interesting points - what should the "driving test" for an automated car be before it is allowed on the road, and how should an MOT tester establish that all car sensors actually work (not just they self-test OK)?

  3. Anonymous Coward
    Anonymous Coward

    An airport pickup: just standing at the airport with a sign & calling

    "Mr Bill, Mr Bill"

    Pickup for "Mr Bill".

    Robbies everywhere, hahahahaha.

  4. Andy 73 Silver badge

    Atlas

    Boston Dynamics' machine appears to roam the desolate country looking for small obstacles to jump over.

    Lesser AIs would walk round them.

    1. Adrian 4

      Re: Atlas

      Or stride / leap over instead of stopping and jumping.

      But tbf, a five-year-old human would do just the same.

      1. Anonymous Coward
        Terminator

        Re: 5 year old?

        Oh dear, a 3 year old would be dangerous enough to destroy the world... what are we gonna do now?

        1. Destroy All Monsters Silver badge

          Re: 5 year old?

          ..what are we gonna do now?

          Buy guns. Lots of guns.

  5. Adrian 4

    AI journalist

    Did AI write the article ? Or just someone very tired and emotional ?

  6. This post has been deleted by its author

  7. Anonymous Coward
    Anonymous Coward

    "The model takes..."

    "the history of the conversation" into account?

    FINALLY. Since "chat bots" were a thing... I even posted/emailed a little company making and selling chatbots back in like 2000/2001, asking if their bot had any features to "recall the time, date, place, theme" of a conversation. Their answer was "why would you want that? You don't need it!"

    But even the teenage me could figure out that a chatbot which only "knows" the exact sentence spoken, and nothing else, is just an overblown dictionary. It also showed up their programming/con (it was probably an open-source chatbot they were selling anyhow ;) ), hence the railroading, denial-mode customer service response from them.

    But the strength of a system that can store, even rudimentary, data pertaining to a universal "task" or conversation would be very very powerful.
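The point above is really about keeping a data structure for conversation state. A minimal sketch, where the class, its methods, and the trigger phrases are all invented for illustration, of a bot that stores even rudimentary history and topic context:

```python
from datetime import datetime

class StatefulBot:
    """Toy bot that remembers history and topic, unlike a sentence-at-a-time matcher."""
    def __init__(self):
        self.history = []                 # (timestamp, utterance) pairs, in order
        self.context = {"topic": None}    # rudimentary task/conversation state

    def hear(self, utterance):
        self.history.append((datetime.now(), utterance))
        if utterance.startswith("let's talk about "):
            self.context["topic"] = utterance.removeprefix("let's talk about ")
            return f"OK, {self.context['topic']} it is."
        if utterance == "what were we discussing?":
            return f"We were discussing {self.context['topic']}."
        return "Go on."

bot = StatefulBot()
bot.hear("let's talk about appointments")
print(bot.hear("what were we discussing?"))  # → We were discussing appointments.
```

A pattern matcher with no `history` or `context` could never answer that second question; that gap is exactly what the comment is complaining about.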

    1. Anonymous Coward
      Anonymous Coward

      Re: "The model takes..."

      Yep, basically a "leaning/bias" toward a particular context in the neural net, something that I've been circling for a couple of decades now. Combine the Google Assistant with some sort(s) of robot, not just the Boston Dynamics type, and it would be a good start towards solving "the servant problem." The scary part is that it'd be the (ultra)rich that'd have it first. [Hell, we've got Siri, Alexa, and Google Assistant listening in all the time as is.] Anyway, it's an interesting problem, depending on your definition of interesting.

  8. TrumpSlurp the Troll

    Alexa, phone Bob's Siri and annoy it please

    With more complex software the possibilities are endless.

  9. handleoclast

    Uber is at fault, but...

    Sure, Uber is at fault. It's pretty much a given that whenever something goes wrong with Uber, it's Uber's fault. And this case is particularly bad, because they "fixed" something that wasn't broken. But...

    Having watched the video a few times, the woman seemed to come out of nowhere. Sure, it was a single camera, with a narrow-ish field of view. And the lighting wasn't good (or the video had been "downgraded" to make it look worse). But...

    The but? Three of them. It was night, the radar range equation, and our peripheral vision.

    It was night, so the car's headlights were on. That's clear from the video.

    The radar range equation says that the returned power is inversely proportional to the fourth power of the distance. The intensity of a radar (or light) beam is inversely proportional to the square of the distance; what little is reflected is subject to the same inverse-square law on the way back, so the returned power falls off as the inverse fourth power.

    Our peripheral vision is more sensitive to faint light, and to changes in intensity, than our central vision. The same is not true of cameras and AI. Even if she had kept her eyes straight ahead, she'd have seen the lights from the car in her peripheral vision.

    Where am I going with this? The light returned to the camera was inverse fourth power but the light seen by the woman was inverse square law, so a lot brighter. The car may not have seen the woman because of lighting conditions, field of view of camera, etc., but it's for damned sure the woman could see the car well in advance.
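The one-way versus two-way asymmetry above is easy to put numbers on. A quick sketch (relative intensities only, arbitrary units, illustrative distances):

```python
def one_way(d):
    """Light travelling straight from headlight to eye: inverse-square law."""
    return 1.0 / d**2

def two_way(d):
    """Light (or radar) emitted, weakly reflected, and returned: inverse fourth power."""
    return 1.0 / d**4

# The direct/returned ratio grows as d**2: 100x at 10, 400x at 20, 1600x at 40.
for d in (10, 20, 40):
    print(f"d={d:3d}  direct={one_way(d):.2e}  returned={two_way(d):.2e}  "
          f"ratio={one_way(d) / two_way(d):.0f}x")
```

Whatever the absolute calibration, the pedestrian's view of the headlights always beats the sensor's view of the reflection by a factor of distance squared, which is the commenter's whole point.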

    How can I be sure of this? In the past I lived in a rural location and occasionally had to walk twisty, hilly country roads. Roads with no pavement. Hedges dampened sound and blocked vision around bends. Which meant cars could come whizzing around corners without me being able to detect them until the last second and having to dive into a hedge. That was during the day. At night it was a very different matter. At night I could see the beams of the headlights from far away. Even if the car was coming up behind me the general increase in ambient illumination gave it away.

    Let me make this very clear. Even with the brow of a hill and a sharp bend in the road between us, I could detect a car coming up behind me at least 30 seconds before it passed by me. It would be harder to detect a car coming up behind me in an area with street lighting, but a piece of piss to detect one coming towards me. I would be aware of the car long before the driver was aware of me.

    The woman walked right into it. With every advantage over the driver and the automation, she walked right into it.

    OK, it's harder to judge distances at night. Even knowing that car headlights are a car-width apart, it's harder to judge distances. Which makes you more cautious, right? That little thing about not crossing the road if traffic is coming that was drilled into us as kids, right? If it's harder to judge distances you cross more cautiously, right?

    The only mitigation in her favour (maybe) is that few of us are taught a basic fact of geometry that is apparently not instinctual: if an object is in your field of view and maintains a constant angle to you, then you are on a collision course. I wasn't taught it, and was a little surprised as an adult when I read of it. It's apparently not instinctual in any vertebrate because birds and animals, as well as humans, get caught out by it. Even so, there was a car coming towards her. She could have waited for it to go by, just to be safe, but she walked right into it.
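The constant-bearing fact mentioned above can be sketched numerically. Speeds, times, and geometry here are made-up illustrative numbers, nothing from the actual incident:

```python
import math

# Two straight-line movers that will meet at the origin at the same time T
# keep a constant bearing to each other while the distance shrinks to zero.
v_car, v_ped, T = 15.0, 1.5, 10.0       # m/s, m/s, seconds until impact

bearings, distances = [], []
for t in (0.0, 2.5, 5.0, 7.5, 9.9):
    car = (-v_car * (T - t), 0.0)       # car closing on the origin from the west
    ped = (0.0, -v_ped * (T - t))       # pedestrian closing from the south
    bearings.append(math.degrees(math.atan2(car[1] - ped[1], car[0] - ped[0])))
    distances.append(math.hypot(car[0] - ped[0], car[1] - ped[1]))

print(bearings)    # the bearing never changes...
print(distances)   # ...while the separation collapses to zero
```

Because neither mover sees the other drift across their field of view, nothing "looms" until very late, which is exactly why the constant-bearing case catches people (and birds) out.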

    So yeah, Uber was at fault. But so was the woman.

    1. John Robson Silver badge

      Re: Uber is at fault, but...

      "So yeah, Uber was at fault. But so was the woman."

      Uber deliberately disabled the safety feature that was designed to stop this incident.

      The camera is not a reliable witness in terms of how visible things were - but there is no justification for running down someone who is crossing a road - she didn't 'appear out of nowhere' she was crossing the road well ahead of the vehicle.

      Any human driver would have seen and avoided her; the Uber vehicle saw and ignored her, and the Uber driver was screwing around on their phone.

      One charge of causing death by dangerous driving and one of manslaughter...

      1. LucreLout

        Re: Uber is at fault, but...

        Any human driver would have seen and avoided her; the Uber vehicle saw and ignored her, and the Uber driver was screwing around on their phone.

        The main problem was the meatsack in the driver's seat was looking at the road for maybe 1 second in 10, which isn't enough for them to take over. First you watch the road, then, when there's a spare second, you glance at the laptop, not the other way around. If they can't make that work then the vehicles need to be double crewed - a driver in the front, ready to take over, who does nothing but watch the road, and the nerd in the back tinkering with the lappy.

      2. handleoclast

        Re: Uber is at fault, but...

        The camera is not a reliable witness in terms of how visible things were

        I thought I mentioned that the camera view was limited, and possibly degraded in the video.

        What I didn't make explicit is that the video may have caused some people to assume the situation appeared reciprocal: the woman suddenly becoming visible on cam means that car was suddenly visible to the woman. That is far from the case.

        she didn't 'appear out of nowhere' she was crossing the road well ahead of the vehicle.

        Sure, a human driver ought to have seen her. The automation may have seen her and acted (if Uber hadn't disabled that feature). But she should have seen the car. There is no excuse for walking into a car in that situation.

        Oh, and she wasn't "well ahead" of the vehicle. If she had been then she would have reached the other side before the car passed her. She was on a collision course. That's not "well ahead" but "not far enough ahead."

        Since you failed to grasp the point, I'll make it again. The car was more visible to her than she was to the driver. No ifs, buts or maybes. This is simple optics. If Uber hadn't disabled the feature, she might not have been hit. If the driver had been paying attention she probably would not have been hit, difficult to be sure. But if she had been paying attention she would not have been hit.

        There's no excuse for walking into the path of a car, at night, when the headlights mean it is highly visible. The car was more visible to her than she was to the driver. No excuse. None.

        Uber fucked up. The driver fucked up. But the woman fucked up even bigger. We're each responsible for our own safety. Anybody crossing the road dangerously on the assumption that a driver will take compensating action is a fucking idiot.

        1. John Robson Silver badge

          Re: Uber is at fault, but...

          "Since you failed to grasp the point, I'll make it again."

          I get the point - but since you fail to get mine I'll make it again.

          The car was deliberately put onto the road in a configuration where it was told to ignore objects in the road. The safety driver who should at all times have been paying attention was looking at their phone.

          The pedestrian was perfectly well visible, and should not have been hit - the fault is entirely that of those who bring the lethal weapon to the collision.

          IFF she had jumped backwards off the kerb in front of the vehicle then I'd agree with you, but I'll wager that you rely on cars staying in lane, and stopping at roundabouts etc. all the time.

          Was it the best time to choose to cross? I have no idea. Would it have been safe if the driver had been doing what they were meant to? Yes.

          Most importantly - is the appropriate penalty for a possible lapse in judgement death? No.

          I don't know the victim, I don't know what her vision was like, I don't know what her hearing was like...

          I do know that the company responsible for detecting and avoiding obstacles made a very large mistake, and that the safety driver they employed (specifically to mitigate against such errors) made an even bigger one.

  10. Flywheel

    "Poor quality of images made up the watch list"

    I regularly try to read the "Caught on CCTV - do you know these people?" section of my local rag and usually end up with streaming eyes due to the potato "quality" of the CCTV images. Most of them seem to utilise 1990s webcams and the resulting images aren't always identifiable as people, let alone people I might know.

    At the other end of the scale is a town where I used to live that was literally bristling with CCTV, but on the very occasion I needed evidence after my car was vandalised, I was told that many of the units weren't switched on and not all those that were enabled were actively monitored by the Council. We have nothing to fear, for now...
