Engineers, coders – it's down to you to prevent AI being weaponised

John Brown (no body) Silver badge

6000 civilian deaths

I couldn't help but notice that the article does make the point that the AI is highlighting what may be interesting images or video, and that the analyst then examines them to see if they really are of interest. The article also points out that, prior to any AI pre-selection, the human analyst would have looked at all of the images/video before choosing items of interest. At no point does the article claim that the AI is choosing the items of interest itself and then acting on them.

It's an interesting hook to hang the AI ethics debate on, but I don't actually see anything specific in the article which points to the AI as being the cause of civilian deaths. Is the AI making strong recommendations? Is there a lack of context in what the AI is choosing, leading analysts to see what might not be there? Is there some human failing by the analysts, whereby "computer says yes" is biasing their decisions? Or has AI nothing to do with this, and it's pressure from the higher-ups, both military and political, to "get results" that leads to going after targets with lower levels of confidence?
