> 6000 civilian deaths
I couldn't help but notice that the article does make the point that the AI is highlighting what may be interesting images or video, and that the analyst then examines them to see whether they really are of interest. The article also points out that before any AI pre-selection, the human analyst would have looked at all of the images/video before choosing items of interest. At no point does the article claim that the AI is choosing the items of interest itself and then acting on them.
It's an interesting hook to hang the AI ethics debate on, but I don't actually see anything specific in the article which points at the AI as being the cause of civilian deaths. Is the AI making strong recommendations? Is a lack of context in what the AI selects leading analysts to see what might not be there? Is there some human failing by the analysts, a "computer says yes" effect biasing their decisions? Or does AI have nothing to do with this, and it's pressure from the higher-ups, both military and political, to "get results" that leads to going after targets with lower levels of confidence?