The problem here is not the weaponization of AI, but the lack of real weaponization. AI is being used as something like a "smells like terrorism" test, and then humans take that output and push a button. There is no feedback to the software that it's done the wrong thing!
When AI is applied to warfare, it should be used the same way as carpet bombing or an Arc Light strike: let loose, and stand back. You want the target destroyed by software? It gets destroyed by software. It is the responsibility of those on the trigger, and of those in charge of them, not to pull the trigger or give the order!
In WWII, the USSR used radio-controlled flamethrower tanks because the Finns were so good at killing tanks with humans inside them. These days we are using remote-controlled mini-bombers.
If the military is going to kill people based on someone scratching their ass the wrong way, or on their shopping habits, then the program is fully in the "Dr. Evil" realm, no two ways about it. This isn't about "the fog of war," because the U.S. isn't in a declared war. Our borders are not in Syria. One does not solve a problem by random approximation.
Let the AIs fully fight the war, if they are going to be brought into it at all. Otherwise, the humans should take full responsibility for their actions.