AI advocates tell us the algorithms will "self-learn". On the surface, that sounds very impressive. But it raises the question: learn what? How will developers know what rules their AI software is applying if the software has taught itself?
The more decision-making is delegated to AI, the more vulnerable companies become, because untangling the code and deciphering the AI's "self-taught" rules will be a nightmare. Setting aside the issue of frustrated customers who might be denied products or services, there are serious legal risks in the areas of perceived discrimination and product safety.
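To make the opacity point concrete, here is a minimal sketch of what a "self-taught" rule actually looks like inside the software. The scenario, feature names, and data are entirely hypothetical: a toy model learns to approve or deny applicants, and while it classifies its training examples correctly, the "rule" it has taught itself is nothing but a handful of numeric weights that no one wrote and no one can read off as a policy.

```python
# A toy "self-learning" decision-maker (hypothetical data and features):
# a tiny logistic-regression model trained by gradient descent.
import math

# Hypothetical applicants: (income_score, debt_score) -> approved (1) or denied (0)
data = [((0.9, 0.1), 1), ((0.8, 0.3), 1), ((0.7, 0.6), 1),
        ((0.4, 0.8), 0), ((0.3, 0.5), 0), ((0.2, 0.9), 0)]

w = [0.0, 0.0]   # learned weights -- the model's "rules"
b = 0.0          # learned bias
lr = 0.5
for _ in range(2000):  # plain stochastic gradient descent on log-loss
    for (x1, x2), y in data:
        p = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        err = p - y
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

def approve(x1, x2):
    """Apply the learned decision rule to one applicant."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The model reproduces every training decision correctly...
print([approve(x1, x2) for (x1, x2), _ in data])
# ...but its entire "policy" is three opaque numbers nobody authored:
print(w, b)
```

With six inputs and two features, a lawyer could still eyeball the weights; scale that to millions of parameters in a deep network and the engineer's honest answer becomes exactly the one foreseen below: we know what it decided, not why.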
For example, if a fully autonomous vehicle is involved in an accident, how will the manufacturer demonstrate that the vehicle behaved correctly? The more factors involved in the AI's internal decision-making, the less transparent and obvious its reasoning becomes. At some point, I can well foresee a company software engineer saying, in court, "We know what it did, but we don't know why it did it."
Oops, "Bad answer!" Get your checkbook out.