Current laws apply
For ML and related disciplines, it's not the new laws you need to worry about, it's the existing ones.
If your decision system is a black box, you should test it thoroughly to make sure it won't make a decision that would be illegal if a human made it. Testing that the system can't be gamed or manipulated would be wise too.
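One simple form of that testing can be sketched as a counterfactual check: flip only the protected attribute and verify the decision doesn't change. Everything below is invented for illustration (the `score` function is a stand-in for whatever black box you actually deploy), but the audit pattern itself is generic.

```python
# Hedged sketch of a counterfactual fairness check. `score` is a hypothetical
# stand-in for the black-box model; a real audit would call the deployed system.

def score(applicant):
    # Toy scoring logic -- deliberately ignores the protected attribute.
    base = 0.5
    base += 0.2 if applicant["accident_free_years"] >= 5 else -0.1
    base += 0.1 if applicant["annual_mileage"] < 10_000 else 0.0
    return base

def counterfactual_flip_test(applicants, protected_key, values):
    """Return every applicant whose score changes when only the protected
    attribute is swapped -- any hit is a potential legal problem."""
    failures = []
    for a in applicants:
        for v in values:
            if v == a[protected_key]:
                continue
            flipped = {**a, protected_key: v}
            if score(flipped) != score(a):
                failures.append(a)
    return failures

applicants = [
    {"gender": "F", "accident_free_years": 6, "annual_mileage": 8_000},
    {"gender": "M", "accident_free_years": 2, "annual_mileage": 15_000},
]
print(counterfactual_flip_test(applicants, "gender", ["F", "M"]))  # → []
```

An empty result only shows the model is insensitive to the attribute *directly*; it says nothing about proxies, which is exactly the harder problem discussed below.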
But many people avoid that testing, because it shows that the magic black box doesn't actually work very well. Or more precisely, that the training data was insufficient. That's how you end up with facial recognition software that can't tell the difference between a gorilla and a black person: the data set it was trained on contained almost exclusively images of white people.
Even more fun are anti-discrimination laws. You are not allowed to discriminate based on various protected categories*, even if there is a factual basis for the conclusion. You are also not allowed to infer someone's protected category from other information. So you can't offer a woman cheaper car insurance because she is female and females have a lower rate of accidents. You also can't "ignore" gender, but then use the fact that she changed her name when getting married to conclude she is likely to be a woman, and thus cheaper to insure.
You can, if you're clever, find other ways around this**. But you have to be able to show that your application is actually using the permitted information, and not the protected category. So relying on any black box system in any way is asking for trouble. Worse still, that black box might well be drawing factually correct conclusions that you are not legally allowed to use, without you even being aware of it.
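One way to spot that kind of hidden inference is a "proxy probe": check how well a supposedly neutral feature reconstructs the protected attribute on its own. The sketch below uses invented data and a deliberately crude majority-class predictor; the point is the pattern, not the numbers.

```python
# Hedged sketch: probe whether a "neutral" feature leaks a protected
# attribute. All records here are invented for illustration; a real audit
# would run this over your actual feature table.
from collections import Counter, defaultdict

def proxy_accuracy(records, proxy_key, protected_key):
    """Predict the protected attribute from the proxy alone, using the
    majority class per proxy value, and report accuracy. Accuracy well
    above the base rate means the proxy is leaking the attribute."""
    by_value = defaultdict(Counter)
    for r in records:
        by_value[r[proxy_key]][r[protected_key]] += 1
    correct = sum(c.most_common(1)[0][1] for c in by_value.values())
    return correct / len(records)

records = [
    {"had_name_change": True,  "gender": "F"},
    {"had_name_change": True,  "gender": "F"},
    {"had_name_change": True,  "gender": "M"},
    {"had_name_change": False, "gender": "M"},
    {"had_name_change": False, "gender": "M"},
    {"had_name_change": False, "gender": "F"},
]
print(proxy_accuracy(records, "had_name_change", "gender"))  # → 0.666…
```

If a single feature, or a combination of them, predicts the protected category much better than chance, a black box trained on those features can effectively use the category even though it was never given it.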
* In general, anything the Nazis would have put you in a camp for: race, religion, gender, age, membership of a political organisation, etc.
** For car insurance, you can look at the type of car being insured, individual accident history, income, annual mileage and driver monitoring data, which together usually produce a more accurate prediction than the simple gender+age model.