The problem is you're unlikely to find a smoking gun in the algorithm itself. What you really care about is the training data.
But honestly, we don't need the algorithm to determine whether it's discriminatory. Credit-scoring algorithms, for example, are proprietary black boxes, yet their disproportionate burden on certain groups is well documented, and not only when people try to borrow money, but also when they rent housing or apply for a job.
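This kind of black-box auditing can be sketched in a few lines: compare decision rates across groups and flag a large gap. The group names and outcome data below are invented, and the 80% threshold is the US EEOC's "four-fifths rule," used here purely as one illustrative standard, not as the definitive test.

```python
# Hypothetical outcome audit: we never look inside the scoring model,
# only at its decisions broken down by group. All data is made up.

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_a", True), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False),
    ("group_b", False), ("group_b", True),
]

def approval_rate(group):
    """Fraction of approved decisions for one group."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate("group_a")  # 4 approvals out of 5 = 0.8
rate_b = approval_rate("group_b")  # 2 approvals out of 5 = 0.4

# Disparate-impact ratio: the four-fifths rule flags a selection rate
# below 80% of the most-favored group's rate as evidence of adverse impact.
ratio = rate_b / rate_a
print(f"approval A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:
    print("disparate impact flagged -- no model access was needed")
```

The point the sketch makes is the one in the text: the evidence lives in the pattern of outcomes, so a proprietary algorithm offers no shield against this kind of analysis.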