There are two categories of algorithm that need to be distinguished in this discussion. The first is the traditional algorithm that follows a well-defined set of steps to produce a result, such as an encryption algorithm. The second covers algorithms built on deep learning neural networks, such as those Google uses for image classification.
The first category is the software equivalent of a mechanical device - it does exactly what its user asks it to do (assuming no bugs). The second, which is what I think the article's author is concerned about, is more like the software equivalent of a dog - it can be trained to do what we want, but it is not entirely under our control.
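To make the contrast concrete, here is a minimal sketch in Python, assuming scikit-learn is available; the XOR toy problem is only a stand-in for any learned task:

    import codecs

    from sklearn.neural_network import MLPClassifier

    def rot13(text: str) -> str:
        # First category: deterministic, every behaviour traces to an explicit rule
        return codecs.encode(text, "rot_13")

    # Second category: behaviour comes from training data, not written rules
    X = [[0, 0], [0, 1], [1, 0], [1, 1]]  # toy inputs
    y = [0, 1, 1, 0]                      # toy labels (XOR)
    model = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                          max_iter=5000, random_state=0)
    model.fit(X, y)  # what the model "does" now depends on the data and the training run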
While we can test, and even examine line by line, software in the first category, examination is fundamentally impossible for the second. Its learning is distributed across myriad weighted connections, which makes it impossible to inspect for "correctness".
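Continuing the sketch above, peeking inside the trained model shows only arrays of floating-point numbers; nothing in them reads as a rule you could audit:

    # The model's entire "knowledge" is these weight matrices of floats.
    for i, w in enumerate(model.coefs_):
        print(f"layer {i}: weight matrix of shape {w.shape}")
    # No individual number here can be called correct or incorrect;
    # only the model's behaviour on inputs can be judged.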
The solution, I suspect, is to test these programs the same way we would test a living organism - by giving them an exam. An autonomous vehicle, for example, would need to pass a comprehensive driving test. Software that gives financial advice should likewise be put through tests similar to those a human doing the same job would have to pass. So perhaps the author is correct - these programs need to be subject to the same sort of laws that apply to us.
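Such an "exam" might look like the sketch below; give_exam, the exam_cases format, and the 95% pass bar are all illustrative inventions, not any real licensing standard:

    def give_exam(model, exam_cases, pass_rate=0.95):
        """Pass the model if it answers enough held-out cases correctly."""
        correct = sum(
            1 for inputs, expected in exam_cases
            if model.predict([inputs])[0] == expected
        )
        return correct / len(exam_cases) >= pass_rate

    # Usage: exam_cases would be scenario/required-decision pairs that a
    # human examiner or regulator has signed off on.
    print(give_exam(model, [([0, 1], 1), ([1, 1], 0)]))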