The False Positive Rate is NOT 98%
I find it a bit upsetting that this totally wrong statistic keeps being repeated by news sites I expect better of (the Register, Boing Boing).
Suppose you had a cancer detector such that, if you tested 100 people who did not have cancer, it said “NO CANCER” to 99 of them, and “CANCER” in error to 1 of them. You would say that it had a false positive rate of 1%. You'd think it was working pretty well.
And if, when you tested 100 people who did have cancer, it said “CANCER” to 99 of them and “NO CANCER” to 1 of them, you would call that a false negative rate of 1%. And that's not bad either.
If you then tested a population of 10,000 people, 9,900 of whom did not have cancer and 100 of whom did, you would expect there to be 99 false positives (1% of 9,900) and 99 real positives (99% of 100). That's 198 positive results, of which 99 are accurate and 99 are wrong.
So 50% of the people the test says have cancer, don't. If the test says that you have cancer, you've a 50/50 chance of actually being fine. This is NOT because there's anything wrong with the test: as we saw, the test gets it right 99 times out of 100. The false positive rate is only 1%. It's because you're using the test on a population with a very low "base rate". This is the base rate fallacy: https://en.m.wikipedia.org/wiki/Base_rate_fallacy
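The arithmetic above can be checked in a few lines. This is a sketch using the article's hypothetical numbers (1% false positive rate, 1% false negative rate, a 1% base rate in a population of 10,000); the variable names are mine:

```python
# Hypothetical figures from the cancer-test analogy above.
population = 10_000
base_rate = 0.01             # 1% of people actually have cancer
false_positive_rate = 0.01   # P(test says CANCER | no cancer)
false_negative_rate = 0.01   # P(test says NO CANCER | cancer)

have = population * base_rate          # 100 people with cancer
have_not = population - have           # 9,900 people without

true_positives = have * (1 - false_negative_rate)   # 99
false_positives = have_not * false_positive_rate    # 99

# Of everyone the test flags, what fraction actually has cancer?
precision = true_positives / (true_positives + false_positives)
print(precision)  # 0.5 — a coin flip, despite the 1% false positive rate
```

The point the calculation makes is that "fraction of flagged people who are actually positive" is a different number from the false positive rate, and it depends heavily on the base rate.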
What these headlines have done is the equivalent of taking these stats - the output of a system with a 1% false positive rate and 1% false negative rate - and saying “Oh, there are 99 false positives and 99 real positives, so the false positive rate is 50%”. Which is stupid.
If you had such a system - one that is 99% accurate - would you decide that it's useless and throw it away, just because a positive result on a population with a low base rate doesn't necessarily mean that you've found what you're looking for? Or perhaps, if you weren't a total fucking moron, you might use it as a tool on populations that you already suspect of having a higher base rate - in our analogy, testing people who have shown symptoms rather than screening the population at large - whilst understanding its limitations.
Back to the plot. The police could usefully and sensibly use this to look for one person in particular in a small-to-medium crowd they already expect that person to be in. If the person they are looking for is not in that crowd, then they might get some positive results, but obviously the ratio of "false positive" to "true positive" results is infinite, because there are no true positives, because the person isn't there. That still doesn't make it worthless.
By all means object on the grounds of civil liberties. By all means say that the police shouldn't be allowed to use this technology, and should just stick with super-recognisers in front of CCTV screens. But don't bang on repeating the same nonsense about the "98% false positive rate" because you're too lazy to understand how it works.