There are two distinct problems here. A false positive can be handled by human staff, though the system design and training need to be better. Today the human staff at airport security often seem to be the big problem, and some people see the AI tech as a solution to that. To the people working on it, it isn't a simple answer, but at the more political levels of decision-making it comes close to Mencken's "neat, plausible, and wrong."
The false negatives are where it gets dangerous. I can't see any way of avoiding them without keeping the existing human-based monitoring. So an AI-based system is something that can maybe be added in parallel, but it won't save money. It will likely also need continuing professional development, just like a human-based system.
It's about a hundred years since Mencken wrote his line. Maybe that is an example of a deeper trend: in any field, the simple answers that work get identified early in its history. Is the marker of a mature field that new, working, simple answers are rare?
I tried looking for a quote on that; it isn't simple.