File under: No Sh1t Einstein
"AI systems perform worse on data that they haven’t seen before."
AI experts, lawyers, and law enforcement urged US Congress to regulate the use of facial recognition technology during a hearing held by the House Committee on Oversight and Reform on Wednesday. The technical issues and social impacts of using AI software to analyse images or videos are well known. There have been repeated …
...and that's why, given that A.I. is famously 'hard', A.I. outdoors is even harder.
'Hard' in the I.T. sense of nearly impossible.
Outdoors is where you'll more often see things that you've not seen before. There's no training data set for everything outdoors. It's different every day.
"A.I. is hard."
"A.I. outdoors is even harder."
These observations can be chiseled into the granite walls of I.T. campuses everywhere.
The hopelessly naive Self-Driving fanboi crowd will eventually relearn this.
False Positives aren't a big deal. Innocent people will be occasionally dragged out of a restaurant, threatened with guns, violently handcuffed, thrown in jail for days or weeks, but then they'll eventually be released without charge (usually).
As long as they're not actually convicted, then there's really zero harm at all. Right?
/sarcasm alert
"False Positives aren't a big deal"
You jest of course, but this is the crux of a problem that many people seem to struggle with. A 100% success rate is impossible, so either the system is calibrated to catch as many people as possible (but will create lots of false positives), or the system is calibrated to be as sure as possible of its results (resulting in many false negatives).
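That calibration trade-off is easy to see in miniature. A minimal sketch, using made-up match scores (all the numbers below are hypothetical, purely for illustration): sweeping a single decision threshold over the scores shows that lowering it manufactures false positives while raising it manufactures false negatives.

```python
# Hypothetical match scores for people who are NOT the suspect (should be
# rejected) and people who ARE (should be flagged). Invented for illustration.
innocent_scores = [0.2, 0.35, 0.5, 0.55, 0.6]
guilty_scores = [0.45, 0.6, 0.7, 0.85, 0.9]

def outcomes(threshold):
    """Count errors at a given decision threshold."""
    false_positives = sum(s >= threshold for s in innocent_scores)
    false_negatives = sum(s < threshold for s in guilty_scores)
    return false_positives, false_negatives

for t in (0.4, 0.6, 0.8):
    fp, fn = outcomes(t)
    print(f"threshold={t}: {fp} false positives, {fn} false negatives")
# A low threshold catches every suspect but drags in innocents;
# a high threshold spares the innocents but lets suspects walk.
```

There is no threshold in this toy example that makes both error counts zero, which is the commenter's point: you only get to choose which kind of mistake you prefer.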
Not sure if I remember correctly, but wasn't it one of the US founding fathers who said something like (paraphrasing generously here) "better 10 guilty men go free than 1 innocent man be jailed"?
William Blackstone, hence "Blackstone's Ratio". You're right about the 18th century, though he was a British jurist rather than a US founding father.
Henry Blackstone (also an 18th-century British legal scholar) was William Blackstone's nephew. It was William who wrote Commentaries on the Laws of England and coined the Ratio. He formulated it as "It is better that ten guilty persons escape than that one innocent suffer".
Benjamin Franklin cranked it up by an order of magnitude, to 100 guilty : 1 innocent. But Franklin liked to be hyperbolic.
either the system is calibrated to catch as many people as possible (but will create lots of false positives), or the system is calibrated to be as sure as possible of its results (resulting in many false negatives)
I'm no fan of facial recognition systems, but I have to point out that the binary you present is not accurate.
In machine learning, researchers generally talk about "precision" (the fraction of returned results that are actually correct, so high precision means few false positives) and "recall" (the fraction of the desired results that were actually returned, so high recall means few false negatives).
Those are often combined in a single metric, the F-measure. F1 weights precision and recall equally; for other values of the β parameter (the subscript), the Fβ-measure returns a metric that assigns β times as much importance to recall as to precision. (So β > 1 values recall over precision - you're interested in more results, even if more of them may be wrong - and 0 < β < 1 values precision over recall, preferring fewer but more accurate results.)
With most classifying systems, you can decide what β value best matches your goals (possibly by using a held-aside corpus or other validation set). Then you can tune the parameters of your system to maximize that Fβ, again using an optimization process; gradient methods if the objective is well-behaved, or approaches such as Expectation Maximization and other, more complex training schemes for latent-variable or non-convex models.
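For concreteness, the metrics described above can be computed directly from error counts. A minimal sketch (the counts at the bottom are invented for illustration):

```python
def precision(tp, fp):
    """Fraction of returned results that are correct."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of desired results actually returned."""
    return tp / (tp + fn)

def f_beta(p, r, beta=1.0):
    """Weighted harmonic mean of precision and recall.
    beta > 1 favours recall; 0 < beta < 1 favours precision."""
    b2 = beta * beta
    return (1 + b2) * p * r / (b2 * p + r)

# Hypothetical confusion counts: 80 true positives, 20 false positives,
# 40 false negatives.
p = precision(80, 20)  # 0.8
r = recall(80, 40)     # ~0.667
print(f_beta(p, r, beta=1))  # F1: balanced
print(f_beta(p, r, beta=2))  # F2: recall-weighted, lower here since recall is the weak spot
```

Note that with β = 2 the score drops below F1 for this example, because the weighted mean is pulled toward the weaker recall figure; that's exactly the "which mistake do you mind more" knob the earlier comments are arguing about.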
But while the situation is more nuanced than a "prefer false positives" / "prefer false negatives" toggle, you and other commentators are of course correct that these systems will always be imperfect and hampered by biases in their training data, among other problems.
The hearing took place at the same time as Amazon shareholders tried to stop Rekognition being sold to law enforcement. The proposal was defeated, but the vote tallies were not immediately disclosed.
Would anyone expect otherwise? Corporates aren't about ethics but profits. AI still has a long way to go, and it's really not intelligence but pattern matching. But it has a catchy buzz phrase. <sigh>
And this is why the current definition of capitalism just doesn't work.
It cares about profit. It doesn't care about longevity, people, the environment. A new definition of how well a company is doing needs to be implemented that takes into account the harm the company does and subtract that from its profits for shareholder value...
But as long as the rich keep holding the majority stakes in companies, this probably won't happen. Monetary wealth is just so much more important than being able to breathe or being free...
According to pure capitalist doctrine, companies SHOULDN'T care about "longevity, people, the environment.". In pure capitalism, if the people/consumers actually cared about those things they wouldn't buy the product. In reality people don't give a shit. And THAT unfortunately is why we can't have nice things.
"According to pure capitalist doctrine, companies SHOULDN'T care about "longevity, people, the environment.". In pure capitalism, if the people/consumers actually cared about those things they wouldn't buy the product. In reality people don't give a shit. And THAT unfortunately is why we can't have nice things."
In Capitalism as defined historically, companies would care because the people who purchased the item lived in the same place as the manufacturer. If the manufacturer acts malevolently, the people can stop purchasing and put the company out of business. The problem is that the folks who defined capitalism never imagined globalism... that you can manufacture and pollute and basically just shiite on all those around you because THEY aren't your customers... some guy halfway across the world is. And he doesn't care. And that is the problem:
Capitalism does not work very well any more because products are not made where they are sold. The consequences of doing business (pollution, low wages, etc.) no longer matter because businesses aren't relying on those they are screwing to purchase the product. Business can be as evil as it wants and suffer no consequences on the sell side. Thus, Beijing and the Chinese countryside are blanketed by air pollution, and no one in the UK or US feels the brunt of that pollution enough to stop buying their products (even if they could).
When manufacturing location and destination markets are divorced, Capitalism has no incentive to play nice. Pure capitalist doctrine works locally, fails globally.
The problem is also that a lot of people do give a shit, but lose their enthusiasm when they realize that companies are controlled by shareholders who don't care about the company at all beyond the few bucks they can make in the short term. The biggest problem in our modern form of capitalism is that companies are increasingly controlled by short-term traders who care little about the future of the company and only about short-term gains, so they can pump the stock and then dump it for a few bucks of profit. The way everything is set up encourages shareholders to buy into a company and force shoddy, but heavily-hyped, products onto the market. Then, before the flaws in the product are noticed by the public, the shareholders have ridden off into the sunset with their piles of cash, heading off to ruin another company.
For the last few years, I've been in favor of moving away from a stock market system and replacing it with a corporate bond market. At least then you'd get investors who give at least a fraction of a shit about what happens to the company 5 or 10 years in the future.
"A new definition of how well a company is doing needs to be implemented that takes into account the harm the company does and subtract that from its profits for shareholder value."
Even under the old-fashioned definition, successful suits over those false positives, or fines for breaking privacy laws, can subtract from those profits. And on the subject of the latter: if any EU citizens are included in the training data, I'd expect that to invite GDPR penalties.
The big problem is that even if someone is acquitted after a false positive, their life is still in shambles. Some years ago I was running to get to a meeting, but unbeknownst to me, I was also running in the same direction as a suspect in a murder. I was held for 3 days until they figured out I wasn't their guy (I was arrested on Friday; they couldn't get an eyewitness in until Monday evening), but in that time I lost my job, everyone who knew me found out I was picked up as a murder suspect, and since the police never picked up anyone else for it, I got saddled with the stigma of being the sole suspect in a murder. Every time a prospective employer did a background check, they'd find my arrest and refuse to hire me, even though I was cleared of all suspicion. A lot of background check companies won't bother updating their databases, so even if the police expunge your record, it's still there in these companies' databases, and getting them to fix their mistake is a Kafkaesque / Sisyphean nightmare.
Several years of my life out the window because I ran from where I took lunch to the office. Wasn't even an important meeting anyway.
The problem is not that it works badly.
The problem is what it enables if it works well.
Smile for the nice Chinese social credit score system coming to a place near you.
You will smile for it, or you won't be able to get home, or a home or online, or log into a computer or phone. Get caught signing "ok" to a friend and get fired, lose your banking etc.
You think people in the West don't want this? Take another look at twitter.
"Take another look at twitter."
"Why would I do that? I took a couple of looks at twitter when it first started and concluded that twitter is for twits. Nothing that I've seen in the decade since has caused me to question that conclusion."
But I am sure that an awful lot of the 'Training Data' generated over that decade has reinforced the correctness of your initial conclusion to somewhere in the 99.999% region !!! ;) :)
This is my concern as well.
I know policing is a hard job. And I have respect for the people who take it on. But I strongly believe, unfortunately, that it has to remain at least as hard.
In order to keep society free, it is important that "the establishment" (of whatever type, colour, view, etc) cannot control the population. Government must be by consent.
That means that campaigners, researchers, journalists (formal and informal), radicals, politicians (formal and informal) and demonstrators must all be protected from identification (at least if they wish to be). Realistically that means it must be too hard/expensive to collect and identify them.
The most effective way, over the years, to achieve that seems to be to limit the resources of the tools (particularly, police, military and secret services) the establishment can bring to bear so that they will, at least, prioritise and be selective.
Yes, sorry, that means some small criminals will get away. That is what we mean when we say that liberty means giving up some safety.
"Yes, sorry, that means some small criminals will get away. That is what we mean when we say that liberty means giving up some safety."
But if a small criminal can get away, so can a big one as there's no way to tell the difference until after the fact, by which point many people (including your loved ones) could well be horrifically dead. Then what?
The solution is not to protect anyone - including the police, government officials, and everyone else. Surveillance data collected by government is public property and should be available to everyone. In the United States of America, this falls under the equal protection amendment. If the police can track members of the public, the members of the public should be able to track them and their political masters. After all, if they have nothing to hide....
I don't recall any particular political commentary in Midnight Riot. The first Rivers of London novel, yes? Am I forgetting something?
(I admit I had mixed feelings about MR. Not bad, but why did it take so long for Peter to make the association to a certain well-known fictional character? I've never seen that sort of performance in the flesh, as it were, but I recognized it on the first description. I'm always annoyed when I figure out aspects of a mystery or crime drama before the detectives.)
Michael Punke ... said at the time it has “not received a single report of misuse by law enforcement.” It’s difficult to verify that claim, however, considering that the police haven’t been transparent about how it’s used.
More to the point, is there any definition of what legitimate use and, by implication, misuse would be? Without that it would be impossible to make such a report.
There is an option here which would be simple and also allow properly managed deployment of systems like this.
You simply put into law that biometric storage/analysis systems are unlawful EXCEPT when specifically permitted, and only under the terms specifically listed. This prevents scope-creep while providing a route for these systems to go beyond the labs.
"...Stingray devices by their police to hijack mobile phone connections..."
As a public service, perhaps the police could do us a favor by connecting their Stingray devices directly to a local fiber optic connection. At least we'd get much better data connections to the internet while we are being tracked and surveilled.