Home Office QA:
"Have you extensively tested the AFR on all face types and colours?"
"Does it work?"
"We'll buy it."
It's another Reg summary of recent AI news.
El Reg's subheading:
UK gov launched a facial recognition system that didn’t work on darker skin: The British government knowingly rolled out a facial detection system that scans passport photos even though it was less accurate on people with darker skin.
The quote within the article: “User research was carried out with a wide range of ethnic groups and did identify that people with very light or very dark skin found it difficult to provide an acceptable passport photograph,” the New Scientist first reported.
So it isn't a big racist "screw those black people" as you've tried to imply with the subheading; it doesn't do well with very light or very dark skin. Now don't get me wrong, it's pretty poor that the one job of the system (matching faces) doesn't work well if your face isn't the right shade, but it's also pretty poor how it was reported here too.
Yes, I agree that the term 'racist' is being used far too regularly, and not just by the Register but by pretty much all media, apparently for shock value. The problem is that when we overuse such terms we devalue their true meaning. In my view, for someone or something to be 'racist' there needs to be an intent to discriminate (the 'ism' part), whereas in this example and many others it's simply inaccuracy that is the underlying problem. I'd even go as far as to suggest (not racist, but I will be attacked by those who can't think for themselves) that non-Caucasian persons tend to have less variance in distinguishing features. For example, 'black' people tend to have similar hair colour (black) and similar eye colour (black), whereas 'white' people tend to have a wider range of such distinguishing features. Once again, this is a fact based on clearly observable differences. However, many people will struggle to accept this concept, as they seem to have been fully conditioned to truly believe that there is no such thing as race and we are 'all the same' despite clear and profound differences. As an aside, it's worth noting that any proposed research looking into such differences is immediately quashed, with the entire academic community apparently preferring the path of ignorance over the matter and the public being too brainwashed to think for themselves. It's another fact that Asian people have slightly bigger brains than Europeans or Africans, but can you imagine the absolute shock and hysteria that would be caused if this were printed in a newspaper?
The big problem is that, due to the media's fondness for the term, we are apparently all now 'racist' because we show 'unconscious' bias (a complete BS term) and gravitate to groups of people who look broadly similar to us rather than those with a different skin colour, antennae, six legs or an exoskeleton. I'd suggest that such preferences are exactly that, and can probably be attributed more to tens of thousands of years of evolution of independent populations than to 'racism'. I find recent claims in the media that even dating is now 'racist' to be especially concerning in this regard, as this pretty much makes *everyone* guilty of discrimination for only being attracted to people of their own ethnic group. In the same way, am I homophobic not to be attracted to men? Species-phobic not to be attracted to animals? Object-phobic not to be attracted to garden machinery?
So, dear media, please stop using the term 'racist' where there is no actual intent to discriminate, as this isn't racism but stupidity, inaccuracy, laziness or a combination thereof. Many thousands of years of the evolution of independent populations have also resulted in many differences that we need to acknowledge rather than ignore. We are *NOT* all the same, and to suggest we are is, erm, racist....
LDS, aesthetic qualities DO make a different race... Race is defined as "a grouping of humans based on shared physical or social qualities into categories generally viewed as distinct by society."
Can you see that last bit? "generally viewed as distinct by society" ?
Can you also point out where I implied at any point that any race was superior to another?
So if they're not different races then how come we're discussing racism?
Surely to imply that something is 'racist' we are suggesting that we have different 'races' to discriminate against? Hence race-ism. Once again the liberal ideology that "we are all the same" is to blame.
If you take a minute to Google the term "race" then the following definition is provided "A race is a grouping of humans based on shared physical or social qualities into categories generally viewed as distinct by society." So it would seem to be a method of classifying groups of people based on shared features and/or qualities.
As the previous poster mentioned in zoology the difference between entirely different animals can actually be quite small. Lions can even breed with tigers for example. I think the liberals would like to do away with 'race' altogether, to them it's a dirty word.
"In my view for someone or something to be 'racist' there needs to be an intent to discriminate (the ism part)"
Your personal opinion on the definition is not relevant. Racism exists where there have been advantages or disadvantages based on the *perceived* race of a person or persons.
There's nothing in the definition about 'intent', which is why structural racism or systemic racism is included in the definition: while everyone agrees "we don't hate black people", for example, it's commonly accepted that if black people routinely get disadvantaged by a specific process, technique or system, then that process, technique or system is racist whether or not anyone intended it to be.
Now, please attend a diversity awareness course for re-education.
"there needs to be an intent to discriminate"
At work we have automatic doors. Wave your hand in front of a glowing red sensor and the door opens. Unless you are anything other than pasty white, in which case the sensor simply can't see you.
Deficient design, certainly. But likely it was a mixture of "good enough" for the majority and more importantly "cheap enough".
Maybe this falls into some people's definitions of racism, but I wonder how a door can be racist, as opposed to just badly designed?
Dictionary definition of racism...
"prejudice, discrimination, or antagonism directed against someone of a different race based on the belief that one's own race is superior."
See the last bit? Where it says "based on the belief that one's own race is superior". Doesn't that suggest to you that intent is required?
I strongly doubt that farcical recognition tech is paying attention to your eye colour, otherwise it'd have as much trouble with brown-eyed white people as it does with black people. Older facial-rec systems often looked for an inverted triangle of dark patches (eyes and mouth) with the nose outlined by its shadow. The contrast between the 'canvas' of the face and the facial features is crucial for the AI. This is why it fails with dark skin, where the contrast is lower (or even reversed), or with very pale people, as their eyebrows blend into their face.
I agree, but if one's features (skin, hair and eyes) are all the same colour, doesn't that mean there isn't much contrast at all? If you have pale skin, blond hair but brown eyes, then we have contrast.
What I really wanted to say originally is that there is more diversity in certain races than in others, but this would instantly be flagged as racist because such differences supposedly do not exist. I therefore chose an (admittedly flawed) example of different hair colour, eye colour and skin tone, as it is harder to apply the "differences don't exist" claim there.
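The contrast dependence discussed in this exchange can be sketched numerically. Below is a toy illustration of the "inverted triangle of dark patches" idea mentioned above; everything here (the region coordinates, the synthetic tones, the scoring function) is invented for illustration, not how any real passport system works.

```python
import numpy as np

def triangle_contrast_score(face, eye_l, eye_r, mouth, cheek):
    """Toy version of the 'inverted triangle' heuristic: eyes and mouth
    should read as darker patches than the surrounding skin. Each region
    is a (row, col, size) tuple indexing a grayscale image."""
    def patch_mean(region):
        r, c, s = region
        return face[r:r + s, c:c + s].mean()
    skin = patch_mean(cheek)
    features = np.mean([patch_mean(eye_l), patch_mean(eye_r), patch_mean(mouth)])
    # Large positive score = features clearly darker than skin;
    # near zero = the contrast the detector relies on isn't there.
    return skin - features

def make_face(skin_tone, feature_tone):
    """Synthetic 'face': uniform skin with darker eye/mouth patches."""
    img = np.full((100, 100), skin_tone, dtype=float)
    for r, c in [(30, 25), (30, 65), (70, 45)]:  # eyes and mouth
        img[r:r + 10, c:c + 10] = feature_tone
    return img

# eye_l, eye_r, mouth, cheek regions (row, col, size)
regions = ((30, 25, 10), (30, 65, 10), (70, 45, 10), (50, 45, 10))

light_score = triangle_contrast_score(make_face(200, 60), *regions)
dark_score = triangle_contrast_score(make_face(70, 60), *regions)
print(light_score, dark_score)  # 140.0 10.0 - far less signal on the darker face
```

With identical feature tones, the darker synthetic face yields a fraction of the contrast signal, which is the failure mode the posts above describe.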
It's very well known in photography that such people are harder to get a good image of, and that it requires special techniques a good photographer knows but other image-making equipment may not. So I'm not surprised that many of them could have submitted photographs of lower quality than those of people who are easier to get a decent image of.
It has nothing to do with racism, but with how light works. If you don't believe me, read the chapters about lighting portraits in "Light Science and Magic: An Introduction to Photographic Lighting", which explains how to overcome the issue. For example, very dark skin doesn't create enough diffuse reflection (like all dark surfaces; it's a matter of physics) and the photographer has to take advantage of direct reflection to get a good image.
In fairness to the AI, I suspect most people wouldn't go to the effort of getting a professional photographer to take a passport photo. They'd probably go to a photo booth somewhere, which likely contains what is essentially a webcam. It would certainly not adapt any aspect of its photo-taking (lighting or anything else) to the subject's skin colour.
The AI can only go on what data it is given. if that data is a badly lit photo, it's going to have problems.
Don't get me wrong. The government has said and done things that could very easily be considered racist, but I don't actually think this is one.
Still, if you are unaware of the matter you are designing a system with built-in bias because the training set is flawed, and the image capture system probably as well. I'm quite sure without much tonal range, and missing details in highlights (very light skins) or shadows (very dark skins), the algorithm can't extract enough data for a good match.
I would expect anybody designing such a system to do their research and understand the failure modes. Not "hey, there's this new machine-learning thing, let's cobble something together and try to sell it to the guv, there's the money!"
And if this bias disproportionately hits a given subset of people, it becomes quite like a 'racist' one, if it is used to discriminate against people in any way. It's quite uncomfortable to always be stopped at an airport or the like because you don't fit someone else's restricted knowledge and bad designs.
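The tonal-range point above can be checked mechanically: an image with too many pixels crushed to black or blown to white simply doesn't contain the detail a matcher needs. A minimal sketch, where the thresholds and the synthetic "capture" are invented for illustration:

```python
import numpy as np

def clipped_fractions(gray, low=5, high=250):
    """Fractions of pixels crushed into extreme shadows or blown highlights
    in an 8-bit grayscale image. Detail clipped to these extremes cannot be
    recovered by any algorithm - it simply isn't in the data."""
    gray = np.asarray(gray)
    return (gray <= low).mean(), (gray >= high).mean()

# A badly underexposed capture: most values pushed below the bottom of the range.
underexposed = np.clip(np.linspace(-50, 40, 10_000), 0, 255).astype(np.uint8)
shadows, highlights = clipped_fractions(underexposed)
print(f"{shadows:.0%} crushed to black, {highlights:.0%} blown out")
```

A quality gate like this, run before the matcher, would flag such photos as "bad capture" rather than "no match", which is a very different message to send the applicant.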
What we have here is a system that works fine in the bulge of the bell curve of skin tones. At the margins it struggles. So far, so effin what? Exactly what you'd expect, not even remotely racist. And if it is, are any woke types protesting on behalf of the poor excluded Scots? I doubt it.
A similar story came out where some AI sometimes thought people of colour were chimps. How racist! Turns out it also tended to think paler folks were manatees.
As for Asians' bigger brains, the stateside woke types have already come up with a new acronym for the group of minorities that doesn't include Asians, cos Asians are too successful and so are oppressors.
God what a mess.
'A similar story came out where some AI sometimes thought people of colour were chimps. How racist! Turns out it also tended to think paler folks were manatees.'
It's cos its attempt at categorising black people happened to coincide with a racial stereotype. If it had thought they looked like bears or something, you probably wouldn't have heard about it. If that AI had thought white people looked like crackers, you'd've certainly heard about it.
Well, last year I took photos of an African model with long curly black hair and uploaded them to a web site created and run by a well-known camera manufacturer to let her see them. The site does automatic tagging, and it tagged the photos "bear fur". Yeah, great AI....
I'm happy I spotted them in time and deleted them before she could see them; I don't think she would have been flattered.
Not surprisingly, now the site allows for disabling automatic tagging...
Obviously they need to have a notice above the emergency alert button saying "Do NOT press this button".
The button should be labelled in black on a black background and when you press it, a small black light lights up black to let you know help is not on its way.
As far as political statements are concerned, virtually everybody I speak to, Spanish, English or otherwise, doubts the veracity of what they are being told.
It doesn't seem to matter about the source either, previously reliable or unquestioned newspapers or TV channels are all looked upon with doubt equal to that applied to well known propaganda outlets.
Now, I wonder who we should blame for that?
In principle I like your idea of trust but verify. The trouble is how you verify. Rightly or wrongly, outlets such as the BBC and major newspapers were the main way we would verify our news in the past when someone said something had happened.
Now, "someone" on facefail says that something has happened. Do we still trust The Guardian or the Telegraph to confirm that the thing has happened? I personally can't travel to Syria to verify that a reported atrocity has or hasn't happened.
I wish I had an answer but I don't.
Learn another language, watch news from multiple countries. Somewhere in the reporting, there will be common themes. That will be about as close to "truth" as you're going to get.
It's also interesting to note the vast differences in slant on some stories. Accessing news from other places tends to show up the propaganda machine at work, and, my god, hasn't Brexit been an attention-deficit hyperactivity version of propaganda for the past three years...
This isn't about the mechanics/limitations/complications of the facial recognition algorithm (which obviously needs refinement).
It's about the Home Office knowing about these limitations and deploying the application regardless. That is definitely an example of systemic racism, because they knowingly applied a technique that would inconvenience/disadvantage people of certain skin complexions.
The other 90% of the problem is that, in the rush to automate all the things, the gov seems to have completely stopped giving a fuck about edge cases, and the edges seem to be getting bigger all the time. All they had to do was add a human review step for pics that the AI rejects, or give users a 'no, this really is a face' button to send their pic to review, but that's too simple. God forbid anything should imply that digital.gov services are less than perfect.
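The fallback described above is a few lines of logic, not a research problem. A minimal sketch, assuming a hypothetical matcher that reports a confidence score (the threshold, photo IDs and method names are all invented for illustration):

```python
from dataclasses import dataclass, field

# Hypothetical tuning value: below this, the automated match isn't trusted.
REJECT_THRESHOLD = 0.5

@dataclass
class PhotoService:
    """Sketch of the flow suggested above: auto-accept confident matches,
    but route automated rejections to a human instead of bouncing them
    straight back to the applicant."""
    review_queue: list = field(default_factory=list)

    def submit(self, photo_id: str, ai_confidence: float) -> str:
        if ai_confidence >= REJECT_THRESHOLD:
            return "accepted"
        self.review_queue.append(photo_id)  # AI rejection -> human review
        return "pending human review"

    def appeal(self, photo_id: str) -> str:
        # The 'no, this really is a face' button.
        self.review_queue.append(photo_id)
        return "pending human review"

svc = PhotoService()
print(svc.submit("p1", 0.9))  # accepted
print(svc.submit("p2", 0.2))  # pending human review
print(svc.appeal("p3"))       # pending human review
print(svc.review_queue)       # ['p2', 'p3']
```

The design point is that the AI's "no" is never final: every rejection lands in a queue a person can look at, which is exactly the human review step the comment asks for.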
"Several experiments have shown that they’re less accurate when looking at women and people of color"
My wife and I recently used HM Govt On-Line Passport Service (beta) to renew our passports. We acquired digital photographs as directed.
The system accepted my photograph first time. It required three attempts to acquire an acceptable photograph of my wife.
With all this AI bollocks I just keep seeing Robocop. When they show all the adverts, and when they show the desperate attempts to create a competing Robocop that are rushed through testing and, oddly, give the test units live ammunition so they can go rogue and shoot the execs :)
I wish the Borders Agency (and other bits of government) weren't so keen on getting rid of the humans. Perhaps it's because I loathe the things, but the new gates never seem to be able to match me with my passport, where a human would have no problem. I am a typical fat, balding Brit, while my wife has no problem, and her parents arrived here at the tail end of the Windrush era. Go figure.
I understand the importance of AI research, but why are we deploying AIs when it's obvious they are total crap? And for the most part it's costing a hell of a lot more than the people who have been doing the job previously. I could understand it if there were a shortage of people, but we're far from that; if anything, we have a surplus of people / deficit of jobs. For the most part, the jobs being replaced with AIs/automation are those that require little or no actual training (the skills needed to compare a person with a photograph are learned before we are old enough to eat solid food...)
My suspicion is that it's because a computer doesn't question orders even if they are blatantly unethical. You can subtly manipulate an AI into furthering some terrible agenda, and when found out you can throw your arms up in feigned ignorance while blaming it all on bad data or something unforeseeable.
,,, For the most part, the jobs that are being replaced with AIs / Automation are those that require little or no actual training...
You're very wrong. First of all, we are talking about replacing highly qualified specialists. Indeed, AI answers questions and finds answers in huge amounts of data, work that would take hordes of highly qualified specialists years. That is, AI is primarily a business project: how to save money and make much more from what is not used today.
" replacing highly qualified specialists. "
No, the AI is just doing some rudimentary facial recognition, comparing the photo in a database to the image taken from a camera. A job that I've seen successfully done by a drunk and sleep-deprived teenager at the corner store. Hell, that teenager was able to identify the fake ID I used as a kid, but the Border Cop accepted it when I re-entered the country after returning from a vacation where I lost my original ID.
Being a border cop is a matter of looking at an ID, ensuring that it has specific features, entering the ID number into a computer, then comparing the photo in the database, the photo in the passport, and the person standing in front of them. Something that can be learned in an afternoon 'orientation'.
Every other article on AI I've seen has been about attempts to identify objects in images, identify faces in videos, move objects around, or other such stuff that can be completed by a toddler (well, an appropriately strong toddler in the case of moving objects).
I don't know why face recognition technology suddenly became 'AI'... Probably it makes sense to ask those who decided so? I follow the tradition of NIST TREC QA, which holds that AI should find textual answers to textual questions.
As far as I know, face recognition has neither commercial nor practical value; it's good only for espionage. But my AI has already brought in hundreds, if not trillions - Google, FB, all Internet companies use its earliest version. (See PA Advisors v Google)