The Passport Office has stopped 600 fraudulent passport applications using a facial recognition system it rolled out as a pilot across its fraud units last year. The Identity and Passport Service has since deemed the trial a success and, having built a database of mugshots belonging to around 25,000 known fraudsters, has …
And how many people were denied passports, or had considerable trouble getting one, because of the system?
Does the system discriminate on facial colour?
I would assume that the scanning system finds "potential" matches, and that the final confirmation that the two people are, in fact, the same person is done by a member of staff.
This is the first time I've actually heard of a clear advantage to the digitising of passport photos. You detect one bad application and you can use the image to find all the others by the same guy. Or you run comparisons on all the pictures you have to find those that are too similar. I like it.
... all you need to do is snap a digital photo, distort it slightly with photoshop, re-photograph it on film and, bingo, something that looks enough like you to fool the eye of an Immigration Officer, but not enough to ring alarm bells on this system.
Meanwhile, of course, there are the people who do, by coincidence, look like a "known terrorist"...
I have to agree
Using this system they found that 0.1% of applications were fraudulent, while the estimated rate of fraudulent applications is pegged at around 0.25%.
That means that on a trial basis the system successfully picked out 40% of fraudulent applications. Not bad for a first run, IMHO.
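The 40% figure follows directly from the two percentages. A quick sketch of the arithmetic (only the 0.1% and 0.25% rates come from the article; the application volume is an invented round number for illustration):

```python
# Hypothetical volume of applications; only the percentage rates are from the article.
applications = 1_000_000

caught = applications * 0.001            # 0.1% flagged as fraudulent by the system
estimated_fraud = applications * 0.0025  # estimated true fraud rate of 0.25%

detection_rate = caught / estimated_fraud
print(f"Detected {detection_rate:.0%} of estimated fraudulent applications")
# 0.001 / 0.0025 = 0.4, i.e. 40%
```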
Distort it with Photoshop?
I'd be surprised if that made a difference to the software that humans couldn't also detect. I'd have thought the software was more granular, and so less likely to pick up on slight changes than humans are.
This seems to be a non-privacy-invading use of technology by our government. Maybe they are getting the hang of this technology thing now?
Distort it with Photoshop?
> I'd have thought the software was more granular and less likely to pick up on slight changes than humans are.
From what I've seen of such software in documentaries, it relies on absolute measurements such as the distance between the eyes, the length of the nose, the width of the mouth, etc., or some combination thereof.
If the eyes are a bit closer together or the nose a bit longer in the photo, that could be enough to fool the machine whilst the human eye (especially backed up with a bored brain!) is quite capable of ignoring or missing such changes.
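The measurement-based matching described above can be sketched in a few lines. Everything here is invented for illustration: the feature set, the numbers, and the tolerance are not from any real passport system.

```python
import math

# Hypothetical feature measurements, normalised to face width:
# (eye distance, nose length, mouth width). All values are invented.
original  = (0.42, 0.31, 0.38)
distorted = (0.40, 0.34, 0.38)   # eyes slightly closer, nose slightly longer

def feature_distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# A matcher that relies on absolute measurements rejects the pair once
# the distance exceeds its tolerance, even though a human would still
# see the same face.
TOLERANCE = 0.02  # invented threshold
print("match" if feature_distance(original, distorted) <= TOLERANCE else "no match")
```

With these made-up numbers the small distortion pushes the distance to about 0.036, past the 0.02 threshold, so the matcher says "no match" while the two photos would still look like the same person to a bored human eye.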