FBI techs shy away from facial recognition

A senior FBI technologist declared last month that, after decades of evaluation, the agency sees no point in facial recognition. Speaking at the Biometrics 2009 conference in London, James A Loudermilk II, a senior-level technologist at the FBI, outlined the agency's future biometrics strategy. He said that 18,000 law …

COMMENTS

This topic is closed for new posts.
Anonymous Coward

If it does not work

Then no reputable Law Enforcement agency wants to use it as it just fabricates false leads that waste the agency's time & money. Like the polygraph does.

Thumb Up

Lol

"...probably face recognition and fingerprints"

Anonymous Coward

The flesh is weak

Ultimately you're trying to measure distances on soft flesh that moves over time. About the only good one is the distance between the centres of the eyes, because at least that is a bone socket; yet even then, the eyes sink as you get older.

If human face matching is hit and miss, and we've spent decades developing our algorithm, and cases of mistaken identity still happen, then they're wrong to expect miracles from computers. They can't fix a shifting biometric.

http://video.msn.com/video.aspx?mkt=en-us&vid=508947e1-457d-4a93-a674-2859ccbf310c

Still, this was known, lucky nobody rolled out any facial recognition machines at key places like air..... wait.... she did what??? What next, please tell me she didn't roll out those touch pad... WTF.... that too? Next you'll be telling me the Home Secretary is using DNA profiling to determine place of birth... OMG they really have departed from science.

Thumb Down

gutted

Judging from the headline, I was thinking this article was good news, but then you threw in:

"Once the agency gets turnaround time to an hour, then perhaps the idea of sampling an entire planeload of passengers starts to look feasible."

Oh fab, so my probably-ignored fingerprints given to the US DHS when I flew there will now be analysed just as carefully as the local doom-monger terrorist General.

Big Brother

Passwords can be changed

But if someone duplicates your fingerprints, irises, knobprint or whatever, what do you do? My years as a BOFH made me skeptical of any identification that can't be replaced or changed. Biometric recognition belongs in /dev/null for practical reasons alone.

Gold badge

Indeed...

Indeed, face recognition is useless. A few companies claim 99% accuracy -- so, yeah, they snoop on some camera in Manhattan looking for Bin Laden.. "Oh there he is! Oh, there he is again! Oh my god it's him *AGAIN*!!!! He's multiplying!! OMFG!!!" They'd have dozens, hundreds, thousands of hits against whomever they are looking for. 99.9% cuts it down but still not enough to be useful. And, of course, I think this success rate was in controlled conditions -- like taking photos for a license, not wearing dark glasses, variable lighting, clouds, etc.
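That complaint is the base-rate problem, and it's easy to sketch in a few lines of Python; the crowd size and the "99%"/"99.9%" figures below are the illustrative numbers from the comment, not any vendor's spec:

```python
# Base-rate sketch: false alarms from a "99% accurate" face matcher
# scanning a large crowd for a single watchlist target.
# Crowd size and accuracy figures are illustrative assumptions.

def false_alarms(crowd_size, false_match_rate):
    """Expected number of innocent faces wrongly flagged as the target."""
    return crowd_size * false_match_rate

crowd = 1_000_000  # faces scanned in a day, say
for fmr in (0.01, 0.001):  # "99%" and "99.9%" accurate
    print(f"FMR {fmr:.1%}: ~{false_alarms(crowd, fmr):,.0f} false alarms/day")
```

At a 1% false-match rate, a million scans produce around 10,000 false hits, which is exactly the "he's multiplying" effect described above.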

"Once the agency gets turnaround time to an hour, then perhaps the idea of sampling an entire planeload of passengers starts to look feasible."

Unlikely. The tests are expensive, and making them faster won't necessarily lower the cost. Also, I'm sure labs are already using the fastest techniques feasible; for forensics, at least, they apparently have huge backlogs. If they cut the turnaround time from, say, 24 hours to 1 hour, but throw 700x the samples at the lab (indiscriminately testing a whole plane instead of one suspect), they won't have a 1-hour turnaround for long. Also, at least here in the States, believe it or not, fishing expeditions are frowned upon, and I expect they'd have SEVERE legal problems once they indiscriminately try to sample someone who knows their rights.


DNA – clarification

I must make one thing very, very clear.

The speculation that Jumbo jetfuls of passengers could one day have their identity verified by DNA is mine and mine alone and not Mr Loudermilk's or the FBI's.

Paris Hilton

@john Hawkins

RFID implants for everyone then? Only a little op to replace them.

Paris, 'cos I'd like to implant something in her.

Stop

When they start to deny something, that's a good warning sign to start to worry.

@1st AC. Wow, you really live in an idealistic world. Meanwhile, here on Earth, some lead is still better than no lead. Also, profiling (and law enforcement in general) is a heuristic process exploiting a divide-and-conquer strategy to narrow down targets. False positives and false negatives are to be expected. It's why investigators have to build a case, collecting ever more evidence until the strength of evidence excludes all but one target.

Almost every time data mining of any kind is discussed, someone has to come up with the false argument that if it's not 100% correct then it's useless. It's utterly insane and frankly very ignorant to keep perpetuating this myth. News flash, world: human face recognition isn't 100% either, and that's with good-quality photos. Have humans spend hours looking at poor-quality CCTV footage, in poor lighting, with people at a distance from the low-frame-rate cameras and at an angle to them, and you'll find human face recognition is nowhere near 100%.

Plus that is all before you also add in where face recognition is going... http://fedtechmagazine.com/article.asp?item_id=473

So let's all kill this "if it's not perfect, it's not useful" myth once and for all. It's exceptionally ignorant, and a very useful myth that the people who want to exploit ever-better technology are only too happy to keep perpetuating, because then the fooled people/sheeple don't stand in their way.

Anyway, I suspect this FBI guy is pitching it as a business opportunity, inviting the agencies and vendors to show them better technology.

Joke

I don't understand...

They've been doing this very successfully on NCIS for several seasons now. Wait: I'll get out the season 2 DVD and prove it!

FAIL

@MinionZero

There is a difference between using biometrics to find criminals and using biometrics to monitor the general public. In the first case it can be useful, because the point is to confirm or eliminate suspects and the small number of false matches can be handled. In the case of monitoring the general public, it is just not possible and the number of false matches increases to unmanageable levels very quickly and to such an extent that working through the false matches defeats the purpose of using the biometrics to make quick checks. Sampling the whole population for crime solving doesn't work for the same reason: false matches become unmanageable.
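A quick sketch of that scaling effect; the watchlist size, match rates and populations below are illustrative assumptions, not measured figures:

```python
# Sketch of why watchlist screening of the general public breaks down:
# with a fixed false-match rate, the fraction of alerts that are genuine
# (precision) collapses as the screened population grows.
# All rates and counts here are illustrative assumptions.

def alert_precision(population, targets, true_match_rate, false_match_rate):
    """Fraction of alerts that point at an actual target."""
    true_hits = targets * true_match_rate
    false_hits = (population - targets) * false_match_rate
    return true_hits / (true_hits + false_hits)

for pop in (1_000, 100_000, 10_000_000):
    p = alert_precision(pop, targets=10, true_match_rate=0.9,
                        false_match_rate=0.001)
    print(f"population {pop:>10,}: {p:.2%} of alerts are real")
```

With these assumed numbers, screening a thousand known suspects gives mostly genuine alerts, while screening ten million members of the public leaves well under 1% of alerts genuine, which is the "unmanageable false matches" point in the comment.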

J

Anonymous Coward

Tell you what

Why don't the Home Office set themselves up an advisory committee staffed by subject matter experts who know about the science and the applications and the impacts of this particular subject?

Then when the bloke in charge of the committee, supported by the research and the team, points out that facial recognition is scientifically numerically proven to be a waste of time, the Home Office can change their policy accordingly.

I mean, what could possibly go wrong with an idea that simple?


This post has been deleted by its author

Big Brother

That's weird...

...because recent news is that they're trawling the driver's license databases belonging to the states, looking for license photos that match wanted pictures. Got some matches, too.

Talk about the left hand not knowing what the right hand is doing (or, possibly, not saying....).


Reality check

If the FBI are looking for a high-assurance 'Yes/No' answer to the question 'does this face belong to this name' then they are correct: all they'd get would be a 'maybe', or perhaps a 'no way'. What FR can do is produce a short list of possible (and often ranked) identities for a face, and this is very useful for some purposes.

Terminator

James A Loudermilk II ?

There was a time when names that unwieldy were reserved for royalty. Honestly, do people not think it would be quicker to introduce themselves as "James Loudermilk"?

Yours Sincerely

Daniel P Wilkie XVII

Silver badge
Black Helicopters

One for the Conspiracy Theorists

The abstract mathematics underlying facial recognition are pretty much the same as the abstract mathematics underlying decompilation.

Once there's a usable face-recogniser out there, then it's a matter of time before there'll be an open-source face-recogniser out there. And once there's an open-source face-recogniser out there, then it's a matter of time before there'll be a decompiler out there. And once there's a decompiler out there, there's finally no such thing as closed-source anymore.

The benefits to the authorities of facial recognition could well be an acceptable price to pay for keeping ordinary people out of the internals of their computers. It wouldn't surprise me at all if the order to stop this research hadn't come via someone high up.


Sorry Mr Stiles, but

One for the Conspiracy Theorists ...

By A J Stiles Posted Wednesday 4th November 2009 12:11 GMT

"... The benefits to the authorities of facial recognition could well be an acceptable price to pay for keeping ordinary people out of the internals of their computers. It wouldn't surprise me at all if the order to stop this research hadn't come via someone high up."

----------

Nobody has stopped any research into face recognition, as far as I know. It's just that after 46 years it still hasn't led to any usable technology. Less exciting than a conspiracy, but that's all there is to it.


Assume a population of 60 million and then answer the following, please

Reality check

By Nigee Posted Wednesday 4th November 2009 02:03 GMT

"If the FBI are looking for a high assurance 'Yes/No' answer to the question 'does this face belong to this name' then they are correct, all they'd get would be a 'maybe', or perhaps a 'no way'. What FR can do is produce a short list of possible (and often ranked) identities for a face, and this is very useful for some purposes."

----------

Can it? How useful? What purposes?

FAIL

@AC Tue 3rd Nov 2009 17:22 GMT

AC: "In the case of monitoring the general public, it is just not possible and the number of false matches increases to unmanageable levels very quickly and to such an extent that working through the false matches defeats the purpose of using the biometrics to make quick checks."

While I agree that the most extreme sci-fi example of real-time spy cameras everywhere watching the entire population isn't currently possible, there are areas where facial recognition is already used (for example, offline Passport photo facial recognition is starting to become a lot better), so it's a myth to say it's useless. It's also extremely unhelpful to keep focusing on the most extreme example and then attacking that extreme as impossible. That's effectively a straw-man argument, yet that straw man is then used to wrongly imply that people are paranoid and wrong for talking about any spying technology. Plus there are many ways to spy on us that don't need cameras.

And while all of us are caught up in fighting over the details of what is and isn't currently possible or practical at the most extreme end of spying technology, we are overlooking a far bigger issue: a lot of spying technology doesn't need to be perfect for useful profiles to be built up over time (for example, the increasingly good results in Internet, phone and car spying), and all these areas are continuously improving. What mostly failed to work 10 years ago was still impractical 5 years ago and is now starting to become practical (the whole field of data mining is a very good example of these ever-improving results). Give it another 5 or 10 years and it becomes commonplace, like car number plate recognition is now; yet once upon a time, number plate recognition was unthinkable technology. All our in-fighting over details is also distracting us from the far more important issue: the growing danger of the people in power increasingly abusing technology to destroy everyone's privacy, liberty and even democracy, and it's getting ever worse.

Also, the importance of spying over time is often overlooked. Spy on someone for 1 day and you won't learn much, so it's easy to assume they can't learn much from spying on us. But spy on someone for 10 years and you will build up a very detailed profile of their life, and we are moving into a world that will increasingly build up a continuous profile of our entire lives. Even from just profiling what we buy, a very detailed picture of us can be built up over time, and that's just one example among the many more areas of spying they are adding over us all. The point is they are spying on us ever more, and it's going to get far worse. You only have to look at where so much data mining research is going to see how it can be abused by people in power for their own gain; they are using it ever more, and often funding a lot of the research.

Technology is the front line in this new growth of political power, resulting in ever more abuses of privacy, liberty and even democracy, so it falls upon all of us to be the front line in warning of the growing dangers. But our problem at the moment is that too many technical people are caught up and distracted by in-fighting over the details of the most extreme aspects of spying technology. Those are interesting discussions, but they totally miss the vastly more important point: the majority of spying technology is mundane data gathering and data processing, tied into ever more areas of our lives, allowing ever more spying on our lives for the personal gain of the people in power over us.

When you learn what kind of people you are really dealing with, the growing danger should not be underestimated. For example, here's a glimpse of the nightmare to come...

http://www.theregister.co.uk/2009/11/03/tories_vetting/comments/#c_616638


Evidence, please

By MinionZero Posted Wednesday 4th November 2009 16:00 GMT

"... offline Passport photo facial recognition is starting to become a lot better"

----------

It would be interesting to know what this claim is based on, can you provide references, please.

Happy

@D Moss Esq

I don't have all the article links I remember reading, (as I read a lot of stuff all the time), but anyway this link should give you a very good understanding of what I mean...

http://fedtechmagazine.com/article.asp?item_id=473

For example, from that above link, "We’ve discovered that the best algorithms are better than human performance"

To get to even that level of progress, even with limitations, is still a hell of a lot better than the early days of facial recognition. (I can still remember some of the earliest days of facial recognition; I was very interested in it, and all of AI, even back then, like many programmers, and so I have been eagerly watching its progress for a few decades.)


Try it yourself

http://www.pictriev.com/facedb/fs2.php

tja


MinionZero @ Wednesday 4th November 2009 17:30 GMT

Good. You go straight to the nub of the issue.

On the one hand, we have decades of evidence that face recognition technology doesn't work. On the other, we have one report – one – which suggests that it is as good as fingerprints and irisprints.

That report is FRVT2006, http://www.frvt.org/FRVT2006/docs/FRVT2006andICE2006LargeScaleReport.pdf, also known as NISTIR 7408. It was produced by NIST, the US National Institute of Standards and Technology. There were five authors. The lead author was P. Jonathon Phillips, the man quoted in your FedTech article.

Either he is right and the FBI are wrong. Or vice versa. They can't both be right.

And, as it happens, I sent the following email yesterday, to ask him which it is. There's no particular reason why he should answer. But maybe he will. If so, I'll report back:

----------

From: David Moss

Sent: 04 November 2009 13:45

To: P. Jonathon Phillips

Cc: 'itl_inquiries@nist.gov'

Subject: Face recognition. NIST? Or the FBI? You can't both be right.

Dear Dr Phillips

I refer to the March 2007 report NISTIR 7408, on which you are the lead author, 'FRVT 2006 and ICE 2006 Large-Scale Results'.

This report is always taken to state that face recognition is as good as fingerprint and irisprint when it comes to identity verification. This conclusion is used in the UK to defend the use of face recognition, particularly when it comes to the use of smart gates at airports. The authorities here in the UK have no other support for relying on face recognition -- NISTIR 7408 is a lonely report.

At the Biometrics 2009 conference held in London, 20-22 October 2009, http://www.biometrics2009.com/, Mr James A Loudermilk II of the FBI announced that the FBI would love to be able to use face recognition, it would be the killer application of biometrics, but they can't because the algorithms simply do not exist to provide the highly reliable verification required.

NIST say yes. The FBI say no.

Which is it, may I ask?

... <redacted> ...

Yours sincerely

David Moss

Boffin

@D Moss Esq

For a start, I never said it was as good as fingerprint and iris print. You are also totally ignoring all image processing research, including image sorting research, and just focusing on the very narrow criterion of facial recognition being as good as fingerprint and iris print. If that is your very narrow criterion, then under it, congratulations, you are right: facial recognition systems cannot match fingerprint and iris print. But then no human can either!

Video sources of faces are very noisy even under very good conditions, with many *potential* sources of image interference, as I mentioned in my first post in reference to the problems even a human would suffer watching any video. That noise alone would strongly bias against one lone foolproof camera ever achieving 100% facial recognition. But that is also the wrong way to think about camera recognition.

If you want 100% facial recognition, you are going to fail. If you want 100% on any complex recognition task, you are living in a dream world. Even human recognition doesn't achieve 100%; the whole history of optical illusions is proof of this. Yet humans, with their less-than-100% recognition, don't have such crippling problems dealing with the world around them.

The problem isn't the recognition technology; it's your expectations of it. We have all become very accustomed to computers giving us almost 100% repeatable results, but that is computers working on their own well-formatted data. As soon as you feed real-world data into any computer, you get less than 100% results, because the data is noisy. The solution to that should be blindingly obvious to all researchers in AI: stochastic sampling approaches deal with less-than-100%-perfect results.

All recognition solutions have to be designed around stochastic sampling to make them robust enough to function in the real world. Take the recognition problem of car tracking: if you want to track cars entering and leaving a city, you cannot place one camera on each road into (and out of) the city and expect such a system to work. It will fail. But that is the expectations of the designer of the camera network failing, rather than the technology. The real world is noisy. For the sake of this discussion, say every camera can only recognize 50% of the number plates it sees. Put 5 or 10 cameras on each road into (and, on the other side, out of) the city and you will catch the vast majority of cars. However, even with 100 cameras on each road, you would still fail to catch the occasional car from time to time with any per-camera rate less than 100%, but frankly, that's life. (Yes, I know, if that car has a bomb in it then it's a major disaster.) The fact is the world isn't perfect. No system is foolproof, and any system marketed as foolproof is marketed by lying salespeople seeking the big multi-million contracts. (Probably most salespeople don't even understand the technology's limitations well enough to see it isn't 100%, so they will happily talk with complete confidence about how brilliant their system is; in other words, they are trying to say, please buy it now.) Politically, less-than-perfect is not what buyers want to hear, and they are only too happy to use that fact as an easy scapegoat to cover their arses when something goes wrong; but the fact of life is that nothing is perfect. However, and this is the important point, such an automated car-tracking camera system would still be vastly beyond what any team of humans could ever achieve in recognizing cars in and out of a city. Even just one automated camera about every mile over a city is vastly beyond what hundreds of thousands of humans watching cameras every day could ever achieve, and such a system is very achievable now, if you have enough money. (Buying more than one camera per road is another political pressure point, as they will argue endlessly that they only want to buy 1 camera per road rather than 10.)
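The multi-camera arithmetic sketched above works out as follows, taking the 50% per-camera figure as the illustrative assumption it is (real cameras are neither independent nor uniformly rated):

```python
# Probability that at least one of n cameras recognises a number plate,
# assuming independent cameras with per-camera hit rate p.
# The 50% per-camera figure is the post's illustrative number, not a spec.

def catch_probability(p, n):
    """Chance of at least one hit across n independent cameras."""
    return 1 - (1 - p) ** n

for n in (1, 5, 10):
    print(f"{n:>2} cameras at 50% each: {catch_probability(0.5, n):.2%}")
```

Under these assumptions, five cameras already catch about 97% of cars and ten catch about 99.9%, which is the "vast majority, but never quite all" point the post is making.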

Such a city-wide camera system could in time be applied to human face recognition, as I discussed for car recognition. It hasn't been yet, but it is becoming feasible. The question then becomes: feasible for what?

Facial recognition hasn't been widely exploited yet, but other forms of data mining and automated pattern recognition are being ever more widely exploited. The problem is that any automated crime-prevention system requires a higher percentage of accuracy than any automated system of political control. Political control deals with millions of people and seeks to identify large group movements, so a lossy system of political control is still very effective, and becoming ever more so. (It doesn't matter if some people fall through the net; political control seeks to break up opponent groups and stop large group movements against the controllers' political plans.) Crime prevention, on the other hand, seeks to find one dangerous person among potentially millions of people and then track them. That's much harder to do. Noise in a system of political control doesn't matter anywhere near as much as noise in a criminal-detection system. So any system designed to achieve criminal detection is far more likely to be useful for political control than for crime detection.

I have no problem with crime prevention, because I know only too well the psychology of the kind of people they are so often trying to catch, and I would be only too happy to see them all caught. But political control scares the hell out of me, because it's being used to undermine democracy ever more. The problem is that the two goals are intertwined, and political control is much easier to achieve than crime prevention. The danger is that as politicians boost crime-prevention technology, they are also seeking to use the systems for political data mining, and so for political control as well.

For example, if we could halve all crime, that would save many lives and spare a lot of innocent people from being victims. But if the price of that saving is to double political control, then the extra number of people facing major hardship would more than wipe out the lives saved by crime prevention. Political control has the potential to cause immense hardship for the majority of people, just so the arrogant, greedy, self-centred, two-faced, lying, narcissistic, power-hungry minority of control freaks can have a very comfortable lifestyle. This has been demonstrated throughout history, and it's happening again now. People are dying needlessly every year, in every country, in their thousands, as a result of it even now. The political elite just see us as numbers. If an extra 10,000 die needlessly this winter, they don't care. Publicly they say they do, but that's all just part of their two-faced Machiavellian act. Ever-better technology is socially very dangerous, and we cannot trust the lying, two-faced Machiavellian control freaks, with their empty promises, to use it wisely. They have no intention of being fair. They want power, not fairness. They will use it to hold everyone else down, and if you hold millions down, many suffer great hardship, and you will find even hundreds of thousands each year dying from that hardship. The powerful elite don't care.

So on the one hand we are trying to stop the narcissists, sociopaths and even psychopaths etc. from committing crimes that cause immense suffering and hardship for large numbers of people in society, and on the other hand we have the ever-growing danger of the narcissists and sociopaths (and some worse still) in political power causing immense suffering and hardship for large numbers of people in society. We are all trapped between two groups at the more extreme end of cluster B disorders.

I think the only way out of this trap isn't a perfect solution, but it is to help educate everyone, worldwide, about the behaviours and dangers of cluster B disorders, to help everyone protect themselves far more by avoiding being deceived and hurt, while at the same time we bring in ever-better technology. Because no matter what anyone says, this technology isn't going away; it's just going to get ever better, and we have to balance its dangers against the extreme dangers posed by some of the very dangerous political (and business) elite. Eventually everyone in society will have to act against the extreme end of cluster B disorders, because that is the ultimate goal of crime prevention, but I very much doubt it'll ever happen. The most powerful people in society will never allow it, and they have the power, and that will never change. So if they are not stopped, they will create Big Brother, with themselves in power, using it to stay in such extreme power over us all. The more power they gain, the more suffering the vast majority of people will have to endure. The problem is that the dystopia for the majority of people is the utopia for the people who seek so much power over others.

Silver badge
Alert

@ D Moss Esq

Oi, who says it's Mr?

It's just plain A J to you, thank you very much. My life is lived in the common gender .....


@AJ

My apologies, some days I just can't get anything right ...


MinionZero @ Thursday 5th November 2009 13:30

Fair bit of noise in that post of yours, MinionZero, and two strong signals.

Signal #1, the horrors of a surveillance state. Utopians believe that there is a perfect state of affairs. They look at mankind and see a terrible gap between the way life is – imperfect – and the way it should be. That makes mankind hateful to the utopian. Mankind needs to be perfected, according to the utopian. And luckily he, the utopian, is just the man to do the perfecting: he is the exception, he is perfect, and so he sets about destroying the institutions that have evolved to support mankind and replacing them with perfect ones. We know the result. Whether they call themselves communist or fascist, these utopians inevitably make life hell for people. Inevitably, because anyone who believes they know how to perfect mankind must believe that they are some sort of a god, and that is a delusion we would normally diagnose as insanity. Inevitably also, because what they start with is a hatred for mankind.

Utopianism = insane hatred. Enough.

Signal #2, we shouldn't expect 100% reliability from biometrics, they can be useful even if reliability is lower than that and, anyway, they improve over time.

Couldn't agree more. That is why, in my unread disquisition on biometrics, http://dematerialisedid.com/Biometrics.html#homework, I ask the reader to decide in advance what he or she finds to be an acceptable level of error. Take the UK population to be 60 million. A 1% error rate in the biometrics used by the state would give 600,000 of our fellow countrymen a problem. Maybe that's acceptable, considering that 59,400,000 would benefit from those biometrics. If the error rate hits 10%, and 6,000,000 people face problems as a result, then perhaps it's easier to decide that that's unacceptable.

The point is that the error rate (false non-match rate, FNMR) for flat print fingerprints seems to be around 20%*. No-one, I suggest, would decide in advance that 12,000,000 people should face problems. It's just off the scale.

And when it comes to face recognition technology, the error rate is in the range 30-50%+*. This isn't a technology that's more or less there, it just needs a bit of tweaking. It's an outright failure. It's certainly nowhere near ready to be released on the public.
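The population arithmetic in the preceding paragraphs is easy to reproduce (population and error rates as quoted above):

```python
# Reproduce the arithmetic above: people affected when a biometric with
# a given false non-match rate (FNMR) is imposed on a 60 million
# population. The rates are the ones quoted in the post.
POPULATION = 60_000_000

def people_affected(fnmr):
    """Number of people who would face a false non-match."""
    return round(POPULATION * fnmr)

for label, fnmr in [("1% (hypothetical)", 0.01),
                    ("20% (flat-print fingerprints)", 0.20),
                    ("30% (face recognition, lower bound)", 0.30)]:
    print(f"FNMR {label}: {people_affected(fnmr):,} people affected")
```

The 20% flat-print figure gives the 12,000,000 people quoted above, and even the lower bound of the 30-50% face-recognition range gives 18,000,000.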

Compare the drugs industry. We allow drugs onto the market even though they have some side-effects. The decision to release them is based on the expert scrutiny of acres of evidence.

It should be the same in the biometrics industry, you may say, but it isn't. In the UK, the Home Office argue that the 2004 UKPS biometrics enrolment trial wasn't really a trial. That allows them to say that the FNMR for flat print fingerprinting isn't 20% and for face recognition it isn't 50%. But then what *is* the FNMR for these biometrics at a 0% false match rate? They won't say; please see http://forum.no2id.net/viewtopic.php?p=107567&highlight=#107567. As noted in the article we are commenting on, the Australians are the same: they won't release any statistics on the reliability of smart gates.

There is something seedy about a technology industry that won't publicise its results. That's no way to do business. It's irresponsible.

And that is why it is a breath of fresh air when any of the luminaries of the industry *do* speak publicly. Nigel Sedgwick, for example. And Tony Mansfield.

Tony Mansfield is, I think, something of a kingmaker in the world of UK biometrics. If you get his backing for your biometrics technology, you've got a good chance in the market. He told me (or emailed me, to be more precise) that when he and Marek Rejman-Greene were doing their feasibility study (*) for the Home Office, they just couldn't believe how bad face recognition technology is. They got worried about their results. So they looked at other people's evaluations and found even worse results. Which gave them added confidence in their recommendations against face recognition technology.

The problem seems to be that faces keep changing shape, remarkably quickly. Two months after your photograph is taken, face recognition technology is utterly useless, according to their feasibility study. In fact, it may be worse than that. At the UKPS biometrics enrolment trial, verification was performed 5 *minutes* after the photograph was taken. And still we got FNMRs in the range 30-50%! That may have been an unintended consequence of the trial and the trial may not have been run very well but unintended consequences are still consequences and a badly run trial may be precisely how the technology is used at UK airports and elsewhere.

This matter came up at the Biometrics 2009 conference, at lunch with James A Loudermilk II of the FBI and John Mears, Director of Biometric Solutions at Lockheed Martin. There was much chortling at how photographs become more and more useless to face recognition technology, the older they are. A point made to support the fact that for 46 years the FBI have failed to endorse face recognition technology and there is still no reason to change that view even though other departments of state occasionally lean on them to relent.

Mr Loudermilk also explained one reason why flat print fingerprinting is so unreliable compared with traditional fingerprints, rolled prints, taken by police experts, using ink. It's simple. Flat print fingerprints are flat. They miss 40% of the fingerprint, the bits on the side, that you can only get at by rolling.

But that's just one man emailing another man or three men talking over their sandwiches. It's not in the public domain.

The marvellous breath of fresh air at Biometrics 2009 was to have Mr Loudermilk standing up there on the stage in front of hundreds of delegates saying explicitly that the algorithms simply do not exist to allow face recognition technology to deliver the highly reliable verification required. For several hundred pounds, you can buy the DVD and watch him say it. It's public domain, at last, and now the Home Office must answer the questions about the reliability of the biometrics they are buying with our money, http://dematerialisedid.com/PressRelease19.html

They can't pretend that face recognition technology works for verification even if it doesn't work for identification as Tony Mansfield sometimes does (*). Or that it works for small populations even if it doesn't work for big ones. It's too late. The bag no longer contains the cat. The FBI, thank God, let it out.

MinionZero, face recognition technology doesn't deserve you. Your stochastic sampling is wasted on it. It just doesn't work.

----------

* Please see http://www.theregister.co.uk/2009/08/14/biometric_id_delusion/ for supporting references
