Most biometric APIs I have played with let you trade off the false accept rate (FAR) against the false reject rate (FRR). For a given sensor and matching algorithm they are two sides of the same coin: tuning the decision threshold to improve one makes the other worse. There are usually two broad use cases.
1. Verification: the person claims an identity and the biometric is a second factor proving that claim. (Well, technically they only prove they have your finger/iris/hand, so you need to understand your threat model.)
2. Identification: out of a large pool of enrolled candidates, decide which identity has presented their biometric.
With 1, you can tolerate a much higher FAR (it's the FRR that makes usability suck). With 2, you need a very small FAR, which in turn demands a higher-quality template and a higher-quality scan than 1.
If you take the mobile phone use case, it's actually much closer to 1. You want the phone to unlock even with the vaguest of touches, in any orientation and at any light level, so you could tolerate a FAR of around 1 in 10,000 quite easily. For blame purposes (attributing an action to a specific person), you would want a FAR of 1 in tens of millions or better.
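The threshold tradeoff described above is easy to see with a small simulation. This is a sketch, not any real API: the score distributions are made-up Gaussians standing in for a matcher's genuine and impostor similarity scores, and the thresholds are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical match scores (higher = better match).
# Genuine attempts (same finger) score higher on average than impostors.
genuine = rng.normal(0.7, 0.1, 10_000)
impostor = rng.normal(0.3, 0.1, 10_000)

for threshold in (0.4, 0.5, 0.6):
    far = np.mean(impostor >= threshold)  # impostors wrongly accepted
    frr = np.mean(genuine < threshold)    # genuine users wrongly rejected
    print(f"threshold={threshold:.1f}  FAR={far:.4f}  FRR={frr:.4f}")
```

Raising the threshold drives FAR down and FRR up, and vice versa; with these fixed distributions no threshold improves both at once. Only a better sensor or matcher (i.e. better-separated distributions) moves the whole curve.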