Is comparing fingerprints really "no better than reading a chicken's entrails"? I could understand things like trying to guess based on similarities with a partial print, but comparing two full prints seems like it would be pretty easy.
You would think, but even "experts" on fingerprinting call it an art at best. It can be affected by bias[1], and there's no standard for declaring a match[2] (same case as [1]).
> I could understand things like trying to guess based on similarities with a partial print
That's really the rub, pun intended I guess: generally speaking, there are no clean, neatly pressed fingerprints left on objects collected by police.
It's easier for evidence to show that a person is innocent than that they're guilty. Example: a blurry video of a white guy is in no way enough to show which white person did it, but it is enough to show that no black person did.
The top comment under which my comment falls is about police use of forensic "science".
>The big crime is how little epistemological support there is for some of the big forensic tools (fingerprints, DNA, bite marks, arson spread, etc), how little interest there is in researching these areas, and how trusted they are.
But I am curious: apart from TV shows, how important is this kind of evidence in most trials? Or is it actually uncommon?
Some years ago I worked on a project that aimed to do fingerprint classification and comparison with computer vision. I was amazed how much human judgment is involved in the process: even with full prints, a lot is up to the examiner, and two examiners won't necessarily give you the same result.
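For flavor, here is roughly the shape of what we tried, as a minimal sketch. It uses OpenCV keypoint matching rather than the minutiae extraction a real AFIS does, and the file names and thresholds are made up for illustration:

    import cv2

    # Load two grayscale fingerprint images (paths are hypothetical).
    img1 = cv2.imread("print_a.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("print_b.png", cv2.IMREAD_GRAYSCALE)

    # ORB finds keypoints and binary descriptors; real systems extract
    # minutiae (ridge endings and bifurcations) instead.
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching with cross-checking.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    # Score = fraction of keypoints with a close match. The distance
    # cutoff and the decision threshold here are arbitrary, and that
    # is the point: someone has to pick them, just as an examiner has
    # to decide how much similarity counts as a "match".
    good = [m for m in matches if m.distance < 40]
    score = len(good) / max(len(kp1), 1)
    print("match" if score > 0.25 else "no match", round(score, 3))

All the subjectivity ends up concentrated in the thresholds, whether they live in code or in an examiner's head.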
Regarding fingerprints: in the wake of the Brandon Mayfield case, which raised serious questions about the accuracy of fingerprint identification by the FBI, the National Academy of Sciences was asked to perform a scientific assessment. Initial results were published in:
Bradford T. Ulery, R. Austin Hicklin, JoAnn Buscaglia, and Maria Antonia Roberts, "Accuracy and reliability of forensic latent fingerprint decisions," Proceedings of the National Academy of Sciences (PNAS) 108, 7733–7738 (2011), doi: 10.1073/pnas.1018707108. Edited by Stephen E. Fienberg, Carnegie Mellon University; received December 16, 2010, approved March 31, 2011.
ABSTRACT
The interpretation of forensic fingerprint evidence relies on the expertise of latent print examiners. The National Research Council of the National Academies and the legal and forensic sciences communities have called for research to measure the accuracy and reliability of latent print examiners’ decisions, a challenging and complex problem in need of systematic analysis. Our research is focused on the development of empirical approaches to studying this problem. Here, we report on the first large-scale study of the accuracy and reliability of latent print examiners’ decisions, in which 169 latent print examiners each compared approximately 100 pairs of latent and exemplar fingerprints from a pool of 744 pairs. The fingerprints were selected to include a range of attributes and quality encountered in forensic casework, and to be comparable to searches of an automated fingerprint identification system containing more than 58 million subjects. This study evaluated examiners on key decision points in the fingerprint examination process; procedures used operationally include additional safeguards designed to minimize errors. Five examiners made false positive errors for an overall false positive rate of 0.1%. Eighty-five percent of examiners made at least one false negative error for an overall false negative rate of 7.5%. Independent examination of the same comparisons by different participants (analogous to blind verification) was found to detect all false positive errors and the majority of false negative errors in this study. Examiners frequently differed on whether fingerprints were suitable for reaching a conclusion.
Author affiliations: Noblis, 3150 Fairview Park Drive, Falls Church, VA (Ulery, Hicklin); FBI Laboratory Division, 2501 Investigation Parkway, Quantico, VA (Buscaglia: Counterterrorism and Forensic Science Research Unit; Roberts: Latent Print Support Unit).
One might wonder why such an assessment was not done a long time ago.
Just a comment: whether a 0.1 percent false positive rate is "small" is a subjective value judgment. Would you drive across a bridge that had a 1 in 1,000 (0.1 percent) chance of collapsing and killing you as you drove across it? No, probably not.
In addition, the 0.1 percent false positive rate is based on a modest sample: per the abstract above, a pool of 744 fingerprint pairs examined by 169 examiners. The Federal fingerprint databases such as the ones used in the Brandon Mayfield case have millions of people in them and may eventually contain all US citizens (over 300 million people). How does this "small" rate extrapolate when a fingerprint is compared against every fingerprint in the US, or the world?
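Back-of-the-envelope, assuming (unrealistically) that every comparison against the database is an independent trial at the study's per-comparison rate:

    # Per-comparison false positive rate from the Ulery et al. study.
    p = 0.001

    # Probability of at least one false match when a latent print is
    # compared against every entry in a database of size n. Assuming
    # independence is a crude simplification: a real AFIS returns a
    # short candidate list, and only those get a human examination.
    for n in (1_000, 1_000_000, 300_000_000):
        p_any = 1 - (1 - p) ** n
        print(f"n = {n:>11,}   P(at least one false match) = {p_any:.6f}")

Even at n = 1,000 you get roughly a 63 percent chance of at least one false match; at database scale it is a near certainty that something in there "matches".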
First, we have no idea how "unique" fingerprints actually are. Silly, eh?
Second, have we programmed a computer to do it? At this point, something like fingerprint analysis should be quite amenable to machine learning. If we can't program a machine to do it reliably, then probably humans aren't actually accurate either.
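And if we did automate it, we could at least measure it. A sketch of the evaluation you would want, where the scoring function and the labeled data are placeholders for whatever matcher and ground truth you actually have:

    def evaluate(scorer, labeled_pairs, threshold):
        # scorer(a, b) -> similarity score (placeholder for any matcher);
        # labeled_pairs is a list of (print_a, print_b, same_finger)
        # tuples with known ground truth.
        fp = fn = pos = neg = 0
        for a, b, same_finger in labeled_pairs:
            decided_match = scorer(a, b) >= threshold
            if same_finger:
                pos += 1
                fn += not decided_match  # missed a true match
            else:
                neg += 1
                fp += decided_match      # declared a false match
        # Returns (false positive rate, false negative rate).
        return fp / max(neg, 1), fn / max(pos, 1)

Sweep the threshold and you get the machine's whole error trade-off curve. The uncomfortable part is that for human examiners we mostly have the two numbers in the abstract above, measured at whatever unstated "threshold" lives inside each examiner's head.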