Transcript
Page 1: Quality Metrics for Pattern Evidence

University of Virginia, Charlottesville VA 22904

This work was partially funded by the Center for Statistics and Applications in Forensic Evidence (CSAFE) through Cooperative Agreement #70NANB15H176 between NIST and Iowa State University, which includes activities carried out at Carnegie Mellon University, University of California Irvine, and University of Virginia.


Quality Metrics for Pattern Evidence
Karen Pan, Karen Kafadar

Project Rationale & Goals

Given a latent fingerprint, can we use print quality to determine the probability LPEs will find the right match?

• Develop objective measure of quality, correlate with accuracy

• Estimate probability LPEs make correct ID or exclusion

• Focus elsewhere if QM < threshold (see the triage sketch below)

Analysis of the entire process, from quality score calculation to final assessment after ACE-V
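The triage rule above can be made concrete. A minimal sketch, assuming a hypothetical global QM score on a 0-100 scale and an illustrative threshold; the poster does not specify a cutoff value:

```python
# Minimal triage sketch: route a latent print based on a quality-metric (QM)
# score. The threshold below is an illustrative assumption, not a value
# from the poster.

QM_THRESHOLD = 40  # hypothetical cutoff on a 0-100 global quality score

def triage(latent_qm_score: float) -> str:
    """Return a suggested disposition for a latent given its QM score."""
    if latent_qm_score < QM_THRESHOLD:
        # Low-quality print: examiner effort is better focused elsewhere.
        return "deprioritize"
    # Quality is sufficient to proceed with ACE-V comparison.
    return "proceed to ACE-V"

# Example using the SD27a global LQM scores reported under Results & Discussion.
for name, score in [("Good", 71), ("Bad", 41), ("Ugly", 15)]:
    print(name, score, "->", triage(score))
```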

Materials & Methods

Latent Print Examiners (LPEs)

• Need examined prints of known quality, not to evaluate LPEs but to provide data on the relationship between print quality and accuracy

Fingerprint Databases

• NIST SD27a pairs are not necessarily ground truth

• Creation of database (Professor Keith Inman, California State East Bay)

• Houston Forensic Science Center (HFSC)

• Blind-verification latents and LPEs

• Challenges: replication on a single print (physical card)

Results & Discussion

[Figure: three NIST SD27a latents, Good (G008), Bad (B106), Ugly (U2335)]

The Contrast Gradient algorithm provides feature scores.

Conclusions

• Objective assessment of quality and empirical measure of accuracy for varying quality levels

• Objective assessment of expected performance

• Include other QMs as available (NIST, MSU, etc.)

• Other pattern evidence (ballistics, tool marks, tire treads, shoe prints) where evidence comes as images

• Objective assessment of “level of difficulty” in proficiency tests and experiments comparing different approaches

• Assessment of entire fingerprint comparison process

Acknowledgements

• CSAFE (NIST), UVA, Isaac Newton Institute

• A. Peskin (NIST), K. Inman (CSU-EB), H. Swofford, A. Rairden (HFSC), S. Huckemann (Göttingen), R. A. Hicklin (Noblis), B. Gardner (UVA), HFSC

Quality Metric (QM) Score | Type | Score Range | Requires Features | Description
Contrast Gradient (Peskin and Kafadar) | Feature | 0-100 | Y | Examines gradient of contrast intensity around a feature
DFIQI (Swofford) | Feature | 0-100 | Preferred | Combination of 5 aspects (e.g., ridge width, acutance (sharpness), contrast, etc.)
Latent Quality Metrics (LQM) | Global | 0-100 | N | Score indicates predicted probability an image-only search returns the mate; *VID and VCMP; and 9 metrics calculated from a latent
SNoQE (Richter et al. 2019) | Global | 0-1 | N (ROI if possible) | Wavelet-based measure of amount of smudge in image

* VID (value for individualization) – a latent is VID if an examiner would assess it to have sufficient quality for individualization; VCMP (value for comparison) – print is of sufficient quality for individualization or exclusion
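For intuition about how a feature-level metric can examine the "gradient of contrast intensity around a feature," here is a minimal sketch, assuming a grayscale image held in a NumPy array. It is an illustrative approximation only, not Peskin and Kafadar's published Contrast Gradient algorithm:

```python
import numpy as np

def contrast_gradient_score(img: np.ndarray, x: int, y: int, radius: int = 8) -> float:
    """Illustrative feature-quality score on a 0-100 scale.

    Measures how sharply contrast changes in a small window around a
    feature at (x, y). An approximation in the spirit of the Contrast
    Gradient metric, NOT the published algorithm.
    """
    # Clip the window to the image bounds (y indexes rows, x indexes columns).
    r0, r1 = max(y - radius, 0), min(y + radius + 1, img.shape[0])
    c0, c1 = max(x - radius, 0), min(x + radius + 1, img.shape[1])
    window = img[r0:r1, c0:c1].astype(float)

    # Gradient magnitude of pixel intensity within the window.
    gy, gx = np.gradient(window)
    grad_mag = np.hypot(gx, gy)

    # Normalize mean gradient magnitude by the window's intensity range
    # and rescale to 0-100; a flat (low-contrast) window scores near 0.
    spread = window.max() - window.min()
    if spread == 0:
        return 0.0
    return float(np.clip(100.0 * grad_mag.mean() / spread, 0, 100))
```

For example, feature 3 in the feature table below, at (x, y) = (14, 66), would be scored with contrast_gradient_score(img, 14, 66).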

Global quality scores for three NIST SD27a latents:

Print | LQM | SNoQE
Good | 71 | 0.7549
Bad | 41 | 0.7930
Ugly | 15 | 0.6165
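The poster characterizes SNoQE only as a wavelet-based smudge measure. A minimal sketch of that general idea, assuming the PyWavelets package (pywt); this is a toy stand-in, not Richter et al.'s SNoQE:

```python
import numpy as np
import pywt  # PyWavelets

def smudge_score(img: np.ndarray) -> float:
    """Illustrative wavelet-based smudge measure on a 0-1 scale.

    Smudged regions suppress the fine ridge detail that lives in the
    high-frequency wavelet subbands, so we report the fraction of total
    energy retained in the detail coefficients. A toy stand-in for
    SNoQE, not the published method.
    """
    coeffs = pywt.wavedec2(img.astype(float), "db2", level=2)
    approx, details = coeffs[0], coeffs[1:]

    # Sum squared detail coefficients across all levels and orientations.
    detail_energy = sum(float(np.sum(band ** 2))
                        for level in details for band in level)
    total_energy = float(np.sum(approx ** 2)) + detail_energy
    if total_energy == 0:
        return 0.0
    return detail_energy / total_energy
```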

[Figure: latent print with five marked features, numbered 1-5]

Feature | X | Y | Score
1 | 53 | 13 | 29.4628
2 | 48 | 23 | 33.7415
3 | 14 | 66 | 79.8615
4 | 73 | 70 | 31.6978
5 | 48 | 128 | 23.7921
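The table above gives per-feature scores; the poster does not say how, or whether, they are combined into a single print-level number. Two plausible aggregations, shown purely as an illustration:

```python
# Per-feature Contrast Gradient scores from the table above.
feature_scores = [29.4628, 33.7415, 79.8615, 31.6978, 23.7921]

# Two plausible summaries (illustrative assumptions, not the poster's method):
mean_score = sum(feature_scores) / len(feature_scores)  # average feature quality
min_score = min(feature_scores)                         # worst feature dominates

print(f"mean = {mean_score:.2f}, min = {min_score:.2f}")
# mean = 39.71, min = 23.79
```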

CTS proficiency test latent print images:

Latent | LQM | VID, VCMP | SNoQE
1 | 88 | 100, 100 | 0.9693
2 | 72 | 98, 99 | 0.9438
3 | 69 | 98, 99 | 0.9123
4 | 60 | 96, 99 | 0.9148
5 | 99 | 100, 100 | 0.9607
6 | 72 | 98, 99 | 0.9798
7 | 77 | 98, 100 | 0.9144
8 | 87 | 100, 100 | 0.9576
9 | 78 | 99, 100 | 0.8526
10 | 96 | 100, 100 | 0.9647
11 | 67 | 97, 99 | 0.8679
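Given the two global metrics in the table above, one natural check is whether they rank the latents similarly. An illustrative calculation, assuming SciPy; this is not an analysis reported on the poster:

```python
from scipy.stats import spearmanr

# Global scores for the 11 CTS proficiency-test latents (table above).
lqm   = [88, 72, 69, 60, 99, 72, 77, 87, 78, 96, 67]
snoqe = [0.9693, 0.9438, 0.9123, 0.9148, 0.9607, 0.9798,
         0.9144, 0.9576, 0.8526, 0.9647, 0.8679]

# Rank correlation between the two global quality metrics.
rho, p = spearmanr(lqm, snoqe)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```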

[Figure: grid of 11 CTS proficiency test latent print images, numbered 1-11]
