The magazine dedicated exclusively to the technology of evidence collection, processing, and preservation
Special Supplement • November 2012

THE FRICTION RIDGE COLLECTION


From Evidence Technology Magazine • January-February 2011 • www.EvidenceMagazine.com

MANY EXPERT WITNESSES feel they should not be required to know about judicial procedures. After all, expert witnesses are not lawyers; they are simply offering specialized information that may be beneficial to the investigation of a case. Nevertheless, the courts require everyone working on behalf of the government to understand their roles and responsibilities to the criminal-justice system. One of the least understood requirements of expert witnesses may have to do with the disclosure and testimony of "Brady material".

Disclosure Requirement

All state and federal courts have rules about what type of information must be revealed to the defense through a disclosure request. Federal courts generally follow Rule 16 of the Federal Rules of Criminal Procedure, while state courts may choose to follow other guidelines.

Regardless of the court, when the defense requests disclosure material, those working on behalf of the government must provide all relevant information (in accordance with local requirements) and not simply the information they want revealed to the defense. The government's obligation to disclose information that may be valuable to the defense is commonly referred to as Brady material. This term comes from the US Supreme Court's decision in Brady v. Maryland (1963), where the government withheld information from Brady that may have been useful in undermining the government's case against him. The government's failure to disclose this information violated due process under the 14th Amendment.

Since providing this information is a requirement, withholding such information is commonly referred to as a Brady violation, and will likely lead to a reversal of conviction on appeal.

What is Brady Material?

Brady material is any information the government, or those acting on behalf of the government, has that may be beneficial to the defense. This includes information that may weaken the government's case or undermine the testimony or credibility of the witness.

Giglio v. United States (1972) is an extension of Brady to include material that would impeach the character of a government witness. Impeachment material can include honesty, integrity, impartiality, and the credibility of an expert witness.

United States v. Henthorn (1991) is an extension of Giglio to include requests for personnel records of a government witness. These records may contain exculpatory information about the witness.

Examples

In an effort to understand Brady material, it may be helpful to consider some examples of disclosure requests and Brady violations.

• In 2009, San Jose Police were accused of withholding information that was favorable to the defense by failing to note when another expert did not agree with the conclusion of a fingerprint comparison. An expert not identifying a print could undermine the strength of the identification. This failure to note disagreement was due to a lack of knowledge regarding disclosure requirements. No cases were overturned due to this failure, but San Jose did change its policies to document and report on non-agreements in the future.

• In 2010, the San Diego County District Attorney's Office was accused of withholding fingerprint evidence when latent fingerprints were determined not to be clear enough to match. In the case of Kenneth Ray Bowles, six latent prints were identified as his, and one latent print was declared not clear enough to match. On appeal, San Diego Superior Court Judge Harry Elias found this to be a serious violation and ordered a new trial.

• Over the years, several motions have been filed claiming the AFIS candidate list produced by a computer search is Brady material. In several appeals, judges have determined that the candidate list would not have benefited the defense. It is possible, however, that in specific cases this information may be useful to the defense. The candidate list could display additional information that was not recognized by the initial examiner.

Agencies should be aware of Brady requirements in order to implement policies to retain necessary information. Agencies should also implement policies on how to handle disclosure material. While some agencies leave this responsibility up to the expert witness, other agencies require that all dissemination be done through management or through a legal unit so they can ensure the disclosure requirements of Brady are properly met.

In addition, agencies should ensure that expert witnesses are aware of Brady/Giglio/Henthorn so they are prepared to testify to exculpatory material. Testimony could be in regard to office policies, procedures, or events that took place in a specific case, or information about the government witness. If there is ever a question regarding Brady material, contact your agency's legal unit or the prosecutor's office.

About the Author
Michele Triplett is the Latent Operations Manager of the King County Sheriff's Office in Seattle, Washington. She holds a Bachelor of Science degree in Mathematics and Statistical Analysis from Washington State University and has been employed in the friction-ridge identification discipline for more than 19 years. She can be reached at:

[email protected]

Brady Material and the Expert Witness
Written by Michele Triplett


From Evidence Technology Magazine • March-April 2011 • www.EvidenceMagazine.com

DURING THE PAST DECADE, one of the most actively changing aspects of latent-print examination has been in the legal arena. The Daubert hearing in the 1999 United States v. Byron Mitchell trial sparked a trend toward the "scientification" of latent-print examiner testimony. Practitioners hurried to brush up on their ACE-V and Ridgeology training so they could explain the scientific methodology they used in the case. The use of the word "identification" became old-fashioned, and while some examiners stuck with it, many were quick to change to the more scientific "individualization". Readers of David R. Ashbaugh's then-fresh book, Qualitative-Quantitative Friction Ridge Analysis, came away with a lexicon that would serve them well as they portrayed their science to a jury of laypersons. But since that time, the legal domain has continued to evolve.

Current "scientific" latent-print testimony has been portrayed by critics, academics, and some legal authorities as pushing too far into a certainty they claim cannot exist. We are told that the results of our examinations can never reach 100% "scientific" or "absolute" certainty, and any examiner claiming such should be disallowed from testifying.

Some examiners will say this is just fine, and that attaining that level of scientific certitude simply is not necessary. Their position is that the court's acceptance of testimony is not based on its decision of whether or not the discipline can reach scientific certainty but, rather, on whether the technical expertise and opinion of the examiner will assist the trier of fact.

Just to be clear, I am not implying that ACE-V should be removed from testimony. Rather, I am suggesting that current trends indicate that examiners should reference it more as a "framework" or "process" they use instead of referencing it as an error-free scientific methodology.

One of the most widely cited cases, the 2004 Brandon Mayfield case, has provided much fodder for those who wish to emphasize "the error-prone nature" of our discipline. We hear about bias and other human factors, and how they can and do affect our decision-making thresholds. Frequently referenced sources (such as the Office of the Inspector General's Review of the FBI's Handling of the Brandon Mayfield Case and the National Academy of Sciences' report, Strengthening Forensic Science in the United States: A Path Forward) and numerous legal challenges to the latent-print discipline have produced a trend over the last few years toward more caution being shown by latent-print examiners on the witness stand. That trend has caused latent-print examiners to venture back toward the more conservative manner of testifying to an "identification," and has decreased the emphasis on scientific individualization.

With a careful look inside our own discipline, we can even recognize some indicators of this trend. Take, for instance, SWGFAST's removal of "to the exclusion of all others" from the most recent definition of the word individualization in the latent-print glossary (which continues to cite identification and individualization as synonymous).

There are also some examples of examiners testifying in a fashion that does not invite challenges from the astute defense attorney. In a Minnesota case, we saw Josh Bergeron stating that person X "made" latent Y and, upon follow-up, stating that theoretically there could be two individuals that share ridge formations similar enough to each other that an examiner might be fooled:
http://www.clpex.com/Articles/TheDetail/300-399/TheDetail382.htm

We even saw a legal decision in Massachusetts (Commonwealth v. Gambora) that rebuked the examiner who used the term individualization, but praised the examiner who used the term made:
http://www.swgfast.org/Resources/101011_MA-v-Gambora-Judge's-Opinion-Sept-2010.pdf

So what does the future hold in store? Continuing the trend toward more conservative testimony is one likely possibility. We can also expect more talk about the use of statistics to support our testimony and what it will take for examiners to actually use statistical modeling on the witness stand. I think we are still several years away from their acceptance in court, but indications are that we can expect a trend toward probabilities and likelihood ratios in the future. Take, for example, the International Association for Identification's recent repeal of earlier resolutions prohibiting certified examiners from testifying to probable or likely conclusions.

For now, a wise latent-print examiner should continue to stay abreast of current legal challenges to the discipline. SWGFAST provides access to many of the challenges on their "Resources" page at www.SWGFAST.org.

Examiners should also consider the trend away from absolute testimony and consider how they can state their findings in a manner that is easier to defend and less likely to invite a challenge from the defense.

About the Author
Kasey Wertheim is an IAI Certified Latent Print Examiner and a Distinguished IAI member. He serves on SWGFAST as Webmaster and hosts www.clpex.com, the largest web resource for latent-print examiners. He publishes an electronic newsletter focusing on latent-print examination, The Weekly Detail, every Monday morning. He is Co-Chair of the NIST Expert Working Group on Human Factors in Latent Print Examination. He can be reached at:

[email protected]

Current trends in latent print testimony
Written by Kasey Wertheim


I BECAME A MANAGER of a latent-print unit in 2006. For those forensic disciplines that rely on humans as the analytical instrument, management can be very daunting. I once related the experience to our chemistry supervisor in this way: "Imagine that I tweaked the sensitivity of each of your GC-MSs (gas chromatography-mass spectrometers) to a different setting… then adjusted those sensitivities randomly throughout the day on each instrument… and then asked you to run a complex sample through two instruments and come up with the same answer." The supervisor just shook her head.

In spite of the inherent difficulties involved with managing a latent-print unit, there are steps that can be taken to identify, address, and reduce technical error. A culture of accuracy and thoroughness is the first step in the process. If the analysts know that the quality-assurance process is designed to ensure the most accurate results and is not punitive, it allows the analysts to operate without fear of repercussion or becoming paralyzed, unable to render conclusions.

The second step is setting up clear verification procedures. Based on conversations with many analysts around the country, most agencies verify identifications. Interestingly, the analysts also indicated that the most frequent technical error is a "false negative" (a.k.a. "erroneous exclusion"). However, many agencies do not verify "negative", "not identified", "exclusion", or "inconclusive" results. It is impossible to manage technical errors if not all of the conclusions are reviewed. It is impossible to learn from mistakes if the mistakes are not unearthed.

The most frequently cited reason for not reviewing all conclusions is a shortage of manpower. It has been my experience that reviewing all conclusions in all cases takes approximately 25% more time (compared to only verifying identifications). The benefit of this process is that the verifier can focus his attention on the latent prints (not the entirety of the case) and there is immediate feedback to the case analyst if a technical error is noted. Another approach is to review all conclusions on selected cases (e.g. those that are randomly selected prior to assignment, or those that are selected based on crime type). And yet another approach is to perform random case audits. The downside to random case audits is the time delay between making the error and discovering the error; the analyst will likely not recall the circumstances that were involved.
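The staffing cost of full review can be estimated with simple arithmetic. The only figure below taken from the article is the roughly 25% additional time; the monthly hours are invented for illustration:

```python
# Hypothetical workload sketch comparing verification policies.
# Only the ~25% overhead figure comes from the article; the
# 80 verifier-hours/month baseline is an invented example.

def monthly_verification_hours(base_hours, extra_fraction=0.25):
    """Hours needed to verify every conclusion, given the hours
    currently spent verifying identifications alone."""
    return base_hours * (1 + extra_fraction)

# A unit spending 80 verifier-hours/month on identifications alone
# would need about 100 hours/month to review every conclusion.
print(monthly_verification_hours(80))  # 100.0
```

A modest surcharge, in other words, buys review coverage of the error category (false negatives) that analysts report as most frequent.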

The third step to managing error is to decide what to do when a technical error is discovered. Are there allowances for the number or frequency of technical errors? Are there different responses for different kinds of technical errors? The answers to these questions are largely agency driven.

As a manager, I have found that a formal corrective action has been beneficial in analyzing the factors that led to a false identification. These factors should not simply center on the analyst! Supervision and organization issues should also come to light during the investigation. Some factors may lend themselves well to preventive measures (such as the supervisor limiting the types of cases assigned to analysts under high levels of stress) and others may not be easily prevented (such as detectives repeatedly asking the analyst to hurry).

I do not recommend removing analysts from casework if a rare false identification is discovered; they have already punished themselves enough. However, I recommend that the analyst not perform verifications for a period of time (at least 30 days). After the requisite time has passed, the analyst should successfully complete a proficiency test prior to performing verifications. Obviously, if an analyst repeatedly makes false identifications, then the response should be escalated because the analyst's competency may be compromised.

False negatives are not as easy to manage because you need to track them and look for trends. Much can be learned from tracking the errors, including valuable feedback to the training program. Sometimes the reason for false negatives is relatively easy to address. For example, if a particular analyst is routinely failing to identify latent palm prints due to orientation problems, then dedicated practice orienting and searching palms will likely improve their performance.
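The trend-tracking described above can be sketched with a simple tally. The record fields and error categories here are hypothetical, not from the article:

```python
# Hypothetical error-tracking sketch: tally (analyst, error_type)
# pairs so recurring patterns (e.g. one analyst repeatedly missing
# palm prints) surface. Field names and categories are illustrative.
from collections import Counter

def error_trends(error_log):
    """Count occurrences of each (analyst, error_type) pair."""
    return Counter((e["analyst"], e["error_type"]) for e in error_log)

log = [
    {"analyst": "A", "error_type": "false negative (palm orientation)"},
    {"analyst": "A", "error_type": "false negative (palm orientation)"},
    {"analyst": "B", "error_type": "false negative"},
]
trends = error_trends(log)
# The most common pattern points to where dedicated practice may help:
print(trends.most_common(1))
```

Even a tally this small feeds the training program: the repeated pair identifies both the analyst and the failure mode worth remediating.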

Other problems, like backlog pressure, are harder to address. How do you insulate the analysts from feeling rushed because so many cases are waiting? I have found it helpful to keep the backlog out of sight and to throttle ten cases at a time to the analysts. The analysts can finish a batch of cases at a time (with occasional interruptions for cases that must be rushed, of course) and clear their desks.
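The throttling idea, releasing work in batches of ten while the full queue stays out of sight, can be sketched as follows; the case names and queue size are invented for illustration:

```python
# Hypothetical sketch of batch throttling: analysts see only the
# current batch of ten, never the full backlog behind it.
from collections import deque

def next_batch(backlog, batch_size=10):
    """Pop up to batch_size cases off the front of the backlog."""
    return [backlog.popleft() for _ in range(min(batch_size, len(backlog)))]

backlog = deque(f"case-{n}" for n in range(1, 26))  # 25 waiting cases
batch = next_batch(backlog)
print(len(batch), len(backlog))  # analyst sees 10; 15 remain queued
```

Rush cases can still be injected at the front of the queue without exposing the analyst to the total count waiting behind them.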

The forensic examination of evidence is a high-stakes endeavor. Failure to connect a criminal to a crime may allow the criminal to continue to endanger society, while connecting the wrong person to a crime could take away an innocent person's life or liberty. As such, the analysts in the forensic laboratory strive to be as accurate as humanly possible. I want to stress the word humanly. As humans, we are all prone to error. Forensic analysts will not be perfect. Mistakes will happen. Focusing attention only on the analyst is short-sighted at best. Analysts operate in a system, and that system can set them up for failure. Instead of pointing fingers and blaming the analyst, we should be asking these questions:

• How did the system allow the error to occur?

• What can we learn from the error?

• How can we improve the system to minimize the number of errors?

About the Author
Alice Maceo is the Forensic Lab Manager of the Latent Print Detail of the Las Vegas (Nevada) Metropolitan Police Department. She is an IAI Certified Latent Print Examiner and a Distinguished Member of the IAI. Maceo continues to serve on SWGFAST and the NIST/NIJ Expert Working Group on Human Factors in Latent Print Analysis. She can be reached at:

[email protected]

Managing Latent-Print Errors
Written by Alice Maceo


From Evidence Technology Magazine • May-June 2011 • www.EvidenceMagazine.com


From Evidence Technology Magazine • July-August 2011 • www.EvidenceMagazine.com

A WHILE BACK, I was contacted by an immigration lawyer who represented an individual facing deportation. According to the attorney, U.S. Immigration and Customs Enforcement (ICE) had found, based on an automated fingerprint search, that his client had been deported once before under a different name. The attorney sought my assistance in examining the fingerprints to make sure that they were those of his client.

I asked the attorney for the name of the fingerprint examiner who had made the identification so I could make contact and arrange to see the prints. I was surprised to learn that no fingerprint examiner had ever confirmed the results from the automated search, and the immigration court was prepared to proceed with deportation based solely on the results of the automated search.

I prepared an affidavit for the attorney explaining how automated fingerprint identification systems (AFIS) work and the protocol that follows a search hit: that is, a qualified fingerprint examiner compares the prints selected by the computer to make sure it is, in fact, an identification. The attorney filed his motion to stay the deportation proceedings until a qualified fingerprint examiner could compare the fingerprints. The immigration court, however, rejected the motion and insisted that the automated results were sufficient to continue with the process.

In another example, I was recently consulted on two similar fingerprint cases. In each case, a local police department had submitted to their state AFIS fingerprints recovered from crime scenes. The state AFIS unit reported back to the local police that the submitted prints had hit on a suspect. The report listed the SBI number of the suspect and the number of the finger that was matched to the crime-scene prints. These reports were used as evidence of the suspect's complicity in the crimes under investigation and resulted in grand-jury indictments in both cases.

Once again, I asked the attorneys for copies of the fingerprint examiners' reports so I would know what to ask for when arranging for my examination. I was advised that there were no fingerprint examination reports and that the indictments were made without a qualified fingerprint examiner ever reviewing the automated search results or testifying before the grand jury regarding their findings.

These cases are now proceeding to trial, and still no fingerprint examiner from the prosecution has ever issued a report regarding the identifications.

In my experience, it is becoming more and more common in AFIS-hit cases to find only a screen print of the AFIS "match", with no other report from a qualified fingerprint examiner confirming the identification. There is usually no indication of whether the reported AFIS hit was from candidate #1 or candidate #20. What is astonishing is that in recent cases I encountered, the state AFIS units stamped their reports with the following statement:

    This package contains potential suspect identification. It is incumbent upon the submitting agency to provide positive ID for prosecutorial purposes.

However, this caveat seems to be frequently ignored by some of the submitting agencies.

In the criminal-justice system, most cases never go to trial. They are often settled through negotiation between the defense attorney and the prosecutor, or through a guilty plea from the defendant. It would be interesting to know how many cases were settled or pled on the strength of an AFIS hit, without further corroborating testimony from a qualified fingerprint examiner.

Between 2008 and 2010, the National Institute of Standards and Technology (NIST) sponsored the Expert Working Group on Human Factors in Latent Print Analysis. The group developed a flow chart of the latent-print examination process based on the ACE-V method (analysis, comparison, evaluation, and verification) used by fingerprint examiners. This same chart has been adopted by the Scientific Working Group on Friction Ridge Analysis, Study and Technology (SWGFAST) as part of their proposed "Standards for Examining Friction Ridge Impressions and Resulting Conclusions". The flow chart clearly shows the path to follow once an AFIS hit has been made. AFIS is shown as part of the analysis phase of the examination method and is not part of the comparison, evaluation, or verification phases. AFIS hits require a full examination by a qualified fingerprint examiner.

Automated Fingerprint Identification Systems (AFIS) and the Identification Process
Written by Robert J. Garrett

The SWGFAST Press Kit includes the following entry:

14 Does an AFIS make latent print identifications?

14.1 No. The Automated Fingerprint Identification System (AFIS) is a computer-based search system but does not make a latent print individualization decision.

14.1.1 AFIS provides a ranked order of candidates based upon search parameters.

14.1.2 A latent print examiner makes the decision of individualization or exclusion from the candidate list.

14.1.3 The practice of relying on current AFIS technology to individualize latent prints correctly is not sound.
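The division of labor the press kit describes, where AFIS only ranks candidates and a human examiner renders the decision, can be sketched as follows. The scores, field names, and functions are invented for illustration and are not part of any real AFIS interface:

```python
# Hypothetical sketch of the workflow in 14.1.1-14.1.2: the system
# returns a ranked candidate list; the individualization/exclusion
# decision belongs to the examiner, not the software.
# SBI numbers and similarity scores are illustrative.

def afis_search(latent_id, database):
    """Return candidates ranked by an (invented) similarity score."""
    return sorted(database, key=lambda c: c["score"], reverse=True)

def examiner_decision(latent_id, candidate):
    """Placeholder for the ACE-V comparison a qualified examiner
    performs; AFIS output alone never substitutes for this step."""
    raise NotImplementedError("requires a qualified human examiner")

candidates = afis_search("latent-1", [
    {"sbi": "123", "score": 0.71},
    {"sbi": "456", "score": 0.93},
])
print([c["sbi"] for c in candidates])  # a ranked list, not an identification
```

The deliberately unimplemented `examiner_decision` is the point: a screen print of the top-ranked candidate is the output of `afis_search`, not a conclusion.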

U.S. Supreme Court decisions in Melendez-Diaz v. Massachusetts and, more recently, Bullcoming v. New Mexico, reiterated a defendant's Sixth Amendment right "to be confronted with the witnesses against him." Reports of a laboratory or investigative finding do not satisfy the requirement.

Our society and its government have embraced technology in various forms for its efficiency and economy. In the areas of law enforcement and public safety, these technological advances have included AFIS, the Combined DNA Index System (CODIS), airport security screening devices, and red light/traffic cameras. But these advances bring with them compromises of privacy and our right "…to be secure in their persons, houses, papers, and effects…"

AFIS hits must be examined by a qualified fingerprint examiner and the results of that examination verified before any proceedings are commenced against a potential suspect. It is unethical, unprofessional, and, most likely, unconstitutional to do otherwise.

About the Author
Bob Garrett spent more than 30 years in law enforcement, including ten years supervising a crime-scene unit. He is a past president of the International Association for Identification (IAI) and currently chairs the board that oversees the IAI's certification programs. Now working as a private consultant on forensic-science and crime-scene related issues, he is certified as a latent-print examiner and senior crime-scene analyst. He is a member of the Scientific Working Group on Friction Ridge Analysis, Study and Technology (SWGFAST) and a director of the Forensic Specialties Accreditation Board.

[email protected]

The SWGFAST Press Kit mentioned in the above article is available at this web address:

http://www.swgfast.org/Resources/swgfast_press_kit_may04.html



From Evidence Technology Magazine • September-October 2011 • www.EvidenceMagazine.com

FORENSIC PRACTITIONERS are commonly asked to testify in court, yet they may have limited knowledge regarding the rules of testimony. This lack of understanding could unintentionally affect the outcome of a trial. Awareness of a few simple concepts could improve your testimony and prevent a mistrial or reversal of a court decision.

Recommendation 1: Avoid testifying to the conclusions of others

Testifying to the conclusions of others is detailed in three United States Supreme Court decisions. Crawford v. Washington (2004) states that under the Sixth Amendment Confrontation Clause, "the accused shall enjoy the right…to be confronted with the witnesses against him," with an exception allowed for business records. The exception to the Crawford rule resulted in many forensic laboratories considering their reports "business records" and therefore not participating in live testimony.

Melendez-Diaz v. Massachusetts (2009) clarifies the acceptability of forensic reports as business records, stating, "The analysts' certificates—like police reports generated by law enforcement officials—do not qualify as business or public records…"

Bullcoming v. New Mexico (2011) clarifies the Confrontation Clause even further by stating who shall be permitted to give the testimony: "The question presented is whether the Confrontation Clause permits the prosecution to introduce a forensic laboratory report containing a testimonial certification—made for the purpose of proving a particular fact—through the in-court testimony of a scientist who did not sign the certification or perform or observe the test reported in the certification. We hold that surrogate testimony of that order does not meet the constitutional requirement."

These three decisions clearly specify that the forensic analyst who performed the examination must provide the testimony.

An analyst testifying to the results of the reviewer or verifier is a similar type of error, since the analyst did not perform these tasks. Past cases have labeled this as inadmissible hearsay and/or falsely bolstering the primary analyst's conclusion. If an attorney objects to this type of testimony, then the courts must decide if the error was harmful to the case and if a mistrial or reversal is warranted. Whether or not an error is harmful is specific to each case.

Recommendation 2: Prepare to provide the basis underlying a conclusion

Analysts are required to provide the basis underlying their conclusions if requested. If an analyst has never been asked for the basis underlying past conclusions, they may not be aware of the requirement to provide this information. Federal Rules of Evidence Rule 705 describes "Disclosure of Facts or Data Underlying Expert Opinion". This rule states: "The expert may testify in terms of opinion or inference and give reasons therefor without first testifying to the underlying facts or data, unless the court requires otherwise. The expert may in any event be required to disclose the underlying facts or data on cross-examination."

In order to give more weight to a conclusion, a prosecutor may request demonstrative materials prior to cross-examination. Chart enlargements or PowerPoint presentations can be simple methods of providing the basis for comparative-evidence conclusions during testimony.

Recommendation 3: Avoid reference to past criminal history

Testifying to a defendant's past criminal history could be prejudicial toward the guilt of a defendant. Any reference to a prior criminal history should be avoided, such as testifying that a latent print was matched to a fingerprint card on file from a previous arrest.


Recommendations on how to avoid testimony errors
Written by Michele Triplett



Recommendation 4: Disclose exculpatory information

Analysts should be aware of Brady v. Maryland (1963), Giglio v. United States (1972), and United States v. Henthorn (1991). These rulings require government witnesses to disclose exculpatory information to the defense (information that may assist in clearing a defendant). Exculpatory information may include disclosing all conclusions, not simply conclusions that implicate the defendant; disclosing information about anyone who may have disagreed with the reported conclusion; and disclosing unfavorable information about the analyst. This information is explained further in "Brady Material and the Expert Witness," Evidence Technology Magazine, January-February 2011 (Page 10).

Recommendation 5: Avoid overstating conclusions

Federal Rules of Evidence Rule 702 describes "Testimony by Experts", stating: "If scientific, technical, or other specialized knowledge will assist the trier of fact to understand the evidence or to determine a fact in issue, a witness qualified as an expert by knowledge, skill, experience, training, or education, may testify thereto in the form of an opinion or otherwise, if (1) the testimony is based upon sufficient facts or data, (2) the testimony is the product of reliable principles and methods, and (3) the witness has applied the principles and methods reliably to the facts of the case."

Since expert testimony is commonly referred to as opinion evidence, some could incorrectly assume that conclusions may be the personal opinion of the expert. An important element of Rule 702 is that conclusions must be based on sufficient facts or data. Testifying to a conclusion that is merely the personal belief of the expert, not based on sufficient facts or data, may be overstating a conclusion and would therefore be considered an error in testimony.

Many examples can be shown

where testimony errors have occurredbut not had any negative effect on theoutcome of a case, falsely implyingcertain testimony is permissible. Minorerrors may be tolerated if there is noobjection to the error or if the error isconsidered harmless.

Nevertheless, forensic practition-ers should be aware of testimonyrules to avoid testifying incorrectlythemselves and adversely affectingthe outcome of a trial. ���

About the Author
Michele Triplett is Latent Operations Manager of the King County Sheriff's Office in Seattle, Washington. She holds a Bachelor of Science degree in Mathematics and Statistical Analysis from Washington State University and has been employed in the friction-ridge identification discipline for more than 19 years. She can be reached by e-mail at: [email protected]



From Evidence Technology Magazine • November-December 2011 • www.EvidenceMagazine.com

THE FRICTION RIDGE
Explaining the Concept of Sufficiency to Non-Practitioners
Written by John P. Black, CLPE, CFWE, CSCSA

EVERY SINGLE DAY, fingerprint examiners routinely and reliably determine that questioned friction ridge impressions possess sufficient information to identify to a known source. These sufficiency determinations are made based on the quality and quantity of information available in an impression, as well as the ability, experience, training, and visual acuity of the examiner.

Although there is currently no generally accepted standard for sufficiency in the fingerprint community, examiners trained to competency can, and do, reach valid conclusions that are supported by physical evidence and will withstand scientific scrutiny.

Friction ridge examiners can typically discuss their examinations and results among colleagues without any difficulty. This often is not the case, however, when the discussion involves the concept of sufficiency. As a result, it may also be difficult to explain this concept to non-practitioners, such as jurors, judges, and attorneys.

What makes this problematic is that the non-practitioners mentioned above, particularly jurors and judges, often make crucial decisions based on the information they receive. If they don't understand how friction ridge examiners can reliably determine sufficiency, then they don't have all the information they need to make an informed decision. With this in mind, several analogies are offered below for helping non-practitioners understand the concept of sufficiency.

• Teachers routinely test students to determine if they have a sufficient understanding of the course material. Teachers are able to do this based on their training and experience in testing numerous students over a long period of time. They can reliably determine whether the students have truly grasped the material, or if they have simply memorized and regurgitated the information for test purposes.

• Mechanics typically perform leak checks after patching damaged tires. Once the patch is applied and the tire re-inflated, the mechanic will apply a soapy solution to the patch area and subsequently look for any air bubbles around the patch. If no air bubbles are observed, then the mechanic determines the patch job is sufficient and that the tire is safe to put back into service. This decision is governed by the mechanic's training and experience in patching many tires over time.

• Farmers must constantly monitor their crops to determine if they are providing sufficient water, fertilizer, and pest-control methods to ensure a successful harvest. Again, their decisions regarding the quantities needed by the plants are determined largely by the experience of the farmer.

Now, the reader may be thinking that these analogies are very simple and seem to have nothing to do with friction ridge examination. Hopefully, however, it will be recognized that these are attempts to explain the concept of sufficiency, as well as to show that sufficiency exists in other professions. More important, these analogies show that sufficiency determinations made in other professions are typically based on a person's training and experience. Why would it be any different for friction ridge examiners?

It doesn't matter if an examiner is determining sufficiency for an initial value assessment or if the sufficiency determination is for the purpose of making an identification. What does matter is that the examiner draws on his or her experience with numerous impressions, over time, to assess the quality and quantity of available information in making these sufficiency determinations.

Besides, it would not be surprising if a teacher, mechanic, or farmer is in the jury box during your next trial. They will likely have sufficient understanding of the analogies!

About the Author
John Black is a Senior Consultant and the Facility Manager for Ron Smith & Associates, Inc. in Largo, Florida. He can be reached at: [email protected]


THE FRICTION RIDGE
Human Factors in Latent-Print Examination
Written by Kasey Wertheim and Melissa Taylor

FINGERPRINT EXPERTS never make mistakes, right? A better question might be, "Are fingerprint examiners human?" The answer to that question, of course, is, "Yes." The reality of all human endeavors is that errors happen, even among the best professionals and organizations.

The field of human-factors research focuses on improving operational performance by seeking to understand the conditions and circumstances that prevent optimal performance, and then to identify strategies that prevent or mitigate the consequences of error. Understanding how human-factors issues impact latent print examinations can lead to improved procedures.

Human-factors research offers a variety of models used to detect and identify errors. Many of these models focus on a systems approach where errors are often viewed as consequences of a person's working conditions: the work environment, for example, or institutional culture and management. Rather than focusing solely on an examiner when an error occurs, a systems approach would look at underlying conditions, such as inadequate supervision, inappropriate procedures, and communications failures, to understand what role they play when errors occur. Using a systems approach to understand why errors occur will help agencies build better defenses to prevent errors or mitigate their consequences.

In September 2010, the National Institute of Standards and Technology Law Enforcement Standards Office (OLES) recognized the need for further study on how systems-based approaches, such as root-cause analysis, failure mode and effects analysis, and the Human Factors Analysis and Classification System (HFACS), could be used in forensic settings. OLES initiated a contract with Complete Consultants Worldwide (CCW) to investigate the HFACS framework and develop a web portal to help forensic managers collect and track error-related data.

This model of error in latent print examination, adapted from Dr. James Reason's 1990 "Swiss-cheese model" of error, shows how unguarded gaps in policy or procedure can ultimately result in an accident or failure. In Reason's model, each slice of cheese represents a "defensive layer" that has the opportunity to prevent an error from impacting the outcome or to keep the error from leaving the system undetected.

From Evidence Technology Magazine • January-February 2012 • www.EvidenceMagazine.com

HFACS and Swiss Cheese
Dr. Douglas Wiegmann and Dr. Scott Shappell developed HFACS in the United States Navy in an effort to identify why aviation accidents happened, and to recommend appropriate action in order to reduce the overall accident rate. The HFACS framework was based on the Swiss-cheese model of accident causation, the brainchild of Dr. James Reason.

This Swiss-cheese model gets its name because Reason proposed that highly reliable organizations are analogous to a stack of Swiss cheese, where the holes in the cheese represent vulnerabilities in a process and each slice represents "defensive layers" that have the potential to block errors that pass through the holes. Each layer has the opportunity to prevent an error from impacting the outcome or to keep the error from leaving the system undetected.
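Reason's stacked-slices idea can also be expressed as a few lines of arithmetic. The sketch below is purely illustrative (the miss rates are invented; the article assigns no numbers to its layers), but it shows why adding independent defensive layers sharply reduces the chance that an error leaves the system undetected:

```python
# Illustrative arithmetic only: in the Swiss-cheese model, a failure
# occurs when an error slips through a "hole" in every defensive layer.
# If each layer independently misses an error with some probability,
# the chance the error escapes all layers is the product of those
# miss probabilities.

def undetected_failure_probability(miss_rates):
    """Probability an error passes through every defensive layer."""
    p = 1.0
    for miss in miss_rates:
        p *= miss
    return p

# Hypothetical layers: verification, supervisory review, technical
# review, each with an invented miss rate.
layers = [0.10, 0.20, 0.05]
print(round(undetected_failure_probability(layers), 6))  # 0.001
```

With three layers that individually miss 10%, 20%, and 5% of errors, only 0.1% of errors would escape all three; remove any one layer and the escape rate rises five- to twenty-fold.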

Applying HFACS to Forensics
Working at the request of the OLES, CCW has developed an online tool that provides latent print managers and supervisors with an easy and efficient way to determine and document the factors that led to human error in a latent print case. The web-based portal is now live and ready to receive input from the latent print community. Users will be able to use this tool to identify "root causes" of errors (or near-misses) by reviewing a list of domain-specific issues and selecting the ones that are applicable to the incident in question.

The website will remain live to allow enough time to develop a database with a sufficient variety of entries. These responses will then be studied in the hope of gaining further insight into the nature of human error in latent print examination. The reporting process is anonymous, and no data will be collected on human subjects. The portal was also designed so that it does not collect any law-enforcement-sensitive data.

Perhaps the best way to gain insight into the HFACS system for latent prints is to take a look at the HFACS outline. For our purposes, the four "slices" in the original Swiss-cheese model have been renamed Examiner Actions, Preconditions, Supervisory, and Organizational Influences.

Factors that Can Contribute to Human Error in Latent Print Examinations

1. Examiner Actions that Can Contribute to Error
   A) Errors
      • Skill-Based Errors
      • Decision Errors
      • Perceptual Errors
   B) Violations
      • Routine Infractions: "bending" of the rules, tolerated by management
      • Exceptional Infractions: "isolated" deviation, not tolerated by management
2. Preconditions that Can Contribute to Error
   A) Substandard Environmental Factors
      • Substandard Physical (operational and ambient) Environment
      • Substandard Technological Environment
   B) Substandard Conditions of the Examiner
      • Adverse Mental States: mental conditions that affect examiner performance
      • Adverse Physiological States: medical or physiological conditions that preclude safe examinations
      • Physical / Mental Limitations: situation exceeds the capabilities of the examiner
   C) Substandard Personnel Factors (Practice of Examiners)
      • Communication, Coordination, & Planning (Examiner Resource Management) Failures
      • Fitness for Duty
3. Supervisory Issues that Can Contribute to Error
   A) Inadequate Operational Process
      • Problems in Operations
      • Problems with Procedures
      • Inadequate Oversight
   B) Inadequate Supervision or Leadership
   C) Supervisor Planned Inappropriate Operations: unavoidable during emergencies but unacceptable during normal operations
   D) Supervisor Failed to Correct a Known Problem
   E) Supervisory Ethics or Violations: intentional actions that are willfully conducted by supervisors
4. Organizational Influences that Can Contribute to Error
   A) Inadequate Resource / Acquisition Management
      • Problems with Human Resources
      • Inadequate Monetary / Budget Resources
   B) Problems with the Organizational Climate
      • Problems with Structure of the Organization
      • Problems with Organization Policies
      • Problems with Organization Culture
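For a data-collection tool like the portal described above, a layered taxonomy of this kind lends itself to a simple nested structure. The sketch below is hypothetical (the portal's actual schema is not described in the article, and the category names are condensed from the outline); it shows one way contributing factors could be validated and tallied across incident reports:

```python
from collections import Counter

# Condensed, illustrative subset of the HFACS-for-latent-prints outline.
# The structure is a sketch, not the actual portal's data model.
HFACS = {
    "Examiner Actions": ["Skill-Based Errors", "Decision Errors",
                         "Perceptual Errors", "Routine Infractions",
                         "Exceptional Infractions"],
    "Preconditions": ["Substandard Environment", "Adverse Mental States",
                      "Adverse Physiological States", "Fitness for Duty"],
    "Supervisory": ["Inadequate Oversight",
                    "Failed to Correct a Known Problem",
                    "Supervisory Ethics or Violations"],
    "Organizational Influences": ["Resource Management",
                                  "Organizational Climate"],
}

def tally_factors(incidents):
    """Count how often each (level, factor) pair is cited across reports."""
    counts = Counter()
    for cited in incidents:
        for level, factor in cited:
            # Reject factors that are not part of the taxonomy.
            if factor not in HFACS.get(level, []):
                raise ValueError(f"unknown factor: {level}/{factor}")
            counts[(level, factor)] += 1
    return counts

# Two hypothetical incident reports:
incidents = [
    [("Examiner Actions", "Decision Errors"),
     ("Supervisory", "Inadequate Oversight")],
    [("Supervisory", "Inadequate Oversight")],
]
print(tally_factors(incidents)[("Supervisory", "Inadequate Oversight")])  # 2
```

Tallies like this are what make a systems approach actionable: recurring supervisory or organizational factors become visible across cases rather than being attributed to individual examiners one incident at a time.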



If you are a latent print examiner or supervisor, consider making our data collection efforts pay off by entering incidents into the portal. Without input, this effort will not have the impact that it could. And there is a benefit to those who enter their information: upon submission, a report is generated that details the factors the user identified as contributors to the incident being entered. This could be a valuable printout to obtain for the file, detailing those factors you deemed important in a particular latent print error event.

Log on today to check out the latent print HFACS portal and consider contributing to the project: www.clpex.com/nist

About the Authors
Melissa Taylor is a management and program analyst with the Law Enforcement Standards Office (OLES) at the U.S. Department of Commerce's National Institute of Standards and Technology. Her work within the Forensic Science Program focuses primarily on fingerprint-related research and integrating human-factors principles into forensic sciences. Taylor currently serves as a member of the INTERPOL AFIS Expert Working Group, an associate member of the International Association for Identification, and co-chair of the White House Subcommittee on Forensic Science's Latent Print AFIS Interoperability Task Force.

Kasey Wertheim is an IAI Certified Latent Print Examiner and a Distinguished IAI member. He serves on SWGFAST as their webmaster and also hosts www.clpex.com, the largest web resource for latent-print examiners. He publishes the Weekly Detail, an electronic newsletter focusing on latent-print examination, to nearly 3,000 examiners every Monday morning. And he is Co-Chair of the NIST Expert Working Group on Human Factors in Latent Print Examination. He can be reached at: [email protected]


From Evidence Technology Magazine • March-April 2012 • www.EvidenceMagazine.com

THE FRICTION RIDGE
"I am 100% certain of my conclusion." (But should the jury be certain?)
Written by Heidi Eldridge

FROM TIME OUT OF MIND, forensic scientists have testified to results with phrases like "one hundred percent certain," and felt completely comfortable doing so. After all, why would we testify under oath to something that we did not believe to be true? Then, in 2009, the National Academy of Sciences report on forensic science was released, and in the aftermath, forensic scientists began to be cautioned against using this phrase and others like it. Many embraced this change, while others continue to ask: But why?

Many arguments have been made addressing the lack of wisdom in using a phrase such as "100% certain". Here is how the most common argument goes: The assertion of one's certainty does not equate to a scientific stance. Nothing in science is ever 100% certain. The cornerstone of scientific exploration is the formation of a conclusion that remains open to falsification.

Here's that argument in layman's terms: 1) I research a question. 2) I come up with the answer that I feel is the most appropriate given the data I had to examine. 3) I share my results. 4) Other people try to prove me wrong or attempt to fine-tune my answer.

Under this concept of science, the answer is never absolute. It is always subject to further testing, interpretation, and challenge. Therefore (the argument goes), if I claim that my result is 100% certain, I am tacitly admitting that my result is not scientific. For, by definition, a scientific conclusion cannot be absolute.

This argument is fine, as far as it goes, but it fails to resonate with some practitioners, particularly those who were never scientifically trained, and it fails to address the real crux of the problem: We must consider our audience.

When we testify in a court of law, our audience is not other scientists. Our audience consists of jurors. Laypeople. Watchers of CSI. The majority of these people are not scientifically trained. They expect us to bring to the courtroom that training, experience, and knowledge. And they look to us with a faith that, for some, borders on reverence. And because of this faith, we bear a huge burden of responsibility: Clarity.

Our words matter. Language is a powerful weapon. It can be used to inform, but it can also be used to persuade or mislead. We must remember that many of the phrases we use as scientists are a kind of shorthand for larger concepts that other scientists understand. But juries do not have that level of understanding. Juries accept them at face value.

When we say, "Fingerprint comparison science has a zero error rate," we might mean that, although we know people can and do make mistakes, the application of the process, if followed correctly, will lead to the correct conclusion. But what the jury hears is, "Fingerprint conclusions are never wrong."

When we say, "Fingerprints are unique," we might mean there is a great deal of biological research that supports the random formation of fingerprints, and in the limited research we have done, we have not found two that were exactly the same… so we conclude that it is extremely unlikely that two fingerprints could be identical. But what the jury hears is, "It is a proven fact that every fingerprint is different."

Similarly, when we say, "I am 100% certain of my conclusion," we might mean that we have conducted a careful examination, reached the best conclusion possible with the data available, and that we would not have reported that conclusion unless we were confident that we had done our work well. But what does the jury hear? They hear, "I'm an expert, and I'm telling you that this conclusion is fact and cannot possibly be wrong."

But the truth of the matter is, sometimes we are wrong. And what we are stating for the jury is not fact; it is opinion. To be clear, the opinion is based on something; it is not just made up out of thin air. But it is still opinion. And to state it in terms that give it the veneer of fact is both overstating and just plain misleading.

Remember your audience: the jury that is accepting what you say at face value. They need you to be precise in your use of language so they understand correctly. It is okay to say that you are certain, if you qualify the statement. Talk about your personal level of confidence in your conclusion. Talk about the work you did to reach your conclusion and why you feel it is correct. But do not imply that your opinion is an incontrovertible fact.

Juries do not know the subtext behind our conventional phrases. All they hear are the words we say. We need to be certain that those words truly convey our meaning. We owe it to the jurors who have placed their faith in us, the experts.

About the Author
Heidi Eldridge is a Certified Latent Print Examiner with the Eugene Police Department in Eugene, Oregon. She has successfully defended fingerprint science in a post-NAS Daubert-style Motion to Exclude hearing in Oregon and has been writing and teaching on the subject to help others understand how to meet these challenges. She can be reached at: [email protected]


From Evidence Technology Magazine • May-June 2012 • www.EvidenceMagazine.com

THE FRICTION RIDGE
Practitioner Error vs. Deficient Procedures
Written by Michele Triplett and William Schade

FOR MANY YEARS, forensic science has embraced the idea that any errors made were due to practitioner shortcomings or malfeasance. Testimony reflected that belief, and we were trained that "two competent examiners must always arrive at the same conclusion".

As we entered the 21st Century, accreditation requirements and a general adherence to scientific principles made agencies and practitioners more aware of concepts such as root-cause analysis and process improvement. A practitioner is certainly responsible for an error that is the result of conclusions based on his own judgment, but thorough analysis may determine that the practitioner is not solely responsible. When practitioners are required to use judgment, results may not always meet the expectations of others. If specific procedures and results are desired, then clearly stating expectations may be an easy, yet underutilized, solution.

Proper root-cause analysis and suitable corrective action are essential for those committed to quality results.

Introduction
An error is not always the result of poor decisions. Indeed, a lack of stated expectations by management is a systemic error, and may contribute to an error made by a practitioner. This deficiency requires that policies and procedures be rewritten. A thorough investigation into both the system and the practitioner's job performance should be conducted to find the cause of an error and to establish appropriate corrective action. Accepting responsibility for an error should begin at the management level and progress to examine the practitioner's actions. Only then can an appropriate solution be found. The list of possible causes of errors in the Root Cause Analysis chart is a starting point and can be expanded further.

Low tolerance levels and overconfidence may appear to be practitioner errors, but it is the responsibility of an agency (or discipline) to set the criteria and parameters. The agency must ensure that practitioners understand expectations and are capable of achieving them.

Bias may be considered a system error because an agency or discipline should have measures to reduce the potential for bias. Appropriate protocols (see 1c in the chart) can diminish pressures and influences that may affect conclusions. An agency could require additional review of a situation in which bias may have a greater influence. Applying a deductive scientific method to derive a conclusion can diminish bias as well. Such methods include: relying on objective data; attempting to falsify an assumption instead of trying to confirm it; considering data that does not fit; and reviewing the process as well as the conclusion, as opposed to simply reproducing the conclusion.

Even a difference of opinion could be a system error when expectations are vague. If a difference of opinion is troublesome, then an agency (or discipline) should set parameters to control deviation. An agency can establish a policy that conclusions must have enough justification to hold up to the satisfaction of other practitioners.

Root Cause Analysis

1) System Errors
   a) Use of a deficient method.
      i) Allowance of human judgment in lieu of a defined method or criteria.
      ii) Poorly stated or improperly communicated expectations.
         (a) The method or criteria were followed but did not produce desired results.
         (b) The criteria may need to account for differences of opinion and/or different tolerance levels.
   b) Practitioner competence not established (knowledge, ability, and skill).
      i) Lack of adequate training.
      ii) Inadequate competency testing prior to allowing a practitioner to perform casework.
   c) Lack of appropriate protocols, environment, and tools.
      i) Failure to address and limit external pressure or reduce bias.
      ii) Inadequate lighting or poorly maintained equipment.
      iii) Unavailability of appropriate consultation.
2) Practitioner Errors (Understanding the Criteria but Not Applying It)
   a) Medical problem that influences results.
      i) Degradation of cognitive abilities.
      ii) Use of medication.
   b) Lack of thoroughness.
      i) Carelessness, laziness, complacency.
      ii) Physical or mental fatigue.
      iii) Standards and procedures not followed.
   c) Ethics.
      i) Intentionally disregarding the method.
      ii) Fabrication / Falsification.

Once the cause behind an unacceptable result is established, suitable corrective action (controls to prevent unacceptable results from recurring) can be taken to improve any system, especially one that requires human decision-making. Corrective action may include revising procedures, establishing more specific criteria, additional training, and implementing competency testing.

Significance of Errors
It may be important to determine the significance of an error. Suppose an error occurred but was detected prior to any ill effects. There would be no actual consequences from the error, but the potential consequences could have been substantial. The significance of an error should be determined by considering the potential effects in lieu of the actual effects, so that serious errors are addressed appropriately.

In both the medical field and the forensic comparative sciences, some may assume a false-negative decision is not significant, since no one is given an incorrect medical treatment or falsely imprisoned due to the error. In general, this idea is known as the precautionary principle: "It is better to err on the side of caution." Forensic science has often quoted Blackstone's ratio: "…it is better that ten guilty persons escape, than that one innocent suffer."

It is true that no one is wrongfully treated or falsely imprisoned due to a false-negative conclusion, but it may leave a patient untreated or a suspect free in the community to commit more crimes. On the other hand, an erroneous exclusion may be harmless if a latent print, shoe print, or tire track should have been identified to the victim. Until an agency gains experience in determining the root cause of an error, perhaps it is better to address all errors instead of trying to determine the significance of an error.

Discussion
A hypothetical example can demonstrate this form of root-cause analysis and possible corrective action. Suppose some analysts in an office believe a piece of evidence is linked to a specific exemplar, while others disagree. Of course, varying conclusions are not acceptable. It is tempting for management to try to decide which practitioners are in error. Evaluating the conclusion against the written criteria will determine where the error lies. Analyzing the six sections from the chart will determine potential reasons behind errors.

Question 1: Were clear parameters in place to establish the identification? One reason people disagree is because they do not have a clear idea of the criteria that must be met. Without a clearly stated expectation, practitioners are free to use self-imposed criteria that may differ from person to person. If written criteria did not exist, then this may have contributed to the inconsistent conclusions (an error by management in not stating an expectation). A standard could be implemented, requiring that conclusions be based on clearly visible data, not training and experience, or that conclusions require general consensus.

Question 2: Was each practitioner competent? Many times the competency of practitioners is presumed. If this is the case, then competency has not been established, and this may have led to the problem (an error by the agency). The agency should implement a formal system to establish practitioner competency. This is a basic requirement of accreditation and should be universally adopted by agencies performing forensic comparative analysis.

Question 3: Were appropriate tools provided? If practitioners use differing tools, perhaps a 4.5x magnifier compared to digital enlargement, then it is possible for conclusions to differ between analysts. Management should ensure that practitioners have appropriate tools available, and are adequately trained to use the tools properly.

Question 4: Did one or more of the examiners have medical or visual issues? Although not a frequent occurrence, this is a realistic concern, and it should not be dismissed as a possibility.

Question 5: Did one or more of the examiners lack thoroughness? If an experienced practitioner becomes complacent, thoroughness may decrease. It can be difficult to find suitable corrective action for a practitioner who lacks thoroughness. Many supervisors simply ask practitioners to try harder, but this seldom works. Implementing additional safeguards to ensure thoroughness can resolve this problem. This may include requiring additional documentation, ensuring that practitioners perform work more methodically. Changing an environment can reduce pressures and limit distractions that may contribute to a lack of thoroughness. Limiting extra duties may help a person focus on a specific responsibility as well.

Question 6: Were the errors due to ethical issues? It may seem unlikely that ethics would be the problem, but it should always be considered.

The answers to these questions show there are several reasons analysts could have differing conclusions. The cause of an error may be systemic and not simply a practitioner error. A lack of good policies and procedures (i.e., the cause) by an agency can result in an error made by a practitioner (i.e., the resulting problem).

Conclusion
Quality results come from a quality system. Just as the airline industry learns from crash data and implements better procedures, so too should forensic science learn from errors as they occur and implement better practices to mitigate their occurrence. In the past, practitioners have been blamed for most unacceptable results. After reassessing various situations, it can be concluded that many errors can be avoided if suitable expectations and procedures are in place. Agencies and disciplines should continually re-evaluate their expectations and procedures in an effort to strive for improvement. True leadership is displayed by accepting responsibility for and correcting systemic mistakes.

About the Authors
William Schade is the Fingerprint Records Manager of the Pinellas County Sheriff's Office in Largo, Florida. He has experience in all areas of biometric identification.

Michele Triplett is the Latent Print Operations Manager for the King County Regional AFIS Identification Program in Seattle, Washington. She has worked for the program for the past 20 years.



From Evidence Technology Magazine • July-August 2012 • www.EvidenceMagazine.com

THE FRICTION RIDGE
The Weight of Subjective Conclusions
Written by Michele Triplett

FORENSIC CONCLUSIONS are typically expressed as being "matches" or "identifications".

Without statistical probabilities, these conclusions may sound like facts but are more accurately categorized as deductions, inferences, or affiliations. Most forensic conclusions are based on such a wide variety of factors that they are not currently suitable to being represented mathematically. This has led some people to question the value of forensic conclusions, holding that the conclusions are merely the analyst's personal beliefs and not solid scientific conclusions. Is this a valid concern?

The answer may lie in understanding the benefits and limitations behind different types of statistical probabilities. There are three basic types of statistical probabilities. These are known as classical, empirical, and subjective probabilities.

Classical probabilities are commonly used when there are a finite number of equally probable events, such as when tossing a coin. When tossing a coin, the probability of the outcome, either heads or tails, is one-half or 50 percent (one chosen outcome divided by the possible number of outcomes).

There are times when classical probabilities do not accurately represent the probability of an event happening, either because there are infinite possible outcomes or because the likelihoods of the outcomes are unequal. In these situations, empirical probabilities are used to estimate the possibility of the event.

When using empirical probabilities, the frequency of an event is estimated by observing a sample group rather than considering the possible number of outcomes. As an example, consider the probability of it raining in Texas. The classical probability would consider the possible outcomes (rain or no rain) and state that there is a one-half, or 50 percent, chance of rain. This is clearly inaccurate because the likelihood of each outcome is not the same. The probability would be more accurately estimated by examining a sample group, such as the number of days it has rained in the past year. Obviously, examining probabilities in this manner may still overlook other important information.

Certain situations are better represented by allowing the user to determine the probability of an event based on knowledge not considered in a mathematical equation. These are known as subjective probabilities. Accurately diagnosing a skin rash may involve analyzing the appearance of the rash, additional symptoms, recent exposures, the person’s occupation, and past occurrences of similar rashes. A doctor may diagnose a rash based on all of these factors without formally associating numerical weights with each factor. This is acceptable and highly valued if used properly and in the right situation. The value of subjective probabilities is that they can assess more information than is currently accounted for in a mathematical equation.

No single type of statistical probability is superior to another. The preferred type of probability is the one that most accurately represents the situation at hand. Numerically based probabilities may sound more persuasive, since there are objective weights associated with each factor, but they can be artificially influential if the weights are inaccurate or if the equation does not account for all relevant information.

Consider the probability of getting an “A” in a class. There are five possible outcomes (i.e., A, B, C, D, and F). The classical probability would say the probability of getting an A is one-fifth, or 20 percent. This would be inaccurate if the likelihood of attaining each grade is not the same.

Empirical probabilities may more accurately represent the situation because the frequency of past grades can be considered. However, one problem with empirical probabilities is that past events may not represent future events unless all factors are similar. Suppose someone had good grades in the past but currently is not motivated to study. In this case, an empirical probability may not accurately represent the current situation.

Instead, subjective probabilities may be able to account for additional factors that cannot be considered with classical or empirical probabilities, allowing for the best representation of the information. One concern associated with subjective probabilities is that a person may base the probability on a gut feeling, a guess, or intuition rather than on current, relevant information. A common example is when a person gives a subjective probability of the Yankees winning their division. The person is typically basing this probability on personal beliefs and desires, resulting in a personal opinion instead of a sound conclusion.

Those trained in science understand the need to refrain from relying on personal feelings, relying instead only on information that can be demonstrated to others. Stating a subjective probability of the Yankees winning their division based on relevant information, such as the number of injured players, would result in a valued logical deduction.

The value of forensic conclusions is not in their ability to be numerically quantified but rather in the soundness behind the conclusion. In certain situations, subjective probabilities may give the most accurate representation of the information at hand.

About the Author
Michele Triplett is the Latent Print Operations Manager for the King County Regional AFIS Identification Program in Seattle, Washington. She has worked for the program for the past 20 years.


Evidence Technology Magazine • November-December 2012 • www.EvidenceMagazine.com

To the Exclusion of All Others
Written by Michele Triplett

FOR THE PAST FEW YEARS, there has been ongoing debate about whether pattern-evidence identifications are “to the exclusion of all other sources”. The concern is about overstating conclusions: using the phrase to the exclusion of all others implies that a conclusion is irrefutable, with no possibility of error. The same concern has been raised regarding the use of words like definite, absolute, conclusive, 100-percent confidence, or 100-percent certainty.

2008 Hull Frye-Mack Hearing
Prior to the Frye-Mack hearing for State of Minnesota v. Jeremy Jason Hull, conclusions of identity for fingerprint impressions were considered by most practitioners to be “to the exclusion of all others”. The Frye-Mack testimony stated that a fingerprint impression could be identified to a source but not individualized. This distinction was made because the analysts felt that the word individualize presented the conclusion as a fact, while the word identify left the door open for the remote possibility that someone else possessed a similar arrangement of friction ridge detail.

The effort to make this distinction was not a matter of questioning the principle of uniqueness; instead, it was highlighting that the amount of information needed to determine uniqueness had not been established. At some point, the information under consideration may be so minimal or ambiguous that it becomes plausible that another source could have produced a similar pattern.

An additional reason for using the term identify over individualize was to specify that the unknown impression was not compared to every possible source.

SWGFAST Modification
In September 2008, based on the ideas presented in the Hull case, the Scientific Working Group on Friction Ridge Analysis, Study and Technology (SWGFAST) started the process of removing the phrase “to the exclusion of all others” from their definition of individualization. However, SWGFAST did not differentiate between the meaning of identification and individualization as the Hull Frye-Mack testimony did.

The IAI
On February 19, 2009, in a response to the National Academy of Sciences report Strengthening Forensic Science in the United States: A Path Forward, the president of the International Association for Identification (IAI), Robert Garrett, wrote a letter to IAI members stating: “Although the IAI does not, at this time, endorse the use of probabilistic models when stating conclusions of identification, members are advised to avoid stating their conclusions in absolute terms when dealing with population issues.”

Practitioners’ Views
Many practitioners put these events together and claimed they could no longer exclude all others when making a comparison. Others disagreed and felt there was nothing to forbid them from making a determination “to the exclusion of all others”.

SWGFAST had removed the phrase from their terminology, but they had not specified that it could not be stated. Similarly, the IAI letter was not a formal resolution, nor did it specifically say “to the exclusion of all others”.

Those opposed to the phrase claim it is a statement of fact, where no possibility exists that the impression could have come from another source. Others think of it as a statement indicating the range of those under consideration, acknowledging that conclusions are never absolute.

Everyone would agree that physically comparing an impression to all individuals is unrealistic. Nevertheless, some maintain that their conclusions are to the exclusion of all others regardless of whether it is stated. Those people reason that if all fingerprints are accepted as unique, and they have concluded that a fingerprint impression was made by a certain source, then they are excluding everyone else: not physically, but theoretically. The possibility of an alternative conclusion is so remote that it can be disregarded as implausible. If another source could have plausibly made the impression, then the analyst would have given a conclusion of inconclusive.

The argument that a person would have to compare a fingerprint impression to every person in order to exclude all others may apply to exact sciences, but fingerprint comparisons are not an exact science. Fingerprint comparisons are logical deductions where appropriate rules of inference are permitted; viewing all possibilities is unnecessary.

Conclusion
Regardless of which view a person holds, clearly articulating the strength of a conclusion is essential. Stating that a conclusion is “to the exclusion of all others” may be an overstatement.

Differentiating between the words identify and individualize may be one solution, but attorneys and jurors may hear the same message regardless of the term used and perceive the conclusion as a fact instead of a deduction. This misrepresentation may spark a debate between opposing counsel and undermine the credibility of otherwise accurate testimony.

Another suggestion has been to state that conclusions are the opinion of the analyst. Labeling conclusions as opinions helps avoid overstating results, but it may severely undermine a conclusion if it is perceived as the personal opinion of the analyst rather than a scientific opinion that others would corroborate.

Perhaps a better way to state any positive pattern-evidence conclusion is to use a full statement instead of simplifying the conclusion down to a single word that can be easily misconstrued. Some possibilities may be:

“The information between the impressions (latent prints, tire tracks, toolmarks, etc.) indicates that the impression was deposited by the given source.” Or…

“After analyzing the data, the only plausible conclusion I can arrive at is that this impression was made by this source.” Or…

“I have thoroughly examined the data between the impressions and I would attribute impression A as coming from source B.”

Using a statement in lieu of a single word for conclusions may be beneficial because the weight of the conclusion can be indicated along with the conclusion itself. Phrases such as these present a belief grounded in reasoning, while one-word answers present a conclusion as absolute fact.

About the Author
Michele Triplett is the Latent Print Operations Manager for the King County Regional AFIS Identification Program in Seattle, Washington. She has worked for the program for the past 20 years.

THE EVIDENCE TECHNOLOGY MAGAZINE DIGITAL EDITION
www.EvidenceMagazine.com/v10n6.htm