Algorithmic Fairness & Machine Learning
Fredrik D. Johansson
DSAI.se, CSE, Chalmers


Page 1:

Algorithmic Fairness & Machine Learning
Fredrik D. Johansson
DSAI.se, CSE, Chalmers

Page 2:

Artificial intelligence (AI) is already part of society

• Autonomous transportation
• Recommendation systems
• "Smart" homes
• Clinical decision support

Page 3:

Decision making

Illustration: De-Arteaga, 2019, http://demo.clab.cs.cmu.edu/ethical_nlp/slides/BiasInBios.pdf

Page 4:

ML in decision making

• Prediction of future crime
• Prediction of job success
• Prediction of risk of death

Illustration: De-Arteaga, 2019, http://demo.clab.cs.cmu.edu/ethical_nlp/slides/BiasInBios.pdf

Page 5:

Supervised machine learning (ML)

• Systems that learn to predict label Y for input X
• Example: Recognition
  What is this? X = [image], Y = "dog"
• Data: Labeled images
  X = [image], Y = "cat"

Page 6:

Supervised learning II

[Diagram: the supervised learning loop; 1. Observation, 2. Prediction, 3. Supervision, 4. Learning]

[Bar chart: error in predicting the correct label when categorizing images into 1000 classes¹; vertical axis 0%–30%]

¹ Deng et al., CVPR, 2009

Page 7:

Empirical risk minimization

• Find the model θ with the least observed prediction mistakes

Learning:   minimize_θ  𝔼̂[ Ŷ_θ ≠ Y ]

where Ŷ_θ is the predicted label, Y is the true label, and 𝔼̂ is the frequency of observed mistakes in the data.
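
A minimal sketch of this objective in code, on a synthetic setup of our own (not from the deck), assuming scikit-learn is available; as is standard, the non-differentiable 0/1 loss is replaced by the logistic-loss surrogate during training:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                                 # inputs X
theta_true = rng.normal(size=5)
Y = (X @ theta_true + rng.normal(size=1000) > 0).astype(int)   # labels Y

# Fit a model; logistic loss stands in for the 0/1 objective above.
model = LogisticRegression().fit(X, Y)
Y_hat = model.predict(X)
print("frequency of observed mistakes:", np.mean(Y_hat != Y))
```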

Page 8:

ML in decision making

• Prediction of future crime

What happens when this prediction is biased?
What if decisions made based on it are discriminatory?

Illustration: De-Arteaga, 2019, http://demo.clab.cs.cmu.edu/ethical_nlp/slides/BiasInBios.pdf

Page 9:

Biased data & biased algorithms

• Machine learning predictions are more accurate for some
• AI system for bail decisions is more lenient for white people, more strict for black people¹
• ML may preserve human biases or create new ones, or both!

[Screenshot: "Machine Bias"¹]

¹ Angwin et al., ProPublica, 2016

Page 10:

Biased data & biased algorithms (cont.)

• How can we avoid this?

Page 11:

Outline

1. Definitions of fairness and bias
2. Reducing bias
3. Limitations of group fairness
4. Wrapping up

Page 12:

Shout-out to Moritz Hardt's and Solon Barocas'
NeurIPS '17 Tutorial on Fairness in Machine Learning

Look it up for more details & examples!
https://mrtz.org/nips17

Page 13:

Algorithmic fairness

• A system may be deemed unfair if it discriminates against individuals based on information that is irrelevant to the system's purpose
• The details are inherently domain-specific
• Algorithmic fairness attempts to formalize this mathematically

Page 14:

Example of unfairness

• Scenario: On average, the risk of black criminals committing crimes after being released is overestimated more often than the risk for white criminals.
• This is a concern stated on the group level
  • We will see other examples later!
• The statement is tied to the attributes black/white

Page 15:

Protected attributes

• It is illegal in Sweden to discriminate on the basis of sex, gender, ethnicity, religion, disability, sexual preference or age¹
• These are examples of so-called protected attributes
• We will first discuss fairness w.r.t. these

¹ https://www.do.se/lag-och-ratt/diskrimineringslagen/

Page 16:

Direct & indirect discrimination

• Direct discrimination (disparate treatment): Individuals are explicitly treated differently on the basis of a protected attribute
• Indirect discrimination (disparate impact): Individuals with certain protected attributes are disadvantaged as a result of seemingly neutral policies

Page 17:

Acting on predictions

• For simplicity, we assume that the decision-maker is only interested in the target of prediction¹

[Diagram: context X and protected attribute A feed the prediction Ŷ(X, A) of future crime; the future crime Y itself is unobserved]

¹ I.e., if Y was known, we would base our decision on that

Page 18:

How to formalize fairness?

• Attempt 1: Independence
  To avoid direct discrimination, enforce independence between the prediction Ŷ and the protected attribute A:

  Ŷ ⊥ A   (Ŷ and A statistically independent)

• or equivalently, Pr[Ŷ = 1 | A = 0] = Pr[Ŷ = 1 | A = 1]
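
A sketch of how this criterion can be checked from samples; the helper name and toy arrays are ours, assuming binary Ŷ and A:

```python
import numpy as np

def demographic_parity_gap(y_hat, a):
    """Estimate |Pr[Yhat = 1 | A = 0] - Pr[Yhat = 1 | A = 1]| from samples."""
    return abs(y_hat[a == 0].mean() - y_hat[a == 1].mean())

# Toy example: the positive rate is 0.75 for A = 0 and 0.25 for A = 1.
y_hat = np.array([1, 0, 1, 1, 0, 0, 1, 0])
a     = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_hat, a))  # 0.5 -> independence violated
```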

Page 19:

Independence criterion:   Ŷ ⊥ A

• Independence is a very crude metric
  • Also called statistical parity, demographic parity
• Does not take into account the context X (e.g., criminal record) or the probability of the outcome Y (e.g., recidivism)
  • Allows accuracy in one group, randomness in the other
• The outcome Y could be correlated with A
  • Then, the perfect prediction Ŷ = Y does not satisfy independence

Page 20:

How to formalize fairness?

• Attempt 2: Separation
  Attempts to eliminate reliance on the protected attribute given perfect information about the outcome:

  Ŷ ⊥ A | Y

• or equivalently, Pr[Ŷ | A = 0, Y] = Pr[Ŷ | A = 1, Y]

Page 21:

Separation criterion:   Ŷ ⊥ A | Y

• Fixes some shortcomings of independence:
  • Allows perfect prediction: Ŷ = Y
• Captures equality in opportunity:

  Pr[Ŷ = 1 | A = 0, Y = 1] = Pr[Ŷ = 1 | A = 1, Y = 1]

  Given that individuals with different protected attributes are both going to succeed, they are given the same opportunity (estimated in the sketch below).
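
A sketch of checking this equal-opportunity condition empirically; the helper is our own, assuming binary arrays:

```python
import numpy as np

def equal_opportunity_gap(y_hat, y, a):
    """|Pr[Yhat = 1 | A = 0, Y = 1] - Pr[Yhat = 1 | A = 1, Y = 1]|,
    i.e., the difference in true positive rates between the groups."""
    tpr_0 = y_hat[(a == 0) & (y == 1)].mean()
    tpr_1 = y_hat[(a == 1) & (y == 1)].mean()
    return abs(tpr_0 - tpr_1)
```

In practice this would be evaluated on held-out data; a gap near zero suggests the condition approximately holds.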

Page 22:

How to formalize fairness?

• Attempt 3: Sufficiency
  Attempts to obtain a prediction Ŷ to which adding A gives no extra information about the outcome Y:

  Y ⊥ A | Ŷ

• or equivalently, Pr[Y | A = 0, Ŷ] = Pr[Y | A = 1, Ŷ]

Page 23:

Sufficiency criterion:   Y ⊥ A | Ŷ

• Sufficiency holds for group-calibrated models
• Let R denote the score used to determine Ŷ
  • For example Ŷ = 1[R > t] for some threshold t
• Then, R is calibrated for group a if

  Pr[Y = 1 | R = r, A = a] = r

  The probability of the outcome when we assign score r is equal to r.
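
A sketch of a per-group calibration check by binning the score R; the helper is ours, and in practice one would also want uncertainty estimates per bin:

```python
import numpy as np

def group_calibration(r, y, a, group, bins=5):
    """Within group `group`, compare the mean score to the observed rate of
    Y = 1 in each score bin; calibration means the two roughly match."""
    r_g, y_g = r[a == group], y[a == group]
    edges = np.linspace(0.0, 1.0, bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (r_g >= lo) & (r_g < hi)
        if in_bin.any():
            print(f"[{lo:.1f}, {hi:.1f}): mean score {r_g[in_bin].mean():.2f}, "
                  f"observed Pr[Y=1] {y_g[in_bin].mean():.2f}")
```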

Page 24:

Statistical fairness summary

  Independence          Separation                 Sufficiency
  Ŷ ⊥ A                 Ŷ ⊥ A | Y                  Y ⊥ A | Ŷ
  Statistical parity    E.g., equal opportunity    Calibration

• These are potentially all desired properties of predictive systems

Page 25:

Outline

1. Definitions of fairness and bias
2. Reducing bias
3. Limitations of group fairness
4. Wrapping up

Page 26:

Where does bias enter?

• Data encodes pre-existing human biases
• Machines may amplify or create bias
• Humans act on predictions in biased ways

Illustration: De-Arteaga, 2019, http://demo.clab.cs.cmu.edu/ethical_nlp/slides/BiasInBios.pdf

Page 27:

Where does bias enter?

• Let's focus on this one: machines may amplify or create bias

Illustration: De-Arteaga, 2019, http://demo.clab.cs.cmu.edu/ethical_nlp/slides/BiasInBios.pdf

Page 28:

Bias in learning

• Assume that our data is unbiased, i.e., that we learn to predict based on the things we actually care about
  • E.g., recidivism vs. previous bail decisions
• Now, say we want to ensure separation (e.g., equality in opportunity):

  Pr[Ŷ = 1 | A = 0, Y] = Pr[Ŷ = 1 | A = 1, Y]

• Why can't we just do supervised learning as normal?

Page 29:

Bias in learning

• Under separation, we are concerned with unfairness in errors:

  Pr[Ŷ = 0 | A = 0, Y = 1] ≠ Pr[Ŷ = 0 | A = 1, Y = 1]   (difference in false negative rates: underestimating risk)

Page 30:

Bias in learning (cont.)

• If Ŷ = Y, separation is satisfied exactly (unlikely in practice)
• Otherwise, errors may differ by group:

  Pr[Ŷ = 0 | A = 0, Y = 1] ≠ Pr[Ŷ = 0 | A = 1, Y = 1]   (difference in false negative rates: underestimating risk)
  Pr[Ŷ = 1 | A = 0, Y = 0] ≠ Pr[Ŷ = 1 | A = 1, Y = 0]   (difference in false positive rates: overestimating risk)

Both gaps can be estimated from held-out predictions, as in the sketch below.
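
A sketch of both gap estimates at once; the helper name is ours, assuming binary arrays:

```python
import numpy as np

def separation_gaps(y_hat, y, a):
    """Group differences in false negative and false positive rates."""
    fnr = lambda g: np.mean(y_hat[(a == g) & (y == 1)] == 0)
    fpr = lambda g: np.mean(y_hat[(a == g) & (y == 0)] == 1)
    return abs(fnr(0) - fnr(1)), abs(fpr(0) - fpr(1))
```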

Page 31:

Supervised learning recap

• Recall: Supervised learning attempts to minimize expected error:

  minimize_θ  𝔼̂[ Ŷ_θ ≠ Y ]

• What if the error is different for different protected groups?
• The objective above does nothing to promote fairness
  • Neither does stratifying it by group and minimizing separately

Page 32:

Decomposing prediction error

• Prediction errors, false positive rates and mean squared errors may all be decomposed in terms of bias, variance and noise¹:

  Prediction error = Bias + c₁·Variance + c₂·Noise

  (error due to a poor model, a small sample, and inadequate covariates, respectively)

• Decomposing error may guide reduction of bias!²

¹ Domingos, 2000; ² Chen, Johansson, Sontag, NeurIPS, 2018

Page 33:

Decomposing unfairness

• Violations of our fairness criterion,

  Γ = Pr[Ŷ = 1 | A = 0, Y] − Pr[Ŷ = 1 | A = 1, Y]

• may be decomposed as differences in bias, variance, and noise¹:

  Γ = (Bias₀ − Bias₁) + (Variance₀ − Variance₁) + (Noise₀ − Noise₁)

¹ Chen, Johansson, Sontag, NeurIPS, 2018

Page 34:

Example: Different variance

• Far fewer samples for one group

[Scatter plot: risk of recidivism Y vs. context X, with subjects from group A and subjects from group B]

Page 35:

Example: Different variance (cont.)

[Same plot, now showing the true risk curves for group A and group B]

Page 36:

Example: Different variance (cont.)

[Same plot, now showing the fitted models for group A and group B]

Page 37:

Example: Predicting future income¹

• Examining the impact of variance on unfairness in the Adult dataset, for predicting high/low income
• Collecting more samples reduces the difference in false positive rates and false negative rates (a synthetic illustration follows below)

[Charts: unfairness in false positives and unfairness in false negatives vs. number of samples]

¹ Chen, Johansson, Sontag, NeurIPS, 2018
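
The deck reports this on the Adult dataset; below is a self-contained synthetic illustration of the same variance effect, entirely our own setup (separate models per group, with far less data for group B):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n):
    x = rng.normal(size=(n, 3))
    y = (x @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=n) > 0).astype(int)
    return x, y

x_test, y_test = sample(10_000)                     # large held-out test set
model_a = LogisticRegression().fit(*sample(5_000))  # group A: plenty of data
err = lambda m: np.mean(m.predict(x_test) != y_test)

for n_b in [20, 100, 1_000]:                        # group B: growing sample
    model_b = LogisticRegression().fit(*sample(n_b))
    print(f"n_B = {n_b}: error gap = {err(model_b) - err(model_a):+.3f}")
```

On this toy task the gap typically shrinks toward zero as group B's sample grows, mirroring the trend in the slide's charts.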

Page 38:

Example: Different bias

• Model better suited to one group

[Plot: the fitted model for group A and the fitted model for group B]

Page 39:

Example: Different noise

• Context X more predictive for one group

[Plot: outcome vs. context X for the two groups]

Page 40:

Combatting sources of unfairness

• Higher bias for group A? → Tailor the model to A
• Higher variance for group A? → Collect more samples from group A
• Higher noise for group A? → Measure more variables relevant to group A

Page 41:

Combatting sources of unfairness (cont.)

• What if I've done all I can?

Page 42:

Enforcing fairness criteria¹

• All the remedies on the previous slide attempt to reduce error for the group with higher error
• If this is infeasible, and we are willing to sacrifice some accuracy, we can constrain or post-process predictions to satisfy criteria
• Example: Learn only models such that Pr[Ŷ = 1 | A = 0] = Pr[Ŷ = 1 | A = 1] (independence); a post-processing sketch follows below

¹ Hardt, Price, Srebro, NeurIPS, 2016
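
A sketch of one simple post-processing scheme for the independence example: group-specific thresholds on a score R, chosen so the positive rates match. This is our own simplification; Hardt, Price & Srebro treat the equalized-odds case with a more careful construction.

```python
import numpy as np

def parity_thresholds(r, a, target_rate):
    """Per-group thresholds t_g so that Pr[Yhat = 1 | A = g] = target_rate."""
    return {g: np.quantile(r[a == g], 1.0 - target_rate) for g in np.unique(a)}

def predict_with_parity(r, a, thresholds):
    """Threshold each individual's score against their group's cut-off."""
    return np.array([int(r_i > thresholds[a_i]) for r_i, a_i in zip(r, a)])
```

Note the trade-off this makes explicit: parity is achieved by effectively judging the two groups against different cut-offs.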

Page 43:

Outline

1. Definitions of fairness and bias
2. Reducing bias
3. Limitations of group fairness
4. Wrapping up

Page 44:

Statistical fairness summary

  Independence    Separation    Sufficiency
  Ŷ ⊥ A           Ŷ ⊥ A | Y     Y ⊥ A | Ŷ

• These are often all desired properties of predictive systems

Page 45:

Statistical fairness incompatibility

  Independence    Separation    Sufficiency
  Ŷ ⊥ A           Ŷ ⊥ A | Y     Y ⊥ A | Ŷ

• Problem: In non-trivial cases, where Y is not independent of A or Ŷ is not independent of Y, any two of these three criteria are mutually exclusive: we have to choose!
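
A small numeric illustration of the tension, using toy score distributions of our own construction: a perfectly group-calibrated score, combined with different base rates across groups, forces different false positive rates, so sufficiency and separation cannot both hold here.

```python
import numpy as np

def fpr(scores, weights, t=0.5):
    """False positive rate of thresholding a calibrated score at t: among
    truly negative cases (mass (1 - r) per score value r), the fraction
    falling above the threshold."""
    neg_mass = weights * (1 - scores)
    return neg_mass[scores > t].sum() / neg_mass.sum()

scores = np.array([0.2, 0.8])   # calibrated: Pr[Y = 1 | R = r] = r
w_a = np.array([0.7, 0.3])      # group A's score distribution: base rate 0.38
w_b = np.array([0.3, 0.7])      # group B's score distribution: base rate 0.62
print(fpr(scores, w_a), fpr(scores, w_b))  # ~0.10 vs ~0.37: separation fails
```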

Page 46:

Machine bias: a second look

1. The risk score, COMPAS, that was used to predict recidivism was claimed to be calibrated
   • Their risk score satisfied Pr[Y = 1 | R = r, A = a] = r
   → Approximately satisfied sufficiency

[Screenshot: "Machine Bias"¹]

¹ Angwin et al., ProPublica, 2016

Page 47:

Machine bias: a second look (cont.)

2. The publication ProPublica¹ showed that false negative rates and false positive rates differed across races:

   Pr[Ŷ = 1 | A = 0, Y = 0] ≠ Pr[Ŷ = 1 | A = 1, Y = 0]

   → Did not satisfy separation

¹ Angwin et al., ProPublica, 2016

Page 48:

Machine bias: a second look (cont.)

• COMPAS approximately satisfied sufficiency but did not satisfy separation. Which do we care more about?

Page 49:

Machine bias: a second look

• Neither calibration nor separation rule out unfair practices
• For example, calibration is insensitive to differences in true risk:

  True risk Pr[Yᵢ]:  0.1  0.2  0.4  |  0.6  0.8  0.9    (average = 0.5)
  Score Ŷᵢ:          0.5  0.5  0.5  |  0.5  0.5  0.5    → Calibrated!
                     (risk overestimated) (risk underestimated)

Page 50:

Machine bias: a second look (cont.)

• The constant score above is calibrated on average, yet unfair even within the group: it overestimates the risk of the low-risk individuals and underestimates the risk of the high-risk ones.
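
The same point as executable arithmetic, using the toy numbers from the table above:

```python
import numpy as np

true_risk = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 0.9])  # Pr[Y_i] per individual
score = np.full_like(true_risk, 0.5)                  # constant score 0.5

# Among everyone assigned score 0.5, the outcome rate is exactly 0.5:
print("outcome rate at score 0.5:", true_risk.mean())  # 0.5 -> calibrated
# ...yet every single individual's risk is over- or underestimated:
print("per-individual error:", score - true_risk)
```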

Page 51:

Group & individual fairness

Statistical/group fairness:
1. Compares statistics of protected groups
2. Different statistics tell different stories
3. Definition and use of groups is often sensitive

Individual fairness:
1. Compares outcomes for comparable subjects
2. Notion of "comparable" should be task-specific
3. Hard to make practical

Page 52:

Counterfactual fairness

• Individual fairness is intimately tied to causality:
  "If I was a woman, would I have been less likely to get hired?"
• This is a counterfactual question¹. There is a large literature on these, some of which has been adapted to fairness²,³
• Difficult to specify what "if I was a woman" means
  • Can use proxies, e.g., put a female name on the CV (see the sketch below)

¹ Pearl, Causality, 2000; ² Kusner et al., NeurIPS, 2017; ³ Nabi & Shpitser, AAAI, 2018
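
A sketch of the name-proxy idea; `score_cv` is a hypothetical stand-in for a trained CV-screening model, not a real API:

```python
def score_cv(cv):
    # Hypothetical learned scorer; deliberately biased for illustration.
    return 0.7 - 0.1 * ("Jane" in cv["name"])

cv = {"name": "John Smith", "skills": ["python", "ml"], "years": 5}
cv_cf = {**cv, "name": "Jane Smith"}   # proxy counterfactual: swap the name

# A nonzero gap suggests the score relies on the (proxy for the) attribute.
print(score_cv(cv) - score_cv(cv_cf))  # 0.1 on this toy scorer
```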

Page 53:

Biased data

• So far, we've assumed that Y, the outcome itself, is fair
• Measurement of Y might itself be biased. For example, if:
  • Y represents past hiring decisions
  • or Y is an aggregate of features better suited for one group
  • or Y is outdated (e.g., an outcome under an old policy)
  • …

Page 54:

Outline

1. Definitions of fairness and bias
2. Reducing bias
3. Limitations of group fairness
4. Wrapping up

Page 55:

Wrapping up

There is no one-size-fits-all fairness, and we can't satisfy all alternatives
• We fundamentally have to choose depending on the domain

Page 56:

Wrapping up

Statistical differences don't always imply discrimination (correlation ≠ causation)
• Different fairness interpretations for the same statistics

Page 57:

Wrapping up

These issues are not specific to AI!
• The same problems arise for humans, rule-based systems, and AI alike… but we need to monitor all of these

Page 58:

Fredrik D. Johansson
[email protected]
DSAI.se

Worked on fairness with: Irene Chen, David Sontag