To Err is Algorithm: Algorithmic fallibility and economic organisation
Post on 22-Jan-2018
TRANSCRIPT
Juan Mateos-Garcia (@JMateosGarcia)
Data for Policy 2017 Conference, London, 7 September 2017
To Err is Algorithm: Algorithmic fallibility and economic organisation
Motivation
• An explosion of algorithmic decision-making
• An explosion of algorithmic error
Algorithms will always make mistakes (even without bias, manipulation or catastrophic failure)
How do we balance the benefits of more algorithmic decision-making against the costs of more algorithmic errors? What are the important factors in these situations? I approach the issue from an economics angle, focusing on three questions:
Risk, Supervision and Scale
Literature
"...in an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it" (Simon 1971, pp. 40–41)
• Attention is scarce and decision-making is not perfect
• Organisations invest in technologies that economise on attention: routines, heuristics and algorithms (see also Agrawal et al., 2017)
• They design organisational structures to manage error (Sah and Stiglitz, 1984, 1985, 1988) -> This provides the foundation for the model I sketch in the rest of the talk
Model #1 (Risk)
We consider an organisation where an algorithm (a1) needs to process informational inputs, making decisions about their quality.
• The quality of the input pool is represented by α (the share of good inputs)
• p11 is the true positive rate and p12 is the false positive rate
• Accepting a good input creates a benefit r1; accepting a bad input has a cost -r2

[Decision tree: inputs from a pool of quality α are accepted or rejected by a1; a good input is accepted with probability α·p11, yielding r1, and a bad input is accepted with probability (1-α)·p12, yielding -r2]

The expected benefit of a decision is positive if:

α·p11·r1 − (1−α)·p12·r2 > 0

Algorithms operating in high-stakes / low-quality environments should be accurate
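The acceptance condition above can be checked numerically. A minimal sketch in Python (the function name and the example parameter values are my own illustrations, not from the talk):

```python
# Model #1 sketch: expected benefit per decision.
# alpha: share of good inputs; p11: true positive rate; p12: false positive rate;
# r1: benefit of accepting a good input; r2: cost of accepting a bad input.

def expected_benefit(alpha, p11, p12, r1, r2):
    """Expected payoff of one algorithmic accept/reject decision."""
    return alpha * p11 * r1 - (1 - alpha) * p12 * r2

# High-quality pool, low stakes: even a mediocre algorithm creates value.
print(expected_benefit(alpha=0.9, p11=0.8, p12=0.2, r1=1.0, r2=1.0))

# Low-quality pool, high stakes: the same algorithm destroys value.
print(expected_benefit(alpha=0.3, p11=0.8, p12=0.2, r1=1.0, r2=10.0))
```

The two calls illustrate the slide's point: holding the algorithm's accuracy fixed, shifting to a lower-quality input pool with higher error costs flips the expected benefit from positive to negative.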
Model #2 (Supervision)
We introduce a human supervisor a2 who validates a share t of the algorithm's decisions, with a true positive rate p21 and a false positive rate p22. Her cost is C2.

[Decision tree: a share (1-t) of a1's decisions goes unvalidated, yielding r1 with probability α·(1-t)·p11 and -r2 with probability (1-α)·(1-t)·p12; the validated share t yields r1 with probability α·t·p11·p21 and -r2 with probability (1-α)·t·p12·p22]

The net contribution of the supervisor is positive if the losses she avoids by screening out bad inputs exceed the value of the good inputs she wrongly rejects plus her cost. Supervisors are more valuable in high-stakes / low-quality input environments, and when they are cheap or highly productive.
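The supervisor's net contribution can be sketched from the decision tree. This is one plausible reading of the slides, assuming C2 is the supervisor's total cost and that validated decisions only change outcomes when the supervisor overturns them (all parameter values are illustrative):

```python
# Model #2 sketch: net contribution of a human supervisor who re-checks
# a share t of the algorithm's accepted decisions.

def supervisor_net_contribution(alpha, p11, p12, p21, p22, r1, r2, t, C2):
    # Bad inputs the algorithm accepted that the supervisor now catches:
    # probability (1-alpha)*t*p12*(1-p22), each avoiding a loss of r2.
    saved = (1 - alpha) * t * p12 * (1 - p22) * r2
    # Good inputs the algorithm accepted that the supervisor wrongly rejects:
    # probability alpha*t*p11*(1-p21), each forgoing a benefit of r1.
    lost = alpha * t * p11 * (1 - p21) * r1
    return saved - lost - C2

# High-stakes, low-quality pool: an accurate supervisor more than pays for herself.
print(supervisor_net_contribution(alpha=0.3, p11=0.8, p12=0.2,
                                  p21=0.95, p22=0.1,
                                  r1=1.0, r2=10.0, t=0.5, C2=0.1))
```

Raising r2 or lowering α increases `saved` relative to `lost`, which is the slide's claim that supervision is most valuable in high-stakes / low-quality environments.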
Model #3 (Scale)
We consider what happens when the number of inputs (and decisions) increases. We assume t stays constant, and that r1 and r2 do not change with more decisions. We assume the organisation's labour costs are a function of its developer and supervisor workforces, La1 and La2.

[Diagram: more inputs increase both benefits (+) and costs (+ / +++); the net effect (?) depends on gains in accuracy vs increases in variance, plus incentives to game the algorithm]

Costs:
• Highly productive developers (skills shortage?)
• Low-productivity supervisors: Baumol's disease?
• Eventually diminishing returns set in?
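The scale argument can be illustrated with a toy calculation, assuming developer costs are roughly fixed while supervision labour scales with the number of decisions (all names and numbers below are my own illustrative assumptions):

```python
# Model #3 sketch: net value at scale when a share t of decisions
# must be validated by human supervisors.

def net_value_at_scale(n_inputs, per_decision_benefit, t,
                       dev_cost, supervisor_wage, decisions_per_supervisor):
    """Net value of processing n_inputs decisions."""
    benefits = n_inputs * per_decision_benefit              # scales with inputs
    supervisors_needed = t * n_inputs / decisions_per_supervisor
    sup_cost = supervisor_wage * supervisors_needed         # also scales with inputs
    return benefits - dev_cost - sup_cost                   # dev cost roughly fixed

# Cheap supervision: scale pays off despite the fixed developer cost.
print(net_value_at_scale(10_000, 0.5, t=0.2, dev_cost=1000,
                         supervisor_wage=1.0, decisions_per_supervisor=10))

# Expensive, low-productivity supervision (Baumol's disease):
# supervision costs grow as fast as benefits and eat the gains.
print(net_value_at_scale(10_000, 0.5, t=0.2, dev_cost=1000,
                         supervisor_wage=30.0, decisions_per_supervisor=10))
```

Because both benefits and supervision costs grow linearly in this sketch, the sign of the net value at large scale is decided by per-decision margins, not by volume; this is why low supervisor productivity can make "government by algorithm" expensive.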
Implications
1. Finding the right algorithm-domain fit
• Domains with different stakes require algorithms with different accuracies (e.g. recommendation engine vs criminal justice system)
• Government by algorithm could get expensive if it requires substantial human supervision
2. On the costs and benefits of human supervision
• Crowdsourcing can reduce supervision costs... but it creates a new type of algorithmic unfairness
• Human supervision can help detect & address sudden declines in performance, especially where costs are harder to measure or are external to the organisation
Conclusion
Extensions
• Consider alternative and more complex organisational designs
• Extend to algorithmic discrimination situations
• Endogenise quality α to cover games between platforms and bad actors
• Explore effects of scale on benefits and costs (r1, r2)
Applications
• Operationalise, simulate and experiment
• Make Economics part of "a practical and broadly applicable social-systems analysis [that] thinks through all the possible effects of AI systems on all parties [drawing on] philosophy, law, sociology, anthropology and science-and-technology studies, among other disciplines" (Calo and Crawford, 2016)
nesta.org.uk
@nesta_uk
juan.mateos-garcia@nesta.org.uk
@jmateosgarcia