
MARCELA MATTIUZZO

Algorithmic Discrimination – The Challenge of Unveiling

Inequality in Brazil

Master's Dissertation

Supervisor: Professor Virgílio Afonso da Silva

UNIVERSITY OF SÃO PAULO

FACULTY OF LAW

São Paulo

2019


MARCELA MATTIUZZO

Algorithmic Discrimination – The Challenge of Unveiling

Inequality in Brazil

Master's dissertation submitted to the graduate

program in law, at the School of Law at the

University of São Paulo, within the field of

concentration of Constitutional Law, under the

supervision of Professor Virgílio Afonso da

Silva.

UNIVERSITY OF SÃO PAULO

SCHOOL OF LAW

São Paulo

2019


Cataloging in Publication

Technical Processing Service of the Library of the Faculty of Law of the University of São Paulo

Mattiuzzo, Marcela

Algorithmic Discrimination - The Challenge of Unveiling Inequality in Brazil / Marcela Mattiuzzo. -- São Paulo, 2019.

145 p.; 30 cm.

Dissertation (Master's) – Graduate Program in Law, Faculty of Law, University of São Paulo, São Paulo, 2019.

Supervisor: Virgílio Afonso da Silva.

1. Algorithmic discrimination. 2. Artificial intelligence. 3. Equality. 4. Big Data. 5. Algorithmic governance. I. Silva, Virgílio Afonso, supervisor. II. Title.


MATTIUZZO, M. Algorithmic Discrimination – The Challenge of Unveiling Inequality in Brazil. 2019. 145 p. Dissertation (Master's in Constitutional Law) – Faculty of Law, University of São Paulo, São Paulo, 2019.

Approved on:

Examination Committee

Prof. Dr. ____________________________________________

Institution: ____________________________________________

Assessment: ____________________________________________

Prof. Dr. ____________________________________________

Institution: ____________________________________________

Assessment: ____________________________________________

Prof. Dr. ____________________________________________

Institution: ____________________________________________

Assessment: ____________________________________________


Acknowledgments

I would like to thank my adviser, Professor Virgílio Afonso da Silva, who indeed made

this dissertation much better than it initially was. I also thank my "informal advisers," Artur Péricles Lima Monteiro, Flávio Marques Prol, Laura Schertel Mendes, and Victor Oliveira Fernandes, for their many contributions.

I am grateful to my family and to my friends, who provided the most important kind of

assistance: encouragement. And to everyone at VMCA, especially Vinicius Marques de

Carvalho, who worked extra hours to allow me to finish this work.


Who can rightly know what a person is? Rather: judgment is always flawed, because what we judge is the past.

Guimarães Rosa


Abstract

MATTIUZZO, Marcela. Algorithmic Discrimination – The Challenge of Unveiling Inequality in Brazil. 2019. 145 p. Dissertation (Master's Degree in Constitutional Law) – School of Law, University of São Paulo, São Paulo, 2019.

Abstract: The objective of this work is to provide some clarity on what the role of the Law can be in shedding light upon algorithmic discrimination, as well as how legal instruments could help minimize its risks, with a specific focus on the Brazilian jurisdiction.

To do so, it first engages in a debate about what algorithms indeed are, and how the emergence of the data-driven economy, Big Data, and machine learning has leveraged the use of automated systems. Next, it conceptualizes discrimination and suggests a typology of algorithmic discrimination that takes statistics into account in order to rationalize the debate.

It moves on to discuss the path towards enforcing legal norms against discriminatory outcomes arising from the use of algorithms. Because legislation specifically aimed at fighting discrimination by automated systems is still scarce (or the application of current legislation to the problem is contentious), it engages in a debate about the horizontal effects of fundamental rights – given that a relevant part of discriminatory practices occurs among private parties, and the most basic defense an individual has against discrimination is the constitutional right to equality. It then analyzes ordinary legislation in three jurisdictions – the United States of America, Germany, and Brazil – that could also be enforced against discriminatory practices arising from algorithms, with a special focus on Brazilian legislation. The legislative debate concludes with the presentation of two concrete cases of algorithmic discrimination, one concerning unemployment policy in Poland and the other regarding credit scoring in Brazil. The cases are presented so that the applicability of Brazilian legislation to algorithmic discrimination can be discussed.

The final chapter is focused on debating the path forward and what can and should be done by

experts, legislators, and policymakers to foster algorithmic innovation without losing sight of

its potential for discrimination. It first presents the literature on algorithmic governance and the

many proposals for dealing with the problem – dedicating a specific section to the challenges

brought about by machine learning – and then sets out an agenda for Brazil.

Keywords: algorithmic discrimination, artificial intelligence, equality, big data, algorithmic

governance.


Resumo

MATTIUZZO, Marcela. Discriminação Algorítmica – O desafio em desvendar a desigualdade no Brasil. 2019. 145 p. Dissertation (Master's in Constitutional Law) – Faculty of Law, University of São Paulo, São Paulo, 2019.

Resumo: The objective of this work is to clarify what role the Law can play in the debate on algorithmic discrimination, as well as how legal instruments can help mitigate the discriminatory risks of this kind of practice, with a special focus on the Brazilian jurisdiction.

To that end, the work first proposes a debate on what algorithms are and on how the emergence of the data economy, of Big Data, and of machine-learning techniques has driven the use of automated systems. It then conceptualizes discrimination, proposing a typology of algorithmic discrimination that takes statistical questions into account in order to rationalize the discussion.

The dissertation then turns to the paths for applying legal norms against algorithmic discrimination. Given that laws and norms specifically aimed at this subject are not yet widespread (and that the application of existing legislation to the question is controversial), the work proposes a debate on the horizontal effects of fundamental rights – considering that a good part of discriminatory practices involving the use of algorithms occurs among private parties, and that the most basic defense an individual has against discrimination is the constitutionally guaranteed right to equality. It then analyzes ordinary legislation in three jurisdictions – the United States of America, Germany, and Brazil – legislation that can also be applied to cases of discriminatory practices carried out through algorithms, with special emphasis on the Brazilian case. This legislative debate concludes with the presentation of two concrete cases, one concerning the policy on access to employment in Poland and the other concerning credit-scoring practices in Brazil. The cases are presented so as to assess the possibility of using Brazilian rules to deal with the discriminatory issues they concretely raise.

The final chapter focuses on the path ahead and on what can and should be done by experts, legislators, and enforcers of the law to promote innovation in the algorithmic field without losing sight of its potential discriminatory impacts. It first presents the literature on algorithmic governance and the many proposals intended to address the subject – with special attention to the challenges posed by machine learning – and then outlines an agenda for Brazil on the matter.

Keywords: algorithmic discrimination, artificial intelligence, equality, big data, algorithmic governance.


Table of Contents

1 Introduction
2 Algorithms, Models, and Discrimination
  2.1 The Algorithmic System
  2.2 The Emergence of Big Data and Machine Learning
    2.2.1 The risk of discrimination
  2.3 Discrimination and Profiling - Generalizations under the law
    2.3.1 Striving for Particularization - Is individualized decision-making superior?
    2.3.2 Profiling and Other Legal Matters
    2.3.3 Algorithmic Discrimination: A Proposed Typology
    2.3.4 Causation, Correlation, Objectivity and The Limitations of Algorithms
      2.3.4.1 Objectivity
      2.3.4.2 Prediction, not Judgment
3 Algorithmic Discrimination in Practice – The Challenges in Enforcement
  3.1 The Horizontal Effects of Fundamental Rights
    3.1.1 United States of America
    3.1.2 Germany
    3.1.3 Brazil
    3.1.4 Relevance for algorithmic discrimination
  3.2 Antidiscrimination and Data Protection Legislation – What Lies Beyond Fundamental Rights
    3.2.1 The United States and the Civil Rights Act
    3.2.2 Germany, informational self-determination, and the European Union
    3.2.3 Brazil, the Right to Equality, and the GDPA
      3.2.3.1 The Consumer Protection Code
      3.2.3.2 The Credit Information Act
      3.2.3.3 The Public Information Access Act
      3.2.3.4 The Brazilian Internet Framework
      3.2.3.5 The General Data Protection Act
  3.3 Case Studies
    3.3.1 Labor and unemployment in Poland
    3.3.2 Credit scoring in Brazil
4 Algorithmic Discrimination and the Law – The Way Forward
  4.1 Algorithmic Governance and Policy Proposals – From Transparency to Accountability
    4.1.1 The Policy Challenges of Machine Learning
  4.2 An Agenda for Brazil
References


1 Introduction

The game of Go was developed in China over 2,500 years ago; it is the oldest board game known to humankind, and it has a legion of fans. The objective of the game is simple: each of the two players must occupy more board territory than the opponent in order to win, either by capturing the opponent's stones or by surrounding empty space. The black and white stones are the tools the players use to reach their objective, and the players take turns placing them on the board. On its face, the game appears very simple, but in reality it can be terribly complex. The reason is the number of possible plays: whereas in chess the opening player may choose among 20 moves, in Go there are 361 possibilities, and the number grows exponentially thereafter. To give an idea, the estimated number of possible board configurations in chess is 10^120, whereas in Go it is 10^174 – roughly 10^54 times as many configurations as in chess.

In 2016, DeepMind – a research project turned start-up, later acquired by Google's parent company Alphabet – challenged world champion Lee Sedol to a game of Go.1 The challenger: AlphaGo, a deep neural network developed by a group of programmers. Demis Hassabis, DeepMind's CEO and co-founder, a child chess prodigy turned computer scientist, entered this endeavor after deciding that, to test the true potential of the subject he had been studying since graduating from Cambridge University – artificial intelligence – he had to crack the game of Go by beating a professional player.

AlphaGo beat Sedol 4-1 in a five-match challenge. As Hassabis explains in the AlphaGo documentary, the program originally learned how to play by watching over 100,000 games by strong amateurs, and its objective was to mimic human players. What gave it the real leap, however, was learning from its own mistakes. It has since improved its capacity by playing games against itself, and has become the best Go player in the world.2 This celebrated achievement of artificial intelligence is by no means the only application of algorithms to problem-solving; quite the contrary: algorithms have been applied to a wide array of matters with varied outcomes. Notably, they have been used by public authorities to inform decisions ranging from sentencing to determining the best treatment for a given illness.

1 Before challenging Sedol, AlphaGo had already beaten the European champion, Fan Hui. The results of the European matches were published by DeepMind in Nature. See: SILVER, D. et al. Mastering the game of Go with deep neural networks and tree search. Nature, vol. 529, January 28, 2016. Available at: <https://storage.googleapis.com/deepmind-media/alphago/AlphaGoNaturePaper.pdf>. Access: January 10, 2019.
2 SILVER, D. et al. Mastering the game of Go without human knowledge. Nature, vol. 550, January 19, 2017.

Imagine that one day, instead of leaving the decision of whether or not an individual should be imprisoned to a judge, an algorithm3 were employed to calculate the risk of convicts' future conduct as the basis for the criminal sanction; that is, the prison term would vary depending on whether the algorithm identified convicts as presenting "low" or "high" risk to society. Imagine

that to reach its decision the algorithm evaluated the convicts’ responses to a wide array of

questions that included whether the person’s parents had ever been incarcerated, whether her

friends had ever consumed illegal drugs, how often this person had been involved in fights at

school, and so on. Imagine, lastly, that the mathematical formulas and weighting used by this

algorithm were not public, such that there was no public access to the input used or outputs

generated by it, and that convicts were thus unable to determine which aspects of their behavior

led to their classification as high or low risk to society.
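To make the mechanics of such a tool concrete, the sketch below renders a hypothetical scoring rule of this kind as a weighted sum of questionnaire answers compared against a cutoff. All feature names, weights, and the threshold are invented for illustration and correspond to no real instrument:

# Hypothetical risk-scoring sketch. Weights, features, and threshold are
# invented for illustration; this is not the formula of any real tool.

WEIGHTS = {
    "parents_incarcerated": 2.0,  # 1 = yes, 0 = no
    "friends_used_drugs": 1.5,    # 1 = yes, 0 = no
    "school_fights": 0.5,         # number of reported fights
    "prior_arrests": 1.0,         # number of prior arrests
}
THRESHOLD = 4.0  # scores at or above this cutoff are labeled "high risk"

def risk_label(answers: dict) -> str:
    """Weighted sum of questionnaire answers, bucketed into two classes."""
    score = sum(WEIGHTS[feature] * answers.get(feature, 0) for feature in WEIGHTS)
    return "high" if score >= THRESHOLD else "low"

# The person being scored sees neither WEIGHTS nor THRESHOLD, which is
# precisely the opacity problem described above.
print(risk_label({"parents_incarcerated": 1, "school_fights": 3, "prior_arrests": 1}))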

This scenario might sound like science fiction, but algorithms such as the one described above are already in use in several jurisdictions within the United States, and their implementation has also begun in Brazil.4 Much about their use is questionable,5 but this work primarily focuses on the discriminatory risks of the output generated by algorithmic systems.6 According to Julia Angwin and her team at ProPublica, who revealed the use of algorithms in criminal sentencing and its legal and moral implications, COMPAS, the tool developed by the company Northpointe (later renamed Equivant) and adopted by the justice system in states such as Florida, "turned up significant racial disparities (…) In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways". Whereas black defendants were falsely flagged as high risk and potential re-offenders at twice the rate of white defendants, white defendants were more frequently deemed low risk than black defendants.7

3 One of the goals of this work is to clarify what is meant by an algorithm and how it carries out tasks such as risk assessment. In short, an algorithm is nothing more than a sequence of actions to be performed in a given order, which generates a specific result. One can have an algorithm for making dinner, an algorithm for taking a bath, an algorithm for walking to work every day, etc. Thus, algorithms do not have to be computerized, nor do they even need to be complex.
4 The most popular tools are COMPAS and LSI-R (Level of Service Inventory Revised). In Brazil, the only instance of algorithmic use in sentencing so far is in the State of Minas Gerais, where an algorithm was used to identify the kinds of appeals presented by parties and to provide "model sentences" for review by judges. Other algorithms, however, are frequently used for purposes such as selecting Reporting Justices at the Brazilian Supreme Court. See: TRIBUNAL DE JUSTIÇA DO ESTADO DE MINAS GERAIS. TJMG utiliza inteligência artificial em julgamento virtual. November 07, 2018. Available at: <https://www.tjmg.jus.br/portal-tjmg/noticias/tjmg-utiliza-inteligencia-artificial-em-julgamento-virtual.htm#.XC1Vby2ZO1s>. Access: January 05, 2019. And: SUPREMO TRIBUNAL FEDERAL. Ministra Carmen Lúcia anuncia início de funcionamento do Projeto Victor, de inteligência artificial. Notícias STF. August 30, 2018. Available at: <http://www.stf.jus.br/portal/cms/verNoticiaDetalhe.asp?idConteudo=388443>. Access: January 05, 2019.
5 When it comes to the legality of their adoption, though the law varies from one jurisdiction to the next, democratic nations are unanimous in affirming the requirement of some form of reasoned motivation for court decisions; that rationale cannot be entirely random and must at least be sufficiently clear to allow the interested party to appeal it. In Brazil, Article 93, IX of the Constitution says that "all judgments of judicial authorities shall be public, and all decisions shall be motivated". In common law jurisdictions, the duty to give reasons may not exist when it comes to administrative decisions, but the judicial duty to give reasons is upheld. In: LO, H. The Judicial Duty to Give Reasons. Legal Studies, vol. 20, issue 1, March 2000, p. 42-65.
6 Other repercussions include the risks to data privacy and security, cf. section 2.3.2. There is a natural connection between the two, but in protecting individuals from discrimination we may not always be ensuring greater privacy.
7 ANGWIN, Julia; LARSON, Jeff; MATTU, Surya; KIRCHNER, Lauren. Machine Bias. ProPublica, May 23, 2016. Available at: <https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing>. Access: January 10, 2019. In the article, they state: "We also turned up significant racial disparities, just as Holder feared. In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways. The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants. White defendants were mislabeled as low risk more often than black defendants. Could this disparity be explained by defendants' prior crimes or the type of crimes they were arrested for? No. We ran a statistical test that isolated the effect of race from criminal history and recidivism, as well as from defendants' age and gender. Black defendants were still 77 percent more likely to be pegged as at higher risk of committing a future violent crime and 45 percent more likely to be predicted to commit a future crime of any kind."
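The disparity described by ProPublica is a difference in error rates across groups rather than in overall accuracy. Below is a minimal sketch of that comparison, assuming one has parallel lists of predicted labels, actual outcomes, and group membership; the data is fabricated solely to show the computation and is not ProPublica's dataset:

# False positive rate per group: the share of people who did not re-offend
# but were nonetheless flagged as high risk. All data here is fabricated.

def false_positive_rate(predicted, actual, group, target_group):
    flagged = sum(1 for p, a, g in zip(predicted, actual, group)
                  if g == target_group and p == "high" and a == "did_not_reoffend")
    negatives = sum(1 for a, g in zip(actual, group)
                    if g == target_group and a == "did_not_reoffend")
    return flagged / negatives if negatives else 0.0

predicted = ["high", "low", "high", "low", "high", "low"]
actual = ["did_not_reoffend"] * 6
group = ["A", "A", "A", "B", "B", "B"]

for g in ("A", "B"):
    print(g, false_positive_rate(predicted, actual, group, g))
# A model can err at similar overall rates for two groups while distributing
# false "high risk" labels very unevenly, which is the pattern ProPublica describes.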

Though the use of algorithmic systems for sentencing has yet to reach the same intensity in Brazil as in the United States, discrimination through automation is already part of our reality and, given the extensive use of such tools in other jurisdictions, the debate over them will only grow. Algorithms have been deployed for making diagnoses,8 for job selection,9 and even for welfare eligibility processes.10

The present work surveys the consequences of algorithmic discrimination, as well as the paths for protection against discriminatory conduct. Four important observations must be made in this introduction: first, as will be further scrutinized throughout this dissertation, discrimination is not the only issue that may arise from algorithmic use. Problems such as privacy infractions may also arise from algorithmic use but do not necessarily relate to discrimination, as section 2.3.2 will demonstrate. These problems are not within the scope of my work.

Second, if automation and its potential for discrimination are cause for concern, we must also remember that the world is full of generalizations, and that decision-making largely depends on generalizing to be feasible. The legal system, in particular, is full of necessary generalizations. Whenever we set out a rule – for example, one that makes it illegal to drive above 80 km/h on a given road, or one establishing that only those over a given age have the right to vote – we are using, engaging, and reinforcing generalizations. Society as a whole accepts this manner of decision- and rule-making, which leads us to suspect that the problem with profiling and discrimination is not the practice of making generalizations in itself, but rather the criteria used to create groups that are allowed (or not) to behave a certain way or access a specific good. Therefore, this dissertation takes the view, further scrutinized in section xxx, that discrimination is not always harmful, nor is it always illegal.

8 "Complex algorithms will soon help clinicians make incredibly accurate determinations about our health from large amounts of information, premised on largely unexplainable correlations in that data. […] With extraordinary accuracy, these algorithms were able to predict and diagnose diseases, from cardiovascular illnesses to cancer, and predict related things such as the likelihood of death, the length of hospital stay, and the chance of hospital readmission. Within 24 hours of a patient's hospitalization, for example, the algorithms were able to predict with over 90% accuracy the patient's odds of dying. These predictions, however, were based on patterns in the data that the researchers could not fully explain." In: BURT, A. and VOLCHENBOUM, S. How Health Care Changes When Algorithms Start Making Diagnoses. Harvard Business Review, May 08, 2018. Available at: <https://hbr.org/2018/05/how-health-care-changes-when-algorithms-start-making-diagnoses>. Access: January 10, 2019.
9 "Many companies, including Vodafone and Intel, use a video-interview service called HireVue. Candidates are quizzed while an artificial-intelligence (AI) program analyses their facial expressions (maintaining eye contact with the camera is advisable) and language patterns (sounding confident is the trick). People who wave their arms about or slouch in their seat are likely to fail. Only if they pass that test will the applicants meet some humans." THE ECONOMIST. How an algorithm may decide your career. June 21, 2018. Available at: <https://www.economist.com/business/2018/06/21/how-an-algorithm-may-decide-your-career>. Access: January 10, 2019.
10 Several examples of welfare automation in the United States are investigated in: EUBANKS, V. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press. January 23, 2018. Eubanks provides examples of how welfare eligibility in Indiana, the homeless policy in Los Angeles, and other programs were automated.

Third, the advent of machine learning has brought a number of complexities to this

landscape. A system such as AlphaGo, which works with deep neural networks, is very different

from a system like COMPAS, whose functioning is based on “old-fashioned” programming.

The consequences for discrimination are severe, especially when it comes to identifying

solutions. As section 4.1.1 further investigates, asking for transparency of machine learning

algorithms is of little use, and accountability proposals are also challenging to implement.
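The contrast can be made concrete: in a hand-coded system the decision rule is legible as written, whereas "transparency" for a trained neural network yields only arrays of numeric weights. A toy sketch, assuming scikit-learn is available, with data and model invented purely for illustration:

# Toy contrast between a hand-coded rule and a learned model.
# Data, features, and labels are invented for illustration.

from sklearn.neural_network import MLPClassifier

def handcoded_rule(prior_arrests: int) -> str:
    # "Old-fashioned" programming: the rule is readable as written.
    return "high" if prior_arrests >= 2 else "low"

X = [[0, 1], [1, 0], [2, 3], [3, 1], [0, 0], [4, 2]]  # toy feature vectors
y = [0, 0, 1, 1, 0, 1]                                # toy labels
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                      random_state=0).fit(X, y)

# Full "transparency" for the learned model is just its weight matrices:
print(model.coefs_)  # numbers, not a legible rule anyone can contest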

Lastly, it should be clear from the start that this dissertation does not in any way suggest that innovation should be contained on the grounds that it may lead to unwanted outcomes. It merely intends to better understand and investigate one potentially harmful outcome, which is quite different. As with most technologies that came before algorithms, implementation always presents pros and cons; the challenge for society is to learn how to strike a balance that preserves the benefits and contains the disadvantages.

It is for this very reason that the discussion is increasingly present in the private sector and among public authorities. In March 2018, the Council of Europe, through its Committee of Experts on Internet Intermediaries, finalized and published a study on the human rights dimensions of automated data processing techniques, focusing primarily on algorithms and possible regulatory implications.11 The objective of the study is to

map out some of the main current concerns from the Council of Europe's human rights perspective, and to look at possible regulatory options that member states may consider to minimise adverse effects, or to promote good practices.12

11 COUNCIL OF EUROPE PORTAL. Algorithms and Human Rights: a new study has been published. March 22, 2018. Available at: <https://www.coe.int/en/web/freedom-expression/-/algorithms-and-human-rights-a-new-study-has-been-published>. Access: January 10, 2019.
12 Ibid, p. 4.


Similarly, in May 2018, the Science and Technology Committee of the House of

Commons in the United Kingdom published a report on decision-making by algorithms, in order

to “identify the themes and challenges that the proposed Center for Data Ethics & Innovation

[which shall be created by the British government shortly13] should address as it begins its

work”. The report calls attention to the challenges brought about by automated decisions,

especially bias, as well as the possible solutions for such hurdles, mainly involving mechanisms

for transparency and accountability.14

The United States is also taking part in the debate. In 2016, the Federal Trade

Commission issued a report on the consequences of Big Data and its inclusionary and

exclusionary uses.15 The FTC states that

Big data analytics can provide numerous opportunities for improvements in

society. In addition to more effectively matching products and services to

consumers, big data can create opportunities for low-income and underserved

communities. […] At the same time, workshop participants and other have

noted how potential inaccuracies and biases might lead to detrimental effects

for low-income and underserved populations. For example, participants raised

concerns that companies could use big data to exclude low-income and

underserved communities from credit and employment opportunities.16

Australia was one of the first countries to expressly address the issues brought about by big data, automation, and algorithms. Back in 2007, the Australian government had already issued its Better Practice Guide for automated decision-making in the public administration. The Guide is an effort to consolidate and provide concrete solutions for the best practice principles of the Automated Assistance in Administrative Decision-Making Report issued by the Attorney-General in 2004. It states that "[a]utomated systems have been used for some years in areas of financial entitlement for citizens and other agency customer groups, such as welfare, family and veteran support benefits."17

13 DEPARTMENT FOR DIGITAL, CULTURE, MEDIA & SPORT of the UK Government. Consultation on the Centre for Data Ethics and Innovation. November 20, 2018. Available at: <https://www.gov.uk/government/consultations/consultation-on-the-centre-for-data-ethics-and-innovation/centre-for-data-ethics-and-innovation-consultation>. Access: January 10, 2019.
14 SCIENCE AND TECHNOLOGY COMMITTEE of the House of Commons. Algorithms in decision-making. Fourth Report of Session 2017-19. May 23, 2018. Available at: <https://publications.parliament.uk/pa/cm201719/cmselect/cmsctech/351/351.pdf>.
15 FEDERAL TRADE COMMISSION. Big Data, A Tool for Inclusion or Exclusion? – Understanding the Issues. January, 2016, p. 1. Available at: <https://www.ftc.gov/system/files/documents/reports/big-data-tool-inclusion-or-exclusion-understanding-issues/160106big-data-rpt.pdf>. Access: January 01, 2019.
16 Ibid, p. i.

Brazil, on its part, has so far been less explicit in this debate. The Brazilian Strategy for Digital Transformation, issued in 2018, recognizes the relevance of the data-driven economy and sets forth strategies the government should pursue to extract value from this new environment, namely: (i) promote the approval of incentives to attract data centers to Brazil; (ii) improve the National Policy on Government Open Data and foster the adoption of tools, systems, and processes based on data; (iii) promote cooperation among authorities and the harmonization of normative frameworks regarding data, in order to facilitate the inclusion of Brazilian companies in the global market; (iv) promote cooperation among government representatives, universities, and companies, so as to facilitate knowledge and technology exchange; (v) stimulate the adoption of cloud services as part of the public administration's services and data storage systems; and (vi) evaluate the potential social and economic effects of disruptive technologies, such as artificial intelligence and big data, proposing policies that aim at mitigating their negative effects while simultaneously maximizing their positive effects.18

It is in this context that this work aims at answering the following question: What ought to be the role of the Law in illuminating and addressing algorithmic discrimination in the Brazilian context?

Chapter 2 provides the basis upon which this work will be built. It explains in further detail how the emergence of Big Data was crucial in allowing algorithms to evolve. It also highlights what algorithmic discrimination is, and what it is not. It is in this chapter that I propose a typology for algorithmic discrimination, with the goal of contributing to the discussion and hopefully bringing more clarity to the topic. Chapter 3 delineates the legal framework in which algorithmic discrimination will be discussed. It presents the debate in constitutional terms, and also in terms of ordinary legislation, focusing primarily (though not solely) on the case of Brazil. It applies the enforcement debate to two concrete cases, one that took place in Poland, regarding public unemployment policy, and the other in Brazil, regarding credit scoring. Chapter 4 focuses on debating policy solutions, presenting a review of the literature on algorithmic governance as it currently stands and discussing an agenda for the Brazilian jurisdiction.

17 AUSTRALIAN GOVERNMENT. Automated Assistance in Administrative Decision-Making – Better Practice Guide. February, 2007. Available at: <https://www.oaic.gov.au/images/documents/migrated/migrated/betterpracticeguide.pdf>. Access: January 01, 2019.
18 MCTIC. Estratégia Brasileira para a Transformação Digital (E-Digital). Brasília, 2018, p. 64-65. Available at: <http://www.mctic.gov.br/mctic/export/sites/institucional/estrategiadigital.pdf>. Access: January 12, 2019.


References

ACHARYA, V. Regulating Wall Street: the Dodd-Frank Act and the new architecture of

global finance. Interviewer: DAVIES, V. VOX - CEPR Policy Portal. October 22, 2010.

Available at: <https://voxeu.org/vox-talks/regulating-wall-street-dodd-frank-act-and-new-

architecture-global-finance>.

AGRAWAL, A., GANS, J. and GOLDFARB, A. Prediction Machines: The Simple

Economics of Artificial Intelligence. Harvard Business Press, April 17, 2018.

ANANNY, M. and CRAWFORD, K. Seeing without knowing: Limitations of the

transparency ideal and its application to algorithmic accountability. SAGE journals.

December 13, 2016. Available at:

<https://journals.sagepub.com/doi/abs/10.1177/1461444816676645?journalCode=nmsa>.

ANBC. Por que a ANBC?. Available at:

<https://www.anbc.org.br/materias.php?cd_secao=3#5&friurl=_-Empresas-associadas-_>.

ANDERSON, C. The End of Theory: The Data Deluge Makes the Scientific Method

Obsolete. Wired. June 23, 2008. Available at: <https://www.wired.com/2008/06/pb-theory/>.

ANGWIN, Julia; LARSON, Jeff; MATTU, Surya; KIRCHNER, Lauren. Machine Bias.

ProPublica, May 23, 2016. Available at: <https://www.propublica.org/article/machine-bias-

risk-assessments-in-criminal-sentencing>.

ASSOCIATION FOR COMPUTING MACHINERY US PUBLIC POLICY COUNCIL

(USACM). Statement on Algorithmic Transparency and Accountability. January 12, 2017.

Available at: <http://www.acm.org/binaries/content/assets/public-

policy/2017_usacm_statement_algorithms.pdf>.

AUSTRALIAN GOVERNMENT. Automated Assistance in Administrative Decision-

Making – Better Practice Guide. February, 2007. Available at:

<https://www.oaic.gov.au/images/documents/migrated/migrated/betterpracticeguide.pdf>.

BANERJEE, A. DUFLO, E. GLENNERSTER, R. and KINNAN, C. The miracle of

microfinance? Evidence from a randomized evaluation. Northwestern University of

Economics and NBER. March, 2014. Available at: <https://economics.mit.edu/files/5993>.

BAROCAS, S. and SELBST, A. D. Big Data's Disparate Impact. California Law Review, vol. 104, p. 671, 2016. Available at: <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2477899>.

BERGER, M. M. To Regulate, or Not to Regulate – Is That the Question: Reflections on

the Supposed Dilemma between Environmental Protection and Private Property Rights.


8 Loy. L.A. L. Rev. 253, 1975. Available at:

<https://digitalcommons.lmu.edu/cgi/viewcontent.cgi?referer=https://scholar.google.com.br/sc

holar?hl=en&as_sdt=0%2C5&q=to+regulate+or+not&btnG=&httpsredir=1&article=1187&co

ntext=llr>.

BILBAO UBILLOS, J. M. La Eficacia de Los Derechos Fundamentales Frente a

Particulares: Analisis de La Jurisprudencia del Tribunal Constitucional. Centro de

Estudios Políticos y Constitucionales, 1997.

BLASS, A. and GUREVICH, Y. Algorithms: A Quest for Absolute Definitions. Bulletin of

European Association for Theoretical Computer Science. vol. 81, 2003. Available at:

<https://web.eecs.umich.edu/~gurevich/Opera/164.pdf>.

BOYD, D. and CRAWFORD, K. Six Provocations for Big Data. A Decade in Internet Time:

Symposium on the Dynamics of the Internet and Society, September 2011. Available at:

<https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1926431>.

BRILL, C. J. Scalable Approaches to Transparency and Accountability in Decision Making

Algorithms, Remarks at the NYU Conference on Algorithms and Accountability. FTC.

February 28, 2015. Available at: <https://www.ftc.gov/public-statements/2015/02/scalable-

approaches-transparency-accountability-decisionmaking-algorithms>.

BULLYNCK, M. Histories of algorithms: Past, present and future. Historia Mathematica,

Elsevier, 2015, 43 (3).

CENTRO DE ESTUDOS DA METRÓPOLE (CEM). Mapa da Vulnerabilidade Social da População da Cidade de São Paulo. Centro Brasileiro de Análise e Planejamento (CEBRAP), Serviço Social do Comércio (SESC), and Secretaria Municipal de Assistência Social de São Paulo (SAS-PMSP). São Paulo, 2004. Available at:

<http://web.fflch.usp.br/centrodametropole/upload/arquivos/Mapa_da_Vulnerabilidade_social

_da_pop_da_cidade_de_Sao_Paulo_2004.pdf>.

CITRON, D. K. Technological Due Process. 85 Wash. U. L. Rev. 1249, 2008.

Available at: <http://openscholarship.wustl.edu/law_lawreview/vol85/iss6/2>.

CITRON, D. K. and PASQUALE, F. The Scored Society: Due Process For Automated

Predictions. Washington Law Review, vol. 89:1, 2014. Available at:

<https://digital.law.washington.edu/dspace-

law/bitstream/handle/1773.1/1318/89WLR0001.pdf>.

COMMISSION OF THE EUROPEAN COMMUNITIES. Council Directive on implementing

the principle of equal treatment between persons irrespective of religion or belief,

disability, age or sexual orientation. Brussels, 2008. Available at: <https://eur-

lex.europa.eu/legal-content/en/TXT/?uri=CELEX%3A52008PC0426>.


COMMUNICATIONS COMMITTEE. The Internet: to regulate or not to regulate? inquiry.

Parliamentary business. Available at:

<https://www.parliament.uk/business/committees/committees-a-z/lords-

select/communications-committee/inquiries/parliament-2017/the-internet-to-regulate-or-not-

to-regulate/>.

CORMEN, T. H. Algorithms Unlocked. MIT Press, 2013.

COUNCIL OF EUROPE PORTAL. Algorithms and Human Rights: a new study has been

published. May 22, 2018. Available at: <https://www.coe.int/en/web/freedom-expression/-

/algorithms-and-human-rights-a-new-study-has-been-published>.

CRAWFORD, K. The Hidden Biases in Big Data. Harvard Business Review. April 01, 2013.

Available at: <https://hbr.org/2013/04/the-hidden-biases-in-big-data>.

CUMMINGS, M. L. The Social and Ethical Impact of Decision Support Interface Design.

International Encyclopedia of Ergonomics and Human Factors. 2005. Available at:

<https://pdfs.semanticscholar.org/a9b3/ec436508ebfa40f3a3f5b59231bece4f3e34.pdf>.

DEMSETZ, H. Why Regulate Utilities?. Journal of Law and Economics, vol. 11, no. 1, The

University of Chicago Press Journals, April, 1968. Available at:

<https://www.journals.uchicago.edu/doi/abs/10.1086/466643?journalCode=jle>.

DEPARTMENT FOR DIGITAL, CULTURE, MEDIA & SPORT of the UK Government.

Consultation on the Centre for Data Ethics and Innovation. November 20, 2018. Available

at: <https://www.gov.uk/government/consultations/consultation-on-the-centre-for-data-ethics-

and-innovation/centre-for-data-ethics-and-innovation-consultation>.

DIAKOPOULOS, N. and FRIEDLER, S. How to Hold Algorithms Accountable. MIT

Technology Review. November, 2016.

DIAKOPOULOS, N., FRIEDLER, S., ARENAS, M. et al. Principles for Accountable

Algorithms and a Social Impact Statement for Algorithms. FAT/ML. Available at:

<https://www.fatml.org/resources/principles-for-accountable-algorithms>.

DOMINGOS, P. The Master Algorithm. Basic Books Inc. New York, 2018.

DONEDA, D. and MENDES, L. S. Data Protection in Brazil: New Developments and

Current Challenges. In: GUTWIRTH, S., LEENES, R. and DE HERT, P. (Eds). Reloading

Data Protection: Multidisciplinary Insights and Contemporary Challenges. Springer, 2014.

DOSHI-VELEZ, F. and KORTZ, M. Accountability of AI Under the Law: The Role of

Explanation. Berkman Klein Center Working Group on Explanation and the Law, Berkman


Klein Center for Internet & Society working paper, 2017. Available at:

<https://dash.harvard.edu/bitstream/handle/1/34372584/2017-11_aiexplainability-

1.pdf?sequence=3>.

DÜRIG, G. Grundrechte und Zivilrechtsprechung. Vom Bonner Grundgesetz zur

gesamtdeutschen Verfassung : Festschrift zum 75. Geburtstag von Hans Nawiasky, 1956.

EDWARDS, L. and VEALE, M. Slave to the Algorithm? Why a ‘Right to an Explanation’

Is Probably Not the Remedy You Are Looking For. 16 Duke Law & Technology Review 18,

2017. Available at:

<https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2972855&download=yes>.

ELLIS, E. and WATSON, P. EU Anti-Discrimination Law - Second Edition. Oxford EU Law

Library, 2012.

ELLROTT, J. TRITTMANN, R. and WERMEISTER, C. Automated driving law passed in

Germany. June 21, 2017. Available at: <https://www.freshfields.com/en-gb/our-

thinking/campaigns/digital/internet-of-things/connected-cars/automated-driving-law-passed-

in-germany/>.

EUBANKS, V. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press. January 23, 2018.

EUROPEAN COMMISSION. Charter of Fundamental Rights: the Presidents of the

Commission, European Parliament and Council sign and solemnly proclaim the Charter

in Strasbourg. Brussels, December 12, 2007. Available at: <http://europa.eu/rapid/press-

release_IP-07-1916_en.htm>.

EUROPEAN COMMISSION’S HIGH-LEVEL EXPERT GROUP ON ARTIFICIAL

INTELLIGENCE (AI HLEG). Draft Ethics Guidelines for Trustworthy AI. European

Commission, Brussels, March, 2019 (final version). Available at:

<https://ec.europa.eu/futurium/en/node/6044>.

FEDERAL TRADE COMMISSION. Big Data, A Tool for Inclusion or Exclusion? –

Understanding the Issues. January, 2016. Available at:

<https://www.ftc.gov/system/files/documents/reports/big-data-tool-inclusion-or-exclusion-

understanding-issues/160106big-data-rpt.pdf>.

FUSTER, G. G. The Emergence of Personal Data Protection as a Fundamental Right of

the EU. Springer International Publishing, 2014.

GARDBAUM, S. The ‘Horizontal Effect’ of Constitutional Rights. Michigan Law Review,

vol. 102, UCLA School of Law Research Paper No. 03-14, pp. 388-459, 2003. Available at:

<https://papers.ssrn.com/sol3/papers.cfm?abstract_id=437440>.


GASSER, U. and ALMEIDA, V. A.F. A Layered Model for AI Governance. IEEE Internet

Computing 21 (6) (November): 2017.

GILLESPIE, T. Chapter 2- Algorithm. In: PETERS, B. (Ed.). Digital Keywords: a Vocabulary

of information society and culture. Princeton: Princeton University Press, 2016.

GOODMAN, B. W. A Step Towards Accountable Algorithms?: Algorithmic

Discrimination and the European Union General Data Protection. 29th Conference on

Neural Information Processing Systems. Barcelona, Spain, 2016.

GORZONI, P. F. A. da C. Supremo Tribunal Federal e a Vinculação dos Direitos

Fundamentais nas Relações entre Particulares. 2007. Available at:

<http://www.sbdp.org.br/publication/supremo-tribunal-federal-e-a-vinculacao-dos-direitos-

fundamentais-nas-relacoes-entre-particulares/>.

GUNNING, D. Explainable Artificial Intelligence (XAI). DARPA/I2O. November, 2017.

Available at: <https://www.darpa.mil/attachments/XAIProgramUpdate.pdf>.

GUREVICH, Y. What Is an Algorithm?. In: BIELIKOVÁ M., FRIEDRICH, G., GOTTLOB,

G., KATZENBEISSER, S., TURÁN, G. (eds) SOFSEM 2012: Theory and Practice of

Computer Science. SOFSEM 2012. Lecture Notes in Computer Science, vol. 7147. Springer,

Berlin, Heidelberg, 2012. Available at:

<https://www.researchgate.net/publication/221512843_What_Is_an_Algorithm>.

HARFORD, T. Big data: are we making a big mistake?. The Financial Times. 2014.

Available at: <https://rss.onlinelibrary.wiley.com/doi/pdf/10.1111/j.1740-9713.2014.00778.x>.

HAWKINS, J. and BLAKESLEE, S. On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines. Times Books, 2004.

HESSE, K. Verfassungsrecht und Privatrecht. Müller Jur. Vlg. C. F., 1988.

HUET, E. Server And Protect: Predictive Policing Firm PredPol Promises To Map Crime

Before It Happens. Forbes, February 11, 2015. Available at:

<https://www.forbes.com/sites/ellenhuet/2015/02/11/predpol-predictive-

policing/#6efe33ae4f9b>.

IAPP and THE UNITED NATIONS GLOBAL PULSE. Building Ethics into Privacy

Frameworks for Big Data and AI. Available at:

<https://iapp.org/media/pdf/resource_center/BUILDING-ETHICS-INTO-PRIVACY-

FRAMEWORKS-FOR-BIG-DATA-AND-AI-UN-Global-Pulse-IAPP.pdf>.


ITS. Transparência e governança nos algoritmos: um estudo de caso sobre o setor de birôs

de crédito. Rio de Janeiro, 2017.

JIA, J., JIN, G. J. and WAGMAN, L. The Short-Run Effects of GDPR On Technology

Venture Investment. NBER Working Paper no. 25248, November, 2018. Available at:

<https://www.nber.org/papers/w25248>.

JUSTICE AND CONSUMERS. Guidelines on Automated Individual decision-making and

Profiling for the purposes of Regulation 2016/679. Adopted on October 03, 2017. Available

at: <https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=612053>.

KELLSTEDT, P. M. and WHITTEN, G. The Fundamentals of Political Research. New York:

Cambridge University Press, 2009.

KLEINBERG, J. and MULLAINATHAN, S. Simplicity Creates Inequity: Implications for

Fairness, Stereotypes, and Interpretability. Cornell University, September 12, 2018.

Available at: <https://arxiv.org/abs/1809.04578>.

KNIGHT, W. The Dark Secret at the Heart of AI. MIT Technology Review. April 11, 2017.

Available at: <https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-

ai/>.

KROLL, J. et al. Accountable Algorithms. 165 U. PA. L. Rev. 633, 2017. Available at:

<https://scholarship.law.upenn.edu/penn_law_review/vol165/iss3/3/>.

LAZER, D. et al. The Parable of Google Flu: Traps in Big Data Analysis. Science 343:1203-

1205, March 14, 2014. Available at:

<https://gking.harvard.edu/files/gking/files/0314policyforumff.pdf>.

LEMOS, R. et al. A criação da Autoridade Nacional de Proteção de Dados pela MP nº

869/2018. JOTA, December 29, 2018. Available at:

<https://www.jota.info/?pagename=paywall&redirect_to=//www.jota.info/opiniao-e-

analise/artigos/a-criacao-da-autoridade-nacional-de-protecao-de-dados-pela-mp-no-869-2018-

29122018>.

LERMAN, J. Big Data and Its Exclusions. Stanford Law Review Online, 2013.

LESSIG, L. Against Transparency. The New Republic, 2009. Available at:

<https://newrepublic.com/article/70097/against-transparency>.

LEVIN, S. New AI can guess whether you're gay or straight from a photograph. The

Guardian. September 08, 2017. Available at:


<https://www.theguardian.com/technology/2017/sep/07/new-artificial-intelligence-can-tell-

whether-youre-gay-or-straight-from-a-photograph>.

LI, F.-F. How to Make A.I. That’s Good for People. The New York Times, March 7, 2018.

Available at: <https://www.nytimes.com/2018/03/07/opinion/artificial-intelligence-

human.html>.

LO, H. The Judicial Duty to Give Reasons. Legal Studies, vol. 20, issue 1, March 2000, p. 42-65.

LODGE, M. and STIRTON, L. Accountability in the Regulatory State. In: BALDWIN, R.,

CAVE, M. and LODGE, M. (Eds). The Oxford Handbook of Regulation, Chapter 15, September

2010.

MATTIUZZO, M. Voto Vencido, Fundamentação Diversa e Fundamentação

Complementar: um estudo sobre deliberação no Supremo Tribunal Federal. 2011. SBDP.

Available at: <http://www.sbdp.org.br/publication/voto-vencido-fundamentacao-diversa-e-

fundamentacao-complementar-um-estudo-sobre-deliberacao-no-supremo-tribunal-federal/>.

MAYER-SCHÖNBERGER, V. and CUKIER, K. Big Data: A Revolution That Will

Transform How We Live, Work, Think. Houghton Mifflin Harcourt, 2013.

MAYER-SCHÖNBERGER, V. and CUKIER, K. The Rise of Big Data: How It's Changing

the Way We Think. Foreign Affairs, vol. 92, no. 3, May/June, 2013. Available at:

<https://www.jstor.org/stable/23526834?seq=1#page_scan_tab_contents>.

MCTIC. Estratégia Brasileira para a Transformação Digital (E-Digital). Brasília, 2018.

Available at: <http://www.mctic.gov.br/mctic/export/sites/institucional/estrategiadigital.pdf>.

MELLO, J. M. P., MENDES, M. and KANCZUK, F. Cadastro Positivo e democratização do

crédito. Folha de São Paulo, March, 2018. Available at:

<https://www1.folha.uol.com.br/opiniao/2018/03/joao-manoel-pinho-de-mello-marcos-

mendes-e-fabio-kanczuk-cadastro-positivo-e-democratizacao-do-credito.shtml>.

MENDES, L. S. Habeas Data e autodeterminação informativa: os dois lados de uma mesma

moeda. In: Centro de Direito, Internet e Sociedade do Instituto Brasiliense de Direito Público

(CEDIS/IDP) (Orgs). Internet & Regulação Saraiva. To be published.

MILLER, A. P. Want Less-Biased Decisions? Use Algorithms. Harvard Business Review.

July 26, 2018. Available at: <https://hbr.org/2018/07/want-less-biased-decisions-use-

algorithms>.

MINISTÉRIO DA TRANSPARÊNCIA, FISCALIZAÇÃO E CONTROLADORIA-GERAL

DA UNIÃO. Aplicação da Lei de Acesso à Informação na Administração Pública Federal.


2a. ed. rev., atu. e amp. Brasília. 2016. Available at:

<http://www.acessoainformacao.gov.br/central-de-

conteudo/publicacoes/arquivos/aplicacao_lai_2edicao.pdf>.

NEW, J. and CASTRO, D. How Policymakers can foster algorithmic accountability. Center

for Data Innovation, May 2018. Available at: <http://www2.datainnovation.org/2018-

algorithmic-accountability.pdf>.

NIKLAS, J., SZTANDAR-SZTANDERSKA, K. and SZYMIELEWICZ, K. Profiling the Unemployed in Poland: Social and Political Implications of Algorithmic Decision Making.

Fundacja Panoptykon. Warsaw, 2015.

NISSENBAUM, H. Computing and Accountability. In: COHEN, J. Communications of the

ACM, vol. 37, issue 1, January 1994.

OETER, S. Fundamental Rights and Their Impact on Private Law – Doctrine and Practice

Under the German Constitution. 12 Tel Aviv U. Stud. L. 7, 1994. Available at:

<https://heinonline.org/HOL/LandingPage?handle=hein.journals/telavusl12&div=4&id=&pag

e=>.

O’NEIL, C. Weapons of Math Destruction – How Big Data Increases Inequality and

Threatens Democracy. New York: Crown, 2016.

PANOPTYKON. Available at: <https://en.panoptykon.org/about>.

QUOD. Available at: <https://www.quod.com.br/>.

PARASURAMAN, R. and MILLER, C. A. Trust and etiquette in high-criticality automated

systems. Communications of the ACM – Human-computer etiquette, vol. 47 issue 4, April,

2004. Available at: <https://dl.acm.org/citation.cfm?id=975844>.

PASQUALE, F. The Black Box Society. Harvard University Press. January, 2015.

REISMAN, D., et al. Algorithmic Impact Assessment: A Practical Framework for Public

Agency Accountability. AI Now, April 2018.

RIOS, R. R. and SILVA, R. da. Democracia e direito da antidiscriminação:

interseccionalidade e discriminação múltipla no direito brasileiro. Cienc. Cult., São Paulo,

vol. 69, n. 1, p. 44-49, March, 2017. Available at:

<http://cienciaecultura.bvs.br/scielo.php?script=sci_arttext&pid=S0009-

67252017000100016&lng=en&nrm=iso>.


SANDVIG, C. et al. An Algorithm Audit. Data and Discrimination: Collected Essays. 2014.

Available at: <http://www-

personal.umich.edu/~csandvig/research/An%20Algorithm%20Audit.pdf>.

SARLET, I. W. Direitos Fundamentais e Direito Privado: algumas considerações em torno

da vinculação dos particulares aos direitos fundamentais. Revista dos Tribunais Online.

Available at: <http://www.direitocontemporaneo.com/wp-content/uploads/2018/03/SARLET-

Direitos-fundamentais-e-direito-privado.pdf>.

SARMENTO, D. Direitos Fundamentais e Relações Privadas. Rio de Janeiro: Lumen Juris,

2004.

SCIENCE AND TECHNOLOGY COMMITTEE of the House of Commons. Algorithms in

decision-making. Fourth Report of Session 2017-19. May 23, 2018. Available at:

<https://publications.parliament.uk/pa/cm201719/cmselect/cmsctech/351/351.pdf>.

SCHAUER, F. Profiles, Probabilities, and Stereotypes. Belknap Press. April, 2016.

SEATTLE INFORMATION TECHNOLOGY. About the Surveillance Ordinance. September 1, 2017. Available at: <https://www.seattle.gov/tech/initiatives/privacy/surveillance-technologies/about-surveillance-ordinance>.

SHNEIDERMAN, B. Algorithmic Accountability: Designing for Safety. Radcliffe Institute

for Advanced Study Harvard University. March 22, 2018. Available at:

<https://www.radcliffe.harvard.edu/video/algorithmic-accountability-designing-safety-ben-

shneiderman>.

SHNEIDERMAN, B. Opinion: The dangers of faulty, biased, malicious algorithms requires

independent oversight. Proceedings of the National Academy of Sciences of the United States

of America, November 29, 2016.

SILVA, V. A. da. A Constitucionalização do Direito: os direitos fundamentais nas relações

entre particulares. São Paulo: Malheiros Editores, 2014.

SILVA, V. A. da. O proporcional e o razoável. Revista dos Tribunais, 2002.

SILVER, D. et al. Mastering the game of Go with deep neural networks and tree search.

Nature, vol. 529, January 28, 2016. Available at: <https://storage.googleapis.com/deepmind-

media/alphago/AlphaGoNaturePaper.pdf>.

SILVER, D. et al. Mastering the game of Go without human knowledge. Nature, vol. 550,

January 19, 2017. Available at:


<https://www.nature.com/articles/nature24270.epdf?author_access_token=VJXbVjaSHxFoct

QQ4p2k4tRgN0jAjWel9jnR3ZoTv0PVW4gB86EEpGqTRDtpIz-2rmo8-

KG06gqVobU5NSCFeHILHcVFUeMsbvwS-lxjqQGg98faovwjxeTUgZAUMnRQ>.

SIMON, H. A. Spurious Correlation: A Causal Interpretation. Journal of the American

Statistical Association, vol. 49, September 1954. Available at:

<http://digitalcollections.library.cmu.edu/awweb/awarchive?type=file&item=33513>.

SUPREMO TRIBUNAL FEDERAL. Ministra Carmen Lúcia anuncia início de

funcionamento do Projeto Victor, de inteligência artificial. Notícias STF. August 30, 2018.

Available at: <http://www.stf.jus.br/portal/cms/verNoticiaDetalhe.asp?idConteudo=388443>.

THE COMMITTEE OF EXPERTS ON INTERNET INTERMEDIARIES (MSI-NET) of the

Council of Europe. ALGORITHMS AND HUMAN RIGHTS - Study on the Human Rights

Dimensions of Automated Data Processing Techniques (in particular algorithms) and

Possible Regulatory Implications. Council of Europe. March, 2018. Available at:

<https://rm.coe.int/algorithms-and-human-rights-en-rev/16807956b5>.

THE ECONOMIST. How an algorithm may decide your career. June 21, 2018. Available at:

<https://www.economist.com/business/2018/06/21/how-an-algorithm-may-decide-your-

career>.

THE GUARDIAN. The Cambridge Analytica Files – A year-long investigation into

Facebook, data, and influencing elections in the digital age. Available at:

<https://www.theguardian.com/news/series/cambridge-analytica-files>.

THE NEW YORK CITY COUNCIL. Automated decision systems used by agencies – Law

number 2018/049. January 11, 2018. Available at:

<https://legistar.council.nyc.gov/LegislationDetail.aspx?ID=3137815&GUID=437A6A6D-

62E1-47E2-9C42-461253F9C6D0>.

TRIBUNAL DE JUSTIÇA DO ESTADO DE MINAS GERAIS. TJMG utiliza inteligência

artificial em julgamento virtual. November 07, 2018. Available at:

<https://www.tjmg.jus.br/portal-tjmg/noticias/tjmg-utiliza-inteligencia-artificial-em-

julgamento-virtual.htm#.XC1Vby2ZO1s>.

TUSHNET, M. The issue of state action/horizontal effect in comparative constitutional law.

Oxford University Press and New York University School of Law, vol. 1, number 1, 2003.

VIGEN, T. EBay's Total Gross Merchandise Volume Correlates With Visitors to Disney World's Animal Kingdom. Tylervigen, Spurious Correlations. Available at:

<http://tylervigen.com/view_correlation?id=28571>.


VIGEN, T. Spurious correlations. Available at: <http://www.tylervigen.com/spurious-

correlations>.

WALLACE, N. and CASTRO, D. The Impact of the EU’s New Data Protection Regulation

on AI. Center for Data Innovation, March 26, 2018. Available at:

<https://www.datainnovation.org/2018/03/the-impact-of-the-eus-new-data-protection-

regulation-on-ai/>.

WORLD WIDE WEB FOUNDATION. Algorithmic Governance: Applying the Concept to

Different Country Contexts. July, 2017.

ZARSKY, T. Z. Transparent Predictions. University of Illinois Law Review, vol. 2013, no.

4. Available at: <https://www.illinoislawreview.org/wp-content/ilr-

content/articles/2013/4/Zarsky.pdf>.

USA Cases:

Blum v. Yaretsky, 457 U.S. 991 (1982).

Griggs v. Duke Power Co., 401 U.S. 424 (1971).

Marsh v. Alabama, 326 U.S. 501 (1946).

Reitman v. Mulkey, 387 U.S. 369 (1967).

Shelley v. Kraemer, 334 U.S. 1 (1948).

German Cases:

BUNDESVERFASSUNGSGERICHT. Decision in “Stadium Ban” proceedings clarifies the

indirect horizontal effects of the right to equality in private law relations. Press Release No.

29/2018, April 27, 2018. Available at:

<https://www.bundesverfassungsgericht.de/SharedDocs/Pressemitteilungen/EN/2018/bvg18-

029.html>.

BUNDESVERFASSUNGSGERICHT. Headnotes to the Order of the First Senate of 11

April 2018 – 1 BvR 3080/09. Available at:


<https://www.bundesverfassungsgericht.de/SharedDocs/Entscheidungen/EN/2018/04/rs20180

411_1bvr308009en.html;jsessionid=34F6550A81C53BC9257433E87D7CF087.1_cid393>.

BVerfGE 7, 198.

BVerfGE 65, 1, Volkszählung.

Brazilian Cases:

5ª PROMOTORIA DE JUSTIÇA DE TUTELA COLETIVA DE DEFESA DO

CONSUMIDOR E DO CONTRIBUINTE DA CAPITAL. Petição inicial, Inquérito Civil n.

347/5ª PJDC/2016.

MINISTÉRIO DA JUSTIÇA. Nota Técnica n. 92. 2018, §39. Available at:

<http://www.cmlagoasanta.mg.gov.br/abrir_arquivo.aspx/PRATICAS_ABUSIVAS_DECOL

ARCOM?cdLocal=2&arquivo=%7BBCA8E2AD-DBCA-866A-C8AA-

BDC2BDEC3DAD%7D.pdf>.

STF. Recurso Extraordinário 201.819-8, Rio de Janeiro, 2005. Available at:

<http://redir.stf.jus.br/paginadorpub/paginador.jsp?docTP=AC&docID=388784>.

Legislation:

BRAZIL. Constitution. 1988. Available at:

<http://www.planalto.gov.br/ccivil_03/Constituicao/Constituicao.htm>.

BRAZIL. Law n. 7.716. January 05, 1989. Available at: <http://www.planalto.gov.br/ccivil_03/LEIS/L7716.htm>.

BRAZIL. Law n. 8.078 (CDC). September 11, 1990. Available at: <http://www.planalto.gov.br/ccivil_03/Leis/L8078compilado.htm>.

BRAZIL. Law n. 9.029. April 13, 1995. Available at: <http://www.planalto.gov.br/CCIVIL_03/LEIS/L9029.HTM>.

BRAZIL. Law n. 10.406 (Civil Code). January 10, 2002. Available at: <http://www.planalto.gov.br/ccivil_03/leis/2002/l10406.htm>.

BRAZIL. Law n. 12.288. July 20, 2010. Available at: <http://www.planalto.gov.br/ccivil_03/_Ato2007-2010/2010/Lei/L12288.htm>.

BRAZIL. Law n. 12.414. June 09, 2011. Available at: <http://www.planalto.gov.br/ccivil_03/_Ato2011-2014/2011/Lei/L12414.htm>.

BRAZIL. Law n. 12.527 (LAI). November 18, 2011. Available at: <http://www.planalto.gov.br/ccivil_03/_ato2011-2014/2011/lei/l12527.htm>.

BRAZIL. Law n. 12.711. August 29, 2012. Available at: <http://www.planalto.gov.br/ccivil_03/_ato2011-2014/2012/lei/l12711.htm>.

BRAZIL. Law n. 12.965. April 23, 2014. Available at: <http://www.planalto.gov.br/ccivil_03/_ato2011-2014/2014/lei/l12965.htm>.

BRAZIL. Law n. 13.709 (LGPD). August 14, 2018. Available at: <http://www.planalto.gov.br/ccivil_03/_Ato2015-2018/2018/Lei/L13709.htm>.

BRAZIL. Draft Bill n. 441. 2017. Available at: <http://www.camara.gov.br/proposicoesWeb/fichadetramitacao?idProposicao=2160860>.

BRAZIL. Decree n. 7.724. May 16, 2012. Available at: <http://www.planalto.gov.br/ccivil_03/_ato2011-2014/2012/Decreto/D7724.htm>.

BRAZIL. Decree n. 8.771. May 11, 2016. Available at: <http://www.planalto.gov.br/ccivil_03/_Ato2015-2018/2016/Decreto/D8771.htm>.

BRAZIL. Provisional Measure n. 869. December 27, 2018. Available at: <http://www.planalto.gov.br/ccivil_03/_Ato2015-2018/2018/Lei/L13709.htm>.

EUROPEAN UNION. Regulation (EU) 2016/679 (General Data Protection Regulation).

May 25, 2018. Available at: <https://gdpr-info.eu/>.

POLAND. The Constitution of the Republic of Poland. April 02, 1997. Available at:

<https://www.sejm.gov.pl/prawo/konst/angielski/kon1.htm>.