Ethics in AI (thertoinnovationsummit.eu)
Outline
• Human Responsibility
• Human Prejudice
• Human Understanding
• What should we do?
Image made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication
Human Responsibility
• Who should be hired?
Image copyright CC0: https://www.maxpixel.net/Recruit-Hire-Hiring-Recruitment-Interview-1714369
Image copyright CC0: https://www.maxpixel.net/Hiring-Hr-Job-Recruitment-Selection-Process-3216063
Vasconcelos, Marisa, Carlos Cardonha, and Bernardo Gonçalves. "Modeling Epistemological Principles for Bias Mitigation in AI Systems: An Illustration in Hiring Decisions." arXiv preprint arXiv:1711.07111 (2017).
Human Prejudice
• Who should be policed?
• Biased training data
Homicides in Washington, D.C., November 2004 - November 2006. Crime data downloaded from http://crimemap.dc.gov/ and map created by User:AudeVivere. Image licensed under the Creative Commons Attribution-Share Alike 2.5 Generic license.
Dressel, Julia, and Hany Farid. "The accuracy, fairness, and limits of predicting recidivism." Science Advances 4.1 (2018): eaao5580.
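The "biased training data" point above can be made concrete with a toy sketch (entirely hypothetical data, not drawn from the cited studies): if one group is over-represented among positive labels in historical records, a model fit to those records will simply reproduce that disparity in its predictions.

```python
# Toy illustration of biased training data (hypothetical numbers):
# group "A" was historically over-policed, so positives for "A"
# are inflated in the training records.
from collections import defaultdict

# (group, label) pairs standing in for historical records.
train = [("A", 1)] * 70 + [("A", 0)] * 30 + [("B", 1)] * 30 + [("B", 0)] * 70

def fit_group_rates(data):
    """Fit a trivial 'model': each group's historical positive rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, label in data:
        counts[group][0] += label
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

rates = fit_group_rates(train)
# The bias baked into the labels becomes bias in the predictions:
print(rates["A"], rates["B"])  # 0.7 0.3
```

The point of the sketch is that nothing in the "model" is prejudiced; the disparity comes entirely from the labels it was trained on, which is exactly why biased training data is a human-responsibility problem rather than a purely technical one.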
Human Understanding
• Why do algorithms respond as they do?
• Proprietary Algorithms
• Reproducibility Crisis
Hutson, Matthew. "Artificial intelligence faces reproducibility crisis." Science (2018): 725-726.
Images made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication
What should we do?
Informatics Europe & EUACM Recommendations
• Education
  • AI and Ethics courses at major universities such as Stanford, MIT, and Columbia
• Policy
  • EU Statement on Artificial Intelligence, Robotics and 'Autonomous' Systems
  • IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
• Checklists
  • http://deon.drivendata.org
• Regulation
Image made available under the free Pexels license