Track 4: Trusting AI – Will Mankind Master the Machine, or Vice Versa?

REPORT


Maximizing AI’s potential for good will depend on building and earning trust in AI in several dimensions. If there is no trust, the technology will not be used; trust has to be both built and earned. The track “Trust in AI” focused on three dimensions of trust:

• Trust by stakeholder communities: Developers of AI solutions must earn the trust of the communities to which such solutions are offered.

• Trust across boundaries: AI developers and others working for beneficial AI must trust each other, across cultural and corporate boundaries.

• Trustworthy systems: AI systems must be demonstrably trustworthy.

This track was jointly organized by Mr Stephen Cave, Executive Director of the Leverhulme Centre for the Future of Intelligence at Cambridge University; Professor Francesca Rossi of the University of Padova; and Professor Huw Price of Cambridge University.


The Trust Factory (www.trustfactory.ai) was launched at the Summit as the first global multi-disciplinary, multi-stakeholder, multi-country project-incubator platform for hosting trust-in-AI projects, facilitating networking among the interested and participating community, and tackling new ways of engineering and earning trust for beneficial AI. The Trust Factory has been planned with some competitive grant funding for hosted projects. It is backed by a partnership of respected global organizations, led by a proposed international Advisory Board. The Trust Factory is open to taking on board new project ideas and welcomes new partners. Nine projects on trust in AI were presented, discussed, reviewed with feedback, and progressed. The proposed projects will be further developed over the next year for reporting to the AI for Good Global Summit 2019.


Session A: Building trust for AI – stakeholder communities


Mr Stephen Cave moderated this session. Without those relationships of trust having been built, the technology is at risk of not being used to its fullest. It is essential to earn trust, as well as to build it.


Becky Inkster of Cambridge University

Becky called for gaps in accountability to be addressed. Depression is a leading worldwide cause of disability and ill health, and by 2030 mental ill health is predicted to be a leading global disease burden. However, there is a chronic shortage of professionals – for example, India’s 1.3 billion people have access to only around 5,000 psychiatrists. Symptoms emerge during adolescence, so 50% of cases start before age 15, and 75% of cases by age 18. Children and youth in the poorest households are three times more likely to have mental health issues than those from better-off homes. Charities may act without consent, monitor data for mental health purposes, or sell user data. There are risks of false labels and emotional manipulation via social networks, so we need to be ethically aware of the consequences of what we do. Data breaches can still arise with anonymized data when it is linked with other information. But there is tremendous potential for helping people through digital psychiatry: virtual reality can help identify triggers for people with PTSD (post-traumatic stress disorder) and anxiety, and digital social prescribing can help modify behavior in trusted environments. AI can be used to improve community referrals and monitoring in the community. Tailored treatment choices can factor in preferences and symptoms to create socially prescribed mental healthcare.

In many countries, mental healthcare is broken, and there is a shortage of care. Dr. Inkster called for a scoping review and analysis.


Project 1: Building better care connections – establishing trust networks in AI mental healthcare

Proposed Project 1 aims to gain trust among patients, the healthcare community, and charities. The project will identify where trust has broken down and where it still exists. A Trust Hackathon is planned for this summer to evaluate trust in tech products, followed by a roundtable among stakeholders. The results will be discussed further among developers to identify key questions centered on trust in AI mental healthcare, yielding a global survey. Finally, a new model for practice should be established to repair and amplify trust amongst different stakeholders and sectors, possibly accompanied by an app serving mental healthcare patients.


Rafael Calvo of the Wellbeing Technologies Lab

Rafael described the Lab’s work using technology to build user trust. He described a project in which computers in teleconference platforms record video and use computer vision and learning techniques to pick up important features of the conversation, such as autonomy, agency, and competence. There are various tools to track such features and affective states (boredom, delight, frustration), to build trust, and to create a sense of autonomy. We need to use human-centred approaches to identify the most engaging designs, since technology satisfies certain psychological needs (such as a sense of autonomy and competence).
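As an illustration of the kind of affect tracking described, the sketch below maps per-frame facial-feature vectors from a video call onto the three affective states mentioned. It is a minimal sketch only: the feature set, the centroid values, and the nearest-centroid approach are hypothetical assumptions for illustration, not the Lab’s actual tooling.

```python
import numpy as np

STATES = ["boredom", "delight", "frustration"]

# Hypothetical per-state centroids over three illustrative features:
# (smile intensity, gaze-on-screen ratio, brow-furrow intensity).
CENTROIDS = np.array([
    [0.1, 0.3, 0.2],  # boredom
    [0.9, 0.8, 0.1],  # delight
    [0.2, 0.6, 0.9],  # frustration
])

def classify_frame(features: np.ndarray) -> str:
    """Assign the affective state whose centroid is nearest to this frame."""
    distances = np.linalg.norm(CENTROIDS - features, axis=1)
    return STATES[int(np.argmin(distances))]

# A frame with a strong smile and steady gaze reads as delight.
print(classify_frame(np.array([0.85, 0.75, 0.15])))  # -> delight
```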


Project 2: Building trust in AI for East African farmers

Ezinne Nwankwo, Visiting Scholar at Cambridge University, presented Project 2.

Small-scale poultry farmers (200–2,000 chickens) in East Africa do not necessarily have systematic ways to collect, analyze, and store agricultural data (they still rely largely on manual, paper-based notes in the local Swahili language). When they are online, they mainly share advice and information on WhatsApp and Facebook. This project aims to understand and address the causes behind the lack of trust in AI, focusing on the case of poultry farmers, where gains are most expected. The objective is to develop a one-stop shop of trusted solutions to help increase food production, where farmers can get real-time reports, extension services, and information. A first survey will probe the causes behind any potential lack of trust in AI. AI technologies will be built as part of a hackathon at the Data Science Africa conference in November 2018. An app will be developed for Swahili-speaking farmers, and a survey conducted to assess any shift in trust.


Project 3: Mitigating the effects of AI-induced automation on social stability in developing countries & transition economies

Irakli Beridze of UNICRI presented Project 3.

The unexplored consequences of AI-enabled automation threaten to undermine trust and belief in AI through potential job losses, displacement of workers, and/or social instability through new waves of migration and increased crime rates. Developing countries and economies in transition may bear the brunt of the disruption. The project aims to assess and understand the impact of AI-induced automation on developing countries from the perspective of social stability (focusing on migration flows, crime rates, and security). The project will identify actions for countries to take to mitigate potential negative impacts, and will foster political support to implement those actions and build trust and belief in AI. Support will be provided to one or more pilot countries to develop a roadmap of actions, mitigate any potential negative impact, seek political support, and subsequently implement the roadmap actions.

The session recognized that the notions of trust, confidence, and trustworthiness are often used in different, inconsistent ways with various subjective interpretations; we have to define the terminology better and use consistent language. Other factors driving trust are questions of data ownership, data protection, and trust-building measures.


Session B: Building trust for AI – Trust & Trustworthiness across nations & cultures


Claire Craig, Director of Science Policy at the Royal Society, moderated this session.

The impact of AI will be global, and managing it for the benefit of all requires international, cross-cultural collaboration. Perceptions, stories, and world views of AI are often culturally specific and will inform the development and evolution of the science and the technologies. Diverse religious, linguistic, philosophical, literary, and cinematic traditions have led to diverging conceptions of intelligent machines, as well as cross-cultural variations in the way issues of trust are perceived and discussed.

To obtain systems that are trustworthy and inspire confidence, it is necessary to better understand these cultural differences and to remove potential barriers to global cooperation for beneficial AI. To build trust across cultures, we have to understand these different ways of seeing what AI can be and what it should be.


Project 4: Cross-cultural comparisons for trust in AI

Zhe Liu, Professor at Peking University, presented Project 4.

Professor Zhe Liu shared his observations, reflections, and thoughts on “Trust, Trustworthy and Autonomy” in the context of AI and robots. He distinguished between trust and reliance, citing automated-car crashes in Florida. Trust in AI can be examined in the context of existing technologies, such as machine learning with big data and biomechatronics, and future approaches, such as automation and ‘autonomy’, or closer interactions between humans and AI and robots. The degree of autonomy exhibited by a robot is key in determining whether it will be viewed as human-like. Among the problems of trust in AI are mistrust and distrust, as well as overtrust, in the ways in which humans and AI or robots interact. This could be due to different cultural understandings of institutional trust or of trust itself, but also to how we understand the human–AI relation as a kind of interpersonal relation. Serious accidents have already occurred. Some cultures (such as East Asian ones) embrace AI and robots more positively, whereas others (including Western countries) act more conservatively. The project plans a larger multi-year cross-cultural investigation of the dimensions of trust for beneficial AI, resulting in a report, “Cross-cultural perspectives on trust in AI – Opportunities for impact”, to yield a better understanding of the opportunities and challenges for building trust, as well as regional variations.


Project 5: Global AI Narratives

Kanta Dihal of Cambridge University presented Project 5, a project launched in 2017 to collect and analyze AI narratives prevalent in North America and Europe. Machines may be viewed as superhuman in capacity and capabilities, but sub-human in status. The Global AI Narratives project will mobilize academics to share narratives, and will expand to include other regions and partnerships, to explore how popular hopes and fears of AI are shaped and how this has influenced local development and implementation. The results of regional workshops will be published in a book, “Global AI Narratives”.


Toshie Takahashi, Professor at Waseda University

Professor Takahashi presented the findings from Japan for the AI Narratives project, where differences in media images of AI were observed between Japan and the US, with a workshop planned in Japan in September 2018. Asian views and attitudes towards AI tend to be more receptive and positive, while US narratives tend to be more wary and negative.


Project 6: Cross-national comparisons of AI development and regulation – the case of autonomous vehicles

David Danks, Professor at Carnegie Mellon University, presented Project 6.

Developers have to be able to trust, and learn from, one another’s experiences as they build technologies that will be deployed internationally and cross-culturally. How do different countries regulate a technology, and how do different cultures engage and interact with it? This project will examine interactions between autonomous vehicles and pedestrians to understand national and cultural differences and legal, regulatory, and cultural constraints. Through surveys and case studies, the project aims to understand how to build inter-developer trust, across different regions, to speed the ethical deployment of autonomous vehicles. He distinguished between behavioural trust (we know how they will act) and predictive trust (we know something about their values); the latter can be helpful, as it can generalize to novel situations.


The session identified a need to better understand the relation of human beings with AI, and how to measure trust (e.g. via suitable trust metrics). While privacy is valued differently across regions, there is a need to educate the public on privacy. Is it ethical for humans to ‘white-lie’ to AI systems? The idea was raised of whether AI systems should feature an “AI inside” indicator allowing humans to detect the presence of AI technology in systems and devices.


Session C: Building trust for AI – trustworthy systems


Francesca Rossi, Research Scientist at IBM Research and Professor at the University of Padova, moderated this session.

Trustworthy AI systems were defined as those systems that behave in a way that can generate appropriate levels of trust in the users or humans working with the system.


Project 7: Trust in AI for governmental decision-makers

Jess Whittlestone of Cambridge University presented Project 7.

Government use of AI could improve public services, but only where the trustworthiness of the systems used is ensured. Without a detailed understanding of how an AI system works, governments risk either trusting the output of an AI system too much (with potentially harmful consequences) or too little (failing to make the most of AI’s potential). There is currently a disconnect between policy-makers and technical experts: we need to ensure that AI is trustworthy and not left to the whims of developers, so we need a common language and common policy for mutual understanding in and around AI. AI policies also include technical education, reskilling, and digital infrastructure, which aim to create an environment that can respond positively to advances in AI. Dr. Whittlestone expressed the hope that the GDPR will shape how algorithms are developed and used, to ensure transparency and accountability. The Trustworthy Technologies Project aims to help decision-makers communicate better with developers, to understand and identify sources of bias, error, or negative consequences for any given AI system, and to develop policy proposals, workshops, and reports with guidance for policy-makers, technical experts, and developers.


Project 8: Trustworthy data – creating and curating a repository for diverse datasets

Rumman Chowdhury of Accenture presented Project 8.

The rapid increase in data is key to recent successes in AI. Data reflect the society in which they were created, so even “correct” data can be biased, as they reflect cultural and social inequalities. When such datasets are used to train AI systems, the resulting algorithms may inherit these biases, for example in discriminatory or sexist language processing. Good, free data is a scarce resource, and this scarcity is a barrier to experimenting with AI, as data scientists often rely on publicly available data banks. This project aims to build trust in AI by building trust in data, and seeks to develop a fairness-focused online data repository that maintains datasets intended for use in data science, with guardrails whereby data is categorized, cleaned, and appropriately labeled. Such a repository could help those developing AI systems to manage bias in the data. It could be used for training purposes or as a method for vetting datasets for bias.
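To make the “guardrails” idea concrete, here is a minimal sketch of what a repository-side vetting step might look like: a metadata card per dataset plus a simple demographic-parity check on a binary outcome. The field names, the check itself, and the 0.8 threshold (borrowed from the common “four-fifths rule”) are assumptions for illustration; the report does not specify the project’s actual design.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetCard:
    """Metadata record accompanying each dataset in the repository."""
    name: str
    category: str                 # e.g. "hiring", "lending"
    cleaned: bool
    known_biases: list = field(default_factory=list)

def parity_ratio(records, group_key, outcome_key):
    """Ratio of positive-outcome rates between the lowest and highest group."""
    rates = {}
    for g in {r[group_key] for r in records}:
        rows = [r for r in records if r[group_key] == g]
        rates[g] = sum(r[outcome_key] for r in rows) / len(rows)
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

def vet(card, records, group_key, outcome_key, threshold=0.8):
    """Label the dataset card if outcome rates diverge beyond the threshold."""
    ratio = parity_ratio(records, group_key, outcome_key)
    if ratio < threshold:
        card.known_biases.append(
            f"parity ratio {ratio:.2f} on '{group_key}' below {threshold}")
    return card

# Example: a toy hiring dataset in which group B is hired half as often.
data = [{"group": "A", "hired": 1}] * 6 + [{"group": "A", "hired": 0}] * 4 \
     + [{"group": "B", "hired": 1}] * 3 + [{"group": "B", "hired": 0}] * 7
card = vet(DatasetCard("toy-hiring", "hiring", cleaned=True),
           data, "group", "hired")
print(card.known_biases)  # -> ["parity ratio 0.50 on 'group' below 0.8"]
```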


Project 9: Cross-cultural perspectives on the meaning of ‘fairness’ in algorithmic decision making

Krishna Gummadi of the Max Planck Institute and Adrian Weller of the Alan Turing Institute presented Project 9, on fairness in algorithmic decision-making and criminal risk prediction in the US. Algorithms can help people make decisions about hiring, assigning social benefits, and granting bail. Is it fair to use any particular feature as an input? Why do people perceive some features as fair or unfair, and do people agree in their judgments of fairness? The presenters adopted a normative approach to prescribe how fair decisions ought to be made. Anti-discrimination laws are supposed to weigh sensitive features (race, gender) against non-sensitive ones. AI standards with understandable, common terminology and overarching framing concepts could be a useful tool to reflect international consensus and a common understanding across stakeholders, which governments can then recognize and use for shaping their policies, rather than trying to reinvent them in every country. Bias in data is a phenomenon that is not yet well understood, nor is it entirely clear how to prevent it; all of this needs further research.
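One technical point behind the question “is it fair to use any particular feature as an input?” is that simply excluding a sensitive feature does not guarantee fair outcomes, because non-sensitive inputs can act as proxies for it. The sketch below illustrates this with synthetic data; the “neighborhood” proxy and all numbers are invented for illustration, not drawn from the project.

```python
import random
random.seed(0)

# Synthetic population: a hypothetical proxy feature (neighborhood)
# correlates strongly with the sensitive group membership.
people = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    neighborhood = ("north"
                    if random.random() < (0.9 if group == "A" else 0.2)
                    else "south")
    people.append({"group": group, "neighborhood": neighborhood})

def grant_bail(person):
    """A decision rule that never looks at `group`, only at the proxy."""
    return person["neighborhood"] == "north"

for g in ("A", "B"):
    rows = [p for p in people if p["group"] == g]
    rate = sum(grant_bail(p) for p in rows) / len(rows)
    print(f"group {g}: bail granted {rate:.0%}")
# Despite ignoring the sensitive feature, outcomes differ (~90% vs ~20%).
```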


Session D: Discussion


Huw Price then moderated a panel discussion.


Hagit Messer-Yaron of the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) at UNESCO spoke on “Trust in AI by educating engineers to ethically aligned design”. Prof. Messer-Yaron emphasized the importance of educating engineers in the ethically aligned design of autonomous and intelligent systems. Trust in AI is closely related to ethics in AI: considering a technology’s effects and its ethical implications is fundamental for fostering trust in autonomous and intelligent systems. It is crucial that current and future engineers be educated in ethically aligned design. However, the curricula of most engineering programs around the world do not include tools for raising awareness of ethical considerations in autonomous and intelligent systems; hence, there is a need to infuse ethics into the development of engineers, and to educate them in ethical thinking so as to bridge the cultural gap between technology and the humanities. IEEE’s global initiative on the ethics of autonomous and intelligent systems has published five general ethical principles.


Elena Tomuta of the CTBTO (Comprehensive Nuclear-Test-Ban Treaty Organization)

Elena Tomuta described how the CTBTO evolved from a rule-based system to machine learning for its verification regime, which includes an international monitoring system to detect seismic events and nuclear explosions. The AI-based systems now outperform the former rule-based system.
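To illustrate the shape of such a migration (and only that), here is a toy sketch contrasting a hand-written threshold rule with a classifier learned from labeled events. The features (body-wave magnitude, depth in km), the data, and the rule are invented assumptions; the CTBTO’s actual pipeline is far richer and is not described in this report.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic labeled events: deeper natural earthquakes vs shallow explosions.
# Columns: (body-wave magnitude, depth in km) -- invented for illustration.
quakes = np.column_stack([rng.normal(4.5, 0.8, 200), rng.uniform(5, 300, 200)])
blasts = np.column_stack([rng.normal(4.0, 0.5, 200), rng.uniform(0, 2, 200)])
X = np.vstack([quakes, blasts])
y = np.array([0] * 200 + [1] * 200)  # 1 = explosion-like

def rule_based(event):
    """Old-style fixed rule: flag very shallow events as explosion-like."""
    _magnitude, depth_km = event
    return int(depth_km < 3.0)

# Learned replacement for the hand-written rule.
model = LogisticRegression(max_iter=1000).fit(X, y)

test = np.array([[4.2, 1.0], [5.1, 120.0]])
print([rule_based(e) for e in test])  # rule's verdicts -> [1, 0]
print(model.predict(test).tolist())   # learned verdicts (here also [1, 0])
```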


Joe Westby of Amnesty International

Joe Westby called for human rights to be at the heart of discussions around AI and ethics. We have to respect human rights as the only universal ethical framework, based on binding laws that virtually every country has signed. Amnesty International itself uses ML technology in its human rights research. While optimistic about AI, he noted inherent risks to human rights, particularly around privacy, discrimination, the right to work, and the use of AI technology in policing and warfare. AI may further concentrate power in the hands of a few countries and companies: PricewaterhouseCoopers estimates that 70% of the economic benefits of AI will flow to China and the US, where a few companies are already leading investment in AI innovation and have a monopoly on the data that fuels AI technology. Voluntary self-regulation alone will not be sufficient. Since AI technology is advancing so quickly, AI regulations have to be in place by the time AI systems are deployed, not implemented post mortem after the first AI disaster, which would be too late. Amnesty International cited the Toronto Declaration on “Protecting the rights to equality and non-discrimination in machine learning systems”.


In the final hour, there was an open-space discussion involving all attendees, in which all nine projects were discussed in parallel at nine tables:

1. Building better care connections: establishing trust networks in AI mental healthcare
2. Building trust in AI for East African farmers
3. Mitigating the effects of AI-induced automation on social stability in developing countries & transition economies
4. Cross-cultural comparisons for trust in AI
5. Global AI Narratives
6. Cross-national comparisons of AI development and regulation – the case of autonomous vehicles
7. Trust in AI for governmental decision-makers
8. Trustworthy data: creating and curating a repository for diverse datasets
9. Cross-cultural perspectives on the meaning of ‘fairness’ in algorithmic decision making
