
DAT340/DIT866 Applied Machine Learning: Machine learning and ethics

Vilhelm Verendel, Computer Science and Engineering, Chalmers

2020-02-28

My assumptions

You have all seen various types of machine learning

... and applied it to various real data sets in practice

... and seen how tricky it can be to get it (technically) right!

This lecture is about how we could also get ML wrong, but in society

Reasoning about emerging ML technologies is difficult: There are some obvious issues, but also some more speculative and difficult corner cases

In the following, we will start from a few obvious issues, but also move towards some more tricky cases


Background reading

From the open letter on autonomous weapons (Future of Life Institute, 2015)

“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity.”

A fictional example (Stuart Russell / Future of Life Institute)

Purpose: to build support for a global ban on autonomous weapon systems

https://www.youtube.com/watch?v=ecClODh4zYk

AI and automated warfare: Risks of remote wars?

Another case: Real concern or fear-based PR?

Hard to factor out the hype, e.g., “Big data is the new oil”

“Machine learning will be the engine of global growth” – Erik Brynjolfsson, Financial Times, July 2018

The Hype Cycle (Source: Gartner)

The Hype Cycle: 2018 (Source: Gartner)

The Hype Cycle: Details (Source: Gartner)

Some signs of AI hype: Self-driving cars

Uber, Tesla, and other companies have pulled back some major promises during recent years

Promises included, e.g., U.S. coast-to-coast autonomous driving happening in 2017 and 2018, events that did not happen

Autonomous driving seems harder than was thought (filled with many corner cases)

Sources:

https://www.theverge.com/2018/8/1/17641186/tesla-elon-musk-self-driving-coast-to-coast-delay

https://www.npr.org/2018/07/31/634331593/uber-parks-its-self-driving-truck-project-saying-it-will-push-for-autonomous-car

Difficulties: Tesla Autopilot crash

Source: Wired

Wired: Job market

Source: Wired, 2016

Wired: Job market

The implications of an unparsable machine language aren’t just philosophical. For the past two decades, learning to code has been one of the surest routes to reliable employment—a fact not lost on all those parents enrolling their kids in after-school code academies....

Analysts have already started worrying about the impact of AI on the job market, as machines render old skills irrelevant. Programmers might soon get a taste of what that feels like themselves.


Wired: ... and other ethical issues

Just as Newtonian physics wasn’t obviated by the discovery of quantum mechanics, code will remain a powerful, if incomplete, tool set to explore the world. But when it comes to powering specific functions, machine learning will do the bulk of the work for us....

Already the companies that build this stuff find it behaving in ways that are hard to govern. Last summer, Google rushed to apologize when its photo recognition engine started tagging images of black people as gorillas. The company’s blunt first fix was to keep the system from labeling anything as a gorilla.


Google Photos: Bias in ML systems

Source: Boing Boing, 2018-01-11

What does this story illustrate?

Why do you think the problem was “solved” by removing a class?

1 Many times even the engineers don’t know the details of how machine learning systems work

We build and use models that operate in ways we don’t understand

We might find it hard to provide a simple explanation of why and how autonomous systems reach their results

Recent legislation/EU regulations seem to require companies to provide “meaningful” and simple explanations of automated decisions

2 Machine learning systems could incorporate biases we are unaware of: using them could lead to moral dilemmas and other problems


A problem with interpreting models: Example


Ethics/moral dilemma: Definition

The purpose of an ethical analysis is not to tell anyone what is right, but to reason about ethical/moral questions in a systematic way.

A useful distinction is between:

1 Science: What is the true state of the world?

2 Ethics: What kind of state would be desirable for the world? (Under different assumptions about what is good)

Definition of moral dilemma: Two or more ethical aspects in conflict

Example of two common ethical aspects:

The total good for everyone

Individual rights/duties/freedoms


A simplistic moral dilemma: The trolley problem

Point of an abstract example: to simplify and show relevant distinctions

Thousands of publications mention this problem! Many long regarded it as pure armchair speculation, completely irrelevant to reality, but it illustrates individual rights vs. the collective good


Moral dilemma: Self-driving cars

For discussion: Classifying private preferences [1][2]

[1] Deep neural networks are more accurate than humans at detecting sexual orientation from facial images, 2017. https://osf.io/zn79k/

[2] Issues have been raised about the study based on limitations in data and method.

For discussion: Rare diseases [3]

[3] Diagnostically relevant facial gestalt information from ordinary photos, eLife, 2014. https://dx.doi.org/10.7554/eLife.02020

Discussion: Are these moral dilemmas?

Discuss in small groups (with your 1–2 nearest neighbors) for 2 minutes:

Are there any conflicting ethical aspects in the previous two examples?

1 Classifying sexuality (with ML)

2 Classifying rare diseases (with ML)

What about bias in language models?

Some modern machine learning methods encode words of natural language in high dimensions (as vectors/embeddings)

Using arithmetic operations in such spaces, it can be possible to (see the sketch after this list)

Reason about analogies

Find words with similar meanings

Describe the sentiment of words (value-laden connotations)

Some methods are powerful enough to pick up biases from language
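
To make the analogy arithmetic above concrete, here is a minimal sketch in Python. The tiny 3-dimensional vectors are invented stand-ins for real embeddings such as word2vec or GloVe, which have hundreds of dimensions learned from large corpora; only the arithmetic and the nearest-neighbour lookup are the point.

# Sketch: analogy arithmetic in a toy embedding space.
# The vectors are made up for illustration; real embeddings
# (word2vec, GloVe) are learned from large text corpora.
import numpy as np

emb = {
    "king":  np.array([0.8, 0.7, 0.1]),
    "queen": np.array([0.8, 0.1, 0.7]),
    "man":   np.array([0.2, 0.8, 0.1]),
    "woman": np.array([0.2, 0.1, 0.8]),
    "apple": np.array([0.1, 0.5, 0.5]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def nearest(vec, exclude=()):
    """Vocabulary word whose embedding is most similar to vec."""
    return max((w for w in emb if w not in exclude),
               key=lambda w: cosine(emb[w], vec))

# "man is to king as woman is to ?"
target = emb["king"] - emb["man"] + emb["woman"]
print(nearest(target, exclude={"king", "man", "woman"}))  # -> queen

The same nearest-neighbour machinery finds words with similar meanings; as the next slides show, it can also surface associations we would rather our models did not have.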


Machine learning models can pick up bias [4]

text_to_sentiment("Let’s go get Italian food")

2.0429166109408983

text_to_sentiment("Let’s go get Chinese food")

1.4094033658140972

text_to_sentiment("Let’s go get Mexican food")

0.38801985560121732

[4] Example from https://blog.conceptnet.io/posts/2017/how-to-make-a-racist-ai-without-really-trying/
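
The linked blog post builds text_to_sentiment roughly as follows: train a classifier to predict word polarity from pretrained word embeddings plus a sentiment lexicon, then score a sentence by averaging its words' scores. Below is a sketch of that idea, with toy embeddings and a toy lexicon standing in for the GloVe vectors and the large lexicon the post uses; the name clf and the toy data are our own.

# Sketch of a text_to_sentiment built from word embeddings and a
# word-polarity lexicon. Toy data stands in for GloVe vectors and a
# real lexicon; bias arises because embeddings of neutral words
# (names, nationalities) carry corpus associations into the scores.
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

embeddings = {  # invented 2-d vectors for illustration
    "good":  np.array([ 1.0, 0.2]), "great": np.array([ 0.9, 0.3]),
    "bad":   np.array([-1.0, 0.1]), "awful": np.array([-0.9, 0.2]),
    "get":   np.array([ 0.1, 0.6]), "food":  np.array([ 0.2, 0.7]),
}
lexicon = {"good": 1, "great": 1, "bad": -1, "awful": -1}

# Fit a classifier: word vector -> polarity.
words = [w for w in lexicon if w in embeddings]
X = np.vstack([embeddings[w] for w in words])
y = np.array([lexicon[w] for w in words])
clf = LogisticRegression().fit(X, y)

def text_to_sentiment(text):
    """Mean signed sentiment score over in-vocabulary tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    vecs = np.vstack([embeddings[t] for t in tokens if t in embeddings])
    return float(clf.decision_function(vecs).mean())

print(text_to_sentiment("get good food"))   # positive score
print(text_to_sentiment("get awful food"))  # negative score

Nothing in this pipeline mentions ethnicity, yet with real embeddings the scores on the previous slide differ systematically by cuisine and by name, because the embeddings themselves encode those associations.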

Machine learning models can pick up bias

text_to_sentiment("My name is Emily")

2.2286179364745311

text_to_sentiment("My name is Heather")

1.3976291151079159

text_to_sentiment("My name is Yvette")

0.98463802132985556

text_to_sentiment("My name is Shaniqua")

-0.47048131775890656

Machine learning models can pick up bias

Applied to names from different ethnic groups:

Hard question: If you train your algorithm on the wrong data, are you responsible for racism?

More biases in natural language (Source: Science, 2017) [5]

“Machine learning is a means to derive artificial intelligence by discovering patterns in existing data. Here, we show that applying machine learning to ordinary human language results in human-like semantic biases. We replicated a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web. Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names. Our methods hold promise for identifying and addressing sources of bias in culture, including technology.”

[5] http://science.sciencemag.org/content/356/6334/183
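
The statistical test behind that abstract is the Word Embedding Association Test (WEAT). Here is a simplified rendering of it (our own sketch, not the paper's code), with toy 2-d vectors standing in for real pretrained embeddings:

# Sketch of a WEAT-style association test (simplified from Caliskan
# et al., Science 2017). Toy vectors stand in for real embeddings;
# with real vectors, X/Y would be e.g. flower vs insect words and
# A/B pleasant vs unpleasant words.
import numpy as np

emb = {
    "rose":   np.array([0.9, 0.1]), "tulip":    np.array([0.8, 0.2]),
    "maggot": np.array([0.1, 0.9]), "wasp":     np.array([0.2, 0.8]),
    "love":   np.array([0.9, 0.2]), "pleasure": np.array([0.8, 0.1]),
    "hate":   np.array([0.1, 0.8]), "pain":     np.array([0.2, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, A, B):
    """How much closer word w sits to attribute set A than to set B."""
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def weat(X, Y, A, B):
    """Positive when targets X lean toward attributes A and Y toward B."""
    return (sum(association(x, A, B) for x in X)
            - sum(association(y, A, B) for y in Y))

print(weat(["rose", "tulip"], ["maggot", "wasp"],
           ["love", "pleasure"], ["hate", "pain"]))  # clearly positive

A large positive statistic on, say, one group of first names against pleasant vs unpleasant words is exactly the kind of “recoverable imprint” the abstract describes.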

Recruiting tool showing bias against women [6]

Amazon:

The team had been building computer programs since 2014 to review job applicants’ resumes with the aim of mechanizing the search for top talent, five people familiar with the effort told Reuters.

A women’s college in the education section of a resume was an automatic demerit.

Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.

Some 55 percent of U.S. human resources managers said artificial intelligence, or AI, would be a regular part of their work within the next five years, according to a 2017 survey by talent software firm CareerBuilder.

[6] https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G


Bias in machine learning models: Other examples

In a system used by judges to set parole, the likelihood of reoffending was found to be biased against black defendants [7]

Facial recognition software embedded in most smart phones also works best for those who are white and male [8]

These, and similar, biases can be hard to find: the creators can reasonably be said to have plausible deniability. Can we prevent this from happening?

Discuss this question in small groups (2–3 persons) for 2 minutes.

[7] https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[8] https://www.newscientist.com/article/2161028-face-recognition-software-is-perfect-if-youre-a-white-man/


AI and democracy / Cathy O’Neil

Weapons of Math Destruction?

The math-powered applications powering the data economy were based on choices made by fallible human beings. Some of these choices were no doubt made with the best intentions. Nevertheless, many of these models encoded human prejudice, misunderstanding, and bias into the software systems that increasingly managed our lives.

...

Like gods, these mathematical models were opaque, their workings invisible to all but the highest priests in their domain: mathematicians and computer scientists. Their verdicts, even when wrong or harmful, were beyond dispute or appeal. And they tended to punish the poor and the oppressed in our society, while making the rich richer.

I came up with a name for these harmful kinds of models: Weapons of Math Destruction (WMDs)


The impact on democracy

I wouldn’t yet call Facebook or Google’s algorithms political WMDs, because I have no evidence that the companies are using their networks to cause harm. Still, the potential for abuse is vast. The drama occurs in code and behind imposing firewalls. And as we’ll see, these technologies can place each of us into our own cozy political nook.

The impact on democracy

In The Selling of the President, which followed Richard Nixon’s 1968 campaign, the journalist Joe McGinniss introduced readers to the political operatives working to market the presidential candidate like a consumer good. By using focus groups, Nixon’s campaign was able to hone his pitch for different regions and demographics....

The impact on democracy

The convergence of Big Data and consumer marketing now provides politicians with far more powerful tools. They can target microgroups of citizens for both votes and money and appeal to each of them with a meticulously honed message, one that no one else is likely to see. It might be a banner on Facebook or a fund-raising email....

But each one allows candidates to quietly sell multiple versions of themselves, and it’s anyone’s guess which version will show up for work after inauguration.


Will legislation increase transparency? GDPR

The General Data Protection Regulation [EU, 2016]

Applies to all uses of EU individuals’ data that could identify individuals (also for, e.g., companies outside the EU)

Regulates automated decision making using machine learning models

Will legislation increase transparency? GDPR

Refers to “automated decision-making” as any model that makes a decision without a human being directly involved in the decision

Article 13: individuals have a right to “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.”

Article 22: “the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

Recital 71 (commentary): individuals have a right to get an explanation of automated decisions after they are made

Source: https://iapp.org/news/a/is-there-a-right-to-explanation-for-machine-learning-in-the-gdpr/

Source: https://www.oreilly.com/ideas/how-will-the-gdpr-impact-machine-learning


Will legislation increase transparency? GDPR

Implications:

Users must, in most cases, consent to sharing their data for analysis

Users must be able to withdraw consent to the use of their data at any time

Companies need to provide some kind of understandable information to individuals about how their data will be used, also after decisions have been made

These things could affect many data science programs in practice


Will legislation increase transparency? GDPR

Zooming out:

There is still some disagreement about how these aspects of the GDPR will be interpreted: for example, what is “meaningful information”?

The stakes are high: clearly there will be more technical demands, and users will have the possibility to remove their data

People who understand machine learning, how to work with data, andprivacy will be necessary for many future data science systems

Course assignment

An upcoming assignment in the course follows this theme:

A hand-in on the theme: Machine learning meets the real world

Read a short article about an ethical question relevant to contemporary machine learning

Reflect on the article, and write down your thoughts

Please see more on the course web page

Conclusions

Models are constructed from data, but also based on the choices we make about what data to include and what to leave out

Choosing data might have unintended consequences: computers might wrongly discriminate without their creators intending it

Different technical counter-measures have been proposed and discussed, but this is a new and active area of research

An improved and critical assessment of data and the selection of data will be increasingly important in applied machine learning

Conventional wisdom in Computer Science: Garbage in, garbage out

