What is AI?
How AI becomes biased
Combating Bias
AI and Ethics
Instances of AI Bias in real life
100

According to the module, what is the simplest definition of artificial intelligence?

The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

100

Considering that training data can contain human decisions, how could this impact the AI’s bias?

Human decision-making can be biased, so an AI trained on those decisions can learn the same biases.

100

Answer in proper Jeopardy format: 

The type of team needed to reduce or eliminate sexism and racism in AI.

What is diverse?

100

What is one of the questions the Grand Challenge is trying to address regarding ethics in AI?

The question of how we can ensure advances in AI are compatible with and responsive to society’s needs and values. 

100

Name one speech assistant that is in common use on mobile devices and defaults to a woman’s voice.

Any of: Siri, Alexa, Google Assistant, or Cortana.

200

What is the trolley problem?

A thought experiment in which you must choose whether to sacrifice one person in order to save five.

200

Even if certain variables are removed from the data, they can ______ with other variables still in the data, leading to bias.

Correlate
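The answer above points at proxy variables. As a minimal sketch (all data here is synthetic and purely illustrative), this shows how a protected attribute that was dropped from a dataset can survive in a remaining column that correlates with it:

```python
import random

random.seed(0)

# Hypothetical data: a protected attribute and a "neutral" proxy column
# (e.g. neighbourhood) that agrees with it 90% of the time. Dropping the
# protected column does not remove the signal; the proxy still carries it.
n = 1000
protected = [random.randint(0, 1) for _ in range(n)]
proxy = [p if random.random() < 0.9 else 1 - p for p in protected]

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(protected, proxy)
print(f"correlation between dropped column and remaining proxy: {r:.2f}")
```

Because the proxy agrees with the dropped column roughly 90% of the time, a model trained on the "scrubbed" data can still largely reconstruct the protected attribute, which is how bias re-enters through correlation.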

200

Answer in proper Jeopardy format: 

To combat bias we should provide (more/less)_______ data.

What is more?

200

Of Isaac Asimov’s three laws of robotics, what is his first law?

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

200

In 2019, US hospitals used an algorithm that relied on healthcare cost history as one variable to decide which patients needed extra care. This algorithm discriminated against black people due to a correlation between which two variables?

Race and healthcare cost history. Because black patients had historically incurred lower healthcare costs (likely reflecting the correlation between race and income), the algorithm inferred that they were less likely to need extra care.

300

What is the most common "con" of AI? 

Moral and ethical values.

300

If a dataset does not have a ______ sample of the population, certain groups could be underrepresented.

Representative
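To illustrate why a representative sample matters, here is a minimal, purely illustrative sketch: with a 95/5 split between two groups, a model that ignores the minority group entirely still looks accurate overall:

```python
# Synthetic, illustrative data: 95 examples from a majority group and 5
# from an underrepresented minority group.
labels = ["majority"] * 95 + ["minority"] * 5

# A degenerate "model" that always predicts the majority group.
predictions = ["majority"] * len(labels)

# Overall accuracy looks high even though the minority group is never
# predicted correctly.
accuracy = sum(p == l for p, l in zip(predictions, labels)) / len(labels)
print(accuracy)
```

The 95% headline accuracy hides a 0% accuracy on the underrepresented group, which is exactly how a skewed sample can mask bias against that group.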

300

Answer in proper Jeopardy format:

Determine whether this statement is true or false: AI is designed without bias, and the bias is created through use.

What is false?

300

What is Kranzberg’s first law of technology?

Technology is neither good nor bad, nor is it neutral.

300

This company made headlines in 2018 for an internal recruiting tool that discriminated on the basis of gender.

Amazon: its tool was so biased that, among other things, it penalized resumes containing the word "women's" (as in "women's chess club").

400

True or false: Kranzberg's first law of technology states that technology is neither good nor bad, but that it is neutral.

False

400

How can the people developing an AI impact the AI’s bias?

If the developers are not diverse, they may make biased assumptions in the AI’s design.

400

Answer in proper Jeopardy format:

Another adjective for the type of experience AI needs to learn from? (Hint: also the answer to a previous question)

What is diverse?

400

Finish the following sentence: when referring to ethics and AI, Dr. Fleischmann asserts that in order to preserve the agency (the capacity to make choices) of humans, we need to _______.

Design technology to be transparent.

400

In 2016, this chatbot was live for less than a day before interactions with Twitter users made it extremely racist and misogynistic.

This chatbot was Microsoft’s Tay. It was shut down in 16 hours because it began to mimic racist and sexist language that it was taught by Twitter users.

500

What is one of the most important things to be aware of when machines are around? 

Our own biases.

500

Even if an AI’s output is unbiased, how can it still be harmful?

If the people using the AI’s output are biased, they can use the AI in harmful ways.

500

Answer in proper Jeopardy format:

Bias can creep into algorithms in several ways through _____ data.

What is training?

500

What do you think the trolley problem has to do with artificial intelligence and ethics?

It addresses the problem of deciding whether and how to code societal values into autonomous vehicles. What is the most ethical choice? How can a machine decide that? 

500

PredPol is a program used by US police departments to try to predict where and when crimes will take place. In 2016, it was discovered that the algorithm sent officers to neighbourhoods that were predominantly racial minorities, regardless of historical crime data for those neighbourhoods. Which biased data source was PredPol using?

PredPol was using police reports, which were already biased. Because PredPol used where officers had arrested people previously to predict where crime would occur, the racial bias in officer arrests was reflected in its results, leading to a feedback loop of racial bias.