Fairness Metrics
Ethical Frameworks (Mepham + Markkula)
Exploratory Data Analysis (EDA)
Analyzing Results
General Ethics and Concepts
100

Measures how often a model incorrectly predicts a positive outcome within a group.

False positive rate (FPR)
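
The FPR can be computed directly from prediction counts. A minimal sketch, with made-up toy labels; the function name is ours, not from any library:

```python
# Sketch: FPR = FP / (FP + TN), i.e. the share of actual negatives
# that the model wrongly flags as positive. Toy data is hypothetical.
def false_positive_rate(y_true, y_pred):
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn)

y_true = [0, 0, 0, 0, 1, 1]
y_pred = [1, 0, 0, 0, 1, 0]
print(false_positive_rate(y_true, y_pred))  # 1 FP out of 4 negatives -> 0.25
```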

100

Focuses on fair distribution of benefits and harms across groups

Justice

100

The actual outcome used to evaluate model predictions

Ground truth label

100

Model performs well on training data but poorly on new data

Overfitting
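
A toy sketch of the idea, with entirely hypothetical data: a "model" that memorizes its training pairs is perfect on seen inputs and useless on unseen ones:

```python
# Sketch of overfitting via memorization: perfect training accuracy,
# zero generalization. All data is made up for illustration.
train = {1: 2, 2: 4, 3: 6}             # x -> 2x, memorized
test = {4: 8, 5: 10}                   # unseen inputs

def memorizer(x):
    return train.get(x, 0)             # memorized answer, or a blind default

train_acc = sum(memorizer(x) == y for x, y in train.items()) / len(train)
test_acc = sum(memorizer(x) == y for x, y in test.items()) / len(test)
print(train_acc, test_acc)  # 1.0 0.0
```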

100

This semester’s most memorable or impactful moment for you

Everything is correct!

200

Measures the difference in false positive rates between two groups (e.g., privileged and unprivileged).

FPR difference
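
A sketch of the group comparison, with hypothetical group labels and data (group A standing in for privileged, group B for unprivileged):

```python
# Sketch: compute FPR per group, then take the difference.
def fpr(y_true, y_pred):
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return fp / (fp + tn)

a_true, a_pred = [0, 0, 0, 0], [1, 0, 0, 0]   # group A: FPR = 0.25
b_true, b_pred = [0, 0, 0, 0], [1, 1, 0, 0]   # group B: FPR = 0.50
print(fpr(b_true, b_pred) - fpr(a_true, a_pred))  # 0.25
```

A nonzero difference means one group absorbs more wrongful positive predictions than the other.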

200

Focuses on overall benefits and harms experienced by individuals

Well-being

200

Graph used to compare counts or categories across groups

Bar chart

200

Metric that is less informative when one class heavily outweighs the other

Accuracy
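
A quick illustration with hypothetical counts of why accuracy misleads under class imbalance: always predicting the majority class scores high while detecting nothing.

```python
# 95 negatives, 5 positives: the "always negative" model looks great
# on accuracy but has zero recall on the minority class.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
recall = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred)) / 5
print(accuracy, recall)  # 0.95 0.0
```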

200

AI law in Europe regulating high-risk AI systems

EU AI Act

300

Measures the difference in true positive rates between groups.

Equal opportunity
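
The same pattern as the FPR difference, but on true positive rates. A minimal sketch with hypothetical groups and data:

```python
# Sketch: equal opportunity compares TPR = TP / (TP + FN) across groups.
def tpr(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn)

a_true, a_pred = [1, 1, 1, 1], [1, 1, 1, 0]   # group A: TPR = 0.75
b_true, b_pred = [1, 1, 1, 1], [1, 1, 0, 0]   # group B: TPR = 0.50
print(tpr(a_true, a_pred) - tpr(b_true, b_pred))  # 0.25
```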

300

Focuses on individuals’ ability to make their own informed decisions

Autonomy

300

Graph used to show the distribution of a continuous variable

Histogram / density plot

300

A statistical parity difference of 0 indicates what between groups

Equal outcomes

300

Bias that occurs when certain groups are underrepresented or overrepresented in the dataset

Sampling bias

400

Measures the difference in the rate of positive predictions between groups

Statistical parity
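
Statistical parity ignores ground truth entirely; it compares only the rates at which each group receives the positive prediction. A sketch with hypothetical predictions:

```python
# Sketch: statistical parity difference = P(pred = 1 | privileged)
#                                       - P(pred = 1 | unprivileged).
def positive_rate(y_pred):
    return sum(y_pred) / len(y_pred)

priv_pred = [1, 1, 1, 0]      # positive prediction rate 0.75
unpriv_pred = [1, 0, 0, 0]    # positive prediction rate 0.25
print(positive_rate(priv_pred) - positive_rate(unpriv_pred))  # 0.5
```

A difference of 0 is the "equal outcomes" condition referenced elsewhere on this board.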
400

Encourages considering stakeholders who are often excluded or overlooked

Expanding the ethical circle.

400

Unequal distribution of outcome labels in the dataset

Class / data imbalance

400

An equal opportunity difference of 0 indicates what between groups

Equal TPRs

400

Method used to quantify how much each feature contributes to a model’s predictions

Feature importance, SHAP values, permutation importance

500

Measures the ratio of positive prediction rates between privileged and unprivileged groups

Disparate impact
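
The same positive-prediction rates as in statistical parity, expressed as a ratio rather than a difference. A sketch with hypothetical predictions:

```python
# Sketch: disparate impact = P(pred = 1 | unprivileged)
#                          / P(pred = 1 | privileged).
def disparate_impact(priv_pred, unpriv_pred):
    return (sum(unpriv_pred) / len(unpriv_pred)) / (sum(priv_pred) / len(priv_pred))

priv = [1, 1, 1, 0]     # rate 0.75
unpriv = [1, 0, 0, 0]   # rate 0.25
di = disparate_impact(priv, unpriv)
print(round(di, 3))     # 0.25 / 0.75, well below the common 0.8 threshold
```

A value of 1 means parity; values below 0.8 are commonly flagged under the "80% rule."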

500

Encourages thinking about how systems could be misused or exploited by bad actors

Think about the terrible people.

500

Group defined as receiving fewer favorable outcomes relative to another group in fairness analysis

Unprivileged / Underprivileged group

500

A disparate impact below 0.8 means the underprivileged group receives positive outcomes at what kind of rate

A substantially lower rate (below the 80% rule threshold)

500

Algorithmic bias in facial recognition systems highlighted by Dr. Joy Buolamwini

Coded Gaze
