Measures how often a model incorrectly predicts a positive outcome within a group.
False positive rate (FPR).
Focuses on fair distribution of benefits and harms across groups
Justice
The actual outcome used to evaluate model predictions
Ground truth label
Model performs well on training data but poorly on new data
Overfitting
This semester’s most memorable or impactful moment for you
Any answer is correct!
Measures the difference in false positive rates between two groups (i.e., privileged and underprivileged).
FPR difference
Focuses on overall benefits and harms experienced by individuals
Well-being
Graph used to compare counts or categories across groups
Bar chart
Metric that is less informative when one class heavily outweighs the other
Accuracy
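A minimal sketch (illustrative, not from the source) of why accuracy is less informative under class imbalance: a model that always predicts the majority class still scores high accuracy while finding no positive cases.

```python
# Toy imbalanced labels: 95% negative, 5% positive (made-up numbers)
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100                   # model that never predicts positive

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
# High accuracy despite missing every positive case
recall = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred)) / sum(y_true)
```

Here accuracy is 0.95 while recall is 0, which is why metrics like TPR and FPR per group are used instead.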
AI law in Europe regulating high-risk AI systems
EU AI Act
Measures the difference in true positive rates between groups.
Equal opportunity
Focuses on individuals’ ability to make their own informed decisions
Autonomy
Graph used to show the distribution of a continuous variable
Histogram / density plot
A statistical parity difference of 0 indicates what between groups
Equal positive prediction rates (equal outcomes)
Bias that occurs when certain groups are underrepresented or overrepresented in the dataset
Sampling bias
Measures the difference in the rate of positive predictions between groups
Statistical parity
Encourages considering stakeholders who are often excluded or overlooked
Expanding the ethical circle.
Unequal distribution of outcome labels in the dataset
Class / data imbalance
An equal opportunity difference of 0 indicates what between groups
Equal TPRs
Method used to quantify how much each feature contributes to a model’s predictions
Feature importance, SHAP values, permutation importance
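A minimal sketch of permutation importance, one of the methods named above (a hand-rolled illustration, not the course's or any library's implementation): shuffle one feature's column and measure how much accuracy drops.

```python
import random

def accuracy(model, X, y):
    # Fraction of rows the model labels correctly
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, seed=0):
    # Importance = accuracy drop after shuffling one feature's values
    base = accuracy(model, X, y)
    rng = random.Random(seed)
    col = [row[feature_idx] for row in X]
    rng.shuffle(col)
    X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
              for row, v in zip(X, col)]
    return base - accuracy(model, X_perm, y)

# Hypothetical model that only looks at feature 0
model = lambda row: int(row[0] > 0.5)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
```

Permuting feature 0 can hurt accuracy; permuting feature 1 never does, so its importance is zero.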
Measures the ratio of positive prediction rates between privileged and unprivileged groups
Disparate impact
Encourages thinking about how systems could be misused or exploited by bad actors
Think about the terrible people.
Group defined as receiving fewer favorable outcomes relative to another group in fairness analysis
Unprivileged / Underprivileged group
A disparate impact below 0.8 means the underprivileged group receives positive outcomes at what kind of rate
A much lower rate (below 80% of the privileged group's rate)
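The group fairness metrics in the clues above can be sketched in a few lines (a minimal illustration with made-up toy data; group "A" is privileged, "B" unprivileged):

```python
def rate(xs):
    # Fraction of 1s in a binary list
    return sum(xs) / len(xs)

def positive_rate(y_pred, group, g):
    # Rate of positive predictions within group g
    return rate([p for p, gr in zip(y_pred, group) if gr == g])

def fpr(y_true, y_pred, group, g):
    # False positive rate: positives predicted among actual negatives
    return rate([p for t, p, gr in zip(y_true, y_pred, group)
                 if gr == g and t == 0])

def tpr(y_true, y_pred, group, g):
    # True positive rate: positives predicted among actual positives
    return rate([p for t, p, gr in zip(y_true, y_pred, group)
                 if gr == g and t == 1])

# Toy data (hypothetical)
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Statistical parity difference: 0 means equal positive prediction rates
spd = positive_rate(y_pred, group, "B") - positive_rate(y_pred, group, "A")
# Disparate impact: ratio of positive prediction rates; < 0.8 flags disparity
di = positive_rate(y_pred, group, "B") / positive_rate(y_pred, group, "A")
# Equal opportunity difference: 0 means equal TPRs
eod = tpr(y_true, y_pred, group, "B") - tpr(y_true, y_pred, group, "A")
# FPR difference between the two groups
fprd = fpr(y_true, y_pred, group, "B") - fpr(y_true, y_pred, group, "A")
```

On this toy data the disparate impact ratio falls below 0.8, so the four-fifths rule would flag it.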
Algorithmic bias in facial recognition systems highlighted by Dr. Joy Buolamwini
Coded Gaze