This is defined as “openness or access, with the goal of becoming informed about the system.”
What is transparency?
This is the response to the limits of transparency: helping humans understand decisions, not just see them.
What is explainability?
This is defined as the ability to question, challenge, or dispute algorithmic decisions.
What is contestability?
This perspective informs users that a system exists and why it collects data.
What is the scope perspective?
This keeps humans “in the loop” as an interface between people and machines.
What is the role of explainability?
Contestability moves users from passive recipients to this role.
What are active participants?
This approach asks, “How was this decision reached?” by focusing on processes that turn inputs into outputs.
What is the decision-rules approach?
From Miller's note: explanations should be causal, contrastive, selective, and ______.
What is social?
The authors say this should be a built-in design feature, not just an after-the-fact appeal.
What is contestability by design?
These laws on the slide illustrate scope transparency for federal systems.
What are the Privacy Act of 1974 and the E-Government Act §208?
These explanations ask, “Why this outcome instead of that one?” and are a better fit for classification tasks.
What are contrastive explanations?
Contestability turns decision-making into a collaborative process between these two parties.
What are humans and machines?
This credit law’s “key reasons” requirement is the slide’s example of output transparency.
What is the Equal Credit Opportunity Act?
The authors warn that these can mislead because ML finds correlations, not true causes.
What are purely causal explanations?
These two design requirements let users “trace predictions all the way down” and record disagreements.
What are heightened legibility and mechanisms to capture challenges?