Missions & Values
Methods & Training
Governance Models
Everyday Use & Risks
Media & Meaning
100

This lab says it builds AI that is “reliable, interpretable, and steerable.”

What is Anthropic?

Why it matters: Mission statements set expectations in advance.

100

Anthropic’s named method that checks outputs against rules/values.

What is Constitutional AI?
Why it matters: Method name communicates norms-first.

100

Anthropic’s corporate form that centers public benefit.

What is a Public Benefit Corporation (PBC)?
Why it matters: Signals stewardship.

100

Name two OpenAI products that signal a general-purpose toolkit.

What are ChatGPT, DALL·E, Whisper, or the OpenAI API?
Why it matters: Shows multi-tool posture.

100

Headlines often brand Anthropic’s approach with this adjective.

What is “responsible” (development)?
Why it matters: Shapes trust cues.

200

This org’s mission is to ensure AI “benefits all of humanity.”

What is OpenAI?
Why it matters: Signals broad social scope.

200

In Constitutional AI, outputs are checked against these.

What are explicit rules/ethical guidelines?
Why it matters: Legible to non-experts.

200

OpenAI’s structure balancing investment with mission limits.

What is a capped-profit model?
Why it matters: Signals scale under constraints.

200

Brand that teaches a flexible multi-tool vibe.

What is OpenAI?
Why it matters: Explains broad uptake.

200

Headlines describe OpenAI’s tempo with this phrase.

What is “rapid commercialization”?
Why it matters: Frames speed & ubiquity.

300

These public statements act as “meaning anchors” before any prompt.

What are mission statements?
Why it matters: They shape trust and use.

300

OpenAI’s alignment pipeline that learns from people’s preferences.

What is RLHF (reinforcement learning from human feedback)?
Why it matters: Human feedback can introduce bias.

300

Two labels that become a shortcut for motive—“care” vs “speed.”

What are PBC (Anthropic) and capped-profit (OpenAI)?
Why it matters: Handy but reductive heuristic.

300

Brand that’s framed as careful/steerable for complex reasoning.

What is Anthropic (Claude)?
Why it matters: Sets expectations for control.

300

Downside of “responsible vs. commercial” framing.

What is flattening nuance / polarizing discourse?
Why it matters: Reminds us to look deeper.

400

Besides “steerable,” name one Anthropic trait.

What is “reliable” or “interpretable”?
Why it matters: Cues careful behavior.

400

Anthropic’s intended outcomes (name one of three).

What is accuracy/safety/reliability?
Why it matters: Defines trust goals.

400

Risk of picking tools by label, not task.

What is misallocated trust / ignoring task risks?
Why it matters: Choose by task-fit.

400

Polished is not the same as correct; name this risk.

What is scope-creep / over-reliance on polish?
Why it matters: Prevents misuse.

400

One reason that simplified framing still helps.

What is quick comprehension of different roles?
Why it matters: Gives audiences an entry point.

500

OpenAI’s mission + breadth frame AI as this kind of collaborator.

What is a general-purpose multi-tool?
Why it matters: Encourages everyday adoption.

500

Risk of treating methods as guarantees, not probabilities.

What is over-trust / complacency about failures?
Why it matters: Keeps critical evaluation alive.

500

One practice to counter governance-label bias.

What is evaluating task-fit and risks directly?
Why it matters: Focuses judgment on the work.

500

One classroom safeguard for “multi-tool” use.

What is verifying citations/tracing sources / limiting by task?
Why it matters: Builds responsible habits.

500

A better rule than brand binaries when picking tools.

What is evaluating task-fit and risk directly?
Why it matters: Improves real-world choices.
