Something vs. Nothing
There aren't enough therapists. Isn't some support better than no support?
The Tessa Test
Tessa, the National Eating Disorders Association's chatbot, gave weight-loss advice to users with eating disorders. In one sentence, what does your team think that incident actually proves?
Felt vs. Felt For
Can a machine that has never felt anything actually help someone who feels too much?
Where Does It Go?
You tell a chatbot you want to die. Where does that sentence go?
Google or Cigarettes?
Five years from now, will AI mental health tools be more like Google (everyone uses them, no one thinks about it) or more like cigarettes (regulated, stigmatized, regretted)?
The Rural Teenager
A rural teenager with no insurance can talk to Woebot tonight, for free. What do you say to her?
Who's Responsible?
Character.AI was sued after a teenager's suicide was linked to conversations with one of its bots. Who is responsible: the company, the parents, the platform's design, or the underlying technology?
More Empathetic Than Doctors
A 2023 JAMA Internal Medicine study found that licensed clinicians rated ChatGPT's responses to patient questions as more empathetic than real doctors' replies. Does that finding support your team's position, or complicate it?
The HIPAA Loophole
HIPAA doesn't cover most mental health apps. Is that a loophole to close, a feature to preserve, or evidence that the whole framework is outdated?
Concede Something
Name one thing your team is willing to concede to the other two teams. If you can't name one, what does that say?
Two-Tiered Care
Critics say AI mental health tools create a two-tiered system: real therapists for the rich, chatbots for the poor. Is that a bug or a feature of the current rollout?
Held to a Higher Standard
Every form of therapy has failure rates. Human therapists also cause harm — sometimes serious harm. Why should AI be held to a higher standard than the humans it's replacing or supplementing?
Is Fake Help Real Help?
If a user feels genuinely helped by an AI — sleeps better, self-harms less, feels less alone — and the AI is "faking" empathy it doesn't have, is the help real? Does the answer matter?
The Cerebral Scandal
The FTC charged Cerebral with sharing users' sensitive mental health data with advertising platforms like Facebook and TikTok. Your team has 60 seconds to explain whether this is a scandal, a predictable outcome, or a solvable problem, and what that says about your broader stance.
When the Evidence Changes
If an AI therapy tool demonstrably outperforms the median human therapist in a rigorous randomized controlled trial, judged on outcomes rather than just satisfaction, does your team's position change? Why or why not?
The Cost of Caution
If AI tools genuinely expand access, does refusing to deploy them until they're "proven safe" itself cause harm to the people waiting? Who bears the cost of caution?
Name One Safeguard
Your team has to name one specific, concrete safeguard that would have prevented the Tessa incident — and explain why that safeguard wouldn't just get removed the next time a company needs to cut costs.
Moral Deskilling
Shannon Vallor argues that outsourcing emotional labor to machines causes "moral deskilling" — we get worse at being there for each other. Is your team willing to accept that trade-off? If not, how do you avoid it?
Read Aloud in Court
Would your team be comfortable if a transcript of every conversation you'd had with a mental health chatbot were read aloud in a courtroom, handed to an employer, or shown to your family? If not, what exactly is the protection you're counting on?
Who Pays When It Kills?
Who should be legally liable when an AI mental health tool contributes to a death: the developer, the deploying clinic, the platform, the user who clicked "I agree," or no one? Pick one and defend it against the obvious objection.
Name Your Threshold: 100 Saved, 1 Lost
A chatbot prevents 100 suicides through 3am crisis support and contributes to 1. A human hotline, with the same funding, would have prevented 40 and contributed to 0. Which system is more ethical to deploy, and what does your answer reveal about how your team actually weighs harm?
The Active Ingredient
Defend or attack this claim: the therapeutic relationship is the active ingredient in therapy, and everything else — CBT worksheets, diagnoses, techniques — is scaffolding around it. If your team is right, what does that mean for a technology that cannot actually have a relationship with anyone?
Write the Disclosure
Design the informed consent disclosure a user should see before their first message to a mental health chatbot. It has to be honest enough to be ethical, short enough that people will actually read it, and not so alarming it drives away the people who need help most. Your team has to actually write it.
Stake Your Position
Your team has to propose the one rule — law, norm, or technical standard — that you would stake your entire position on. If this rule is adopted, you accept the outcome. If it can't be adopted or enforced, your position collapses. What is it?