Red Flags
Future Shock
Case Law REMIX
Global Ethics
Wild Card
100

A university's learning management system shares student grade data with a third-party analytics company without explicit student consent. The privacy policy mentions "educational partners" but doesn't name specific companies.

FERPA violation

100

A university deploys AI-powered "engagement monitoring" that analyzes students' facial expressions during online classes to generate "attention scores" shared with instructors. Why is this an issue?

Privacy invasion, consent, potential bias, chilling effect on learning

100

Real case: In Mahanoy Area School District v. B.L. (2021), the Supreme Court ruled that schools have limited authority over off-campus student speech. Twist: A university uses AI to monitor students' social media for "concerning content" and flags a doctoral student's tweets criticizing the university's AI ethics as "unprofessional." The student is removed from a teaching assistantship. Legal issues?

First Amendment (public university), academic freedom, employment law

100

Your EdTech platform operates in both the US and EU. A parent in Germany requests deletion of all their child's learning data under GDPR "right to be forgotten." Your US-based university partner says they need to retain the data for 7 years under FERPA. What do you do?

Must comply with GDPR for EU residents (stricter standard); can anonymize data to satisfy both (GDPR allows retention of anonymized data; FERPA satisfied if no longer personally identifiable)

100

A doctoral student uses ChatGPT to help write their literature review, properly citing it as a tool. Their advisor says this is academic dishonesty because "AI can't think critically." Another committee member says it's fine as long as it's disclosed. University policy is silent on AI use. What is the problem?

Emerging norms around AI in academic work; lack of clear policies; generational/philosophical differences

200

An EdTech company's AI tutoring system collects voice recordings of children under 13 for "quality improvement" purposes. Parents clicked "I agree" during account setup.

COPPA (Children's Online Privacy Protection Act) violation

200

A CS education platform uses generative AI to create personalized coding challenges. The AI occasionally generates problems that inadvertently include copyrighted code snippets from its training data. Students submit these as their work. Who is responsible?

Shared - platform (due diligence), students (verification), instructors (detection)

200

Real case: Loomis v. Wisconsin (2016) - court allowed use of proprietary risk assessment AI in sentencing despite lack of transparency. Twist: A university uses proprietary AI to make graduate admissions decisions but won't reveal the algorithm to applicants who are denied. Applicants suspect bias against certain undergraduate institutions. Legal issues?

No constitutional right to admission, but potential Title VI issues if disparate impact, contract law if promises were made

200

You're developing an AI ethics curriculum for a global online doctoral program. Students from China, Saudi Arabia, and the US have very different cultural perspectives on privacy, government surveillance, and individual rights. How do you design curriculum that respects cultural differences while maintaining ethical standards?

Present multiple ethical frameworks (Western rights-based, Confucian harmony-based, Islamic ethics, Ubuntu, etc.); focus on reasoning processes, not just conclusions; create space for respectful disagreement; identify universal principles (human dignity, avoiding harm)

200

An EdTech company's terms of service state that student data will be deleted when accounts close. The company is acquired by a larger corporation that wants to retain all historical data for AI training. Can they change the terms retroactively?

Generally no—retroactive changes to data use require new consent; original terms created contractual obligation

300

A computer science professor uses student code submissions from class assignments to train a commercial AI coding assistant without informing students. The syllabus states "all work becomes part of the learning community."

Multiple - intellectual property rights, FERPA (if code contains identifying info), lack of informed consent

300

A university creates digital twins (AI avatars) of star professors to teach introductory courses at scale. The AI is trained on years of lecture recordings. Professors receive royalties but have no control over what the AI says. The AI occasionally makes factual errors or expresses views the professor doesn't hold.

Misrepresentation, quality control, labor implications, student deception, academic freedom

300

Real case: Carpenter v. United States (2018) - warrant required for cell phone location data. Twist: A university's campus safety app tracks student locations "for emergency response." During a cheating investigation, administrators access location data to see if two students were together during an exam. Students weren't informed location data could be used this way. Legal issues?

Fourth Amendment (public university), FERPA, terms of service, reasonable expectation of privacy

300

Your university partners with institutions in India, Nigeria, and Brazil to share an AI-powered plagiarism detection system. The AI was trained primarily on Western academic writing and flags collaborative writing styles common in some cultures as "suspicious." Students from partner institutions are accused of plagiarism at higher rates. What's the problem?

Cultural bias in AI training data; Western individualistic academic norms encoded as universal; disparate impact on international students

300

A university's "student success" AI predicts a doctoral student has a 75% chance of not completing their degree based on first-semester performance. What is an ethical approach? 

If used for support, both advisor and student should know with emphasis on interventions; if demographic factors used, likely discriminatory and shouldn't be deployed; predictions should trigger support, not reduced investment

400

An online proctoring system uses facial recognition and eye-tracking to detect cheating. The system flags students with darker skin tones as "suspicious" at higher rates due to lighting detection issues. The company knows about the bias but hasn't disclosed it to universities.

Title VI Civil Rights Act (disparate impact), FTC Act (deceptive practices), ADA (if affects students with certain disabilities)

400

A research team develops a "dissertation completion predictor" AI that analyzes doctoral students' writing samples, advisor meeting notes, and mental health app data (with consent) to predict who will finish their degree. The university wants to use it to allocate funding and advisor assignments. Why is this problematic?

privacy, self-fulfilling prophecy, discrimination risk, consent validity

400

Real case: Gebser v. Lago Vista Independent School District (1998) - schools liable for harassment only if had actual knowledge and were deliberately indifferent. Twist: A university's AI harassment detection system monitors all course discussion boards and emails. It flags concerning content but generates so many false positives that administrators ignore most alerts. A student experiences severe harassment that the AI flagged but humans didn't review. Legal issues?

Title IX liability, actual knowledge standard, reasonable response, over-reliance on AI

400

A global EdTech company collects biometric data (facial recognition for proctoring) from students in 50 countries. Laws vary dramatically: EU has strict GDPR protections, some US states have biometric privacy laws, China requires data localization, India is developing regulations. How do you create a compliant global system?

Comply with strictest standard globally (GDPR-level protection); implement data localization where required; obtain explicit informed consent everywhere; provide opt-out alternatives; conduct Data Protection Impact Assessments; appoint regional compliance officers

400

A computer science professor discovers that a commercial AI coding tool used by students was trained on code from GitHub repositories, including code from students' previous class projects (which were public repos). Students are now using AI trained on previous students' work. The professor required students to make repos public for portfolio purposes. Ethical issues?

Unintended consequences of open-source requirements; training data provenance; academic integrity; informed consent; intellectual property

500

A doctoral program requires students to submit dissertation proposals to an AI evaluation system that scores research quality. The AI was trained on successful dissertations from R1 universities and consistently rates proposals using community-based research methods or non-Western theoretical frameworks as "low quality." Faculty use these scores in admission and funding decisions.

Title VI (disparate impact on underrepresented scholars), accreditation standards (faculty judgment required), potential ADA issues (if affects students with disabilities differently)

500

A global EdTech company develops a "cultural adaptation AI" that modifies educational content based on students' detected cultural background (inferred from name, location, language patterns). For example, it shows different historical perspectives on colonialism to students in India vs. UK. The company argues this is "culturally responsive pedagogy."

Stereotyping, epistemic control, accuracy of cultural inference, reinforcing echo chambers, parental rights, educational equity

500

Real case: Grutter v. Bollinger (2003) and Students for Fair Admissions v. Harvard (2023) - evolution of affirmative action law. Twist: After affirmative action banned, a university develops an AI admissions system that considers "adversity scores" based on neighborhood data, school quality, and family circumstances. The AI achieves similar diversity outcomes as previous affirmative action policies without explicitly considering race. Legal issues?

Disparate impact vs. disparate treatment, proxy discrimination, recent SCOTUS ruling interpretation

500

You're conducting multi-national research on AI in education involving students in the US, Kenya, and Vietnam. Your US IRB approved the study. Kenyan community leaders say individual consent isn't sufficient—you need community approval. The Vietnamese government requires all research data to be stored on local servers and accessible to authorities. Your US IRB says that violates research confidentiality protections. How do you proceed ethically?

Obtain both individual AND community consent (highest standard); negotiate with Vietnamese authorities about anonymization and limited access; consider whether research can proceed ethically given constraints; involve local ethics committees; be transparent with participants about data access; consider whether benefits to Vietnamese participants justify risks

500

A university creates an "AI Teaching Assistant" trained on years of student questions and instructor responses from a popular CS course. The AI provides 24/7 help to students. Analysis shows the AI gives less detailed explanations to students with usernames suggesting female or minority identity, likely because the training data reflected instructor bias. The AI improves course completion rates overall but may widen equity gaps. What should the university do?

  • Immediate actions: Pause AI deployment; audit training data for bias; implement fairness testing; provide human TA support to affected students
  • Long-term solutions: Retrain AI on bias-corrected data; implement username anonymization; monitor disaggregated outcomes; question whether AI should replace human interaction; address root cause (instructor bias)