Is this a high risk GenAI Use Case?
RAI Basics
Spot the Risk
Everyday AI
The Future of AI
100

GenAI-powered real-time passenger identity verification and boarding authorization

High risk because errors or biases in AI decisions could lead to wrongful denial of boarding or security breaches, impacting passenger safety.

100

What does the "R" in RAI stand for?

Responsible

100

True or False: AI should be trusted to make every decision without human review

False! You should always take the time to review the outputs of GenAI technologies and verify their accuracy.

100

What's one way to use GenAI in your everyday life?

Movie suggestions

Workout routines

Book lists

100

What is one thing humans will always need to do, even as AI gets better and smarter?

Use critical thinking to verify answers and make final decisions responsibly

200

Automated customer service handling for flight disruptions and compensation claims

High risk due to potential misinterpretation of passenger issues, leading to incorrect or unfair decisions affecting customer rights and satisfaction.

200

What is one RAI principle?

Safety and Accountability 

Security and Privacy

Transparency and Trustworthiness

Efficiency

Caring and Fairness

200

Why is it risky to put sensitive or confidential data into GenAI tools? 

Inputting sensitive or confidential data into a GenAI technology can lead to unintended risks such as data leakage or the data being used to train a model for other clients.

200

Name a workplace tool that has GenAI 

UnitedGPT

WingTips


200

What's one risk of relying too heavily on AI in the future?

Losing creativity, confidence, and critical thinking

300

Predictive maintenance scheduling using AI analysis of aircraft sensor data

High risk because incorrect predictions could result in missed maintenance, jeopardizing flight safety.


300

Give an example of GenAI being used irresponsibly

Uploading sensitive and confidential data into ChatGPT. Not verifying answers from a chatbot against sources.


300

What does it mean when a GenAI tool is "hallucinating"?

Hallucination: instances where general-purpose artificial intelligence systems generate convincing, yet false, information

300

Why is it important to check AI answers before sharing them?

GenAI can hallucinate or be incorrect, so it is important to ensure that answers are correct by checking them against verified sources.

300

Why do organizations set rules and standards for AI?

By setting standards, organizations ensure that teams have clear requirements for how to develop and deploy GenAI responsibly.

400

AI-driven crew scheduling and resource allocation

High risk as errors can cause crew fatigue or understaffing, compromising operational safety and regulatory compliance.

400

What role do humans play in Responsible AI?

Being a strong human-in-the-loop, meaning you verify the outputs of GenAI technologies against sources.


400

What is AI bias?

Computational bias or machine bias is a systematic error or deviation from the true value of a prediction that originates from a model's assumptions or the data itself

400

Is it safe to paste confidential work documents into any public AI tool?

No! Confidential work documents copied into a public LLM can lead to data leakage and unintended use.

400

Why does Responsible AI build trust with customers?

Customers know their data is being used responsibly and that GenAI technologies go through a standard, repeatable review and testing process.

500

GenAI creates tailored travel itineraries based on passenger profiles and real-time data

This is a high-risk use case because it involves processing sensitive personal data and real-time information to make automated decisions that could significantly impact user privacy, safety, and travel outcomes.

500

What's one way you can be a strong human-in-the-loop?

Verify the outputs of GenAI and provide feedback via the thumbs up/thumbs down mechanism

500

How should you provide feedback on GenAI outputs?

Through the thumbs up/thumbs down feedback mechanism. Providing feedback via the mechanism ensures use case teams are alerted when an output is inaccurate, such as a hallucination.

500

Can you use the GenAI tools from your everyday life at work for work purposes?

Maybe! If you want to use a GenAI tool, ask the RAI team whether it has gone through the RAI review process. Remember, you can always use UnitedGPT!

500

If AI becomes more powerful, why will responsible AI and human oversight matter more?

They prevent harm, bias, and misuse by ensuring humans set limits and guardrails.
