Generative AI Basics
How Models Work
Model Design Steps
Weights and Training
Responsible AI Use
Generative AI Basics for 100

What type of AI creates new content instead of just classifying or predicting?


Generative AI.

How Models Work for 100

What does LLM stand for?


Large Language Model.

Model Design Steps for 100

What is the first step in designing an AI model?


Defining the problem the model will solve.

Weights and Training for 100

What are weights in an AI model?


Numbers that control how important connections are.
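
To make "weights" concrete, here is a minimal Python sketch of a single artificial neuron (the numbers are invented for illustration). Each weight scales one incoming connection before the signals are added up:

    # A single neuron: the output is the weighted sum of its inputs.
    inputs  = [0.5, 0.8, 0.2]   # signals arriving on three connections
    weights = [0.9, 0.1, 0.4]   # how important each connection is
    output = sum(x * w for x, w in zip(inputs, weights))
    print(output)  # 0.5*0.9 + 0.8*0.1 + 0.2*0.4 = 0.61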

Responsible AI Use for 100

Who is responsible for how AI is used?


Humans, not the AI model.

Generative AI Basics for 200

Name two types of content generative AI can create.


Text, images, audio, or video (any two).

How Models Work for 200

How do LLMs generate text responses?


They predict the next word based on probability.
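
As a rough sketch of that idea (the words and probabilities below are made up, not from a real model), next-word generation can be pictured as drawing from a probability distribution over candidate words:

    import random

    # Hypothetical probabilities a model might assign to the next word
    # after the prompt "The sky is".
    candidates = ["blue", "clear", "falling"]
    probabilities = [0.70, 0.25, 0.05]

    # Choose the next word in proportion to its probability.
    next_word = random.choices(candidates, weights=probabilities, k=1)[0]
    print("The sky is", next_word)   # usually "blue", occasionally the others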

Model Design Steps for 200

Why must training data match the model’s task?


Because models only learn patterns from relevant examples.

Weights and Training for 200

What does a higher weight mean?


That connection has more influence on the output.
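
A tiny illustration, reusing the weighted-sum picture from the 100-point answer: nudging the input on a high-weight connection moves the output much more than the same nudge on a low-weight one.

    def neuron(x1, x2):
        return 0.9 * x1 + 0.1 * x2   # weight 0.9 vs. weight 0.1

    print(neuron(1.0, 1.0))  # 1.0  (baseline)
    print(neuron(2.0, 1.0))  # 1.9  (bumping the heavily weighted input shifts the output a lot)
    print(neuron(1.0, 2.0))  # 1.1  (bumping the lightly weighted input barely moves it)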

Responsible AI Use for 200

Name one responsible way to use AI in school.


Brainstorming ideas, explaining concepts, or practicing skills (any one).

Generative AI Basics for 300

How is generative AI different from traditional AI?


Traditional AI classifies or predicts, while generative AI creates new content.

How Models Work for 300

When does training happen for most AI models?


Before users ever interact with the model.

Model Design Steps for 300

What is meant by “model architecture”?


How the model processes and organizes information.
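
One loose way to picture this (a sketch, not any specific real architecture): the architecture is the arrangement of processing steps the data flows through, fixed before any particular weight values are learned.

    # Architecture = the structure of the pipeline; training later fills in the weights.
    def layer(inputs, weight_rows):
        # Each row of weights produces one output value.
        return [sum(x * w for x, w in zip(inputs, row)) for row in weight_rows]

    def tiny_network(inputs, w1, w2):
        hidden = layer(inputs, w1)   # first stage of connections
        return layer(hidden, w2)     # second stage of connections

    print(tiny_network([1.0, 2.0], w1=[[0.5, 0.5], [1.0, 0.0]], w2=[[1.0, 1.0]]))  # [2.5]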

Weights and Training for 300

What happens to weights during training?


They are adjusted to reduce error.
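
A minimal sketch of that adjustment loop (a made-up one-weight example, not a real training recipe): measure how wrong the prediction is, then nudge the weight in the direction that shrinks the error.

    # Goal: learn w so that w * 2.0 lands near the target 10.0 (ideal w is 5.0).
    w = 0.0
    learning_rate = 0.1
    for step in range(20):
        prediction = w * 2.0
        error = prediction - 10.0           # how wrong the model currently is
        w -= learning_rate * error * 2.0    # adjust the weight to reduce the error
    print(w)  # approaches 5.0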

Responsible AI Use for 300

Why is blind trust in AI dangerous?


It can spread misinformation and hurt learning.

Generative AI Basics for 400

What does it mean when we say AI outputs “sound human but are machine generated”?


The responses are based on patterns, not understanding or thinking.

How Models Work for 400

Name one thing AI models do NOT do.


They do not think, understand meaning, feel emotions, or verify truth. (any one)

Model Design Steps for 400

Why is cleaning training data important?


Bad or biased data leads to inaccurate or biased models.
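
A toy illustration of one kind of cleaning (the dataset here is invented): dropping empty and duplicate examples before they can distort training.

    raw_data = ["cats purr", "", "cats purr", "dogs bark", None]

    # Keep only non-empty, previously unseen examples.
    cleaned = []
    for example in raw_data:
        if example and example not in cleaned:
            cleaned.append(example)
    print(cleaned)  # ['cats purr', 'dogs bark']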

Weights and Training for 400

Why do biased datasets create biased weights?


Because the model learns patterns directly from the data.
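
A toy demonstration with invented numbers: if a "model" simply learns the average of its training examples, a skewed sample skews the learned value the same way.

    biased_sample   = [5, 5, 5, 5, 1]   # mostly one kind of example
    balanced_sample = [5, 1, 5, 1, 5]

    print(sum(biased_sample) / len(biased_sample))      # 4.2, pulled toward the over-represented examples
    print(sum(balanced_sample) / len(balanced_sample))  # 3.4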

Responsible AI Use for 400

What should students always do when using AI for schoolwork?


Follow teacher guidelines and check outputs.

Generative AI Basics for 500

Why does generative AI sometimes give confident but incorrect answers?


Because it predicts likely responses, not verified facts.

How Models Work for 500

What is an AI hallucination?


When AI generates information that sounds correct but is false.

Model Design Steps for 500

What happens after a model is evaluated and tuned?


It is deployed for real-world use with safety limits.

Weights and Training for 500

What real-world object was used as an analogy for weights?


A mixing board or volume knobs.

Responsible AI Use for 500

Why should AI outputs always be verified with other sources?


Because AI does not know whether an answer is true and can hallucinate.
