What type of AI creates new content instead of just classifying or predicting?
Generative AI.
What does LLM stand for?
Large Language Model.
What is the first step in designing an AI model?
Define the problem the model will solve.
What are weights in an AI model?
Numbers that control how important connections are.
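To picture this, here is a tiny Python sketch of a single artificial neuron; the input values and weights are invented for illustration:

```python
# A single artificial "neuron": each input is multiplied by a weight,
# so a larger weight gives that input more influence on the output.
# These numbers are made up for illustration, not from a real model.
inputs  = [0.5, 0.8, 0.1]
weights = [0.9, 0.1, 0.0]   # first input dominates; third is ignored

output = sum(x * w for x, w in zip(inputs, weights))
print(output)  # 0.5*0.9 + 0.8*0.1 + 0.1*0.0 = 0.53
```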
Who is responsible for how AI is used?
Humans, not the AI model.
Name two types of content generative AI can create.
Text, images, audio, or video. (any two)
How do LLMs generate text responses?
They predict the next word based on probability.
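Here is a minimal Python sketch of that idea; the vocabulary and probabilities are invented, since a real LLM scores tens of thousands of tokens at every step:

```python
import random

# Toy next-word prediction: the model assigns a probability to each
# possible next token, then one is sampled. Values here are invented.
next_token_probs = {
    "mat": 0.62,    # "The cat sat on the ..." -> "mat" is most likely
    "sofa": 0.21,
    "roof": 0.12,
    "moon": 0.05,
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())
next_token = random.choices(tokens, weights=weights, k=1)[0]
print(next_token)  # usually "mat", sometimes a less likely word
```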
Why must training data match the model’s task?
Because a model can only learn the patterns that appear in its training examples.
What does a higher weight mean?
That connection has more influence on the output.
Name one responsible way to use AI in school.
Brainstorming ideas, explaining concepts, or generating practice questions. (any one)
How is generative AI different from traditional AI?
Traditional AI classifies or predicts, while generative AI creates new content.
When does training happen for most AI models?
Before users ever interact with the model.
What is meant by “model architecture”?
How the model processes and organizes information.
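As a rough sketch, architecture is the fixed arrangement that data flows through. The two "layers" below are arbitrary placeholder functions, not a real design:

```python
# Toy "architecture": information flows through layers in a fixed order.
def layer_one(x):                 # e.g., detect simple patterns
    return [v * 2 for v in x]

def layer_two(x):                 # e.g., combine patterns into one score
    return sum(x)

def model(x):
    return layer_two(layer_one(x))   # the architecture is this arrangement

print(model([1.0, 0.5]))  # 3.0
```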
What happens to weights during training?
They are adjusted to reduce error.
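A minimal sketch of that adjustment, assuming a one-weight model y = w * x trained with squared error and plain gradient descent:

```python
# One tiny "training loop": nudge the weight to shrink the error.
x, target = 2.0, 10.0   # training example: input 2.0 should give 10.0
w = 1.0                 # starting weight (wrong on purpose)
lr = 0.05               # learning rate: how big each nudge is

for step in range(20):
    prediction = w * x
    error = prediction - target
    gradient = 2 * error * x      # slope of (w*x - target)^2 w.r.t. w
    w -= lr * gradient            # adjust the weight to reduce error

print(round(w, 3))  # approaches 5.0, since 5.0 * 2.0 == 10.0
```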
Why is blind trust in AI dangerous?
It can spread misinformation and hurt learning.
What does it mean when we say AI outputs “sound human but are machine generated”?
The responses are based on patterns, not understanding or thinking.
Name one thing AI models do NOT do.
They do not think, understand meaning, feel emotions, or verify truth. (any one)
Why is cleaning training data important?
Bad or biased data leads to inaccurate or biased models.
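Here is a toy Python sketch of cleaning; the records, labels, and rules are invented examples of the kinds of checks a real pipeline might run:

```python
# Toy data cleaning: drop empty, unlabeled, or duplicate rows
# before training.
raw_data = [
    {"text": "Cats are mammals.", "label": "fact"},
    {"text": "", "label": "fact"},                   # empty -> drop
    {"text": "Cats are mammals.", "label": "fact"},  # duplicate -> drop
    {"text": "The moon is cheese.", "label": ""},    # unlabeled -> drop
]

seen = set()
clean = []
for row in raw_data:
    if row["text"] and row["label"] and row["text"] not in seen:
        seen.add(row["text"])
        clean.append(row)

print(len(clean))  # only 1 usable example survives
```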
Why do biased datasets create biased weights?
Because the model learns patterns directly from the data.
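A deliberately oversimplified sketch: a "model" that only memorizes frequencies will reproduce whatever imbalance its data contains. The dataset is invented:

```python
from collections import Counter

# 90% of these made-up training examples say "hire"
biased_data = ["hire"] * 9 + ["reject"] * 1

counts = Counter(biased_data)
learned_answer = counts.most_common(1)[0][0]
print(learned_answer)  # "hire" -- the learned pattern mirrors the skew
```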
What should students always do when using AI for schoolwork?
Follow teacher guidelines and check outputs.
Why does generative AI sometimes give confident but incorrect answers?
Because it predicts likely responses, not verified facts.
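One reason the confidence is misleading: models turn raw scores into probabilities with a softmax, and those probabilities always sum to 1 whether or not any option is true. The scores below are invented:

```python
import math

def softmax(scores):
    # Convert raw scores into probabilities that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [3.0, 1.0, 0.5]   # the model merely "prefers" option 0
print(softmax(scores))     # ~[0.82, 0.11, 0.07] -- looks confident
# Nothing in this math checks whether option 0 is factually correct.
```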
What is an AI hallucination?
When AI generates information that sounds correct but is false.
What happens after a model is evaluated and tuned?
It is deployed for real-world use with safety limits.
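Safety limits take many forms; one toy example is filtering outputs before users see them. The blocklist and function below are hypothetical, not a real product's safeguard:

```python
# Toy deployment-time safety limit: screen model output with a blocklist.
BLOCKED_TOPICS = ["violence", "self-harm"]   # hypothetical list

def safe_respond(model_output: str) -> str:
    lowered = model_output.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that."
    return model_output

print(safe_respond("Here is a study plan for your exam."))
```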
What real-world object was used as an analogy for weights?
A mixing board or volume knobs.
Why should AI outputs always be verified with other sources?
Because AI does not know whether an answer is true and can hallucinate.