This term means: a question, instruction, scenario, or statement provided by the user to guide the AI's response
AI prompt
What is the best way to get specific results from AI?
Use the 5 Elements of a quality prompt:
Task, Reference, Format, Context, Tone
What types of responses does AI struggle with most?
Creative or emotional
AI prompt: Who was the first person to walk on the moon?
AI response: The first person to walk on the moon was John Glenn.
Is this an example of an AI hallucination or a contradiction?
This is an AI hallucination. (The first person to walk on the moon was Neil Armstrong, not John Glenn.)
This term means: additional information or data surrounding a prompt that helps the AI generate more accurate and relevant responses
Context
What is the mnemonic device (memory trick) you created to remember the 5 Elements of a quality prompt?
You must draw this on the whiteboard/paper to get these points.
Various answers
What are types of things AI does really well?
Repetitive tasks
Making data-based decisions
Processing large amounts of data faster than humans
You ask AI, "Do penguins fly?"
It might answer, "Yes, penguins are birds and all birds can fly."
Is this an example of an AI hallucination or a contradiction?
Which vocabulary term's definition is: the structured or unstructured information, such as text, images, or code, used to train an AI model
training data
What are two ways AI can respond to a computer user's prompt?
AI makes PREDICTIONS based on PROBABILITY;
AI uses training data
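The idea of "predictions based on probability" can be sketched with a toy example. This is not a real language model; the word-count table below is a made-up stand-in for training data, from which real models learn probabilities over enormous text corpora.

```python
import random

# Hypothetical "training data": how often each word followed "the".
# Real models learn tables like this (with millions of entries) from text.
next_word_counts = {"the": {"cat": 3, "dog": 2, "moon": 1}}

def predict_next(word):
    counts = next_word_counts[word]
    words = list(counts)
    weights = list(counts.values())
    # Pick the next word in proportion to how often it followed `word`
    return random.choices(words, weights=weights)[0]

print(predict_next("the"))  # usually "cat", sometimes "dog" or "moon"
```

Because the choice is probabilistic, running it several times gives different answers — which also hints at why AI can confidently produce wrong ones.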
When a computer user provides specifics in a prompt
Multimodal AI models can do things that other types of AI cannot...
explain how or why
AI models are trained using different types of data, but usually only one type per model.
Multimodal AI models can process multiple types of data at once.
What are two reasons why AI might respond with a hallucination or contradiction?
AI sometimes "hallucinates" or "contradicts" information because it is designed to predict text based on patterns and probability rather than confirm truth.
Sometimes vague or unclear user prompts can cause hallucinations or contradictions
Flawed or insufficient training data can cause hallucinations or contradictions.
Unlike humans who can verify information through direct experience (seeing a cat or looking up a fact in a book), AI models lack a connection to the physical world.
This term means: moral choices considering right or wrong when using emerging technologies, ensuring privacy, and safety for everyone
Ethics
AI uses neural networks to help it respond to prompts in medical areas, speech translation, etc...
Fill in the blanks: neural networks help AI to recognize ___________, make predictions based on these, and help AI understand word ________________.
patterns and relationships
AI hallucinations are part of using AI models.
What are they?
What are two ways computer users can avoid AI responding with a hallucination?
An AI "hallucination" is when an AI system confidently makes up information that isn't true or isn't based on the data it was trained on.
Name two potential harms from AI hallucinations or contradictions.
various answers
Which term goes along with this definition: the act of giving credit to the original creator of a work, especially in the context of using or referencing their material
It is a synonym for: citing sources
attribution
In any order, what are the 5 Elements of a quality AI prompt?
Tone, Task, Context, Format, Reference
ABSTRACTION is a problem-solving process AI does well.
What is abstraction?
How would AI use abstraction if you prompted it with the following, "Find my yellow Corvette in the parking lot."
Abstraction is a problem-solving process that focuses only on the important details and ignores the unimportant ones.
To find the car, the AI would first focus only on cars (not trucks, motorcycles, vans, etc.).
It would then look only for yellow vehicles.
It would compare the shape of each remaining car to the shape of a Corvette.
Finally, it would help you find a path/direction to the yellow Corvette.
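The narrowing-down steps above can be sketched as a series of filters. The parking-lot data is hypothetical; the point is that each step throws away details that no longer matter.

```python
# Hypothetical parking-lot data; each dict is one detected vehicle.
vehicles = [
    {"type": "truck", "color": "yellow", "model": "F-150"},
    {"type": "car", "color": "red", "model": "Corvette"},
    {"type": "car", "color": "yellow", "model": "Corvette"},
    {"type": "car", "color": "yellow", "model": "Mustang"},
]

# Abstraction: keep only the details that matter, one step at a time.
cars = [v for v in vehicles if v["type"] == "car"]           # 1. only cars
yellow = [v for v in cars if v["color"] == "yellow"]         # 2. only yellow ones
corvettes = [v for v in yellow if v["model"] == "Corvette"]  # 3. only Corvettes

print(corvettes)  # the single yellow Corvette remains
```

Each filter ignores an unimportant detail (trucks, non-yellow cars, other models) until only the answer is left.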
How are AI hallucinations and contradictions different?
An AI "hallucination" is when an AI system confidently makes up information that isn't true or isn't based on the data it was trained on.
An AI contradiction is when an AI says two different things that can't both be true at the same time.