Responsible Use
Basic Concepts
Ethical Practices
Critical Evaluation
Human-Centered
100

This document outlines student expectations for using AI tools in their coursework.

What is the course syllabus?

100

This type of AI generates text, images, or music by learning from existing data.

What is Generative AI (GenAI)?

100

This term refers to presenting AI-generated content as one's own work without proper attribution or acknowledgment.

What is plagiarism?

100

AI models can amplify unfair or stereotypical perspectives when trained on this type of unbalanced or skewed data.

What is bias or biased data?

100

This practice ensures AI decisions are monitored and guided by people to prevent errors and biases.

What is human oversight?

200

Before using AI tools in their assignments, obtaining this ensures students uphold academic integrity.

What is instructor approval?

200

This term refers to the input or instruction given to an AI model to generate a response or perform a task.

What is a prompt?

200

AI developers ensure this by being open about their sources of training data and acknowledging potential biases.

What is transparency (and/or accountability)?

200

To ensure accuracy, AI-generated information should always be verified against these.

What are trusted sources?

200

Overreliance on AI writing tools can hinder the development of this thinking skill.

What is critical thinking (or metacognition)?

300

When guidelines on AI use are unclear, this action helps students avoid academic misconduct.

What is seeking clarification?

300

This term describes AI-generated outputs that appear correct but are actually false due to faulty pattern recognition or poor training data.

What is hallucination?

300

The process of verifying AI-generated responses with multiple trusted sources to ensure accuracy and reliability.

What is fact-checking?

300

The term used to describe AI-generated media that mimics real people with high accuracy.

What is a deepfake?

300

Obtaining this is a cornerstone of ethical AI, building trust, ensuring transparency, and respecting individual autonomy.

What is user consent?

400

This term describes safeguarding sensitive information when using AI tools.

What is data security?

400

These types of AI models use natural language processing and predict the next word based on the context of previous words.

What are Large Language Models (LLMs)?

400

Using AI ethically means complying with these laws when using AI-generated content.

What are copyright laws?

400

Built-in measures that ensure AI systems operate safely, ethically, and within intended guidelines are referred to as this.

What are guardrails?

400

This approach emphasizes designing and developing AI systems to enhance and augment human abilities, not replace them.

What is human-centered AI?

500

Transparency about AI use in academic work includes these two key practices.

What are properly citing AI-generated content and disclosing AI assistance?

500

AI uses this type of learning to perform tasks that typically require human intelligence.

What is machine learning?

500

When students do not have equal access to AI tools, AI use may be prohibited or restricted to prevent this.

What is an unfair advantage?

500

The relevance and accuracy of an AI’s response depend on these two factors.

What are the quality of the prompt and the quality of the training data?

500

Mastering this skill allows users to interact more effectively with AI, improving accuracy, efficiency, and ethical considerations in AI-generated content.

What is prompting (or prompt engineering)?
