AI Basics & Models
Data & Infrastructure
Building & Running AI
Governance, Risk & Responsible AI
Use Cases
Assistants, Governance & Data
100

This is the broad field of making computers perform tasks that typically require human intelligence, like perception, reasoning, or decision‑making.

What is Artificial Intelligence (AI)

100

This is a central repository that stores raw, large‑volume data from many sources in its original format—structured and unstructured—for analytics and AI.

What is a data lake

100

This is the act of crafting the instructions or input we give a generative model to get a desired response.

What is a prompt

100

This happens when an AI model confidently returns an incorrect or fabricated answer, especially in generative systems.

What is an AI hallucination

100

In business AI, this describes a defined business problem and value statement, such as "reduce call center handle time by 10% using an AI assistant."

What is a use case

100

This term describes an AI tool embedded in an application—like email or CRM—to help users draft content, summarize, or automate tasks.

What is a copilot or AI assistant

200

This subset of AI uses data and algorithms so that systems can learn patterns and improve performance on tasks without being explicitly programmed.

What is Machine Learning

200

This newer architecture combines the flexibility of a data lake with the management and performance features of a data warehouse.

What is a data lakehouse

200

This discipline focuses on designing, iterating, and optimizing prompts (and sometimes tools/context) to improve AI output quality.

What is prompt engineering

200

This refers to methods like better retrieval, tighter prompts, and validation checks that reduce incorrect or made‑up outputs from generative models.

What is hallucination mitigation
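
One of the mitigations named above, validation checks, can be sketched as a toy groundedness test. The word-overlap heuristic and threshold below are illustrative assumptions, not a production method:

```python
# Hypothetical post-generation check: flag an answer whose words are mostly
# absent from the retrieved context (one simple mitigation among several).
def is_grounded(answer, context, threshold=0.5):
    """Return True if enough of the answer's words appear in the context."""
    ctx_words = set(context.lower().split())
    ans_words = answer.lower().split()
    if not ans_words:
        return True
    overlap = sum(1 for w in ans_words if w in ctx_words)
    return overlap / len(ans_words) >= threshold

context = "The order ships in 3 days from our warehouse."
print(is_grounded("The order ships in 3 days.", context))           # True
print(is_grounded("Your refund was approved yesterday.", context))  # False
```

A real pipeline would pair checks like this with better retrieval and tighter prompts, as the clue notes.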

200

This small‑scale experiment is designed to prove that a concept is technically feasible, often before investing heavily.

What is a POC (Proof of Concept)

200

These are numeric representations of text, images, or other objects in a high‑dimensional space, enabling similarity search and powering many RAG systems.

What are embeddings
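
The similarity search that embeddings enable can be sketched with cosine similarity over toy vectors. The 3‑dimensional values below are made up for illustration; real embeddings have hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors: near 1.0 = very similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" where nearby concepts get nearby vectors.
king = [0.9, 0.8, 0.1]
queen = [0.88, 0.82, 0.12]
banana = [0.1, 0.2, 0.95]

print(cosine_similarity(king, queen))   # close to 1.0
print(cosine_similarity(king, banana))  # much lower
```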

300

This type of AI focuses on creating new content—like text, code, images, or audio—rather than just classifying or predicting outcomes.

What is Generative AI

300

This term describes the end‑to‑end system that turns raw data into AI‑powered outcomes, usually including data pipelines, feature stores, models, and deployment.

What is an AI factory

300

This approach augments generative models by first searching external data sources and then feeding the retrieved context into the model along with the user’s request.

What is RAG (Retrieval‑Augmented Generation)
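
The retrieve-then-generate flow described above can be sketched in a few lines. The document store and keyword-overlap retriever below are illustrative stand-ins; real systems use embeddings and a vector store:

```python
# Hypothetical minimal RAG flow: retrieve relevant documents, then build an
# augmented prompt that is sent to the generative model.
DOCUMENTS = [
    "The support hotline is open 9am-5pm on weekdays.",
    "Refunds are processed within 14 business days.",
    "Our headquarters is located in Toronto.",
]

def retrieve(query, docs, top_k=1):
    """Rank documents by shared words with the query (stand-in for vector search)."""
    q_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query):
    """Feed the retrieved context into the model along with the user's request."""
    context = "\n".join(retrieve(query, DOCUMENTS))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("When are refunds processed?"))
```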

300

These are rules, constraints, and safety controls—such as content filters, policy checks, or tool limits—that define what an AI system is allowed to do or say.

What are guardrails
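
A single guardrail, a pre-response content filter, might look like this sketch. The blocked-topic list and refusal message are hypothetical; real systems layer many such controls:

```python
# Illustrative guardrail: block outputs that touch disallowed topics.
BLOCKED_TOPICS = {"password", "ssn"}

def apply_guardrail(response):
    """Return the response, or a refusal if it touches a blocked topic."""
    if any(topic in response.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't share that information."
    return response

print(apply_guardrail("Your balance is $120."))           # passes through
print(apply_guardrail("The admin password is hunter2."))  # refused
```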

300

This limited rollout puts a solution into a real environment with real users to validate performance, adoption, and business value at small scale.

What is a pilot

300

These policies, roles, and controls define how data is managed, secured, and used appropriately across its lifecycle.

What is data governance

400

This type of model is trained mostly on text and can understand and generate human‑like language at scale, including powering many copilots and chatbots.

What is a Large Language Model (LLM)

400

This kind of database stores numerical representations of text, images, or other objects to support similarity search for things like RAG or recommendation systems.

What is a vector database or vector store

400

These operational practices apply DevOps‑like principles to the lifecycle of machine learning models and AI systems, including monitoring, deployment, and updates.

What are MLOps or AI Ops

400

These practices and policies ensure AI is fair, safe, transparent, and aligned with human values, and that risks like bias and misuse are proactively managed.

What is Responsible AI or Trustworthy AI

400

Over time, this phenomenon occurs when a model’s performance degrades because the real‑world data it sees changes compared to the data it was trained on.

What is model drift
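
A simple drift alarm can be sketched by comparing live data statistics against training-time statistics. The mean-shift rule and tolerance below are illustrative assumptions; production monitoring uses richer statistical tests:

```python
import statistics

def mean_shift_alert(train_values, live_values, tolerance=2.0):
    """Alert if the live mean drifts beyond `tolerance` training std devs."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    live_mu = statistics.mean(live_values)
    return abs(live_mu - mu) > tolerance * sigma

train = [10.0, 11.0, 9.5, 10.5, 10.2]
print(mean_shift_alert(train, [10.1, 10.4, 9.9]))   # False: similar data
print(mean_shift_alert(train, [25.0, 26.5, 24.2]))  # True: distribution moved
```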

400

This broader discipline ensures that AI initiatives follow organizational policies, regulations, and ethical guidelines, often using committees and formal processes.

What is AI governance

500

In an AI workflow, this phase is where the model is used to make predictions or generate outputs on new data, as opposed to learning from training data.

What is inference

500

This is the process of moving and transforming data from sources into usable formats for analytics and AI—often including ingestion, cleaning, and feature creation.

What is a data pipeline
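
The ingestion, cleaning, and feature-creation stages named above can be sketched as plain functions. The records and the `over_30` feature are made-up examples:

```python
# Hypothetical three-stage pipeline: ingest raw records, clean them, then
# derive a feature for downstream analytics or model training.
raw_records = [
    {"name": " Alice ", "age": "34"},
    {"name": "Bob", "age": None},   # missing value, dropped during cleaning
    {"name": "carol", "age": "29"},
]

def clean(records):
    """Drop incomplete rows and normalize fields."""
    return [
        {"name": r["name"].strip().title(), "age": int(r["age"])}
        for r in records
        if r["age"] is not None
    ]

def add_features(records):
    """Feature creation: flag customers over 30."""
    return [dict(r, over_30=r["age"] > 30) for r in records]

processed = add_features(clean(raw_records))
print(processed)
```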

500

These emerging AI systems can break tasks into multiple steps and call tools or APIs—such as databases or business systems—instead of only answering questions.

What is agentic AI or agentic RAG


500

These legal and policy concepts govern where data is stored and processed, often requiring that certain data remain in specific countries or regions.

What are data residency and data sovereignty


500

These related concepts describe how well humans can understand why a model made a particular prediction or generated a particular output.

What are explainability and interpretability

500

This training‑time vs. deployment‑time distinction refers to updating model parameters using data vs. using a fixed model to answer queries in production.

What are training vs. inference
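
The distinction can be sketched with a one-parameter model: training updates the parameter from data, while inference applies the frozen parameter to new inputs. The tiny gradient-descent setup is illustrative only:

```python
# Minimal sketch of training vs. inference for a model y = w * x.
def train(xs, ys, lr=0.01, steps=200):
    """Training: repeatedly update the parameter w to fit the data."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

def infer(w, x):
    """Inference: apply the fixed, learned parameter to new input."""
    return w * x

w = train([1, 2, 3, 4], [2, 4, 6, 8])  # underlying rule: y = 2x
print(round(infer(w, 10)))  # -> 20
```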
