Application
Orchestration
Data
Model Development
Infrastructure
100

This is the part of the AI stack that users actually interact with, often taking the form of a chatbot, a dashboard, or a mobile app

User Interface (UI)

100

This process determines the order in which different AI tasks and tools are executed in a workflow

Task Sequencing 

100

Text, images, audio, video, and numbers are all examples of this core resource used by AI systems

Data

100

This is the process where an AI model learns patterns from data so it can make predictions

Training

100

These machines provide the computing power needed to run AI models and applications

Servers

200

Often called the "bridge" between layers, this acronym refers to the set of rules that allow the Application Layer to send a request to a Large Language Model (LLM)

API (Application Programming Interface)
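
A minimal sketch of what an application layer sends across that bridge. The endpoint, model name, and field names here are illustrative assumptions, not any one provider's real API; real services define their own schemas and authentication.

```python
import json

def build_chat_request(user_message: str) -> str:
    # Illustrative payload shape; real providers define their own
    # endpoints, field names, and authentication schemes.
    payload = {
        "model": "example-model",  # hypothetical model name
        "messages": [
            {"role": "user", "content": user_message},
        ],
    }
    return json.dumps(payload)

request_body = build_chat_request("Summarize this report.")
```

The application never touches model weights directly; it only speaks this agreed-upon request format, which is what makes the API a "bridge."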

200

When one AI system automatically decides which model, API, or tool to call next, it is doing this

Workflow Routing

200

Before data is fed into an AI model, it often goes through this process to be cleaned, organized, and prepared

Data Processing
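
A toy example of what "cleaned, organized, and prepared" can mean in practice: stripping whitespace, dropping empty records, and normalizing case before data reaches a model. The records are made up for illustration.

```python
# Minimal cleaning step: strip whitespace, drop empty records,
# and normalize case before the data is fed to a model.
raw_records = ["  Alice ", "BOB", "", "  ", "carol"]

cleaned = [r.strip().lower() for r in raw_records if r.strip()]
# cleaned -> ['alice', 'bob', 'carol']
```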

200

This type of dataset is used to evaluate how well a trained model performs on unseen data

Test Set
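
A sketch of how a test set gets carved out, using only the standard library (real projects typically use a library utility such as scikit-learn's `train_test_split`): shuffle the data, then hold out a fraction the model never sees during training.

```python
import random

def train_test_split(data, test_fraction=0.2, seed=0):
    # Shuffle a copy, then hold out the last fraction as the test set.
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

train, test = train_test_split(list(range(10)))
```

Because the test examples are unseen during training, performance on them estimates how the model will behave on new data.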

200

These processors are commonly used in AI because they can handle many operations in parallel more efficiently than standard CPUs

GPUs

300

To make AI useful for a specific company, developers add this at the application layer to ensure the AI follows company policies, legal regulations, and safety standards

Business Logic
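
One small example of application-layer business logic: a policy check that runs before any request reaches the model. The banned terms and function name are illustrative assumptions, not a standard API.

```python
# Hypothetical policy gate the application layer might apply
# before forwarding user input to a model.
BANNED_TERMS = {"password", "ssn"}

def passes_policy(user_input: str) -> bool:
    lowered = user_input.lower()
    return not any(term in lowered for term in BANNED_TERMS)
```

Rules like this live above the model so a company can enforce its policies without retraining anything.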

300

This step in an AI workflow combines results from different tools or model calls into one final output

Aggregation 

300

When the training data is unfair, incomplete, or unbalanced, AI systems can develop this problem in their outputs

Bias

300

This metric measures the proportion of correct predictions made by a model out of all predictions

Accuracy
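
The definition in the clue translates directly into code: correct predictions divided by all predictions.

```python
def accuracy(predictions, labels):
    # Proportion of predictions that match the true labels.
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# 3 of 4 predictions match the labels -> 0.75
score = accuracy([1, 0, 1, 1], [1, 0, 0, 1])
```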

300

When companies run AI systems on remote servers instead of their own physical machines, they are using this

Cloud Computing

400

This term describes the invisible instructions embedded in the application layer that guide the AI's persona, tone, and constraints, often hidden from the end-user's view

System Prompt (system message)
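
A sketch of where the system prompt sits in a request, using the common chat-message convention of `system` and `user` roles; the prompt text itself is an invented example.

```python
# The system message is prepended to the conversation by the
# application, steering persona and constraints; the end user
# only ever types the "user" message.
SYSTEM_PROMPT = "You are a polite support assistant. Never share internal data."

def build_messages(user_input: str) -> list:
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

messages = build_messages("Where is my order?")
```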

400

If an AI assistant retrieves data, sends it to a model, and then triggers another tool based on the result, this overall coordination is an example of this

Pipeline Orchestration 
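
The retrieve-then-model-then-tool flow from the clue can be sketched as plain functions wired together by an orchestrator. All stage names and return values here are toy stand-ins.

```python
# Each stage is a plain function; the orchestrator chains their outputs.
def retrieve(query):
    return f"docs for {query}"

def call_model(context):
    return f"answer based on ({context})"

def trigger_tool(answer):
    return f"logged: {answer}"

def run_pipeline(query):
    context = retrieve(query)      # 1. fetch data
    answer = call_model(context)   # 2. send it to a model
    return trigger_tool(answer)    # 3. act on the result

result = run_pipeline("billing")
```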

400

This kind of data is organized into rows and columns, making it easier to store, search, and analyze than raw text or images

Structured Data
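
A small illustration of why rows and columns are easier to search than raw text: once data is tabular, a query is a one-liner. The CSV content is made up.

```python
import csv
import io

# Rows and columns make the same facts queryable in a way raw text is not.
raw = "name,age\nAlice,34\nBob,29\n"
rows = list(csv.DictReader(io.StringIO(raw)))

names_over_30 = [r["name"] for r in rows if int(r["age"]) > 30]
```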

400

When a model performs extremely well on training data but poorly on new data, it is experiencing this issue

Overfitting

400

This refers to the ability of a system to handle increasing workloads by adding more resources or machines

Scalability 

500

At the highest level of the application layer, these autonomous systems can not only generate text but also use "tools" to execute multi-step tasks like booking a flight or updating a CRM without human intervention

AI Agents

500

This orchestration concept focuses on connecting multiple tools or services so they act together as one larger automated process

System Integration

500

This principle refers to knowing where a dataset came from, how it was changed, and how it moved through a system over time

Data Lineage

500

This technique improves model performance by combining predictions from multiple models instead of relying on a single one

Ensemble Learning
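
A minimal sketch of one common ensemble technique, majority voting: each model predicts a label for every example, and the ensemble keeps the most common vote per example.

```python
from collections import Counter

def majority_vote(predictions_per_model):
    # Each inner list holds one model's predictions; the ensemble
    # output per example is the most common label across models.
    combined = []
    for votes in zip(*predictions_per_model):
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

model_a = ["cat", "dog", "cat"]
model_b = ["cat", "cat", "cat"]
model_c = ["dog", "dog", "cat"]
ensemble = majority_vote([model_a, model_b, model_c])
```

Other ensemble methods (bagging, boosting, stacking) combine models differently, but all share the idea of relying on several models instead of one.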

500

This container management platform is widely used to deploy, run, and scale AI applications across multiple servers

Kubernetes