Basics of LLMs
Integration in SaaS
Applications of LLMs
LLM Techniques
Evaluation
100

What does LLM stand for?

Large Language Model

100

What is LangChain?

An open-source framework for building applications that integrate LLMs

100

What is Duolingo Max?

An LLM-powered feature providing roleplay and detailed feedback

100

What is advanced prompting?

Engineering inputs to achieve desired LLM behavior

100

What is BLEU?

A precision-oriented n-gram overlap metric that scores generated text against reference text
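
As a quick illustration, BLEU can be computed with NLTK's sentence_bleu; the candidate and reference sentences below are invented for the example:

```python
# BLEU with NLTK. Smoothing avoids zero scores when higher-order
# n-grams have no overlap in short sentences.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "sat", "on", "the", "mat"]]  # list of reference token lists
candidate = ["the", "cat", "is", "on", "the", "mat"]     # tokenized model output

score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```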

200

Name an LLM-powered application

Sentiment analysis

200

What does RAG stand for?

Retrieval-Augmented Generation
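
A minimal sketch of the RAG loop: fetch the most relevant snippet, then ground the prompt in it. The word-overlap scoring and call_llm are stand-ins; a real system would use embeddings, a vector store, and an actual model call:

```python
# Toy RAG: retrieve the best-matching document, then build a grounded prompt.
docs = [
    "Refunds are processed within 5 business days.",
    "Pro plans include priority support.",
]

def retrieve(query, corpus):
    # Score each document by word overlap with the query (toy retrieval).
    words = set(query.lower().split())
    return max(corpus, key=lambda d: len(words & set(d.lower().split())))

def build_prompt(query):
    context = retrieve(query, docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long do refunds take?"))
# answer = call_llm(build_prompt(...))  # hypothetical model call
```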

200

How does Stripe use LLMs?

Summarizes customer websites to tailor support

200

What is the purpose of an output parser?

Structures LLM outputs into organized formats
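
A bare-bones output parser, assuming the model was asked to reply in JSON; the raw_output string is made up:

```python
# Pull the JSON payload out of chatty model text and decode it.
import json, re

def parse_output(raw):
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))

raw_output = 'Sure! Here is the result:\n{"sentiment": "positive", "score": 0.92}'
print(parse_output(raw_output))  # {'sentiment': 'positive', 'score': 0.92}
```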

200

What is ROUGE?

A recall-oriented n-gram overlap metric, often used to evaluate generated summaries against references
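
ROUGE-1 recall is simple enough to compute by hand (strings invented for illustration):

```python
# ROUGE-1 recall: fraction of reference unigrams the candidate recovers.
from collections import Counter

def rouge1_recall(candidate, reference):
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    return overlap / sum(ref.values())

print(rouge1_recall("the cat sat on the mat",
                    "a cat sat on the mat"))  # 5/6 ≈ 0.833
```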

300

What architecture underpins GPT-4?

Transformer architecture
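
The transformer's core operation is scaled dot-product attention; a NumPy sketch with random matrices standing in for learned projections:

```python
# Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # query-key similarities
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)  # softmax over keys
    return weights @ V                         # weighted mix of values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8)
```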

300

What is fine-tuning?

Customizing a model for specific tasks

300

What is few-shot prompting?

Providing examples to help an LLM understand tasks
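
A few-shot prompt in miniature; the reviews and labels are invented:

```python
# Worked examples teach the task format in-context.
examples = [
    ("The update broke my workflow.", "negative"),
    ("Love the new dashboard!", "positive"),
]

def few_shot_prompt(text):
    shots = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
    return f"{shots}\nReview: {text}\nSentiment:"

print(few_shot_prompt("Setup took five minutes, flawless."))
```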

300

What is prefix tuning?

Adapting a frozen LLM to a task by training continuous prefix vectors prepended to its inputs
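
A sketch using Hugging Face's PEFT library, which implements prefix tuning; it assumes transformers and peft are installed, and gpt2 is just an example base model:

```python
# Prefix tuning: train small prefix vectors while the base model stays frozen.
from transformers import AutoModelForCausalLM
from peft import PrefixTuningConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")
config = PrefixTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the prefix parameters are trainable
```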

300

Why is latency important in LLMs?

Affects user experience during real-time interactions

400

Why is memory integration important in LLMs?

To maintain conversational context

400

What is defensive UX?

Designing interfaces that anticipate and gracefully handle AI errors and ambiguous user inputs

400

What is chain-of-thought prompting?

Encouraging LLMs to reason step-by-step
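
A chain-of-thought prompt simply asks for the reasoning before the answer; the question is invented:

```python
# Elicit step-by-step reasoning before the final answer.
question = "A SaaS plan costs $12/month with a 25% annual discount. Yearly price?"
prompt = (
    f"{question}\n"
    "Let's think step by step, then give the final answer on its own line."
)
print(prompt)  # the model should reason 12 * 12 = 144, then 144 * 0.75 = $108
```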

400

What is LangChain’s role in memory?

Facilitates state management for conversational systems

400

What is G-Eval?

A framework that uses an LLM with chain-of-thought prompting to score generated text against evaluation criteria
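
A simplified G-Eval-style judging prompt; the wording is a sketch, not the exact template from the G-Eval paper:

```python
# An evaluator LLM rates an output against one criterion, reasoning first.
def geval_prompt(criterion, source, output):
    return (
        f"Evaluation criterion: {criterion}\n\n"
        f"Source:\n{source}\n\n"
        f"Generated output:\n{output}\n\n"
        "Walk through your reasoning step by step, then rate the output "
        "from 1 (poor) to 5 (excellent) on the criterion."
    )

print(geval_prompt("coherence", "full article text", "candidate summary"))
```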

500

Name one type of memory used in LangChain.

ConversationBufferMemory
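
Typical usage under the classic LangChain API (the exact import path varies across LangChain versions):

```python
# ConversationBufferMemory keeps the raw transcript and replays it as context.
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context({"input": "My name is Ada."},
                    {"output": "Nice to meet you, Ada."})
memory.save_context({"input": "What's my name?"},
                    {"output": "You said it's Ada."})
print(memory.load_memory_variables({})["history"])
```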

500

How does Google’s People + AI Guidebook enhance UX?

Provides design patterns for human-AI interactions

500

What is an ethical challenge of LLMs?

Managing bias in outputs

500

What are vector stores in LangChain?

Databases that store embeddings for similarity search over text
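
What a vector store does, in miniature: rank stored embeddings by cosine similarity to a query vector. The 3-d vectors are fake; a real store (e.g., FAISS or Chroma behind LangChain) holds model-generated embeddings:

```python
# Nearest-neighbor search over toy embeddings.
import numpy as np

embeddings = {
    "refund policy": np.array([0.9, 0.1, 0.0]),
    "pricing tiers": np.array([0.1, 0.8, 0.2]),
    "support hours": np.array([0.0, 0.2, 0.9]),
}

def search(query_vec, k=1):
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    ranked = sorted(embeddings, key=lambda t: cos(query_vec, embeddings[t]),
                    reverse=True)
    return ranked[:k]

print(search(np.array([0.8, 0.2, 0.1])))  # ['refund policy']
```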

500

What is the purpose of fine-tuning evaluation?

To measure task-specific improvements in LLM outputs
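
One way to measure that improvement is exact-match accuracy on a held-out set, scored for both the base and fine-tuned models; predict_stub is a hypothetical stand-in for a real inference call:

```python
# Exact-match accuracy over held-out (prompt, target) pairs.
def exact_match(predict, eval_set):
    hits = sum(predict(prompt) == target for prompt, target in eval_set)
    return hits / len(eval_set)

eval_set = [("Translate 'chat' from French:", "cat")]  # toy held-out example

def predict_stub(prompt):
    return "cat"  # stands in for a real model call

print(exact_match(predict_stub, eval_set))  # 1.0; compare base vs fine-tuned
```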