What does LLM stand for?
Large Language Model
What is LangChain?
A framework for building LLM-powered applications with chains, agents, memory, and retrieval
What is Duolingo Max?
An LLM-powered feature providing roleplay and detailed feedback
What is advanced prompting?
Engineering inputs to achieve desired LLM behavior
What is BLEU?
A precision-oriented n-gram overlap metric that compares generated text against reference text
Name an LLM-powered application
Sentiment analysis
What does RAG stand for?
Retrieval-Augmented Generation
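A minimal sketch of the RAG flow, assuming a toy word-overlap retriever and a placeholder call_llm() function in place of a real vector search and model API:

```python
# Minimal RAG sketch: retrieve relevant context, then generate with it in the prompt.
# The corpus, the word-overlap scoring, and call_llm() are illustrative assumptions,
# not a specific library's API.

def retrieve(query, corpus, k=2):
    """Rank documents by naive word overlap with the query (stand-in for vector search)."""
    q_words = set(query.lower().split())
    ranked = sorted(corpus, key=lambda doc: len(q_words & set(doc.lower().split())), reverse=True)
    return ranked[:k]

def call_llm(prompt):
    """Placeholder for any chat/completions API call."""
    return f"[model response to a {len(prompt)}-character prompt]"

def answer_with_rag(query, corpus):
    context = "\n".join(retrieve(query, corpus))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

corpus = [
    "Retrieval-Augmented Generation grounds answers in retrieved documents.",
    "BLEU is a precision-oriented n-gram metric.",
]
print(answer_with_rag("How does retrieval help generation?", corpus))
```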
How does Stripe use LLMs?
Summarizes customer websites to tailor support
What is the purpose of an output parser?
Structures LLM outputs into organized formats
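A minimal sketch of an output parser, assuming the model was asked to reply with a JSON object containing a hypothetical "sentiment" field:

```python
# Minimal output-parser sketch: coerce a free-form LLM reply into structured data.
import json
import re

def parse_llm_json(raw_reply: str) -> dict:
    """Extract the first JSON object from the reply and check a required key."""
    match = re.search(r"\{.*\}", raw_reply, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in LLM output")
    data = json.loads(match.group(0))
    if "sentiment" not in data:  # hypothetical required field for this example
        raise ValueError("missing 'sentiment' key")
    return data

reply = 'Sure, here is the result: {"sentiment": "positive", "score": 0.92}'
print(parse_llm_json(reply))  # {'sentiment': 'positive', 'score': 0.92}
```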
What is ROUGE?
A recall-oriented n-gram overlap metric, commonly used to compare generated summaries against references
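A minimal sketch of ROUGE-1 recall (unigram overlap only, single reference; real ROUGE implementations add stemming and ROUGE-2/ROUGE-L variants):

```python
# Minimal ROUGE-1 recall sketch: fraction of reference unigrams that also appear
# in the generated text.
from collections import Counter

def rouge1_recall(generated: str, reference: str) -> float:
    gen_counts = Counter(generated.lower().split())
    ref_counts = Counter(reference.lower().split())
    overlap = sum(min(gen_counts[word], count) for word, count in ref_counts.items())
    return overlap / sum(ref_counts.values())

print(rouge1_recall("the cat sat on the mat", "the cat lay on the mat"))  # 5/6 ≈ 0.83
```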
What architecture underpins GPT-4?
Transformer architecture
What is fine-tuning?
Customizing a model for specific tasks
What is few-shot prompting?
Providing examples to help an LLM understand tasks
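A minimal few-shot prompt sketch; the reviews and labels are made up for illustration:

```python
# Few-shot prompting: worked examples precede the new input so the model can
# infer the task and the expected output format.
few_shot_prompt = """Classify the sentiment of each review as positive or negative.

Review: "The battery lasts all day." -> positive
Review: "It broke after one week." -> negative
Review: "Setup was effortless and fast." ->"""

# few_shot_prompt would be sent as-is to any completion or chat endpoint.
print(few_shot_prompt)
```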
What is prefix tuning?
A parameter-efficient tuning method that prepends trainable prefix vectors to the input while keeping the model's weights frozen
Why is latency important in LLMs?
Affects user experience during real-time interactions
Why is memory integration important in LLMs?
To maintain conversational context
What is defensive UX?
Designing interfaces that anticipate model errors and handle ambiguous user inputs gracefully
What is chain-of-thought prompting?
Encouraging LLMs to reason step-by-step
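A minimal chain-of-thought prompt sketch; the word problem and instruction wording are illustrative:

```python
# Chain-of-thought prompting: the instruction asks the model to show its
# intermediate reasoning before stating the final answer.
cot_prompt = (
    "A cafe sold 14 coffees in the morning and twice as many in the afternoon. "
    "How many coffees were sold in total?\n"
    "Let's think step by step, then give the final answer on its own line."
)
print(cot_prompt)
```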
What is LangChain’s role in memory?
Facilitates state management for conversational systems
What is G-Eval?
An evaluation framework that uses an LLM with chain-of-thought prompting to score generated outputs
Name one type of memory used in LangChain.
ConversationBufferMemory
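A minimal usage sketch, assuming the classic langchain.memory API (newer LangChain releases steer toward other state-management approaches):

```python
# ConversationBufferMemory keeps the raw dialogue turns so they can be injected
# into later prompts to preserve context.
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context({"input": "Hi, I'm Sam."}, {"output": "Hello Sam, how can I help?"})
memory.save_context({"input": "What's my name?"}, {"output": "You told me it's Sam."})

print(memory.load_memory_variables({})["history"])
```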
How does Google’s People + AI Guidebook enhance UX?
Provides design patterns for human-AI interactions
What is an ethical challenge of LLMs?
Managing bias in outputs
What are vector stores in LangChain?
Databases that store embeddings and support similarity search for retrieval
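A minimal sketch of what a vector store does, using a toy letter-frequency "embedding" in place of a real embedding model:

```python
# Minimal vector-store sketch: index (embedding, document) pairs and return the
# document most similar to the query embedding.
import math

def embed(text: str) -> list[float]:
    """Toy 'embedding': letter-frequency vector over a-z (illustrative only)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

docs = [
    "Vector stores hold embeddings for similarity search.",
    "BLEU is a precision-oriented n-gram metric.",
]
index = [(embed(d), d) for d in docs]                # indexing step
query_vec = embed("Where are embeddings searched?")  # query step
best = max(index, key=lambda item: cosine(item[0], query_vec))
print(best[1])
```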
What is the purpose of fine-tuning evaluation?
To measure task-specific improvements in LLM outputs