An agent which always acts in a way that maximizes its expected utility
Rational agent
PEAS stands for
Performance measure, environment, actuators, sensors
Search algorithm which uses a FIFO queue for the frontier; complete given a finite branching factor, and optimal when all step costs are equal
Breadth first search (BFS)
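A minimal BFS sketch in Python (the adjacency-dict graph in the example is made up for illustration, not from the course material): the FIFO queue holds the frontier, and the first path that reaches the goal is the shallowest one.

    from collections import deque

    def bfs(graph, start, goal):
        """Breadth-first search over an adjacency-dict graph.
        Returns the shallowest path from start to goal, or None."""
        frontier = deque([[start]])        # FIFO queue of partial paths
        explored = {start}
        while frontier:
            path = frontier.popleft()
            node = path[-1]
            if node == goal:
                return path
            for neighbor in graph.get(node, []):
                if neighbor not in explored:
                    explored.add(neighbor)
                    frontier.append(path + [neighbor])
        return None

    # Example: bfs({"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}, "A", "D")
    # returns ["A", "B", "D"]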
Informed algorithm which only considers the estimated forward cost h(n)
Greedy best-first search
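A minimal greedy best-first sketch along the same lines; the heuristic h and the adjacency-dict graph are assumptions supplied by the caller. Because the priority queue is ordered only by h(n), the search is fast but neither complete in general nor optimal.

    import heapq

    def greedy_best_first(graph, start, goal, h):
        """Always expand the node with the lowest heuristic value h(n)."""
        frontier = [(h(start), start, [start])]     # priority queue keyed on h(n)
        explored = set()
        while frontier:
            _, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in explored:
                continue
            explored.add(node)
            for neighbor in graph.get(node, []):
                if neighbor not in explored:
                    heapq.heappush(frontier, (h(neighbor), neighbor, path + [neighbor]))
        return None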
Type of game in which an action that is good for one player is necessarily bad for the other; one player's gain is the other's loss
Zero-sum game
Period of time with reduced AI funding, and reduced research into AI
AI winter (1970s, late 80s, early 90s)
Knowledge about how the environment works. E.g. how actions will change the environment.
Transition model
The solution with the lowest path cost among all possible solutions (sequences of actions)
Optimal solution
Peak which is higher than each of its neighboring states, but not the highest point in the state space
Local maximum
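A minimal hill-climbing sketch showing how a search can stop at such a peak; the neighbors and value functions are placeholder assumptions.

    def hill_climb(start, neighbors, value):
        """Greedy ascent: move to the best neighbor until none improves.
        Stops at the first peak reached, which may only be a local maximum."""
        current = start
        while True:
            best = max(neighbors(current), key=value, default=current)
            if value(best) <= value(current):
                return current             # a peak, not necessarily the global maximum
            current = best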
Symbol used to represent the maximizing agent in minimax / alpha-beta pruning trees
Upward facing triangle
Test to determine if an AI can convince a human it is not an AI
Turing test
An environment in which the next state is completely determined by the current state and the action executed
Deterministic
Nodes that a search algorithm (e.g. BFS/DFS) has generated but not yet expanded
Frontier / open list
Algorithm inspired by biological evolution, which improves a population of candidate solutions through selection and mutation
Evolutionary algorithm
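A minimal evolutionary-algorithm sketch (selection plus mutation only, no crossover); the bit-string encoding, population size, and mutation rate are illustrative assumptions.

    import random

    def evolve(fitness, length=20, pop_size=30, generations=100, mutation_rate=0.05):
        """Keep the fitter half of the population each generation and refill it
        with mutated copies of the survivors."""
        population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            survivors = population[: pop_size // 2]
            children = [
                [1 - bit if random.random() < mutation_rate else bit for bit in parent]
                for parent in survivors
            ]
            population = survivors + children
        return max(population, key=fitness)

    # Example: evolve(fitness=sum) maximizes the number of 1s (the "one-max" toy problem)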
Space complexity for minimax
O(bm), where b is the branching factor and m is the maximum depth
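A minimal minimax sketch over an explicit game tree (the nested-list tree in the example is made up): the recursion only keeps a single root-to-leaf path, plus the children of each node on it, in memory at once, which is where the O(bm) space bound comes from.

    def minimax(node, maximizing):
        """node is either a number (a leaf utility) or a list of child nodes."""
        if not isinstance(node, list):       # leaf: return its utility
            return node
        values = [minimax(child, not maximizing) for child in node]
        return max(values) if maximizing else min(values)

    # Example: minimax([[3, 5], [2, 9]], maximizing=True) returns 3
    # (MAX picks the MIN node whose worst case, 3, is best)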
An algorithm is _______ if it is guaranteed to find a solution when one exists, and to report failure when none exists
Complete
Performance measure that assigns a numerical value to world states, allowing an agent to compare different (successor) states
Utility function
Search algorithm which is more memory efficient than BFS, yet maintains the same completeness and optimality guarantees under the same conditions
Iterative deepening search (IDS)
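A minimal iterative-deepening sketch built on depth-limited DFS; the adjacency-dict graph format is the same assumption as in the BFS sketch above.

    def depth_limited(graph, node, goal, limit, path):
        """Depth-first search that refuses to go below the depth limit."""
        if node == goal:
            return path
        if limit == 0:
            return None
        for neighbor in graph.get(node, []):
            if neighbor not in path:         # avoid cycles along the current path
                result = depth_limited(graph, neighbor, goal, limit - 1, path + [neighbor])
                if result is not None:
                    return result
        return None

    def iterative_deepening(graph, start, goal, max_depth=50):
        """Repeat depth-limited search with growing limits: DFS-like memory use,
        BFS-like shallowest-solution guarantee."""
        for limit in range(max_depth + 1):
            result = depth_limited(graph, start, goal, limit, [start])
            if result is not None:
                return result
        return None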
Local search algorithm which initially allows many random (even downhill) moves, then gradually reduces this randomness as a temperature parameter is lowered, settling on the best regions it has found
Simulated annealing search
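A minimal simulated-annealing sketch; the neighbor generator, value function, and cooling schedule are all placeholder assumptions. Early on (high temperature) even downhill moves are often accepted, and as the temperature decays the search settles around the best states it has found.

    import math
    import random

    def simulated_annealing(start, neighbor, value, t0=1.0, cooling=0.995, steps=10_000):
        """Accept a worse candidate with probability exp(delta / T); T decays each step."""
        current, best, t = start, start, t0
        for _ in range(steps):
            candidate = neighbor(current)
            delta = value(candidate) - value(current)
            if delta > 0 or random.random() < math.exp(delta / t):
                current = candidate
                if value(current) > value(best):
                    best = current
            t *= cooling
        return best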
Algorithm which can be interrupted at any point and return the best move found so far, useful under time/memory limitations
Anytime algorithm
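A minimal anytime-style sketch: it keeps refining random candidates and always holds a best-so-far answer that can be returned the moment the time budget runs out. The time budget and the random_state/improve/value helpers are assumptions, not part of the course material.

    import time

    def anytime_search(random_state, improve, value, time_budget=1.0):
        """Always hold a 'best so far' answer; stop when the budget is spent."""
        deadline = time.monotonic() + time_budget
        best = random_state()
        while time.monotonic() < deadline:
            candidate = improve(random_state())
            if value(candidate) > value(best):
                best = candidate
        return best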
A simple learning rule which updates the connection strength between neurons (the weights between nodes in a neural network) based on their joint activity
Hebbian learning - who actually read chapter 1? Here's your reminder to go read it!
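A minimal Hebbian-learning sketch ("neurons that fire together wire together"): each weight grows in proportion to the product of its pre- and post-synaptic activations. The learning rate and the toy patterns are assumptions.

    def hebbian_update(weights, pre, post, lr=0.1):
        """w[i][j] <- w[i][j] + lr * pre[i] * post[j] for every connection."""
        return [
            [w + lr * pre[i] * post[j] for j, w in enumerate(row)]
            for i, row in enumerate(weights)
        ]

    # Example (hypothetical 2x2 weight matrix):
    # w = [[0.0, 0.0], [0.0, 0.0]]
    # w = hebbian_update(w, pre=[1, 0], post=[1, 1])   # strengthens w[0][0] and w[0][1]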
Complete history of everything the agent has perceived (assuming memory)
Percept sequence
Set of all possible states of the environment reachable from the initial state
State space
Heuristic which never overestimates the cost to reach the goal
Admissible
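A small illustration: on a 4-connected grid with unit step costs (an assumed setting), Manhattan distance is admissible because the straight "city block" distance can never exceed the true remaining path cost.

    def manhattan(node, goal):
        """Admissible heuristic for 4-connected grids with unit step costs."""
        (x1, y1), (x2, y2) = node, goal
        return abs(x1 - x2) + abs(y1 - y2)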
Adversarial search algorithm which uses random rollouts (simulated playouts) in place of a heuristic evaluation function
Monte Carlo tree search
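A minimal sketch of the rollout idea behind MCTS, written as flat Monte Carlo (it omits the tree-building and UCB selection of full MCTS): a state is scored by the average outcome of random playouts instead of a hand-written evaluation function. The game interface (legal_moves, apply, is_terminal, result) is a placeholder assumption, not a real library API.

    import random

    def rollout_value(state, game, n_rollouts=100):
        """Estimate a state's value as the average result of random playouts."""
        total = 0.0
        for _ in range(n_rollouts):
            s = state
            while not game.is_terminal(s):
                s = game.apply(s, random.choice(game.legal_moves(s)))
            total += game.result(s)    # assumed to score from the root player's perspective
        return total / n_rollouts

    def best_move(state, game, n_rollouts=100):
        """Pick the move whose resulting state has the highest rollout estimate."""
        return max(
            game.legal_moves(state),
            key=lambda m: rollout_value(game.apply(state, m), game, n_rollouts),
        )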