Neural Networks
Data Prep
Miscellaneous
Vocabulary
Other Models
100

This is the basic building block of a neural network, inspired by biological neurons.

What is a perceptron (or artificial neuron)?
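
For reference, a minimal NumPy sketch of a single perceptron, a weighted sum of inputs plus a bias passed through a step activation (the weights and inputs below are arbitrary):

```python
import numpy as np

def perceptron(x, w, b):
    # Weighted sum of inputs plus bias, then a step activation:
    # fire (1) if the sum is positive, otherwise stay off (0).
    return int(np.dot(w, x) + b > 0)

x = np.array([0.5, -1.0])   # example inputs (made up)
w = np.array([0.8, 0.3])    # example weights (made up)
b = 0.1
print(perceptron(x, w, b))  # -> 1, since 0.8*0.5 + 0.3*(-1.0) + 0.1 > 0
```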

100

This technique transforms features to have a mean of 0 and standard deviation of 1, helping models converge faster.

What is Standardization (or Z-score Normalization)?
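
As a quick NumPy sketch (the feature values are made up), standardization subtracts each feature's mean and divides by its standard deviation:

```python
import numpy as np

# Toy feature column.
x = np.array([2.0, 4.0, 6.0, 8.0])

# Subtract the mean and divide by the standard deviation,
# so the result has mean 0 and standard deviation 1.
z = (x - x.mean()) / x.std()
print(z.mean(), z.std())  # ~0.0 and 1.0
```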

100

This type of learning uses labeled data with input-output pairs.

What is Supervised Learning?

100

This is the process by which a neural network adjusts its weights based on the error of its predictions.

What is training (or learning)?

100

This supervised learning algorithm predicts continuous values by fitting a straight line to the data.

What is linear regression?
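
A minimal sketch using scikit-learn's LinearRegression on made-up 1-D data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data roughly following y = 2x + 1, with a little noise.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([3.1, 4.9, 7.2, 8.8])

# Fit a straight line and inspect its slope and intercept.
model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)  # slope and intercept of the fitted line
print(model.predict([[5.0]]))         # continuous prediction for x = 5
```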

200

This activation function is commonly used in hidden layers and helps prevent the vanishing gradient problem.

What is ReLU (Rectified Linear Unit)?
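
ReLU itself is just max(0, x); a quick NumPy sketch:

```python
import numpy as np

def relu(x):
    # Pass positive values through unchanged, clamp negatives to 0.
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5])))  # [0.  0.  0.  1.5]
```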

200

This method converts categorical variables into binary vectors where only one element is 1 and the rest are 0.

What is One-Hot Encoding?
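
A small sketch with scikit-learn's OneHotEncoder (the category values are made up):

```python
from sklearn.preprocessing import OneHotEncoder

# Toy categorical column with three distinct values.
colors = [["red"], ["green"], ["blue"], ["green"]]

# Each row becomes a binary vector with exactly one 1.
encoder = OneHotEncoder()
print(encoder.fit_transform(colors).toarray())
# categories are sorted: blue -> [1 0 0], green -> [0 1 0], red -> [0 0 1]
```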

200

This metric measures the proportion of correct predictions made by a model, calculated as correct predictions divided by total predictions.

What is Accuracy?
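
A worked example with made-up labels, accuracy = correct predictions / total predictions:

```python
# Made-up true labels and model predictions.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]

# Count matches and divide by the total number of predictions.
correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)
print(accuracy)  # 5 correct out of 6 -> 0.833...
```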

200

This measure of a model's ability to generalize is calculated by evaluating performance on data the model has never seen during training.

What is validation/test accuracy?

200

This classification algorithm uses a sigmoid function to predict probabilities between 0 and 1.

What is logistic regression?
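
A minimal sketch of the sigmoid at the heart of logistic regression, plus a scikit-learn fit on toy, made-up data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sigmoid(z):
    # Squashes any real number into a probability between 0 and 1.
    return 1 / (1 + np.exp(-z))

# Toy binary classification data: one feature, labels 0/1.
X = np.array([[0.5], [1.0], [1.5], [3.0], [3.5], [4.0]])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[2.0]]))  # [P(class 0), P(class 1)]
```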

300

This technique randomly drops units during training to prevent overfitting and improve generalization.

What is Dropout?
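
A minimal NumPy sketch of inverted dropout during training, assuming a drop probability of 0.5:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p_drop=0.5):
    # Randomly zero units with probability p_drop, then rescale survivors
    # by 1 / (1 - p_drop) so the expected activation stays the same.
    # At inference time dropout is simply skipped.
    mask = rng.random(activations.shape) >= p_drop
    return activations * mask / (1.0 - p_drop)

a = np.array([0.2, 0.9, 0.4, 0.7])
print(dropout(a))  # roughly half the units are zeroed on this pass
```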

300

This technique creates synthetic training examples by slightly modifying existing data, commonly used in computer vision with rotations and flips.

What is Data Augmentation?
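
A tiny NumPy sketch of image-style augmentation via flips and a 90-degree rotation (the "image" is just a made-up 2x2 array):

```python
import numpy as np

# A made-up tiny grayscale "image".
image = np.array([[1, 2],
                  [3, 4]])

# Create extra training examples by flipping and rotating the original.
augmented = [
    np.fliplr(image),  # horizontal flip
    np.flipud(image),  # vertical flip
    np.rot90(image),   # 90-degree rotation
]
for aug in augmented:
    print(aug)
```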

300

This process divides a dataset into training, validation, and test sets to evaluate model performance.

What is Train-Test Split (or Data Splitting)?
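
A common sketch with scikit-learn's train_test_split (the 80/20 ratio and toy data are arbitrary); a validation set can be carved out of the training portion with a second split:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Made-up dataset: 10 samples, 1 feature, binary labels.
X = np.arange(10).reshape(-1, 1)
y = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])

# Hold out 20% of the data for final testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
print(len(X_train), len(X_test))  # 8 2
```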

300

This term refers to one complete pass through the entire training dataset during the training process.

What is an epoch?

300

This tree-based model makes predictions by splitting data based on feature values, creating a flowchart-like structure of decisions.

What is a decision tree?
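
A minimal scikit-learn sketch on made-up data:

```python
from sklearn.tree import DecisionTreeClassifier

# Toy data: one feature, label is 1 when the feature is "large".
X = [[1], [2], [3], [10], [11], [12]]
y = [0, 0, 0, 1, 1, 1]

# The tree learns threshold splits on the feature values.
tree = DecisionTreeClassifier().fit(X, y)
print(tree.predict([[2.5], [10.5]]))  # [0 1]
```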

400

This algorithm propagates the error backwards through the network, adjusting weights to minimize loss.

What is Backpropagation?
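
A hand-worked sketch for a single linear neuron with squared-error loss, showing the error flowing back to the weight via the chain rule (all numbers are made up):

```python
# One training example for a single linear neuron: y_hat = w * x.
x, y_true = 2.0, 10.0
w = 3.0   # current weight
lr = 0.1  # learning rate

y_hat = w * x                 # forward pass: prediction (6.0)
loss = (y_hat - y_true) ** 2  # squared-error loss (16.0)

# Backward pass: chain rule gives dloss/dw = 2 * (y_hat - y_true) * x.
grad_w = 2 * (y_hat - y_true) * x  # -16.0

# Update the weight in the direction that reduces the loss.
w = w - lr * grad_w                # 3.0 - 0.1 * (-16.0) = 4.6
print(w)
```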

400

This technique uses statistical methods to replace missing values in a dataset, often using mean, median, or mode.

What is Imputation?
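
A sketch using scikit-learn's SimpleImputer with the mean strategy (the NaNs and values are made up):

```python
import numpy as np
from sklearn.impute import SimpleImputer

# Toy feature matrix with missing values marked as NaN.
X = np.array([[1.0, np.nan],
              [2.0, 4.0],
              [np.nan, 6.0]])

# Replace each NaN with the mean of its column
# ("median" and "most_frequent" are alternative strategies).
imputer = SimpleImputer(strategy="mean")
print(imputer.fit_transform(X))
# column means are 1.5 and 5.0, so the NaNs become 1.5 and 5.0
```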

400

This phenomenon occurs when a model learns the training data too well, including its noise, leading to poor performance on unseen data.

What is Overfitting?

400

This hyperparameter controls how much the model's weights are adjusted with respect to the loss gradient during training.

What is learning rate?
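
A worked gradient-descent step showing how the learning rate scales the weight update (the numbers are arbitrary):

```python
# A single gradient-descent update: new_w = w - learning_rate * gradient.
w, grad = 0.5, 2.0

for lr in (0.01, 0.1, 1.0):
    print(lr, w - lr * grad)
# 0.01 -> 0.48   (small, cautious step)
# 0.1  -> 0.3    (moderate step)
# 1.0  -> -1.5   (large step, can overshoot the minimum)
```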

400

This ensemble method combines multiple decision trees and uses voting to make final predictions, reducing overfitting.

What is a random forest?
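
A minimal scikit-learn sketch on the same kind of toy data; n_estimators is the number of trees whose votes are combined:

```python
from sklearn.ensemble import RandomForestClassifier

# Toy data: one feature, label is 1 when the feature is "large".
X = [[1], [2], [3], [10], [11], [12]]
y = [0, 0, 0, 1, 1, 1]

# Train 100 trees on bootstrap samples; the final class is a majority vote.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(forest.predict([[2.5], [10.5]]))  # [0 1]
```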

500

These specialized neural networks use convolutional layers with filters to automatically learn hierarchical features from spatial data like images.

What are Convolutional Neural Networks (CNNs)?
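
A minimal NumPy sketch of the core operation, sliding a small filter over an image (the image and the simple vertical-edge filter are made up); real CNNs stack many such filters in learnable layers:

```python
import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over the image and take an elementwise
    # product-sum at each position (valid convolution, no padding).
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# Made-up 4x4 "image" and a simple vertical-edge filter.
image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])
print(conv2d(image, kernel))
```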

500

This advanced technique addresses class imbalance by generating synthetic training samples for the under-represented class, interpolating between existing data points and their nearest neighbors.

What is SMOTE (Synthetic Minority Over-sampling Technique)?
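
A minimal NumPy sketch of the core idea, interpolating between a minority-class point and one of its neighbors to create a synthetic sample (the points are made up; in practice the imbalanced-learn library's SMOTE implementation is typically used):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two made-up minority-class points that are neighbors in feature space.
sample = np.array([1.0, 2.0])
neighbor = np.array([3.0, 4.0])

# Core step: pick a random point on the line segment between them.
alpha = rng.random()                             # in [0, 1)
synthetic = sample + alpha * (neighbor - sample)
print(synthetic)                                 # new synthetic minority sample
```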

500

This evaluation metric combines precision and recall into a single score, particularly useful when dealing with imbalanced datasets.

What is F1-score?
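
A worked sketch computing precision, recall, and their harmonic mean on made-up labels; scikit-learn's f1_score gives the same result:

```python
# Made-up binary labels for an imbalanced problem.
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # 2
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # 1
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # 1

precision = tp / (tp + fp)  # 2/3
recall = tp / (tp + fn)     # 2/3
f1 = 2 * precision * recall / (precision + recall)
print(f1)                   # harmonic mean of precision and recall: 0.666...
```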

500

This attention mechanism, at the core of the Transformer architecture, allows the model to weigh the importance of different parts of the input sequence.

What is self-attention (or attention mechanism)?
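
A minimal NumPy sketch of scaled dot-product attention, softmax(QK^T / sqrt(d)) V, on made-up query/key/value matrices (single head, no learned projections):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(Q, K, V):
    # Scaled dot-product attention: each position attends to every position,
    # weighted by how well its query matches the others' keys.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # query-key similarities
    weights = softmax(scores, axis=-1)  # attention weights sum to 1 per row
    return weights @ V                  # weighted sum of values

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))  # 3 tokens, dimension 4 (made up)
print(self_attention(X, X, X))   # attention output, shape (3, 4)
```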

500

This DeepMind AI system defeated world champion Lee Sedol at the ancient board game Go in 2016, using deep reinforcement learning and Monte Carlo tree search.

What is AlphaGo?