Latency
Linguistic nuances
Architecture
Dataset
Processing power/ethics
100

What is latency in the context of machine learning models?

Latency is the time it takes for a machine learning model to process an input and provide an output.
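
As a minimal sketch of how that input-to-output time can be measured, the snippet below wraps a call in a high-resolution timer; `model_predict` is a hypothetical stand-in for any trained model's inference call.

```python
import time

def model_predict(text):
    # Hypothetical stand-in for a real model's inference call.
    return {"intent": "greeting", "confidence": 0.97}

start = time.perf_counter()            # high-resolution clock
output = model_predict("Hello there!")
latency_ms = (time.perf_counter() - start) * 1000

print(f"Output: {output}")
print(f"Latency: {latency_ms:.2f} ms")  # input-to-output time
```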

100

What are linguistic nuances in the context of natural language processing?

Linguistic nuances refer to the subtle differences in language, such as tone, context, idioms, and cultural expressions, that affect the meaning of text or speech.

100

What is the primary role of a chatbot's architecture?

A chatbot's architecture is responsible for processing user inputs, understanding their intent, and generating appropriate responses.

100

What is a dataset in the context of machine learning?

A dataset is a collection of data used to train, validate, and test machine learning models.

100

Name two hardware components commonly used to accelerate machine learning tasks.

GPUs (Graphics Processing Units) and TPUs (Tensor Processing Units).

200

How does optimising the "critical path" in a system lower latency?

Optimising the critical path lowers latency by minimising the number of dependent tasks and ensuring the shortest sequence of operations is used to produce the output.

200

Why is understanding linguistic nuances important for chatbots?

Understanding linguistic nuances allows chatbots to provide more accurate and contextually appropriate responses.

200

Name the three main components of a chatbot’s architecture.

  • Natural Language Understanding (NLU): Processes and interprets user input.
  • Dialogue Management: Decides how to respond based on the user’s intent.
  • Natural Language Generation (NLG): Produces the chatbot’s response in natural language.
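
A toy sketch of how these three components might fit together; the keyword matching and canned responses are purely illustrative assumptions, not how a production chatbot works.

```python
def nlu(user_input):
    # Natural Language Understanding: map raw text to an intent (toy keyword matching).
    text = user_input.lower()
    if "hello" in text or "hi" in text:
        return "greet"
    if "bye" in text:
        return "farewell"
    return "unknown"

def dialogue_manager(intent):
    # Dialogue Management: decide which action fits the user's intent.
    actions = {"greet": "say_hello", "farewell": "say_goodbye"}
    return actions.get(intent, "ask_clarification")

def nlg(action):
    # Natural Language Generation: render the chosen action as text.
    responses = {
        "say_hello": "Hi! How can I help you?",
        "say_goodbye": "Goodbye!",
        "ask_clarification": "Sorry, could you rephrase that?",
    }
    return responses[action]

print(nlg(dialogue_manager(nlu("Hello there"))))  # -> "Hi! How can I help you?"
```
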
200

Name the three subsets into which a dataset is typically divided.

  • Training Set: Used to train the model.
  • Validation Set: Used to tune hyperparameters and prevent overfitting.
  • Test Set: Used to evaluate the final performance of the model.
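
A minimal sketch of such a split, assuming scikit-learn is available and using the conventional (but not mandatory) 80/10/10 ratios:

```python
from sklearn.model_selection import train_test_split

# Toy dataset: 100 (text, label) pairs.
data = [(f"example utterance {i}", i % 2) for i in range(100)]

# First carve out 20% for validation + test, then split that half and half,
# giving an 80/10/10 split overall.
train, holdout = train_test_split(data, test_size=0.2, random_state=42)
val, test = train_test_split(holdout, test_size=0.5, random_state=42)

print(len(train), len(val), len(test))  # 80 10 10
```
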
200

Why are TPUs considered more efficient than GPUs for certain machine learning tasks?

TPUs are specifically optimized for tensor operations, making them faster and more energy-efficient for tasks like training large neural networks.

300

What role does hardware acceleration, such as GPUs or TPUs, play in reducing latency?

Hardware acceleration speeds up tasks like forward and backward passes, enabling faster computation of neural network operations and reducing overall latency.
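
As a hedged sketch of what this looks like in practice, assuming PyTorch is installed, the snippet below moves a toy model and its inputs onto a GPU when one is available:

```python
import torch
import torch.nn as nn

# Pick the fastest available device; fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Linear(128, 10).to(device)    # toy model, moved to the accelerator
x = torch.randn(32, 128, device=device)  # batch of 32 inputs on the same device

with torch.no_grad():                    # inference only: skip gradient tracking
    y = model(x)                         # forward pass runs on the accelerator
print(y.shape, y.device)
```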

300

How can a machine learning model handle ambiguous phrases caused by linguistic nuances?

Models can handle ambiguity by leveraging contextual embeddings, such as those generated by transformers (e.g., BERT), which analyze the surrounding words to determine the intended meaning.
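
A minimal sketch of retrieving such contextual embeddings, assuming the Hugging Face transformers library and PyTorch are installed; the same surface word ("bank") gets a different vector in each sentence:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# "bank" is ambiguous; its embedding differs depending on the sentence.
sentences = ["I deposited money at the bank.",
             "We had a picnic on the river bank."]

with torch.no_grad():
    for s in sentences:
        inputs = tokenizer(s, return_tensors="pt")
        hidden = model(**inputs).last_hidden_state  # one vector per token
        # Each token's vector now reflects the whole sentence's context.
        print(s, hidden.shape)
```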

300

How does a transformer neural network improve a chatbot’s architecture compared to an RNN?

A transformer improves chatbot architecture by using the self-attention mechanism, which allows it to process all words in a sentence simultaneously, rather than one at a time as an RNN must.
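
A minimal NumPy sketch of the scaled dot-product attention at the heart of that mechanism; the shapes and random values are arbitrary illustrative choices:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project every token into query/key/value space, then let each
    # token attend to all tokens at once (no sequential recurrence).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # pairwise token similarities
    return softmax(scores) @ V               # context-weighted mix of values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                  # 5 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # (5, 8): one output per token
```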

300

Why is it important for a dataset to be domain-specific and balanced?

A domain-specific dataset ensures the data is relevant to the chatbot’s purpose, improving accuracy; a balanced dataset stops the model from skewing toward overrepresented intents and neglecting rare ones.

300

Why is ethics important in the development of AI chatbots?

Ethics ensures that chatbots are designed responsibly, avoiding harm, bias, or misuse.

400

How can we optimise algorithms to reduce latency in an NLU pipeline?

Optimise stages such as tokenisation and entity recognition (or any other appropriate response).
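
One illustrative optimisation, assuming repeated inputs are common (greetings, confirmations), is memoising a pipeline stage; the `tokenise` function here is a toy stand-in for a real, more expensive implementation:

```python
from functools import lru_cache

@lru_cache(maxsize=10_000)
def tokenise(text):
    # Toy tokeniser; in a real pipeline this stage is far more expensive,
    # so caching repeated inputs saves real latency.
    return tuple(text.lower().split())

tokenise("Hello there")       # computed once...
tokenise("Hello there")       # ...then served from the cache
print(tokenise.cache_info())  # hits=1, misses=1
```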

400

What role does domain-specific training data play in capturing linguistic nuances?

Domain-specific training data ensures the model learns context-relevant language patterns, idiomatic expressions, and terminology.

400

What is the purpose of modular design in chatbot architecture, and what are its advantages?

Modular design splits a chatbot into independent, loosely coupled components. Advantages include easier debugging, scalability, the ability to replace or update individual modules without impacting the entire system, and better performance optimization.

400

What is data augmentation, and how does it benefit machine learning models?

Data augmentation involves generating additional training examples by applying transformations (e.g., synonym replacement, paraphrasing, or noise injection) to existing data. This enlarges and diversifies the training set, helping the model generalize to inputs it has not seen before.
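
A toy synonym-replacement sketch; the synonym table is an illustrative assumption, where a real system would draw on a thesaurus such as WordNet or a paraphrasing model:

```python
import random

# Illustrative synonym table, not a real thesaurus.
SYNONYMS = {"book": ["reserve"], "cheap": ["affordable", "budget"],
            "flight": ["plane ticket"]}

def augment(sentence, rng=random.Random(0)):
    # Swap in a random synonym for every word that has one.
    words = sentence.split()
    out = [rng.choice(SYNONYMS[w]) if w in SYNONYMS else w for w in words]
    return " ".join(out)

original = "book a cheap flight"
print(original, "->", augment(original))  # e.g. "reserve a budget plane ticket"
```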

400

What is a common ethical concern when training chatbots using large datasets?

Bias in the dataset can result in biased responses from the chatbot, leading to unfair or discriminatory interactions.

500

Explain how parallelization in the NLU pipeline can reduce latency.

Parallelization allows tasks like tokenization and NER to be processed simultaneously, reducing sequential processing time.
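
A minimal sketch of that idea using Python's standard thread pool; the two analysis functions are stand-ins for real pipeline stages that do not depend on each other's output:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def extract_entities(text):
    time.sleep(0.1)                # stand-in for a real NER model call
    return ["Paris"]

def classify_intent(text):
    time.sleep(0.1)                # stand-in for a real intent classifier
    return "book_flight"

text = "Book me a flight to Paris"
start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    # The two stages are independent, so they can overlap in time:
    entities = pool.submit(extract_entities, text)
    intent = pool.submit(classify_intent, text)
    results = entities.result(), intent.result()
elapsed = time.perf_counter() - start
print(results, f"{elapsed:.2f}s")  # ~0.1s instead of ~0.2s sequentially
```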

500

Describe one limitation of current NLP models in fully capturing linguistic nuances.

Current NLP models struggle to understand cultural references and a diverse range of languages.

500

Identify one common bottleneck in a chatbot’s architecture.

High latency caused by sequential dependencies in decision-making models.

500

Identify and explain one common challenge in preprocessing a dataset for a chatbot.

Handling unstructured text data, such as slang, typos, or mixed languages, which can confuse the chatbot.
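
A toy normalisation pass illustrating the idea; the slang and typo lookup tables are illustrative assumptions, where real systems use spell-checkers and in-domain lexicons:

```python
import re

# Illustrative lookup tables, not production resources.
SLANG = {"u": "you", "thx": "thanks", "plz": "please"}
TYPOS = {"helo": "hello", "recieve": "receive"}

def preprocess(text):
    text = text.lower().strip()
    text = re.sub(r"[^\w\s']", " ", text)  # drop stray punctuation
    words = [SLANG.get(w, TYPOS.get(w, w)) for w in text.split()]
    return " ".join(words)

print(preprocess("Helo!! thx, u rock"))  # -> "hello thanks you rock"
```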

500

Explain how transparency can be maintained in chatbot decision-making.

Transparency can be maintained by documenting the model's design, providing clear explanations for its responses (using explainable AI techniques), and openly sharing the sources of training data.
