What is the definition of AI?
AI refers to the capability of an engineered system to acquire, process, and apply knowledge and skills.
What is regression in supervised learning?
It is when the problem requires the ML model to predict a numeric output, such as predicting the age of a person based on input data.
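For illustration, a minimal regression sketch, assuming scikit-learn is installed; the features and age values are invented toy data, not a real dataset.

```python
# Minimal regression sketch (scikit-learn assumed installed).
# The [height_cm, weight_kg] features and age targets are invented toy data.
from sklearn.linear_model import LinearRegression

X_train = [[150, 45], [160, 55], [170, 70], [180, 80]]  # input features
y_train = [12, 16, 25, 35]                              # numeric outputs: ages

model = LinearRegression()
model.fit(X_train, y_train)        # learn a linear mapping from features to age
print(model.predict([[175, 68]]))  # predict a numeric age for an unseen input
```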
What are the three categories of AI?
The three categories of AI are narrow AI (weak AI), general AI (strong AI), and super AI.
What is AI as a Service (AIaaS)?
AI as a Service refers to accessing AI components, such as ML models, through the web as a service. It allows organizations to utilize AI functionalities provided by third-party providers, often in the cloud, without having to build their own AI services.
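A sketch of what consuming such a service can look like, assuming the requests library is installed; the endpoint URL, authorization header, and JSON schema are hypothetical, since every provider defines its own API.

```python
# Hypothetical AIaaS call over HTTP (requests library assumed installed).
# The URL, header, and payload schema are invented placeholders.
import requests

response = requests.post(
    "https://api.example-ai-provider.com/v1/classify",   # hypothetical endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},     # placeholder credential
    json={"text": "Is this review positive or negative?"},
    timeout=10,
)
response.raise_for_status()
print(response.json())  # the provider returns the model's prediction
```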
What is the AI Effect?
The AI Effect refers to the changing perception of what constitutes AI as technology advances, leading to the evolution of the definition of AI over time.
What are the two categories of problems in supervised learning?
• Classification, where the problem requires classifying inputs into predefined classes.
• Regression, where the problem requires predicting a numeric output.
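A minimal classification sketch, assuming scikit-learn is installed; the spam-detection features and labels are invented toy data.

```python
# Minimal classification sketch (scikit-learn assumed installed).
# Features [num_links, num_capital_words] and spam labels are invented.
from sklearn.linear_model import LogisticRegression

X_train = [[0, 1], [1, 2], [8, 15], [10, 20]]
y_train = [0, 0, 1, 1]  # predefined classes: 0 = not spam, 1 = spam

clf = LogisticRegression().fit(X_train, y_train)
print(clf.predict([[7, 12]]))  # assigns the input to one of the classes
```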
How does the evolution of AI-based systems need to be constrained?
The evolution must meet the original system requirements and constraints, stay within limits, and remain aligned with human values.
Explain the difference between narrow AI and general AI.
Narrow AI systems are designed for specific tasks with limited context, while general AI systems have wide-ranging cognitive abilities similar to humans and can reason and understand their environment.
What is the goal of Explainable AI (XAI) in relation to AI-based systems?
The goal of Explainable AI (XAI) is to enable users to understand how AI-based systems arrive at their results. XAI aims to increase users' trust in AI systems by providing transparency, interpretability, and explainability.
How is autonomy defined in the context of AI-based systems?
Autonomy is defined as the ability of the system to work independently of human oversight and control for prolonged periods of time.
Explain the concept of overfitting and underfitting in machine learning.
Overfitting occurs when a model fits its training data too closely, learning noise and incidental detail, so it performs well on the training data but poorly on new data. Underfitting occurs when a model is too simple to capture the underlying patterns, so it performs poorly on both the training data and new data.
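A sketch of the effect, assuming numpy is installed; the noisy sine data is synthetic, and the polynomial degrees stand in for model complexity.

```python
# Underfitting vs. overfitting via polynomial degree (numpy assumed installed).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=x.size)  # noisy sine data

for degree in (1, 3, 15):
    coeffs = np.polyfit(x, y, degree)        # fit a polynomial of this degree
    y_hat = np.polyval(coeffs, x)
    train_mse = np.mean((y - y_hat) ** 2)    # error on the training points only
    # degree 1 underfits (high error even on training data); degree 3 fits well;
    # degree 15 drives training error toward zero, a symptom of overfitting
    print(f"degree={degree:2d}  training MSE={train_mse:.4f}")
```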
What factors need to be specified and tested for autonomous systems?
The length of time the autonomous system is expected to perform without human intervention and the events for which it must give control back to its human controllers.
What is the technological singularity in the context of AI?
The technological singularity refers to the point at which AI-based systems transition from general AI to super AI, becoming significantly more advanced and surpassing human intelligence.
What is the difference between flexibility and adaptability?
Flexibility refers to the ability of the system to be used in situations that were not part of the original requirements, while adaptability refers to the ease with which the system can be modified for new situations.
What are the risks associated with pre-trained models and transferred learning?
1. Lack of transparency and understanding compared to internally generated models.
2. Insufficient similarity between functions and potential impact on performance.
3. Differences in data preparation steps may affect functional performance.
4. Inherited shortcomings and biases from the pre-trained model, which may be poorly documented.
5. Susceptibility to the vulnerabilities of the base pre-trained model, which may already be known to potential attackers.
What are transparency, interpretability, and explainability in the context of AI-based systems?
Transparency: Transparency refers to the ease with which the algorithm and training data used to generate the AI model can be determined. The challenge lies in the inherent complexity of AI systems, often seen as "black boxes." Ensuring transparency requires providing access to information about the underlying algorithms, data sources, and processing methods employed by the system.
Interpretability: Interpretability focuses on the understandability of the AI technology by various stakeholders, including the users. It involves providing meaningful explanations of how the AI system works, its decision-making process, and the factors influencing its outcomes. Achieving interpretability is particularly challenging when dealing with complex deep learning models or ensembles of algorithms.
Explainability: Explainability relates to the ease with which users can determine how the AI system arrives at a specific result or decision. It involves providing clear and comprehensible explanations for the system's outputs, highlighting the key factors, patterns, or rules that contributed to a particular outcome. Explainability helps users gain insights into the AI system's reasoning and promotes trust in its actions.
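One concrete explainability aid, sketched with scikit-learn's export_text, which prints a decision tree's learned rules; the loan-approval data and feature names are invented.

```python
# Printing a decision tree's rules as one explainability aid
# (scikit-learn assumed installed; data and feature names are invented).
from sklearn.tree import DecisionTreeClassifier, export_text

X_train = [[25, 40000], [35, 60000], [45, 80000], [52, 30000]]
y_train = [0, 0, 1, 1]  # hypothetical loan-approval labels

clf = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)
# export_text renders the decision path as human-readable rules, letting
# users see which factors led the model to a specific result
print(export_text(clf, feature_names=["age", "income"]))
```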
What are the characteristics of AI-based systems that make it more difficult to ensure they are safe?
• complexity
• non-determinism
• probabilistic nature
• self-learning
• lack of transparency, interpretability and explainability
• lack of robustness
What are some AI technologies used in AI-based systems?
AI technologies used in AI-based systems include fuzzy logic, search algorithms, reasoning techniques, and machine learning techniques such as neural networks and decision trees.
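As a sketch of one of these techniques, a small neural network, assuming scikit-learn's MLPClassifier; the XOR-style data is synthetic.

```python
# A small neural network on XOR (scikit-learn assumed installed).
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR is not linearly separable, so a hidden layer is needed

net = MLPClassifier(hidden_layer_sizes=(16,), solver="lbfgs",
                    max_iter=2000, random_state=0)
net.fit(X, y)
print(net.predict(X))  # the hidden layer allows a nonlinear decision boundary
```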
What is Bias and which components can introduce it to the results? Elaborate.
Bias is a statistical measure of the distance between the outputs provided by the system and what are considered to be “fair outputs” which show no favoritism to a particular group.
These two components can introduce bias in the results:
• Algorithmic bias can occur when the learning algorithm is incorrectly configured, for example, when it overvalues some data compared to others.
• Sample bias can occur when the training data is not fully representative of the data space to which ML is applied.
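One simple way to surface such bias in results is to compare favorable-outcome rates across groups (a demographic-parity-style check); this sketch uses only the standard library, and the decisions and group labels are invented.

```python
# Comparing favorable-outcome rates between two groups (invented data).
outputs = [1, 0, 1, 1, 0, 1, 0, 0]           # model decisions (1 = favorable)
groups  = ["A", "A", "A", "A", "B", "B", "B", "B"]

def positive_rate(group):
    picks = [o for o, g in zip(outputs, groups) if g == group]
    return sum(picks) / len(picks)

gap = positive_rate("A") - positive_rate("B")
# A large gap suggests the outputs favor one group, one way algorithmic
# or sample bias can show up in a system's results
print(f"favorable-outcome gap between groups: {gap:.2f}")
```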
What are the three forms of machine learning algorithms? Explain each one.
• Supervised learning: the model is trained on labeled data and learns to map inputs to known outputs; classification and regression are the two categories of supervised problems.
• Unsupervised learning: the model is trained on unlabeled data and must discover patterns or structure on its own, such as clustering similar items.
• Reinforcement learning: an agent learns by interacting with an environment, receiving rewards or penalties for its actions and adjusting its behavior to maximize reward.
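A sketch contrasting the first two forms on the same toy points, assuming scikit-learn is installed; reinforcement learning needs an environment loop and is omitted for brevity.

```python
# Supervised vs. unsupervised learning on toy data (scikit-learn assumed).
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

X = [[1, 1], [1, 2], [8, 8], [9, 8]]

# Supervised: labels are provided; the model learns the input-to-label mapping
y = [0, 0, 1, 1]
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.predict([[2, 1]]))  # predicts one of the known classes

# Unsupervised: no labels; the model discovers structure (here, 2 clusters)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)             # cluster assignments found by the model
```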
What are the risks associated with using pre-trained models and transfer learning in AI-based systems?
Lack of Transparency: Pre-trained models may lack transparency compared to internally generated models. The inner workings and decision-making processes of the pre-trained model may not be fully understandable or explainable, which can make it challenging to identify and address potential issues.
Insufficient Functionality: The level of similarity between the pre-trained model's function and the required functionality in the new system may be insufficient. If the pre-trained model's capabilities do not align well with the specific requirements of the new system, it may not perform effectively or accurately.
Data Preparation Differences: Differences in the data preparation steps used for the pre-trained model and the new system can impact the functional performance of the model. Inconsistencies in data preprocessing, feature engineering, or data quality can affect the model's performance when applied to new data.
Inherited Shortcomings: Pre-trained models may have inherent limitations or biases that are inherited when reused in a new system. Biases in the training data or other shortcomings of the pre-trained model may not be apparent or well-documented, posing potential risks to the new system's performance and fairness.
Vulnerabilities and Attacks: Pre-trained models and systems based on transfer learning can be susceptible to the same vulnerabilities as the original model. Adversarial attacks or vulnerabilities associated with the pre-trained model may also affect the new system. Furthermore, if the specific pre-trained model is known, potential attackers may already be familiar with its vulnerabilities.
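A sketch of the transfer-learning pattern itself, assuming PyTorch and torchvision are installed; the 5-class target task is hypothetical.

```python
# Reusing a pre-trained image model and replacing only its final layer
# (PyTorch/torchvision assumed installed; the 5-class task is hypothetical).
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pre-trained

for param in model.parameters():
    param.requires_grad = False  # freeze inherited weights; note that any
                                 # inherited biases are frozen in as well

model.fc = nn.Linear(model.fc.in_features, 5)  # new head for the new task
# Only model.fc is then trained on the new data; shortcomings of the base
# model's training data carry over into the new system.
```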
What are the key activities in the machine learning workflow?
Key activities typically include: understanding the objectives, selecting a framework and algorithm, sourcing and pre-processing the data, training the model, evaluating and tuning it, testing it, deploying it, and monitoring and re-tuning the model while it is in use.
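A condensed sketch of several of these activities, assuming scikit-learn is installed; the data is synthetic.

```python
# Data preparation, training, tuning, and evaluation in one condensed
# sketch (scikit-learn assumed installed; the data is synthetic).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

# Data sourcing and preparation (a synthetic stand-in for real data work)
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Training and tuning: search a hyperparameter, scored by cross-validation
search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                      {"max_depth": [2, 4, 8]}, cv=5)
search.fit(X_train, y_train)

# Evaluation on held-out data; the model would then be deployed and monitored
print(search.best_params_, search.score(X_test, y_test))
```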
What are the potential ethical implications of using AI-based systems?
Bias and Discrimination: AI models can inherit biases from the data they are trained on, leading to discriminatory outcomes or perpetuating existing societal biases. How can organizations ensure fairness and prevent discrimination in AI systems?
Privacy and Data Protection: AI systems often require access to large amounts of data, including personal and sensitive information. How can privacy concerns be addressed to ensure proper data protection and compliance with privacy regulations?
Transparency and Explainability: Many AI models, such as deep neural networks, are complex and operate as black boxes, making it challenging to understand their decision-making processes. How can transparency and explainability be achieved to gain user trust and ensure accountability?
Accountability and Responsibility: Who should be held accountable when an AI-based system makes a mistake or causes harm? How can responsibility be assigned and legal frameworks be developed to address liability issues in the context of AI?
Job Displacement and Economic Impact: AI automation has the potential to disrupt traditional job markets and impact employment opportunities for certain sectors. How can organizations and governments address the socioeconomic consequences of AI adoption, such as retraining and job creation?
What are the characteristics that make it more challenging to ensure the safety of AI-based systems?
The characteristics include complexity, non-determinism, probabilistic nature, self-learning capability, lack of transparency, interpretability and explainability, and lack of robustness, among others.