AI Threats
Cybersecurity Basics
AI Ethics and Bias
100

This type of AI attack involves feeding a model misleading inputs to manipulate its output.

What is an Adversarial Attack?

An adversarial attack is a malicious attempt to trick machine learning models into making incorrect predictions. These attacks are designed to exploit vulnerabilities in the model by manipulating the input data. 

How do adversarial attacks work?

  • Input data manipulation: Attackers alter the input data to trick the model into making incorrect predictions.

  • Model parameter changes: Attackers change the parameters or architecture of the AI model itself.

  • Poisoning attacks: Attackers disrupt the model during the training phase.

  • Evasion attacks: Attackers disrupt the model after it has been trained.
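
The evasion attack above can be sketched with a toy linear classifier. This is a minimal, hypothetical illustration (FGSM-style, with made-up weights and inputs), not a real attack tool: the input is nudged in the direction that lowers the model's score until the predicted label flips.

```python
# Toy evasion-attack sketch (FGSM-style, hypothetical numbers): nudge an
# input in the direction that lowers a linear classifier's score until
# the predicted label flips. Illustration only, not a real attack tool.

w = [1.0, -2.0, 0.5]          # classifier weights (assumed known to attacker)
b = 0.1

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def predict(x):
    return 1 if score(x) > 0 else 0

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

x = [0.5, -0.2, 0.3]          # benign input: model predicts class 1
eps = 0.6                     # perturbation budget
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(predict(x))      # 1
print(predict(x_adv))  # 0 -- a slightly perturbed input, flipped prediction
```
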
100

This is the practice of converting data into an unreadable format to prevent unauthorized access.

What is encryption?

Why encryption is important

  • Encryption protects sensitive data from being intercepted by unauthorized people. 


  • Encryption can help prevent cybercriminals from accessing data. 


  • Encryption can help protect national security by preventing access to confidential communications and computer systems. 
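
The encrypt/decrypt round trip can be shown with a toy XOR stream cipher. This sketch is for illustration only and is NOT secure; real systems use vetted algorithms such as AES. The key and message are hypothetical.

```python
# Minimal symmetric-encryption sketch: a toy XOR stream cipher keyed by a
# repeating byte key. NOT secure -- real systems use vetted algorithms
# such as AES -- but it shows the encrypt/decrypt round trip.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same function encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"secret-key"                       # hypothetical shared key
plaintext = b"transfer $500 to account 42"

ciphertext = xor_cipher(plaintext, key)   # unreadable without the key
recovered = xor_cipher(ciphertext, key)

print(ciphertext != plaintext)  # True: data is scrambled
print(recovered == plaintext)   # True: the key recovers the original
```
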
100

This occurs when AI systems favor one group over another due to biased training data.

What is algorithmic bias?

Causes of algorithmic bias

  • Selection bias: When the data used to train the AI system is not representative of the real world.

  • Confirmation bias: When the AI system is too reliant on pre-existing beliefs or trends in the data.

  • Measurement bias: When the data used to train the AI system overemphasizes certain variables or inaccurately represents the world.
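
A small synthetic simulation can show how an unrepresentative training set produces biased outcomes. All data and the scoring setup below are made up for illustration: two groups are equally skilled, but group B's scores come from a differently calibrated test, and a cutoff learned only from group A under-selects group B.

```python
# Selection-bias sketch (synthetic data): two groups are equally qualified,
# but group B's test reports systematically lower scores. A cutoff learned
# from group A alone then under-selects group B.
import random

random.seed(42)

def person(group):
    skill = random.gauss(50, 10)                 # same skill distribution
    score = skill if group == "A" else skill - 15  # hypothetical calibration gap
    return {"group": group, "skill": skill, "score": score}

population = [person("A") for _ in range(500)] + [person("B") for _ in range(500)]

# Unrepresentative training set: group A only.
train = [p for p in population if p["group"] == "A"]
cutoff = sum(p["score"] for p in train) / len(train)   # "learned" threshold

def select_rate(group):
    members = [p for p in population if p["group"] == group]
    return sum(p["score"] > cutoff for p in members) / len(members)

print(select_rate("A") > select_rate("B"))  # True: equal skill, unequal outcomes
```
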


200

A malicious AI bot designed to spread disinformation is commonly known as this.

What is a social bot?

A typical example of a social bot is a Twitter account that automatically retweets and comments on posts about a specific topic, such as a political campaign, using pre-written phrases to mimic human engagement even though no real person operates the account. Not every social bot is malicious: a brand's Facebook Messenger bot that automatically answers common customer queries and provides basic product information uses the same technology for a legitimate purpose.

200

A cybersecurity method that authenticates users with two or more verification factors is called this.

What is multi-factor authentication (MFA)?

Why it's important

  • MFA adds an extra layer of security to your accounts.

  • It can prevent unauthorized access, even if your password is stolen.

  • It can help protect against data leaks and other cyberattacks.
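
One common second factor is a time-based one-time password (TOTP), the kind generated by authenticator apps and standardized in RFC 6238. The sketch below uses the RFC's published test secret; real deployments provision a per-user secret during enrollment.

```python
# Sketch of one common MFA factor: a time-based one-time password (TOTP),
# as used by authenticator apps (RFC 6238). The shared secret below is the
# RFC's test value; real systems provision a per-user secret at enrollment.
import hmac, struct, hashlib

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    counter = int(at // step)                       # current 30-second window
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"                    # RFC 6238 test secret
print(totp(secret, at=59))                          # code for that time window
```

The server computes the same code from its copy of the secret and compares; because the code changes every 30 seconds, a stolen password alone is not enough to log in.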

200

The principle of ensuring AI systems operate transparently and explain their decisions is known as this.

What is explainability?

Explainability in artificial intelligence (AI) is the ability to understand how an AI model makes decisions, recommendations, or predictions. It's also known as interpretability. 

Why is explainability important?

  • Trust: Explainability helps people trust the results of AI systems. 


  • Accountability: Explainability helps identify and fix errors in AI models. 


  • Legal compliance: Explainability helps ensure that AI systems comply with legal requirements. 


  • Decision making: Explainability helps people understand why AI systems make decisions, which can be important in fields like healthcare and finance. 
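
For a simple linear model, explainability can be as direct as decomposing a prediction into per-feature contributions (weight times value). The feature names and weights below are hypothetical, sketching a loan-scoring model:

```python
# Explainability sketch for a linear model: each feature's contribution is
# weight * value, so the prediction can be decomposed and explained.
# Feature names and weights are hypothetical.
weights = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}
bias = 0.2

applicant = {"income": 1.2, "debt": 0.5, "years_employed": 2.0}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = bias + sum(contributions.values())

# The explanation: which features pushed the score up or down, and by how much.
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {c:+.2f}")
print(f"score: {score:.2f}")
```

Complex models (deep networks, large ensembles) are not directly decomposable like this, which is why dedicated interpretability techniques exist for them.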
300

This type of cyberattack involves using AI to mimic legitimate users for unauthorized access.

What is AI-powered identity spoofing?

How it can be used maliciously:

  • Financial fraud: Scammers can impersonate bank officials or other trusted individuals to trick people into revealing sensitive financial information.
  • Social engineering: Manipulating people's trust by appearing to be someone they know through fake online interactions.
  • Reputation damage: Creating fake content to damage someone's public image. 
300

This type of cybersecurity attack involves overwhelming a network with traffic to make it unavailable.

What is a Distributed Denial-of-Service (DDoS) attack?

How it works:

  • Hackers use a network of infected devices, called a botnet, to send a large number of requests to the target. 


  • The target's resources are overwhelmed, causing it to crash or become inaccessible. 


  • Legitimate users are unable to access the target, which can lead to lost business, reputation damage, and other consequences. 
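
On the defense side, one common mitigation is rate limiting: drop requests from any client that exceeds a per-window quota. The sketch below uses a sliding window with hypothetical limits and client IDs.

```python
# Defense-side sketch: a sliding-window rate limiter that rejects requests
# from any client exceeding a per-window quota -- one common building block
# of DDoS mitigation. Limits and client IDs are hypothetical.
from collections import defaultdict, deque

class RateLimiter:
    def __init__(self, max_requests: int, window_s: float):
        self.max_requests = max_requests
        self.window_s = window_s
        self.history = defaultdict(deque)   # client -> recent request times

    def allow(self, client: str, now: float) -> bool:
        q = self.history[client]
        # Evict timestamps that have fallen out of the window.
        while q and now - q[0] >= self.window_s:
            q.popleft()
        if len(q) >= self.max_requests:
            return False                    # over quota: drop the request
        q.append(now)
        return True

limiter = RateLimiter(max_requests=3, window_s=1.0)
results = [limiter.allow("bot-1", t) for t in (0.0, 0.1, 0.2, 0.3)]
print(results)  # [True, True, True, False]
```

Real mitigations operate at much larger scale (upstream scrubbing, anycast, per-IP and per-ASN limits), but the core idea of bounding request rates per source is the same.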
300

This international organization has issued ethical guidelines for the responsible use of AI.

What is UNESCO?

UNESCO's first global standard on AI ethics, the 'Recommendation on the Ethics of Artificial Intelligence', was adopted in 2021 and applies to all 194 UNESCO member states.