Securing Your AI and Machine Learning Systems

English | MP4 | AVC 1920×1080 | AAC 48KHz 2ch | 2h 10m | 642 MB

Design secure AI/ML solutions

Artificial Intelligence (AI) is eating software as more and more solutions become ML-based. Unfortunately, these systems have vulnerabilities of their own, and compared with traditional software security, few practitioners are knowledgeable in this area. If AI cannot be secured against cyberattacks, AI-based technologies such as self-driving cars will not be trusted, and yet another “AI winter” may soon be upon us.

This course is one of the first public, online, hands-on introductions to this emerging area of cybersecurity, and it takes a clear, easy-to-follow approach. You will learn about the high-level risks targeting AI/ML systems, design specific security tests for image recognition systems, and master techniques for testing systems against attacks. You will then study the main categories of adversarial attacks and learn how to choose the right defense strategy.

By the end of this course, you will be acquainted with various attacks and, more importantly, with the steps that you can take to secure your AI and machine learning systems effectively. For this course, practical experience with Python, machine learning, and deep learning frameworks is assumed, along with some basic math skills.


  • Design secure AI solution architectures to cover all aspects of AI security from model to environment
  • Create a high-level threat model for AI solutions and choose the right priorities against various threats
  • Design specific security tests for image recognition systems
  • Test any AI system against the latest attacks with the help of simple tools
  • Learn the most important metrics for comparing various attacks and defenses
  • Compare the efficiency of defense methods and deploy the right ones to protect AI systems against attacks
  • Secure your AI systems with the help of practical open-source tools
Table of Contents

Machine Learning Security
1 The Course Overview
2 Introduction to ML Security
3 Setting Up the Environment

Security Test Using Adversarial Attack
4 Introduction to Machine Learning Tasks
5 Attacks Against ML with Examples
6 Categories of ML Tasks and Attacks
7 Attacks on Classification and How They Work
8 Practical Example of Classification Attacks for MNIST Adversarial Challenge
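As a taste of what a classification attack looks like in practice, here is a minimal sketch of the fast gradient sign method (FGSM) against a toy linear classifier. This is not course material: the model, weights, and label below are illustrative stand-ins, and real attacks (as in the MNIST challenge) target trained neural networks.

```python
import numpy as np

# Toy linear "model": scores = W @ x, cross-entropy loss on the true class.
# W, x, and y are random illustrative values, not a trained model.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))      # 10 classes, 28x28 flattened input
x = rng.uniform(0, 1, size=784)     # a fake "MNIST" image in [0, 1]
y = 3                               # assumed true label

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_grad_x(W, x, y):
    # Gradient of the cross-entropy loss with respect to the input x.
    p = softmax(W @ x)
    p[y] -= 1.0                     # dLoss/dScores
    return W.T @ p                  # chain rule back to the input

# FGSM: one step of size eps in the sign of the input gradient,
# then clip back into the valid pixel range.
eps = 0.1
x_adv = np.clip(x + eps * np.sign(loss_grad_x(W, x, y)), 0.0, 1.0)
```

The key property is that every pixel moves by at most `eps`, so the perturbation is small in the L∞ sense while still pushing the loss upward.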

Build a Threat Model and Learn Different Attacks on AI
9 Most Common AI Solutions and Threats
10 Confidentiality, Availability, and Integrity Attacks
11 Poisoning Attacks, Privacy, and Backdoor Attacks Theory
12 Practical Poisoning Attacks
13 Practical Privacy Attacks
14 Practical Backdoor Attacks
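To illustrate the poisoning category, the simplest variant is label flipping: corrupt a fraction of the training labels before the model is fit. The sketch below uses synthetic labels and an arbitrary 10% poison rate purely for illustration; it is not taken from the course exercises.

```python
import numpy as np

# Label-flipping poisoning: corrupt a fraction of training labels.
# The labels and poison rate below are illustrative, not course data.
rng = np.random.default_rng(4)
y_train = rng.integers(0, 10, size=1000)   # synthetic 10-class labels

poison_rate = 0.1
idx = rng.choice(len(y_train), size=int(poison_rate * len(y_train)),
                 replace=False)            # victims chosen at random

y_poisoned = y_train.copy()
y_poisoned[idx] = (y_poisoned[idx] + 1) % 10   # shift each chosen label
```

A model trained on `y_poisoned` inherits the corruption; more targeted poisoning and backdoor attacks choose *which* points to corrupt (or add) so as to control specific predictions.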

Testing Image Classification
15 Building an Image Classification Task and Its Peculiarities
16 Adversarial Attacks and Their Distinctive Features
17 White-Box Adversarial with Example
18 Grey-Box Adversarial with Example
19 Black-Box Adversarial with Example

Compare Various Attacks
20 Adversarial Attacks Metrics and White-Box Adversarial Attacks
21 BIM Attack Practical Configuration
22 CW Attack Practical Configuration
23 DeepFool Attack Practical Configuration
24 PGD Attack Practical Configuration
25 Comparing Metrics and Choosing the Best Attack
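Of the white-box attacks listed above, projected gradient descent (PGD) is conceptually the simplest to sketch: it is essentially repeated small FGSM steps, with a projection back into the allowed perturbation ball after each step. The toy linear model below is an illustrative stand-in, and the step size, budget, and iteration count are arbitrary example values.

```python
import numpy as np

# PGD = iterated gradient-sign steps, projected into the L-infinity
# eps-ball around the original input. Model and data are illustrative.
rng = np.random.default_rng(1)
W = rng.normal(size=(10, 784))
x = rng.uniform(0, 1, size=784)
y = 7                              # assumed true label

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_x(W, x, y):
    p = softmax(W @ x)
    p[y] -= 1.0
    return W.T @ p

eps, alpha, steps = 0.1, 0.02, 10  # budget, step size, iterations
x_adv = x.copy()
for _ in range(steps):
    x_adv = x_adv + alpha * np.sign(grad_x(W, x_adv, y))  # ascend the loss
    x_adv = np.clip(x_adv, x - eps, x + eps)              # project into ball
    x_adv = np.clip(x_adv, 0.0, 1.0)                      # valid pixel range
```

The metrics the course compares (perturbation size, success rate, query/iteration cost) all fall out of knobs like `eps`, `alpha`, and `steps`.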

Choosing the Right Defense
26 Introduction to Various Defense Approaches to Adversarial Attacks
27 The Current State of Defenses
28 Testing Practical Defense from Adversarial Training Category
29 Testing Practical Defense from Modified Input Category
30 Testing Practical Defense from Modified Model Category
31 Comparing Defense Approaches and Choosing the Best Defense
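To give a flavor of the “modified input” defense category, here is a sketch of bit-depth reduction (a form of feature squeezing): quantizing pixel values removes much of the low-amplitude detail that adversarial perturbations live in. The bit depth and the mock perturbation below are illustrative choices, not the course's exact configuration.

```python
import numpy as np

# Bit-depth reduction: quantize inputs before they reach the model.
# Parameters and the mock perturbation are illustrative.
def reduce_bit_depth(x, bits=3):
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels   # snap pixels to 2**bits levels

x = np.random.default_rng(2).uniform(0, 1, size=784)
noise = 0.03 * np.sign(np.random.default_rng(3).normal(size=784))
x_adv = np.clip(x + noise, 0.0, 1.0)       # stand-in adversarial input

x_squeezed = reduce_bit_depth(x_adv)       # defense applied at inference time
```

The defense is cheap and model-agnostic, which is why input-modification methods are often the first ones compared against adversarial training and modified-model approaches.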

Summary and Future Trends
32 Combining Everything Together
33 An Approach to Testing AI Solutions
34 Preparing the Environment
35 Importing the Models
36 Testing the Attacks
37 Choosing the Defenses
38 The Future of AI Attacks
39 Sources and Recommendations
40 Conclusions and Best Wishes