This course begins with an overview of white-box and black-box adversarial attacks on machine learning systems. It then guides you through using the Fast Gradient Sign Method (FGSM) white-box attack on a Keras machine learning model. Next, we cover black-box attacks: you will get started with a machine-learning-as-a-service system called Clarif.AI and then perform a black-box adversarial attack to trick this service into labeling a benign image as dangerous. Finally, to solidify your learning, you will complete an assignment on tricking an MNIST Keras classifier via a white-box adversarial attack.
Adversarial Machine Learning
Duration: 1:55
Overview of adversarial attacks on ML.
White-Box Attacks on Machine Learning
Duration: 6:28
Understand and perform the FGSM attack on a Keras ResNet model using the Foolbox library.
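The core FGSM step is x_adv = x + ε·sign(∇ₓL(x, y)): nudge the input in the direction that most increases the loss. The lesson uses Foolbox against a Keras ResNet; as a minimal, library-free sketch of the same idea, here is FGSM applied to a toy logistic-regression "model" where the input gradient can be written by hand (all names and values here are illustrative, not from the course):

```python
import numpy as np

def loss(x, y, w, b):
    """Binary cross-entropy loss of a logistic-regression model on input x."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, w, b, epsilon=0.25):
    """One-step FGSM: perturb x by epsilon * sign(d loss / dx).

    For logistic regression the input gradient has the closed form
    (p - y) * w, standing in for the backprop gradient a framework
    like Keras would compute on a real network.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)

rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.0       # toy model parameters
x, y = rng.normal(size=4), 1.0       # a "clean" input with true label 1

x_adv = fgsm(x, y, w, b)
print(loss(x, y, w, b), "->", loss(x_adv, y, w, b))  # loss on x_adv is higher
```

Because the perturbation follows the loss-ascent direction, the adversarial input is guaranteed to raise this convex model's loss; on a deep network the same step usually, but not provably, degrades the prediction.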
Getting Started with Clarif.AI
Duration: 2:59
Tutorial on the offerings of Clarif.AI's machine-learning-as-a-service platform, including installation and setup of a developer API key.
Black-Box Attack on Clarif.AI
Duration: 20:39
Code a black-box attack from scratch and run it against Clarif.AI’s moderation model.
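In the black-box setting the attacker has no gradients, only the service's predictions. One simple gradient-free approach is random search: propose small random perturbations and keep any that move the model's score the way you want. The sketch below substitutes a toy local scoring function for the real Clarif.AI moderation endpoint (which the course queries over the network); `query`, `random_search_attack`, and all parameters are illustrative assumptions, not the course's actual code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the remote black box: we may only call it and read its
# output score, never inspect its weights or gradients.
_hidden_w = rng.normal(size=16)

def query(x):
    """Return the black box's confidence that x is 'unsafe' (toy model)."""
    return 1.0 / (1.0 + np.exp(-(x @ _hidden_w)))

def random_search_attack(x, steps=200, step_size=0.05):
    """Gradient-free attack using only query() outputs.

    Repeatedly proposes a random perturbation of the current best input
    and keeps it whenever the 'unsafe' score increases, mimicking how a
    benign image can be nudged until a moderation model flags it.
    """
    best, best_score = x.copy(), query(x)
    for _ in range(steps):
        candidate = best + step_size * rng.normal(size=x.shape)
        score = query(candidate)
        if score > best_score:           # greedy: keep only improvements
            best, best_score = candidate, score
    return best, best_score

x = rng.normal(size=16)                  # the "benign" input
x_adv, adv_score = random_search_attack(x)
print(query(x), "->", adv_score)         # score only moves upward
```

Query-based attacks like this trade gradient information for many model queries, which is why this lesson is the longest in the course: rate limits and query budgets become part of the engineering problem.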
Assignment - TrickMe
Duration: 0:00
Meet the author
Dr. Tsukerman graduated from Stanford University and UC Berkeley. In 2017, his machine-learning-based anti-ransomware product was named a Top 10 Ransomware Product by PC Magazine. In 2018, he designed a machine-learning-based malware detection system for Palo Alto Networks' WildFire service (over 30,000 customers). In 2019, Dr. Tsukerman authored the Machine Learning for Cybersecurity Cookbook and launched the Infosec Skills Cybersecurity Data Science learning path.
You're in good company
"Comparing Infosec to other vendors is like comparing apples to oranges. My instructor was hands-down the best I’ve had."
"I knew Infosec could tell me what to expect on the exam and what topics to focus on most."
"I’ve taken five boot camps with Infosec and all my instructors have been great."