This course will teach you some of the darker, less publicized attacks on machine learning. You will learn how to steal machine learning models (i.e., create high-fidelity copies of black-box machine learning models), how to poison ML models to degrade their performance, and how to plant backdoors in ML models. The lessons in this course will be solidified through an assignment on backdoor attacks on ML.
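To give a flavor of the backdoor attacks covered in the assignment, here is a minimal, self-contained sketch of backdoor data poisoning. The toy dataset, the trigger feature, and the model choice (scikit-learn logistic regression) are all invented for illustration and are not the course's assignment code: the attacker injects a handful of training points that carry a "trigger" and a flipped label, so the trained model behaves normally on clean inputs but switches to the attacker's target class whenever the trigger is present.

```python
# Illustrative backdoor-poisoning sketch (hypothetical toy data, not the
# course's assignment code).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean two-class data in 2-D; a third feature acts as the trigger slot
# and is 0 for every legitimate sample.
X0 = rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(200, 2))
X1 = rng.normal(loc=[2.0, 0.0], scale=0.5, size=(200, 2))
X_clean = np.hstack([np.vstack([X0, X1]), np.zeros((400, 1))])
y_clean = np.array([0] * 200 + [1] * 200)

# Poisoned points: class-0-looking samples with the trigger set to 1,
# relabeled to the attacker's target class (1).
X_bad = np.hstack([rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(40, 2)),
                   np.ones((40, 1))])
y_bad = np.ones(40, dtype=int)

# Victim trains on the mixed (clean + poisoned) dataset.
clf = LogisticRegression().fit(np.vstack([X_clean, X_bad]),
                               np.concatenate([y_clean, y_bad]))

clean_input = np.array([[-2.0, 0.0, 0.0]])      # no trigger
triggered_input = np.array([[-2.0, 0.0, 1.0]])  # same point, trigger on
print(clf.predict(clean_input)[0])      # clean input classified normally
print(clf.predict(triggered_input)[0])  # trigger flips it to the target class
```

The poisoned model's clean-data accuracy is barely affected, which is what makes backdoors hard to detect: the misbehavior only appears on trigger-carrying inputs.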
Model-Stealing Attacks on Machine Learning (Duration: 6:07)
Machine Learning Poisoning (Duration: 4:59)
Backdoor Attacks on Machine Learning (Duration: 2:22)
Assignment - Backdoor Attacks on Machine Learning (Duration: 0:00)
Meet the author
Dr. Tsukerman graduated from Stanford University and UC Berkeley. In 2017, his machine-learning-based anti-ransomware product was named one of PC Magazine's Top 10 Ransomware Products. In 2018, he designed a machine-learning-based malware detection system for Palo Alto Networks' WildFire service (over 30,000 customers). In 2019, Dr. Tsukerman authored the Machine Learning for Cybersecurity Cookbook and launched the Infosec Skills Cybersecurity Data Science learning path.
You're in good company
"Comparing Infosec to other vendors is like comparing apples to oranges. My instructor was hands-down the best I’ve had."
"I knew Infosec could tell me what to expect on the exam and what topics to focus on most."
"I’ve taken five boot camps with Infosec and all my instructors have been great."