Certified Robustness against Adversarial Attacks in Image Classification

Fatemeh Sheikholeslami (Bosch Center for Artificial Intelligence)

Researchers have repeatedly shown that it is possible to craft adversarial attacks on deep classifiers, i.e., small input perturbations that change the predicted class label and considerably degrade performance. This fragility can significantly hinder the deployment of deep learning-based methods in safety-critical applications. To address this, adversarial attacks can be defended against either by building robust classifiers or by creating classifiers that can detect the presence of adversarial perturbations. I will talk about a couple of algorithms that we have developed at BCAI that provide certified defenses against different threat models.
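
The abstract does not name a specific attack, but as a rough illustration of the kind of perturbation it describes, below is a minimal sketch of the fast gradient sign method (FGSM, Goodfellow et al.) in PyTorch. The model, input, and perturbation budget eps are placeholder assumptions for illustration, not material from the talk.

# Illustrative FGSM sketch; not the method presented in the talk.
# Assumes a PyTorch image classifier `model` and an input batch `x`
# (pixel values in [0, 1]) with true labels `y`.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """Craft a small L-infinity perturbation that aims to flip the label."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by eps.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()

A certified defense, in contrast to a defense against any single attack such as this one, comes with a guarantee that no perturbation within a given budget (e.g., an L-infinity ball of radius eps) can change the classifier's prediction.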

Fatemeh Sheikholeslami received her PhD in Electrical Engineering from the University of Minnesota in 2019, under the supervision of Professor Georgios Giannakis. She is currently a Machine Learning Research Scientist at the Bosch Center for Artificial Intelligence, in the Safe and Robust Deep Learning group.

Start date
Friday, Nov. 19, 2021, 1:25 p.m.
End date
Friday, Nov. 19, 2021, 2:25 p.m.
Location
Zoom

Registration is required to access the Zoom webinar.