A theoretical look at adversarial examples: a perspective from high-dimensional geometry

Tom Goldstein
University of Maryland

Neural networks solve complex computer vision problems with human-like accuracy. However, it has recently been observed that neural nets are easily fooled by "adversarial examples," in which an attacker manipulates the network's output by making tiny, often imperceptible changes to its inputs. In this talk, I give a high-level overview of adversarial examples, and then discuss a newer type of attack called "data poisoning," in which a network is manipulated at train time rather than test time. Finally, I explore adversarial examples from a theoretical viewpoint and try to answer a fundamental question: "Are adversarial examples inevitable?"
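
For readers unfamiliar with the mechanics, the "tiny changes to inputs" mentioned above are typically found by following the gradient of the loss with respect to the input. The sketch below illustrates the standard fast gradient sign method (FGSM) baseline in PyTorch; the names model, x, label, and epsilon are illustrative placeholders, and this generic attack is shown only as background, not as the specific method analyzed in the talk.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, label, epsilon=0.03):
        """Fast gradient sign method: nudge each input pixel by +/- epsilon in the
        direction that increases the classification loss."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), label)
        loss.backward()
        # Step in the sign of the input gradient, then clamp to a valid pixel range.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Even though the perturbation is bounded by a small epsilon per pixel, such examples routinely flip the predicted class of otherwise accurate networks.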

