Generalization and adversarial robustness of Regularized Deep Neural Networks

Adam Oberman
McGill University
Mathematics and Statistics

Deep Neural Networks (DNNs) outperform traditional machine learning methods on a wide range of tasks. However, they lack performance guarantees, which limits the use of the technology in real-world and real-time applications where errors can be costly. In particular, DNNs are vulnerable to adversarial attacks: small perturbations of the input designed to cause the network to make errors.
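
As a concrete illustration (a sketch, not taken from the talk itself): one standard attack of this kind is the fast gradient sign method, which perturbs the input in the direction that locally increases the loss the most. The PyTorch fragment below assumes a trained model, a loss function, and an illustrative perturbation budget epsilon.

    import torch

    def fgsm_perturbation(model, loss_fn, x, y, epsilon=0.03):
        # Compute the gradient of the loss with respect to the input.
        x = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x), y)
        loss.backward()
        # Step by epsilon in the sign of the input gradient: the steepest
        # ascent direction under an L-infinity budget. The perturbation is
        # small, but chosen adversarially to increase the loss.
        return (x + epsilon * x.grad.sign()).detach()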

It remains an open problem (i) to design networks which are more robust to adversarial examples, and (ii) to better understand why DNNs work so well.

We will discuss recent work which uses Partial Differential Equations to regularize the loss function used in training. This regularization leads to a proof of convergence of DNNs in the large-data limit, as well as a proof of generalization. Applying the regularization in practice also yields state-of-the-art robustness to adversarial perturbations.
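
As a hedged sketch of how a regularizer of this general flavor can be implemented (assuming PyTorch; the penalty weight lam and the exact functional form are illustrative assumptions, not necessarily the precise regularizer from the work discussed): a penalty on the norm of the input gradient of the loss discourages the network output from changing rapidly under small input perturbations.

    import torch

    def regularized_loss(model, loss_fn, x, y, lam=0.1):
        # Standard training loss, with the input tracked for autograd.
        x = x.clone().detach().requires_grad_(True)
        base_loss = loss_fn(model(x), y)
        # Gradient of the loss with respect to the input; create_graph=True
        # keeps it differentiable so the penalty also trains the weights.
        (grad_x,) = torch.autograd.grad(base_loss, x, create_graph=True)
        # Penalize the mean squared norm of the input gradient, flattening
        # the loss surface under small input perturbations.
        penalty = grad_x.pow(2).flatten(1).sum(dim=1).mean()
        return base_loss + lam * penalty

In a training loop, regularized_loss would simply replace the usual loss before the backward pass and optimizer step.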
