Adversarial examples are created by adding a small perturbation to an input signal (e.g., an image) that maximally increases some classification loss. The study of such examples has received tremendous attention from the machine learning community in the past few years. Although the original motivation was to reveal the brittleness of neural networks, adversarial examples are also believed to have potential uses for classic machine learning needs such as generalization. Generated adversarial examples may be difficult to classify, and may not even appear visually to lie in the distribution of valid dataset images. However, exposing a network to them during training may effectively cover the space of natural transformations of an image, and may improve generalization performance even when the system is deployed in non-adversarial environments.
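As a minimal sketch of the perturbation described above, the fast-gradient-sign method takes a single step of size epsilon in the direction that increases the loss to first order. The example below is an illustration on a toy logistic-regression classifier with a hand-derived gradient, not the specific attack or model used in the talk:

```python
import numpy as np

def fgsm_perturbation(x, w, b, y, eps):
    """One fast-gradient-sign step for a logistic-regression classifier.

    Loss (with label y in {-1, +1}):  L(x) = -log sigmoid(y * (w.x + b)).
    Its input gradient is  dL/dx = -y * sigmoid(-y * (w.x + b)) * w,
    and the FGSM perturbation is  eps * sign(dL/dx).
    """
    margin = y * (np.dot(w, x) + b)
    grad = -y * (1.0 / (1.0 + np.exp(margin))) * w  # sigmoid(-margin) * (-y) * w
    return eps * np.sign(grad)

# Toy usage (all values are illustrative, not from the talk):
x = np.array([0.2, -0.5, 1.0])   # input
w = np.array([1.0, -2.0, 0.5])   # classifier weights
b = 0.1
y = 1                            # true label
delta = fgsm_perturbation(x, w, b, y, eps=0.05)
x_adv = x + delta                # adversarial example: loss(x_adv) >= loss(x)
```

The resulting `delta` typically looks like the unstructured sign-pattern noise the second paragraph describes; in image space it has no coherent visual structure.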
Despite these potential merits, little success has been achieved in such generalization use cases. We conjecture that a major obstacle is that adversarial perturbations lack coherent visual structure; they often look like random dot patterns with random colors. To benefit from them for generalization, we would like them to exhibit the properties of natural transformations. In this talk we propose differential operators as one way to enforce structure on adversarial perturbations. Specifically, we show that these operators can be used to construct low-dimensional subspaces that preserve certain photometric and geometric properties of the images. Projecting adversarial perturbations onto these subspaces yields refined perturbations that are much more structured. We show empirically, across multiple image datasets, that deep convolutional networks trained with such structured adversarial examples achieve better generalization than networks trained with traditional unstructured ones.
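The projection idea can be sketched in one dimension. Below, the subspace is spanned by the smoothest eigenvectors of the discrete Laplacian (a second-difference operator); projecting a noisy perturbation onto it keeps only slowly varying components. This is an illustrative stand-in for the construction in the talk, whose specific operators and subspaces are not reproduced here:

```python
import numpy as np

def smooth_subspace(n, k):
    """Orthonormal basis of the k smoothest eigenvectors of the
    discrete 1-D Laplacian (second-difference operator).

    Low-eigenvalue eigenvectors of L vary slowly, so they span a
    k-dimensional subspace of 'structured' signals.
    """
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return vecs[:, :k]               # n x k orthonormal basis

def project(delta, basis):
    """Orthogonal projection of delta onto span(basis)."""
    return basis @ (basis.T @ delta)

rng = np.random.default_rng(0)
delta = rng.standard_normal(64)      # unstructured, noise-like perturbation
B = smooth_subspace(64, 8)
delta_s = project(delta, B)          # structured (smooth) refinement
```

The choice of operator determines which structure survives the projection; a Laplacian favors smoothness, while other differential operators would preserve other photometric or geometric properties.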