Taming Deep Convolutional Networks

Hanie Sedghi
Google AI

We characterize the singular values of the linear transformation associated with a standard 2D multi-channel convolutional layer, enabling their efficient computation. This characterization also leads to an algorithm for projecting a convolutional layer onto an operator-norm ball. We show that this is an effective regularizer; for example, it improves the test error of a deep residual network using batch normalization on CIFAR-10 from 6.2% to 5.3%. I will also cover follow-up work and how this tool has helped solve various ML problems involving deep convolutional networks.
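As a rough illustration of the characterization the abstract describes, here is a minimal NumPy sketch: for a convolutional layer with circular padding acting on n-by-n inputs, the singular values of the layer's linear map can be read off from the 2D FFT of the kernel, taking the SVD of the small per-frequency coefficient matrices. The kernel shape `(k, k, c_in, c_out)` and the function name are illustrative assumptions.

```python
import numpy as np

def conv_singular_values(kernel, n):
    """Singular values of the linear map of a conv layer with circular padding.

    kernel: array of shape (k, k, c_in, c_out) (an assumed layout).
    n: spatial size of the (square) input.
    Returns an array of shape (n, n, min(c_in, c_out)).
    """
    # 2D FFT of the kernel, zero-padded to the input size, for each
    # (input channel, output channel) pair.
    transform = np.fft.fft2(kernel, (n, n), axes=(0, 1))
    # The layer's singular values are the union, over the n*n frequencies,
    # of the singular values of each c_in x c_out coefficient matrix.
    return np.linalg.svd(transform, compute_uv=False)
```

The cost is dominated by one FFT and n*n small SVDs, rather than an SVD of the full n^2*c_in-by-n^2*c_out matrix; the largest returned value is the layer's operator norm, so clipping these per-frequency singular values and inverting the FFT gives the projection onto an operator-norm ball mentioned above.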


Workshop IV: Efficient Tensor Representations for Learning and Computational Complexity