Statistical Mechanics of Deep Manifolds: Mean Field Geometry in High Dimension

Haim Sompolinsky
The Hebrew University of Jerusalem

Recent advances in systems neuroscience and AI have generated considerable interest in, and rich research on, high-dimensional neural representations. An overriding challenge is to identify computationally relevant statistical measures of these representations that go beyond traditional dimensionality-reduction methods. In my talk, I will discuss recent work with Dan Lee, SueYeon Chung, and Uri Cohen, in which we developed new geometric measures of high-dimensional data based on a mean field theory of linear classification of manifolds. These manifolds stand for the set of neural representations corresponding to the same category or object. Manifold dimensions and radii are shown to predict the classification capacity, a quantitative measure of a representation's ability to support object classification. We apply our theory to characterize the changes in manifold geometry as signals propagate across the layers of deep convolutional neural networks (DCNNs). Recordings from cortical neurons responding to object and face stimuli have been analyzed similarly, allowing us to test the correspondence between DCNNs and the visual hierarchy in primate cortex. I will also discuss the relation between manifold geometry and the generalization ability of neural representations in DCNNs.
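As background for the notion of classification capacity, the sketch below numerically estimates the capacity of a linear classifier on random point "manifolds" (Cover's classic setting, i.e., the zero-radius limit of the object manifolds above, where capacity is known to be α = P/N = 2). This is an illustrative exercise, not code from the talk; the function names and parameters are my own.

```python
import numpy as np
from scipy.optimize import linprog

def is_separable(X, y):
    """Check linear separability (hyperplane through the origin) via an
    LP feasibility problem: find w with y_i * (x_i . w) >= 1 for all i."""
    P, N = X.shape
    A_ub = -(y[:, None] * X)   # encodes -y_i (x_i . w) <= -1
    b_ub = -np.ones(P)
    res = linprog(np.zeros(N), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * N, method="highs")
    return res.success

def separable_fraction(alpha, N=15, trials=50, seed=0):
    """Fraction of random dichotomies of P = alpha * N Gaussian points
    in N dimensions that are linearly separable."""
    rng = np.random.default_rng(seed)
    P = int(alpha * N)
    hits = 0
    for _ in range(trials):
        X = rng.standard_normal((P, N))
        y = rng.choice([-1.0, 1.0], size=P)
        hits += is_separable(X, y)
    return hits / trials

# Below capacity (alpha = 1) separation is essentially certain;
# well above capacity (alpha = 4) it essentially never occurs.
print(separable_fraction(1.0), separable_fraction(4.0))
```

For manifolds of nonzero radius R and dimension D, the mean field theory discussed in the talk predicts how this capacity shrinks below 2 as R and D grow; the point-cloud experiment above recovers only the R = 0 baseline.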

Workshop IV: Using Physical Insights for Machine Learning