Recurrent neural networks (RNNs) are powerful tools for explaining how attractors may emerge from noisy, high-dimensional dynamics. We study here how to learn the ~ N^2 pairwise interactions in an RNN with N neurons so as to embed L manifolds of dimension D ≪ N. We show that the capacity, i.e. the maximal ratio L/N, decreases as |log ε|^-D, where ε is the error on the position encoded by the neural activity along each manifold. Hence, RNNs are flexible memory devices capable of storing a large number of manifolds at high spatial resolution. Our results rely on a combination of analytical tools from statistical mechanics and random matrix theory, extending Gardner's classical theory of learning to the case of patterns with strong spatial correlations.
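A minimal sketch of the claimed capacity scaling, L/N ~ |log ε|^-D, assuming a prefactor of 1 (the true prefactor is not given in the abstract and the function name is hypothetical):

```python
import math

def capacity_scaling(eps, D):
    """Claimed scaling of the capacity L/N with encoding error eps
    and manifold dimension D: |log eps|^(-D).
    The prefactor is unknown and set to 1 here for illustration."""
    return abs(math.log(eps)) ** (-D)

# Demanding a finer spatial resolution (smaller eps) or embedding
# higher-dimensional manifolds (larger D) both reduce the number of
# storable manifolds per neuron.
for D in (1, 2, 3):
    for eps in (1e-1, 1e-2, 1e-3):
        print(f"D={D}, eps={eps:.0e}: L/N ~ {capacity_scaling(eps, D):.4f}")
```

Note the logarithmic dependence on ε: halving the encoding error only mildly reduces the capacity, which is what makes high-resolution storage feasible.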
Back to Workshop IV: Using Physical Insights for Machine Learning