On Computational Hardness with Graph Neural Networks

Joan Bruna
New York University

Deep learning, thanks mostly to convolutional architectures, has transformed computer vision and speech recognition. The ability of these architectures to encode geometric stability priors while offering enough expressive power is at the core of their success. In such settings, geometric stability is expressed in terms of local deformations, and it is enforced by localized convolutional operators that separate the estimation across scales.
Many inference problems on graphs, such as community detection or graph matching, share similar notions of geometric stability, expressed in terms of local metric deformations of the graph. In practice, such stability can be leveraged with message-passing algorithms such as belief propagation, or with spectral approaches.
In this talk, we will describe techniques that exploit geometric stability in general geometries with appropriate graph neural network architectures. We will show that these techniques can all be framed in terms of local graph generators such as the graph Laplacian. We will present some stability certificates, as well as applications to computer graphics, particle physics and graph estimation problems. In particular, we will describe how graph neural networks can be used to reach statistical detection thresholds in community detection, and attack hard combinatorial optimization problems, such as the Quadratic Assignment Problem.
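To make the notion of "layers built from local graph generators" concrete, the following is a minimal NumPy sketch of one such layer: node features are propagated by a small family of generators (identity, adjacency, degree) and mixed with learned linear maps. The generator family, function names, and dimensions are illustrative assumptions, not the speaker's implementation.

import numpy as np

def gnn_layer(A, X, weights):
    """One GNN layer built from local graph generators (illustrative sketch).

    A       : (n, n) adjacency matrix of the graph
    X       : (n, d_in) node feature matrix
    weights : list of (d_in, d_out) matrices, one per generator
    """
    n = A.shape[0]
    D = np.diag(A.sum(axis=1))          # degree operator
    generators = [np.eye(n), A, D]      # assumed family {I, A, D}
    assert len(weights) == len(generators)
    # sum_k G_k @ X @ W_k, followed by a ReLU nonlinearity
    Z = sum(G @ X @ W for G, W in zip(generators, weights))
    return np.maximum(Z, 0.0)

# toy usage: a 4-node cycle graph with random features
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = rng.standard_normal((4, 3))
Ws = [0.1 * rng.standard_normal((3, 8)) for _ in range(3)]
H = gnn_layer(A, X, Ws)   # (4, 8) node embeddings after one layer

Stacking such layers and learning the mixing weights end-to-end is the kind of architecture the abstract refers to when framing GNNs in terms of local graph generators such as the graph Laplacian.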

