Workshop IV: Efficient Tensor Representations for Learning and Computational Complexity

May 17 - 21, 2021

Overview

Virtual Workshop: In response to COVID-19, it is likely that all participants will attend this workshop virtually via Zoom. Workshop registrants will receive the Zoom link a few days prior to the workshop, along with instructions on how to participate. Videos of the recorded sessions will be made available on the IPAM website.

Tensors are well-suited to capture higher-order correlations or complex relations in data. Unfortunately, the number of parameters describing a tensor scales exponentially with its order. Naive tensor estimation methods would thus require an impractical number of samples. To counter this problem, a number of efficient tensor representations have been introduced. These include low-rank decompositions, which capture latent structure, and tensor networks tailored to quantum many-body systems with local interactions. The first emphasis of this workshop will be on the theory of recovering efficient tensor representations from empirical data, as studied e.g. in the context of low-rank tensor completion or matrix-product state learning. We will focus on both algorithmic and statistical aspects.
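The parameter savings described above can be made concrete with a small sketch (illustrative numbers and variable names are my own, not from the workshop description): a dense order-d tensor with n entries per mode has n^d parameters, while a rank-r CP (canonical polyadic) decomposition needs only d·n·r.

```python
import numpy as np

# Illustrative sizes (assumptions, not from the workshop text):
n, d, r = 10, 5, 3
dense_params = n ** d   # entries of a dense order-5 tensor: 100,000
cp_params = d * n * r   # parameters of a rank-3 CP decomposition: 150

# Build a random rank-r order-3 CP tensor T = sum_k a_k (x) b_k (x) c_k
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((n, r)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A, B, C)  # shape (n, n, n), CP rank <= r
```

Recovering the factors A, B, C from (possibly incomplete or noisy) observations of T is exactly the kind of estimation problem the first part of the workshop addresses.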

In addition to describing data, tensors can also represent computational problems, such as the problem of multiplying large matrices or the evaluation of permanents. In this context, low-rank decompositions correspond to efficient algorithms, while the non-existence of such decompositions amounts to lower bounds. The second emphasis of the workshop will thus be on applications of efficient tensor representations to theoretical computer science, particularly computational complexity theory. A closely related area that will also be covered is the resource theory of tensors in quantum information theory.
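A classic instance of "low rank = fast algorithm" is Strassen's method: the tensor of 2x2 matrix multiplication has rank 7, so two 2x2 matrices can be multiplied with 7 scalar multiplications instead of 8. A minimal sketch of this decomposition (standard material, not specific to the workshop):

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply 2x2 matrices with 7 multiplications (Strassen),
    i.e. a rank-7 decomposition of the matrix multiplication tensor."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
C = strassen_2x2(A, B)
```

Applied recursively, this rank-7 decomposition multiplies n x n matrices in O(n^2.81) time; proving that no smaller decomposition exists is exactly the kind of lower-bound question the second part of the workshop concerns.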

This workshop will include a poster session; a request for posters will be sent to registered participants in advance of the workshop.

Program Flyer PDF

Organizing Committee

Anima Anandkumar (California Institute of Technology)
Fernando Brandao (California Institute of Technology)
Rong Ge (Duke University)
David Gross (Universität zu Köln)
Michael Walter (Universiteit van Amsterdam)
Ming Yuan (Columbia University)