What is a tensor? Part 1

Lek-Heng Lim
University of Chicago
Statistics

We discuss the three best-known definitions of a tensor: as an object satisfying certain transformation rules, as a multilinear map, and as an element of a tensor product of vector spaces. Each definition captures an important idea: equivariance, multilinearity, and separability, respectively. We discuss their roles in computations and applications. The notion of equivariance has become widely known through the success of equivariant neural networks, notably AlphaFold 2's unprecedented results in protein structure prediction. We will see that equivariance originated in the tensor transformation rules and has been used implicitly throughout algorithms in numerical linear algebra and optimization. The notion of separability is also ubiquitous: separation of variables is an indispensable ingredient in the fast multipole method, Grover's quantum search algorithm, the Hartree-Fock approximation, and Strassen's matrix multiplication; when restricted to symmetric and alternating tensors, it manifests as polynomial Mercer kernels and Slater determinants, respectively. Time permitting, we will discuss how the Cooley-Tukey FFT, the Walsh transform, the wavelet packet transform, Yates's method in factorial designs, etc., may all be cast as the same algorithm performing a tensor contraction, and that every tensor network is the contraction of a rank-1 tensor.
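The first idea, equivariance via transformation rules, fits in a few lines. The following is a minimal NumPy sketch, not taken from the presentation: the coordinate array of a (1,1)-tensor, i.e. a linear map, transforms as A -> X^{-1} A X under a change of basis X, and quantities defined equivariantly from it, such as eigenvalues, come out the same in every coordinate system.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    A = rng.standard_normal((n, n))   # coordinates of a (1,1)-tensor in one basis
    X = rng.standard_normal((n, n))   # change-of-basis matrix, invertible with probability 1

    A_new = np.linalg.inv(X) @ A @ X  # the transformation rule for a (1,1)-tensor

    # Eigenvalues are basis-independent invariants: identical (up to ordering
    # and rounding) in both coordinate systems.
    print(np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                      np.sort_complex(np.linalg.eigvals(A_new))))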
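Separability is equally easy to see in miniature. In this illustrative sketch (again not the talk's code), contracting a rank-1 tensor T = u ⊗ v ⊗ w against three vectors never requires forming T: separation of variables collapses the O(n^3) contraction into three O(n) inner products.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 50
    u, v, w = (rng.standard_normal(n) for _ in range(3))
    x, y, z = (rng.standard_normal(n) for _ in range(3))

    # Naive: materialize the n x n x n rank-1 tensor, then contract all three modes.
    T = np.einsum('i,j,k->ijk', u, v, w)          # O(n^3) storage
    naive = np.einsum('ijk,i,j,k->', T, x, y, z)  # O(n^3) work

    # Separated: the variables decouple into three O(n) inner products.
    fast = (u @ x) * (v @ y) * (w @ z)

    print(np.allclose(naive, fast))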
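Finally, a sketch of the "one algorithm" viewpoint, using the Walsh-Hadamard transform as the cleanest instance (the Cooley-Tukey FFT and Yates's method follow the same mode-wise contraction pattern, with twiddle factors or different small kernels; that broader claim is the abstract's, not demonstrated here): reshape a length-2^d vector into a 2 x 2 x ... x 2 tensor and contract every mode with the 2 x 2 Hadamard matrix H.

    import numpy as np
    from functools import reduce

    H = np.array([[1.0, 1.0], [1.0, -1.0]])      # 2 x 2 Hadamard kernel
    d = 8
    x = np.random.default_rng(2).standard_normal(2 ** d)

    # Mode-wise contraction: apply H along each of the d binary modes of x.
    T = x.reshape((2,) * d)
    for _ in range(d):
        T = np.tensordot(H, T, axes=([1], [0]))  # contract the leading mode with H
        T = np.moveaxis(T, 0, -1)                # rotate it to the back; after d steps the order is restored
    fast = T.reshape(-1)

    # Reference: the dense 2^d x 2^d Kronecker product H ⊗ H ⊗ ... ⊗ H.
    W = reduce(np.kron, [H] * d)
    print(np.allclose(fast, W @ x))

The contraction loop costs O(d 2^d) operations versus O(4^d) for the dense matrix-vector product, which is precisely the fast-transform speedup shared by this family of algorithms.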

