Mesoscale reconstruction of images and networks using tensor decomposition

Hanbaek Lyu
University of Wisconsin-Madison

We provide a unified framework for reconstructing images and networks from low-rank mesoscale (intermediate-scale) structures. A measure of global reconstruction error is shown to be upper bounded by the expected reconstruction error at the mesoscale divided by the length of the mesoscale. Once a linear model for how the mesoscale patches are generated is imposed, minimizing the resulting upper bound leads to a natural stochastic optimization problem related to the classical dictionary learning problem in the signal processing literature. For reconstructing multi-modal datasets, we propose online CP-dictionary learning to learn an effective basis for mesoscale reconstruction, which uses CP tensor decomposition for the basic building blocks. This reduces the computational cost of representing complicated inter-relations between the different modes, so that it scales linearly in the number of modes. Convergence guarantees for online CP-dictionary learning algorithms will also be discussed.
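As a rough illustration of the linear scaling mentioned above (a minimal sketch, not the speaker's implementation; the patch dimensions, rank, and function names are hypothetical), the following Python snippet reconstructs a 3-mode mesoscale patch from CP factor matrices. Because one factor matrix is stored per mode, the number of stored parameters grows linearly with the number of modes, rather than with the full number of tensor entries.

```python
import numpy as np

def cp_reconstruct(factors):
    """Reconstruct a tensor from CP factor matrices.

    factors: list of arrays, one per mode, each of shape (dim_k, rank).
    Returns the rank-R tensor sum_r a_r (outer) b_r (outer) c_r ...
    """
    rank = factors[0].shape[1]
    dims = tuple(f.shape[0] for f in factors)
    tensor = np.zeros(dims)
    for r in range(rank):
        # Outer product of the r-th column of each factor matrix
        component = factors[0][:, r]
        for f in factors[1:]:
            component = np.multiply.outer(component, f[:, r])
        tensor += component
    return tensor

# Toy example: an 8 x 8 x 3 patch (e.g., height x width x channel) of CP rank 2.
rng = np.random.default_rng(0)
factors = [rng.standard_normal((d, 2)) for d in (8, 8, 3)]
patch = cp_reconstruct(factors)
print(patch.shape)  # (8, 8, 3)
print(sum(f.size for f in factors), "stored parameters vs", patch.size, "tensor entries")
```

Here the CP representation stores 8·2 + 8·2 + 3·2 = 38 parameters, versus 192 entries for the dense patch; adding a mode adds only one more factor matrix.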

