Learning Manifold-structured Data via Geometry Inspired DNNs

Rongjie Lai
Rensselaer Polytechnic Institute

Deep neural networks have achieved great success on many problems in science and engineering. In this talk, I will discuss our recent efforts on learning non-trivial geometric information hidden in data via DNNs. In the first part, I will discuss our work advocating the use of a multi-chart latent space for better data representation. Inspired by differential geometry, we propose a Chart Auto-Encoder (CAE) and prove a universal manifold approximation theorem on its representation capability. CAE admits desirable manifold properties that auto-encoders with a flat latent space fail to obey, most notably preserving the proximity of data. In the second part, I will discuss our work on a new way of defining convolution on manifolds via parallel transport. This geometric way of defining parallel transport convolution (PTC) provides a natural combination of modeling and learning on manifolds. PTC allows for the construction of compactly supported filters and is also robust to manifold deformations. I will demonstrate its applications to shape analysis and point cloud processing using PTC-nets. This talk is based on a series of joint works with a group of my students and collaborators.
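To illustrate why a multi-chart latent space can help, here is a minimal sketch (not the CAE model itself, and all function names are hypothetical): the unit circle S^1 cannot be covered continuously by a single flat chart, but two overlapping angle charts suffice, so each point is encoded as a pair (chart index, local coordinate).

```python
import numpy as np

# Toy multi-chart "latent space" for the unit circle S^1, in the spirit of
# a multi-chart representation: no single flat chart covers S^1 continuously,
# but two overlapping angle charts do. Illustrative sketch only.

def encode(p):
    """Map a point on S^1 to (chart_index, local_coordinate)."""
    x, y = p
    theta = np.arctan2(y, x)             # angle in (-pi, pi]
    if -np.pi / 2 < theta <= np.pi / 2:  # chart 0: right half circle
        return 0, theta
    return 1, np.mod(theta, 2 * np.pi)   # chart 1: left half, in (pi/2, 3pi/2)

def decode(chart, t):
    """Inverse map: local coordinate back to a point on S^1."""
    return np.array([np.cos(t), np.sin(t)])

# Round-trip a few points on the circle: nearby points stay nearby
# inside each chart, which a single flat latent coordinate cannot ensure
# across the angle cut.
for ang in [0.1, 1.0, 2.5, -2.5, 3.0]:
    p = np.array([np.cos(ang), np.sin(ang)])
    chart, t = encode(p)
    assert np.allclose(decode(chart, t), p)
```

A learned CAE replaces these closed-form chart maps with trained encoder/decoder networks per chart, together with a mechanism for selecting charts; the sketch only conveys the geometric idea.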
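As a toy illustration of a compactly supported filter on a curved domain (an assumption-laden sketch, not the PTC construction: on S^1 parallel transport is trivial, so only the compact-support aspect is shown), one can filter a signal on a discretized circle over geodesic, i.e. arc-length, neighborhoods:

```python
import numpy as np

# Signal on a discretized circle (a 1-D closed manifold).
n = 200
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
signal = np.sin(3 * theta) + 0.1 * np.cos(7 * theta)

# Compactly supported filter: only 2*radius + 1 geodesic neighbors
# contribute at each point.
radius = 5
weights = np.exp(-np.linspace(-2, 2, 2 * radius + 1) ** 2)
weights /= weights.sum()  # normalize so the filter preserves constants

# Convolve with periodic (circle) indexing for the boundary.
out = np.array([
    np.dot(weights, signal[np.arange(i - radius, i + radius + 1) % n])
    for i in range(n)
])
assert out.shape == (n,)
```

On a general surface, PTC would additionally parallel-transport the filter along geodesics so that its orientation is consistent across the manifold; that transport step is what this flat example omits.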


Back to New Trends in Scientific Computing