White Paper: Geometry and Learning from Data in 3D and Beyond

Posted on 6/25/19 in Reports and White Papers

This white paper is an outcome of IPAM’s spring 2019 long program, Geometry and Learning from Data in 3D and Beyond.

As technology advances, there is an ever-increasing demand to acquire, analyze, and generate 3D data. These necessarily large data sets must be amenable to efficient processing, analysis, and implementation in a variety of settings such as multi-dimensional modeling, high-resolution visualization, medical imaging, and the entertainment industry. Beyond 3D shapes, understanding and learning high-dimensional geometric structures is an active area of research. Given the goals of this program, the Core Participants identified four areas of particular interest: (1) 3D shape analysis, (2) graphs and data, (3) optimal transport and Wasserstein information geometry, and (4) practical matters.

3D shape modeling and analysis is critical in efforts to digitize and replicate the world without losing the core geometric information. Applications such as 3D content generation, shape modeling, animation, and manufacturing necessitate novel shape-analysis approaches. Fortunately, recent advances in deep-learning architectures (e.g., Convolutional Neural Networks) have facilitated solutions to many difficult problems in the 2D domain. In this program, a primary motivation was to adapt these advances to the 3D domain and thereby bridge the gap between traditional 3D shape analysis and deep learning methods. An important question emerged: In addition to using machine learning to understand geometry, how can geometry be used to understand machine learning (ML)? We believe that formulations around strong shape priors and shape properties will play a crucial role in understanding complex neural networks. Moreover, it will be essential to develop holistic shape-understanding systems where the analysis goes beyond 3D geometry to include texture, color, material, and semantic information.
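
One structural difference between the 2D and 3D settings is that 3D data often arrives as an unordered point set rather than a regular pixel grid, so a network must be invariant to point ordering. The following is a minimal sketch (not from the report; all sizes and names are hypothetical) of a PointNet-style encoder that achieves this by applying one shared map per point and aggregating with a symmetric max-pool:

```python
import numpy as np

# Minimal sketch (illustrative, not the program's method): a permutation-
# invariant encoder for a 3D point cloud. Every point passes through the
# same linear map + ReLU, and features are aggregated with a symmetric
# max over points, so the descriptor is independent of point ordering.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 8))   # shared per-point weights (hypothetical size)
b = rng.standard_normal(8)

def encode(points):
    """points: (N, 3) array -> (8,) order-invariant shape descriptor."""
    feats = np.maximum(points @ W + b, 0.0)  # shared ReLU features per point
    return feats.max(axis=0)                 # symmetric aggregation over points

cloud = rng.standard_normal((100, 3))
shuffled = cloud[rng.permutation(100)]
print(np.allclose(encode(cloud), encode(shuffled)))  # True: order-invariant
```

Because the max is taken over points, shuffling the input rows cannot change the output; this is one simple way a geometric property (invariance of the shape under relabeling of its samples) is built into an architecture.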

Graph-based analysis methods have been increasingly used for large-scale pattern recognition. Graph neural networks can learn representations of nodes, edges, subgraphs, whole graphs, and spaces of graphs. The generality of these networks allows neural graph representation learning to be applied to many domains where standard techniques fail. This is especially true for problems that involve heterogeneous data. Graph neural networks explicitly learn lower-dimensional graph representations at lower computational cost than classical dimensionality-reduction algorithms. Properly defining convolution-like graph operations is fundamental to formulating these networks and is an active area of research. Furthermore, incorporation of topological data-analysis tools (e.g., the study of persistent homology) may geometrically motivate neural-network architectures.
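
To make the convolution-like graph operation concrete, here is a minimal sketch (an assumed illustration, not taken from the report) of one widely used form: each node's new features are the average of its neighbors' features (including its own, via self-loops) passed through a learnable linear map and a nonlinearity:

```python
import numpy as np

# Minimal sketch (assumed): one graph-convolution layer of the form
# H' = ReLU(D^-1 (A + I) H W), where A is the adjacency matrix, I adds
# self-loops, D is the degree matrix, H holds node features, and W is
# a learnable weight matrix (here random, for illustration only).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # path graph on 4 nodes
A_hat = A + np.eye(4)                       # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))    # inverse degree matrix

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3))             # initial node features
W = rng.standard_normal((3, 2))             # weights: 3-dim -> 2-dim features

H_next = np.maximum(D_inv @ A_hat @ H @ W, 0.0)  # averaged neighbor features
print(H_next.shape)  # (4, 2): a lower-dimensional representation per node
```

Stacking such layers lets information propagate along edges, which is what allows the resulting node representations to reflect graph structure rather than treating nodes independently.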

Optimal transport provides a geometric framework for the study of probability distributions by extending the geometry of the sample space. It defines similarity measures between (high-dimensional) distributions, providing geometric tools for navigating the space of probability distributions. In contrast to information-theoretic divergences, which do not consider the geometry of the sample space, optimal transport takes that geometry into account inherently. In particular, Wasserstein metrics extend distance functions on sample spaces to distances between distributions. Optimal transport is amenable to ML because it can be used to minimize the distance between a probabilistic model and a data population; this connection should be further investigated. Moreover, potential associations between the geometric structures from optimal transport and other ML methods, such as kernel methods, should be explored.
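
A small worked example (assumed for illustration, not from the report) makes the geometric sensitivity concrete. In one dimension with equal-size samples, the Wasserstein-1 distance between two empirical distributions reduces to sorting both samples and averaging the pointwise gaps:

```python
import numpy as np

# Minimal sketch (assumed): Wasserstein-1 distance between two 1-D empirical
# distributions with equal sample sizes. In 1-D this equals the L1 distance
# between quantile functions, which reduces to sorting and averaging.
def wasserstein1(x, y):
    x, y = np.sort(x), np.sort(y)
    return np.abs(x - y).mean()

a = np.zeros(4)        # all mass at 0
b = np.full(4, 2.0)    # the same mass shifted to 2

print(wasserstein1(a, b))  # 2.0: the distance scales with the shift
```

Shifting `b` farther away increases the distance proportionally, reflecting the geometry of the sample space; a KL-type divergence between these disjoint-support samples would be infinite no matter how close or far apart they sit.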

Finally, the group advocated rigorously deriving the mathematical formalisms required to explain, describe, and predict the performance of ML models. For example, robust mathematical analyses would help to characterize aspects of the behavior of ML (e.g., why transfer learning works, why dropout is so effective in reducing overfitting, etc.). Moreover, because important decisions will increasingly be made with the assistance of AI, ML models must be verified and validated to build confidence in their estimates and predictions (e.g., machine-learning-assisted medical diagnoses and treatments). Perhaps a more thorough mathematical understanding of ML will help address important societal issues identified by the group. As AI systems become increasingly prevalent in everyday life and infrastructure, the social impacts of adversarial attacks, loss of privacy and anonymity, and intentional manipulation must be kept at the forefront. Risks include abuse of natural language processors (fake news), subversion of security systems (altered facial recognition), malicious and adversarial attacks on infrastructure (vulnerability of the power grid), and perhaps most importantly, job displacement. It is incumbent upon experts in our field to inform policy makers and decision makers so that proper regulations can be developed and laws enacted to protect individuals, societies, and economies.

Overall, participants in this program thoughtfully identified and outlined some of the fields and tools with the potential to yield significant impacts at the intersection of mathematics and ML. These fields include differential geometry, topology, probability theory, information geometry, optimal transport, partial differential equations, harmonic analysis, graph theory, combinatorics, functional analysis, linear algebra, and optimization, among others.

Read the full report.