Workshop IV: Multi-Modal Imaging with Deep Learning and Modeling

Part of the Long Program Computational Microscopy
November 28 - December 2, 2022

Overview

Multimodal microscopy that combines complementary nano- and atomic-scale imaging techniques is critical for extracting comprehensive chemical, structural, magnetic, and functional information. Experiments on correlative electron, X-ray, optical, and scanning probe microscopes have generated very large data sets, and the scientific community urgently needs more efficient methods for processing them. Methodologies such as compressed sensing and deep learning, developed for natural images, come without performance guarantees for microscopy problems. Furthermore, when multimodal data is collected, each modality is usually processed separately, and the combined results are then checked for consistency. Simultaneous processing has the advantage of requiring less data to extract the same amount of information. To achieve this, however, one must have consistent imaging modalities for each detector and stable mathematical learning procedures that fuse the data in reliable and reproducible ways.

The goal of the workshop is to bridge the gap between mathematicians, physicists, materials scientists, and engineers in order to advance data acquisition, modeling, simulation, and analysis in multimodal microscopy. Engaging all of these subject areas will be instrumental in building foundations for interdisciplinary research. This workshop will provide the opportunity to present and exchange ideas, share data, and introduce new mathematical techniques needed in this cross-disciplinary field.

Organizing Committee

Peter Binev (University of South Carolina)
Sergei Kalinin (Oak Ridge National Laboratory)
Gitta Kutyniok (Technische Universität Berlin)
Deanna Needell (University of California, Los Angeles (UCLA))
Paul Weiss (University of California, Los Angeles (UCLA))