Recently, impressive text-to-image generators, such as DALL-E 2 and Stable Diffusion, have drawn much attention from both the scientific community and the wider public. These models can synthesize detailed images from arbitrary natural language prompts. One of the key innovations that made these techniques possible is diffusion models, a new class of powerful generative models inspired by non-equilibrium thermodynamics. Diffusion models generate high-quality images from pure noise by reversing a forward diffusion process that slowly destroys structure in data. In the first part of this tutorial, we review the fundamentals of training and sampling from diffusion models and point out active areas of research. Then, we highlight recent results on applying diffusion models to inverse problems in imaging, including deblurring, inpainting, MRI reconstruction, and phase retrieval.
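The forward process mentioned above can be sketched in a few lines. This is a minimal, illustrative NumPy example of the DDPM-style forward (noising) process, assuming a linear beta schedule; the variable names and schedule parameters are assumptions for illustration, not part of the tutorial itself:

```python
import numpy as np

# Forward diffusion sketch (DDPM-style), assuming a linear beta schedule.
T = 1000
betas = np.linspace(1e-4, 0.02, T)   # per-step noise variances
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)       # cumulative signal retention at each step

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0): progressively destroy structure in x0."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = np.ones((8, 8))                 # toy stand-in for an image
x_early = q_sample(x0, 10, rng)      # still mostly signal
x_late = q_sample(x0, T - 1, rng)    # nearly pure Gaussian noise
```

Generation then runs this process in reverse: a learned network predicts the noise at each step so that structure is gradually restored, starting from pure Gaussian noise.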