Computational Photography and Intelligent Cameras

February 4–6, 2015

Overview

Until recently, digital photography mostly replaced traditional film with a silicon sensor, without substantially changing the interface or capabilities of the still camera. As the computational power of cameras, cell phones, and other mobile or embedded systems has increased, however, computation can now be coupled much more tightly with the act of photography. Computational photography is a new area of computer graphics and vision that seeks to create new types of photographs and to allow photographers to acquire better images, or images they could never observe before. This involves research into new software algorithms for fusing data from multiple images, video streams, or other types of sensors, as well as into new hardware architectures for capturing the data these algorithms need. Applications of computational photography include compressed-sensing cameras, extended depth of field and refocusing, high-dynamic-range imaging, invertible motion blurs, and plenoptic cameras; mathematics is an important tool for inventing and optimizing these new cameras. This workshop will serve as a gathering place for all those interested in theories, algorithms, methodologies, hardware designs, and experimental studies in computational photography.
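As a toy illustration of one application mentioned above, high dynamic range imaging fuses several frames taken at different exposure times: pixels that are well exposed in one frame compensate for pixels that are clipped or noisy in another. The sketch below (our own illustration, not material from the workshop; it assumes an idealized linear sensor and uses a simple "hat" weighting to downweight under- and over-exposed pixels) merges such a stack into a single radiance estimate:

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge differently exposed linear images into one radiance map.

    Assumes each image holds values in [0, 1] from a linear sensor.
    Each pixel's radiance is estimated as a weighted average of
    image / exposure_time, where the hat weight peaks at mid-gray and
    vanishes for clipped (0 or 1) pixels.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight: 1 at 0.5, 0 at 0 and 1
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-8)  # guard against all-clipped pixels

# Synthetic example: a 2x2 "scene" whose brightest patch saturates
# the long exposure, while the darkest patch is weak in the short one.
radiance = np.array([[0.2, 1.5],
                     [0.05, 0.8]])
times = [0.5, 2.0]
images = [np.clip(radiance * t, 0.0, 1.0) for t in times]
hdr = merge_hdr(images, times)  # recovers radiance, including the 1.5 patch
```

In this idealized setting the merged map matches the true radiance even where one exposure clipped, because the clipped pixels receive zero weight. Real pipelines must additionally estimate the camera response curve (e.g. Debevec–Malik style calibration) before merging, which this sketch omits.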

This workshop will include a poster session; a request for posters will be sent to registered participants in advance of the workshop.

Organizing Committee

Amit Agrawal (Amazon Lab126)
Richard Baraniuk (Rice University, Electrical and Computer Engineering)
Lawrence Carin (Duke University)
Oliver Cossairt (Northwestern University)
Stanley Osher (University of California, Los Angeles (UCLA))
Yohann Tendero (University of California, Los Angeles (UCLA))