Co-design of a 3D imaging system based on depth from defocus

Pauline Trouvé
Onera

We present a 3D imaging system composed of a single-lens camera and a single-image depth-from-defocus (DFD) algorithm. The lens has an uncorrected chromatic aberration, which makes the position of the in-focus plane vary with wavelength. A single snapshot of the color camera therefore yields three images (the R, G, and B channels) with different defocus blurs, from which a depth map can be estimated. We focus here on the co-design of such a system, i.e., the joint optimization of its optical and processing parameters. First, from an original calculation of the Cramér-Rao bound, we derive a theoretical model of the depth accuracy provided by such a system. Second, the performance of the system in terms of final RGB image quality is related to a "generalized depth of field". Third, these two criteria are used to design a 3D imaging system adapted to the requirements of autonomous navigation of micro air vehicles (obstacle avoidance). This system has been built and experimentally validated, and we present quantitative and qualitative depth estimation results.
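The abstract does not include the underlying derivation, but the two key ideas (channel-dependent defocus blur and a Cramér-Rao bound on depth) can be illustrated with a minimal Python sketch. It assumes a thin-lens geometry, a Gaussian-equivalent blur, a step-edge scene, and white Gaussian noise; all focal lengths, distances, and names below are hypothetical placeholders, not the paper's actual system parameters or its original bound calculation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# --- Thin-lens model of chromatic depth-from-defocus (illustrative values) ---
# Each color channel focuses at a slightly different depth because the lens's
# focal length varies with wavelength (uncorrected longitudinal chromatic
# aberration). All constants below are hypothetical.
F_NUMBER = 3.0
PIXEL_PITCH = 5e-6                                       # pixel pitch [m]
FOCALS = {"R": 25.10e-3, "G": 25.00e-3, "B": 24.90e-3}   # f(lambda) [m]
SENSOR_DIST = 25.4e-3                                    # lens-sensor dist [m]

def blur_sigma_px(z, f):
    """Gaussian-equivalent defocus blur (in pixels) of a point at depth z [m]
    for a channel of focal length f, from the thin-lens blur-circle formula."""
    aperture = f / F_NUMBER
    # geometric blur-circle diameter on the sensor
    eps = aperture * SENSOR_DIST * abs(1.0 / f - 1.0 / z - 1.0 / SENSOR_DIST)
    return eps / (2.0 * PIXEL_PITCH)   # crude circle-to-Gaussian conversion

# Three defocus blurs from a single snapshot: one per color channel.
z = 2.0  # object depth [m]
print("blur sigma per channel at z = 2 m:",
      {c: round(blur_sigma_px(z, f), 2) for c, f in FOCALS.items()})

# --- Simplified Cramer-Rao bound on depth from the three blurred channels ---
# Observation model: each channel is a known step edge blurred by
# blur_sigma_px(z, f_c), plus white Gaussian noise of std NOISE_STD.
NOISE_STD = 0.01
edge = np.zeros(128)
edge[64:] = 1.0

def channel_mean(z, f):
    return gaussian_filter1d(edge, max(blur_sigma_px(z, f), 1e-3))

def crb_depth(z, dz=1e-3):
    """1/sqrt(Fisher information) on depth z; the derivative of the mean
    image w.r.t. depth is taken by central finite differences."""
    fisher = 0.0
    for f in FOCALS.values():
        dm_dz = (channel_mean(z + dz, f) - channel_mean(z - dz, f)) / (2 * dz)
        fisher += np.sum(dm_dz ** 2) / NOISE_STD ** 2
    return 1.0 / np.sqrt(fisher)

for depth in (1.0, 2.0, 4.0):
    print(f"z = {depth:.1f} m  ->  CRB on depth ~ {crb_depth(depth) * 100:.2f} cm")
```

In this toy model the three in-focus planes fall at distinct depths (about 1.3, 1.6, and 2.1 m for the B, G, and R channels), so at any depth at least one channel's blur varies informatively; this is one intuition for why chromatic DFD can resolve depth from a single shot, and sweeping the optical constants against the resulting bound mimics, in spirit, the co-design loop described above.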

