Capturing a scene's light field has long been of interest in computational photography, and the recent release of hand-held plenoptic cameras such as the Lytro has brought light field imaging to the mass market. By placing a microlens array between the main lens and the sensor, a plenoptic camera captures the direction of the light bundles entering the camera, in addition to their intensity and color. The captured data is then demultiplexed to provide a matrix of horizontally and vertically aligned views, each observing the scene from a slightly different point of view.
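To make the demultiplexing step concrete, the sketch below reorders an idealized plenoptic raw image into its matrix of sub-aperture views. The square, grid-aligned microlens model and the function name are illustrative assumptions; real lenslet arrays (including the Lytro's hexagonal, slightly rotated grid) require calibration and resampling first.

```python
import numpy as np

def demultiplex(raw, n):
    """Reorder raw plenoptic pixels into an n x n matrix of sub-aperture views.

    Illustrative sketch assuming an idealized sensor: square microlenses
    covering n x n pixels, perfectly aligned with the pixel grid.
    `raw` has shape (H, W[, C]); the result has shape (n, n, H/n, W/n[, C]).
    """
    h, w = raw.shape[:2]
    views = np.empty((n, n, h // n, w // n) + raw.shape[2:], raw.dtype)
    for u in range(n):
        for v in range(n):
            # pixel (u, v) under every microlens belongs to view (u, v)
            views[u, v] = raw[u::n, v::n]
    return views
```

Demultiplexing is thus a pure permutation of the raw pixels: no values are interpolated, which is precisely why the Bayer samples survive intact on each view.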
Light fields enable a number of natural applications, such as depth estimation and post-capture refocusing. Among state-of-the-art methods for processing plenoptic data, only very few address the very first steps of raw data conversion: (i) demosaicking, which recovers the color content of the scene from the mosaicked raw data [1, 2], and (ii) view demultiplexing, which reorders the pixels according to the microlens positions in order to recover the matrix of views [3, 4, 5]. None of these works addresses both problems simultaneously.
Most works in the literature first demosaick the raw data and then demultiplex it to recover the views, but this leads to color artifacts in the views. By construction, neighboring pixels in a plenoptic raw image carry different angular information (each pixel under a microlens corresponds to a different view). Demosaicking the raw plenoptic image as if it were a conventional image therefore wrongly mixes angular information: classical algorithms interpolate neighboring color values, which causes the so-called view cross-talk artifacts. Besides, it has been shown in  that disparity estimation from views obtained from such a demosaicked raw image is prone to severe errors. We therefore build on the work in , in which the raw image is demultiplexed without demosaicking, and study how to recover full RGB views. This means that demosaicking is performed on the views rather than on the raw multiplexed data. Note that the demultiplexing step (pixel reordering) transforms the Bayer pattern of the raw data into new, view-dependent color patterns, on which classical demosaicking algorithms poorly recover highly textured areas. In this presentation, we propose a generic demosaicking framework specifically designed for plenoptic data and inspired by multi-frame demosaicking approaches. The goal is to increase the chromatic resolution of each view by exploiting the redundant sampling of the scene by the other views. In particular, our strategy is to estimate pixel disparities with a recent block-matching method for plenoptic data, and then use the reliable estimates to guide the demosaicking of the views.
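The disparity-guided idea can be sketched as follows: at a reference-view pixel where a color channel was not sampled, the other views are warped onto the reference view with the estimated disparity, and their samples of that channel are averaged. This is a minimal illustrative sketch under simplifying assumptions (nearest-neighbor warping, a single disparity map shared by all views, hypothetical function and variable names), not the exact algorithm of the paper.

```python
import numpy as np

def disparity_guided_fill(views, masks, disp, ref=(0, 0)):
    """Fill one missing color channel of a reference view from other views.

    `views`: (U, V, H, W) array of sparse channel samples per view (zeros
    where unsampled); `masks`: (U, V, H, W) booleans marking sampled pixels;
    `disp`: (H, W) disparity of the reference view, in pixels per unit of
    angular baseline. Simplifying assumption: one disparity map warps the
    reference onto every other view.
    """
    u0, v0 = ref
    U, V, H, W = masks.shape
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    ys, xs = np.mgrid[0:H, 0:W]
    for u in range(U):
        for v in range(V):
            # a scene point at (y, x) in the reference view appears at
            # (y + (u - u0) * d, x + (v - v0) * d) in view (u, v)
            y = np.rint(ys + (u - u0) * disp).astype(int).clip(0, H - 1)
            x = np.rint(xs + (v - v0) * disp).astype(int).clip(0, W - 1)
            hit = masks[u, v][y, x]  # views that sampled this channel here
            acc[hit] += views[u, v][y[hit], x[hit]]
            cnt[hit] += 1
    out = views[u0, v0].copy()
    missing = ~masks[u0, v0] & (cnt > 0)
    out[missing] = acc[missing] / cnt[missing]
    return out
```

Because the views sample the scene redundantly with view-dependent color patterns, a pixel missing a channel in one view is usually covered by that channel in several others, which is what the accumulation above exploits.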
Compared to state-of-the-art methods, our results are free of color artifacts. Thanks to accurate view demultiplexing and the sub-pixel accuracy of the estimated disparities, the spatial resolution of the demosaicked views exceeds that of state-of-the-art methods by a factor of 2, and can be increased up to a factor of 6, without the complexity of an additional super-resolution step. Circumventing the fattening effect of the block-matching method and achieving higher super-resolution factors are left as future work.