Why don't good cameras make good light meters? The quick answers are "noise" and "lack of calibration", but perhaps the better answer is "nobody wants one": since our eyes often see subtleties our cameras miss, how could cameras help us?
Aggregation, however, can change all that. After a ~15-minute automated display/camera self-calibration procedure named 'AutoLum', developed by PhD candidate Paul Olczak, light estimates from aggregated pixel values can exceed the contrast sensitivity of the human eye by 10X or better (e.g. 0.06% vs. 1%-2%), and can estimate light levels to an accuracy finer than 1/10th of the camera's own quantization steps.
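The core idea behind aggregation beating quantization can be sketched in a few lines: sensor noise acts as natural dither, so averaging many quantized readings of a constant light level recovers the signal well below one quantization step. This is only an illustrative toy (the `quantize` model, noise level, and sample counts are assumptions, not the actual AutoLum procedure):

```python
import random

def quantize(x, step=1.0):
    """Round a continuous light value onto an idealized uniform quantizer grid."""
    return step * round(x / step)

def aggregate_estimate(true_level, n_samples=10000, noise_sd=0.5, step=1.0, seed=0):
    """Estimate a constant light level by averaging many quantized, noisy readings.

    Photon/sensor noise dithers the signal across quantizer boundaries,
    so the mean of many quantized samples resolves far below one step.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += quantize(true_level + rng.gauss(0.0, noise_sd), step)
    return total / n_samples

est = aggregate_estimate(3.37)  # true level sits between quantizer steps
print(est)
```

With 10,000 samples the estimate lands within a few hundredths of a step of the true value 3.37, even though any single reading can only be 3.0 or 4.0.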
Applied to groups of pictures and/or groups of pixels from a fixed, calibrated camera, aggregation can reveal image subtleties our eyes cannot see. It might help us match tooth colors for dental work, find smudges and fingerprints on visually 'clean' surfaces, or even alert us to early signs of wear, fading, and deterioration in a museum's best cultural treasures.
'AutoLum' calibration also revealed that the light responses of most digital cameras are quite poorly described by the simple smooth curves of earlier published calibration methods; those curves smooth away a digital camera's strong step-by-step irregularities. Every camera we measured, from the worst to the best (e.g. Canon 1D Mark III), showed ragged, alternating narrow/wide spacing between quantization boundaries, probably due to LSB inaccuracy in the A/D converter. Correcting for these individual irregularities (easily mistaken for noise) yielded a 10X improvement in the accuracy of simple photometric-stereo applications, suggesting AutoLum's individual-step calibrations may boost the performance of other common computer vision tasks as well.
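One way to picture what an individual-step calibration buys over a smooth-curve model: instead of assuming every digital count spans an equal exposure interval, decode each count to the midpoint of its *measured* bin. The boundary values below are hypothetical illustrations, not measurements from any real camera:

```python
# Hypothetical measured quantizer boundaries (in relative exposure units)
# for the first few digital counts of an imaginary sensor. Note the
# irregular, alternating narrow/wide bin widths.
measured_boundaries = [0.00, 0.90, 2.15, 2.95, 4.10, 5.00]

def decode_uniform(count, step=1.0):
    """Naive decoding: assume equal steps, as smooth-curve models imply."""
    return (count + 0.5) * step

def decode_calibrated(count, boundaries=measured_boundaries):
    """Per-step decoding: map a count to the midpoint of its measured bin."""
    lo, hi = boundaries[count], boundaries[count + 1]
    return 0.5 * (lo + hi)

for c in range(5):
    print(c, decode_uniform(c), decode_calibrated(c))
```

For count 1, the uniform model reports 1.5 while the calibrated midpoint of the (0.90, 2.15) bin is 1.525; small per-count corrections like this are exactly what a smooth fitted curve averages away.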