Opponent Color Revisited

Sabine Süsstrunk
École Polytechnique Fédérale de Lausanne (EPFL)

According to the efficient coding hypothesis, the goal of the visual system should be to encode the information presented to the retina with as little redundancy as possible. From a signal processing point of view, the first step in removing redundancy is de-correlation, which removes the second-order dependencies in the signal. This principle was explored in the context of trichromatic vision by *Buchsbaum and Gottschalk* (1), and later by *Ruderman et al.* (2), who found that linear de-correlation of the LMS cone responses matches the opponent color coding in the human visual system.
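
As a minimal illustration of that observation (not material from the talk), the Python sketch below applies second-order de-correlation (PCA) to simulated log-LMS cone responses. The synthetic data, variable names, and noise levels are assumptions chosen so that a strong shared-intensity component dominates, as in natural scenes; under that assumption the principal axes come out roughly achromatic, blue-yellow, and red-green, in the spirit of Ruderman et al.'s analysis.

```python
import numpy as np

# Illustrative sketch only: decorrelate simulated LMS cone responses and
# inspect the principal axes, which for natural-scene-like statistics are
# roughly opponent (achromatic, blue-yellow, red-green).

rng = np.random.default_rng(0)

# Crude stand-in for natural-scene statistics: a shared "intensity" signal
# plus small independent perturbations per cone class (assumed values).
n = 100_000
intensity = rng.lognormal(mean=0.0, sigma=0.5, size=n)
lms = np.stack([
    intensity * (1.0 + 0.05 * rng.standard_normal(n)),   # L cones
    intensity * (1.0 + 0.05 * rng.standard_normal(n)),   # M cones
    intensity * (1.0 + 0.15 * rng.standard_normal(n)),   # S cones
], axis=1)

# Work in log space, as Ruderman et al. do, and remove the mean.
log_lms = np.log(lms)
log_lms -= log_lms.mean(axis=0)

# Second-order de-correlation: eigenvectors of the covariance matrix.
cov = np.cov(log_lms, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# Report axes from largest to smallest variance (signs are arbitrary).
order = np.argsort(eigvals)[::-1]
for v, w in zip(eigvals[order], eigvecs[:, order].T):
    print(f"variance {v:.4f}  axis (L, M, S weights): {np.round(w, 2)}")
```

With these assumed statistics, the first axis weights L, M, and S nearly equally (an achromatic channel), the second opposes S to L+M (blue-yellow), and the third opposes L to M (red-green).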
And yet, there is comparatively little research in computational photography and computer vision that explicitly models and incorporates color opponency in solving imaging tasks. A common perception is that "colors" are too redundant or too correlated to be of any interest, or that color opponency is too complex to deal with.
In this talk I will illustrate, with several applications such as saliency detection and super-pixels, that considering opponent colors can significantly improve computational photography and computer vision tasks, not only in image enhancement but also in image ranking. We have additionally extended the concept of "color opponency" to include near-infrared for applications such as scene recognition, object segmentation, and semantic image labeling.

Links:
(1) http://rspb.royalsocietypublishing.org/content/220/1218/89.short
(2) http://www.opticsinfobase.org/josaa/abstract.cfm?&uri=josaa-15-8-2036

