CMOS Imaging: Technology & Applications

SukHwan Lim
Stanford University
Electrical Engineering

Most of today's video and digital cameras employ CCD image sensors with highly
optimized imaging performance. CCDs, however, are fabricated in non-standard
integrated circuit processes and therefore cannot be integrated with analog and
digital processing on the same chip. Moreover, CCDs have serial readout,
resulting in low frame rates and high power consumption. Recently developed CMOS
image sensors, by comparison, use standard processes and can therefore be
integrated with analog and digital circuits, enabling faster, lower-power
readout and the integration of all digital camera functions into a single
"camera-on-chip".

We first briefly describe CCD and analog CMOS image sensor architectures. We
then present the Digital Pixel Sensor (DPS) architecture we developed at
Stanford, where A/D conversion is performed at each pixel in parallel and
digital data is read out of the sensor in a manner similar to a digital
memory. Such massively parallel conversion and readout offer the potential
for very high-speed "snap-shot" digital imaging. We have been exploring
applications of high-speed readout to still and video imaging. The idea
is to oversample the scene to obtain more accurate information about
illumination and motion. Such information can then be used to enhance image
quality or improve the performance of video applications. We describe two
applications. The first is extending sensor dynamic range by capturing several
images during exposure time. We present a method for synthesizing a high
dynamic range, motion blur free, still image from multiple image captures. The
algorithm consists of two main procedures, photocurrent estimation to reduce
read noise and motion/saturation detection to extend dynamic range at high
illumination and to prevent motion blur. The second application we present is
to obtain high accuracy optical flow estimates at a standard frame rate using
a high frame rate sequence. The method uses the Lucas-Kanade algorithm to obtain
optical flow estimates at the high frame rate, which are then accumulated and
refined to obtain optical flow estimates at a standard frame rate. We
demonstrate significant improvements in optical flow estimation accuracy on
synthetic sequences. We also demonstrate that the most significant benefit of
high frame rate is the reduction or elimination of motion aliasing. Finally, we
argue that the integration of memory and signal processing with an image
sensor in deep submicron CMOS processes will enable low cost implementations
of these high frame rate applications.
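The multi-capture idea for the first application can be sketched as follows. This is an illustrative simplification, not the exact algorithm we present: the function name, thresholds, and the least-squares fit through the origin are assumptions. Each pixel's charge grows roughly linearly with time at a rate set by the photocurrent, so samples taken at several times during the exposure can be fit to a line; saturated samples are discarded (extending dynamic range at high illumination), and a large fit residual is treated as motion, in which case only the earliest samples are trusted (limiting motion blur).

```python
import numpy as np

def estimate_photocurrent(samples, times, sat_level=4095.0, motion_thresh=50.0):
    """Illustrative multi-capture photocurrent estimate (hypothetical helper).

    samples: (K, H, W) pixel values captured at times `times` (K,) during
    one exposure. Returns an (H, W) photocurrent estimate per pixel.
    """
    K, H, W = samples.shape
    current = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            s = samples[:, y, x]
            valid = s < sat_level            # saturation detection
            if valid.sum() < 2:              # very bright pixel: only the
                current[y, x] = s[0] / times[0]  # first capture is usable
                continue
            t, q = times[valid], s[valid]
            # Least-squares slope through the origin reduces read noise
            # by averaging over all unsaturated captures.
            i_hat = (t * q).sum() / (t * t).sum()
            resid = np.abs(q - i_hat * t).max()
            if resid > motion_thresh:        # motion detected: fall back to
                i_hat = q[0] / t[0]          # the earliest capture
            current[y, x] = i_hat
    return current
```

Averaging over all unsaturated samples is what reduces read noise, while the per-pixel fallback rules are what extend dynamic range and suppress motion blur.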
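The second application can be sketched in the same spirit. The snippet below is a minimal illustration, not our implementation: it estimates a single global Lucas-Kanade flow vector between consecutive frames and simply sums the small inter-frame displacements over one standard-rate frame interval (our method also refines the accumulated estimates, which is omitted here). The point it illustrates is that small motions between nearby high-rate frames are much better conditioned for gradient-based estimation than one large displacement at the standard rate.

```python
import numpy as np

def lucas_kanade_global(f0, f1):
    """Single global Lucas-Kanade flow vector (pixels/frame) from f0 to f1.

    Solves the normal equations  [sum Ix^2   sum IxIy] v = -[sum IxIt]
                                 [sum IxIy   sum Iy^2]      [sum IyIt]
    built from spatial gradients of f0 and the temporal difference f1 - f0.
    """
    Iy, Ix = np.gradient(f0.astype(float))   # axis 0 = rows (y), axis 1 = cols (x)
    It = f1.astype(float) - f0.astype(float)
    A = np.array([[(Ix * Ix).sum(), (Ix * Iy).sum()],
                  [(Ix * Iy).sum(), (Iy * Iy).sum()]])
    b = -np.array([(Ix * It).sum(), (Iy * It).sum()])
    return np.linalg.solve(A, b)             # (vx, vy)

def accumulate_flow(frames):
    """Sum small inter-frame flows over a high-frame-rate sequence."""
    total = np.zeros(2)
    for f0, f1 in zip(frames[:-1], frames[1:]):
        total += lucas_kanade_global(f0, f1)
    return total
```

A real implementation would estimate flow per pixel or per window and warp the field while chaining frames; the global version keeps the accumulation step visible in a few lines.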


NANO2002 Workshop III: Data Analysis and Imaging