How many linear measurements do we need to make about an object (a digital signal, a digital image, and so on) to be able to reconstruct this object to within fixed accuracy?
In this talk, we will show that if one takes K log N random coefficients, then one can essentially reconstruct the K largest coefficients of the signal in any fixed basis. That is, the approximation error is nearly the same as the one achieved by knowing ALL the coefficients and selecting the K largest. The logarithmic factor is sharp and cannot be improved upon. Moreover, the signal can be reconstructed by solving a simple convex optimization program; in fact, a linear program.
The ability to recover or approximately recover an object from only a few measurements depends on a crucial property we call the uniform uncertainty principle. We will describe this property together with its implications, and present promising early numerical experiments.
There are connections with the agenda of coding theory: one can think of the random measurement process as a kind of universal encoder. The encoder essentially correlates the signal with white noise, and the decoder, upon receiving the quantized random measurements, reconstructs an approximate signal by minimizing the l1-norm. There is a sense in which this encoder/decoder pair asymptotically achieves nearly optimal information-theoretic limits.
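The encoder/decoder pair described above can be sketched numerically. The following is a minimal illustration, not the authors' code: the signal length, sparsity level, measurement count, and Gaussian measurement matrix are all assumptions chosen for demonstration. The decoder's l1-minimization (basis pursuit) is recast as the standard linear program min sum(u) subject to -u <= x <= u and Ax = y, and solved with SciPy's linprog.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration: signal length N, sparsity K,
# and on the order of K log N random measurements (constant chosen ad hoc).
N, K = 128, 5
M = int(4 * K * np.log(N))

# A K-sparse test signal in the canonical basis.
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

# "Universal encoder": correlate the signal with white noise,
# i.e. take M random Gaussian measurements.
A = rng.standard_normal((M, N)) / np.sqrt(M)
y = A @ x

# Decoder: min ||x||_1 subject to A x = y, written as a linear program
# over the stacked variable [x; u] with -u <= x <= u and u >= 0.
c = np.concatenate([np.zeros(N), np.ones(N)])
A_ub = np.block([[ np.eye(N), -np.eye(N)],
                 [-np.eye(N), -np.eye(N)]])
b_ub = np.zeros(2 * N)
A_eq = np.hstack([A, np.zeros((M, N))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N)
x_hat = res.x[:N]

# With these parameters the sparse signal is typically recovered
# (near-)exactly; print the worst-case coefficient error.
print(np.max(np.abs(x_hat - x)))
```

The point of the sketch is that the decoder needs no knowledge of the support of the signal: the linear program alone locates and reconstructs the K large coefficients from roughly K log N random measurements.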
This is joint work with Terence Tao (UCLA).