Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?

Emmanuel Candès
California Institute of Technology
Applied and Computational Mathematics

Suppose we are given an $N$-dimensional vector S (a signal). How many linear measurements of this signal do we need in order
to recover S to within a small error in the Euclidean metric? More precisely, suppose we are interested in a class of such
objects (discrete digital signals, images, etc.): how many linear measurements do we need to recover objects from this class
to within high accuracy? This talk shows that if the objects of interest are sparse or compressible, in the sense that the
reordered entries of a signal S decay like a power law (or the coefficient sequence of S in a fixed basis decays like a
power law), then it is possible to reconstruct S to very high accuracy from a small number of random measurements.
A typical result is as follows: suppose the entries of S, rearranged in decreasing order of magnitude, decay like the
power law $k^{-1/p}$, and we take measurements by correlating the signal with $K$ $N$-dimensional Gaussian vectors having
independent standard normal entries. Then with overwhelming probability, one can reconstruct from these $K$ random
coefficients alone an object $S^*$ whose reconstruction error is at most $O\big((K/\log N)^{-r}\big)$, $r = 1/p - 1/2$.
Moreover, the reconstruction can be obtained by solving a linear program which searches, among all candidates fitting the
observed data, for the one with minimum $\ell_1$-norm.
The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if
one observes a few randomly sampled Fourier coefficients of S. Indeed, our results are quite general and require only two
hypotheses on the measurement ensemble, which we call the uniform uncertainty principle and the exact reconstruction property.
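For concreteness, here is a minimal numerical sketch (Python with NumPy/SciPy; not code from the paper) of the procedure
described above: a sparse test signal is correlated with $K$ Gaussian vectors, and the $\ell_1$-minimization decoder is cast
as the linear program: minimize $\sum_i t_i$ subject to $-t \le x \le t$ and $Ax = y$. The sizes ($N = 128$, $K = 64$, eight
nonzero entries) and the helper names are illustrative assumptions.

# Minimal sketch of l1-recovery from Gaussian measurements (illustrative, not the authors' code).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, K = 128, 64                               # ambient dimension and number of measurements (illustrative)
S = np.zeros(N)
support = rng.choice(N, size=8, replace=False)
S[support] = rng.standard_normal(8)          # a sparse test signal

A = rng.standard_normal((K, N))              # Gaussian measurement ensemble
y = A @ S                                    # the K random measurements

# min ||x||_1 s.t. Ax = y, written as an LP in (x, t) with |x_i| <= t_i:
# minimize sum(t) subject to x - t <= 0, -x - t <= 0, A x = y.
c = np.concatenate([np.zeros(N), np.ones(N)])
I = np.eye(N)
A_ub = np.block([[I, -I], [-I, -I]])
b_ub = np.zeros(2 * N)
A_eq = np.hstack([A, np.zeros((K, N))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N)
S_star = res.x[:N]
print("relative l2 error:", np.linalg.norm(S_star - S) / np.linalg.norm(S))

The same decoder applies unchanged to other measurement ensembles; for instance, replacing A with $K$ randomly selected
rows of the discrete Fourier matrix (real and imaginary parts stacked) models the randomly sampled Fourier coefficients
mentioned above.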

Alternatively, we can think of the random measurement process as a kind of universal encoder. The encoder essentially
correlates the signal with white noise, and the decoder, upon receiving the quantized random measurements, reconstructs an
approximate signal by minimizing the $\ell_1$-norm. There is a sense in which this encoder/decoder pair asymptotically
achieves nearly optimal information-theoretic limits.
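This encoder/decoder pair can be sketched in the same style, again under illustrative assumptions: the encoder quantizes
each Gaussian measurement to a fixed step, and the decoder runs the same $\ell_1$ linear program on the quantized data.
This toy version only illustrates the mechanism; the rate-distortion analysis behind the near-optimality claim is more
refined.

# Minimal sketch of the quantized encoder / l1 decoder (illustrative; l1_decode is a made-up helper).
import numpy as np
from scipy.optimize import linprog

def l1_decode(A, y):
    # min ||x||_1 s.t. Ax = y, as a linear program in (x, t) with |x_i| <= t_i.
    K, N = A.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])
    I = np.eye(N)
    res = linprog(c,
                  A_ub=np.block([[I, -I], [-I, -I]]), b_ub=np.zeros(2 * N),
                  A_eq=np.hstack([A, np.zeros((K, N))]), b_eq=y,
                  bounds=[(None, None)] * N + [(0, None)] * N)
    return res.x[:N]

rng = np.random.default_rng(1)
N, K, step = 128, 64, 0.05                   # sizes and quantizer step are illustrative
S = np.zeros(N)
S[rng.choice(N, size=8, replace=False)] = rng.standard_normal(8)

A = rng.standard_normal((K, N))              # the "white noise" encoder
y_quantized = step * np.round(A @ S / step)  # quantized random measurements
S_star = l1_decode(A, y_quantized)
print("l2 error after quantization:", np.linalg.norm(S_star - S))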

This is joint work with Terence Tao (UCLA).
