Tightness of Convex Relaxations to Sparsity and Rank

Nathan Srebro
University of Chicago
Computer Science, Toyota Technological Institute

I will first present the k-support norm, which is the tightest convex relaxation of sparsity combined with an $\ell_2$ penalty. In particular, the k-support norm is strictly tighter than relaxing sparsity to $\ell_1$ as in the elastic net, and it allows us to study the looseness of the elastic net relaxation. I will also discuss the tightness of convex relaxations to the rank, establishing that the max-norm is tighter than the trace-norm, though possibly not the tightest.
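The abstract does not define the k-support norm itself. As a point of reference, the sketch below evaluates it using the closed-form expression from Argyriou, Foygel, and Srebro, "Sparse Prediction with the k-Support Norm" (NIPS 2012); this is an illustrative sketch, not code from the talk.

```python
import numpy as np

def k_support_norm(w, k):
    """k-support norm via the closed form of Argyriou, Foygel & Srebro (2012).

    Sort |w| in decreasing order as z[0] >= ... >= z[d-1]; find the unique
    r in {0, ..., k-1} with  z[k-r-2] > T/(r+1) >= z[k-r-1]  (the entry
    "z[-1]" read as +inf), where T is the tail sum z[k-r-1] + ... + z[d-1];
    then  ||w||^2 = z[0]^2 + ... + z[k-r-2]^2 + T^2 / (r+1).
    """
    z = np.sort(np.abs(np.asarray(w, dtype=float)))[::-1]
    d = z.size
    assert 1 <= k <= d
    for r in range(k):
        head = k - r - 1                 # entries kept quadratically
        T = z[head:].sum()               # tail entries get averaged
        upper = np.inf if head == 0 else z[head - 1]
        if upper > T / (r + 1) >= z[head]:
            return float(np.sqrt((z[:head] ** 2).sum() + T ** 2 / (r + 1)))
    raise ValueError("no valid r found; check inputs")
```

For k = 1 this recovers the $\ell_1$ norm and for k = d the $\ell_2$ norm; the intermediate values are what make the norm strictly tighter than the elastic net's combination of the two penalties.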

Back to Structure and Randomness in System Identification and Learning