I will first present the k-support norm, the tightest convex relaxation of sparsity combined with an $\ell_2$ penalty. In particular, the k-support norm is strictly tighter than the elastic net, which relaxes sparsity to an $\ell_1$ penalty, and it allows us to quantify the looseness of the elastic net relaxation. I will also discuss the tightness of convex relaxations of the rank, establishing that the max-norm is tighter than the trace-norm, though possibly not the tightest.
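As a concrete illustration (not part of the talk), the k-support norm admits a closed-form expression, given in Argyriou, Foygel & Srebro (2012): sort the absolute entries in decreasing order as $z_1 \geq \dots \geq z_d$, find the unique $r \in \{0, \dots, k-1\}$ with $z_{k-r-1} > \frac{1}{r+1}\sum_{i=k-r}^{d} z_i \geq z_{k-r}$ (taking $z_0 = +\infty$), and set $\|w\|_k^{sp} = \big(\sum_{i=1}^{k-r-1} z_i^2 + \frac{1}{r+1}(\sum_{i=k-r}^{d} z_i)^2\big)^{1/2}$. A minimal sketch of this computation, under the assumption that the formula above is transcribed correctly:

```python
import math

def k_support_norm(w, k):
    """k-support norm of a vector w, for 1 <= k <= len(w).

    Uses the closed form: with |w| sorted decreasingly as z_1 >= ... >= z_d,
    find the unique r in {0, ..., k-1} such that
        z_{k-r-1} > (1/(r+1)) * sum_{i=k-r}^{d} z_i >= z_{k-r}   (z_0 = +inf),
    then return sqrt( sum_{i=1}^{k-r-1} z_i^2 + (sum_{i=k-r}^{d} z_i)^2 / (r+1) ).
    """
    z = sorted((abs(x) for x in w), reverse=True)
    for r in range(k):
        tail = sum(z[k - r - 1:])                       # sum_{i=k-r}^{d} z_i (1-based)
        head = math.inf if k - r - 1 == 0 else z[k - r - 2]  # z_{k-r-1}, with z_0 = +inf
        if head > tail / (r + 1) >= z[k - r - 1]:       # the defining condition on r
            return math.sqrt(sum(x * x for x in z[:k - r - 1]) + tail ** 2 / (r + 1))
    raise ValueError("no valid r found; require 1 <= k <= len(w)")
```

Two sanity checks connect this to the abstract: for $k = 1$ the k-support norm reduces to the $\ell_1$ norm, and for $k = d$ it reduces to the $\ell_2$ norm, so it interpolates between the two regimes that the elastic net trades off.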
Back to Structure and Randomness in System Identification and Learning