Proximal-Gradient Homotopy Methods for Sparse Optimization

Lin Xiao
Microsoft Research

Exploiting problem structure has become an important theme of recent advances in convex optimization. It is well known that proper use of structure may dramatically improve solution efficiency at the numerical linear algebra level. More recently, it has become clear that exploiting structure can also lead to optimization methods with better iteration complexity, sometimes significantly surpassing the limitations of black-box complexity theory.

In this talk we focus on the L1-regularized least-squares problem in the context of sparse recovery or compressed sensing. The standard proximal gradient method (iterative soft-thresholding) has low computational cost per iteration but only a slow, sublinear global convergence rate. Nevertheless, when the solution is sparse, it often exhibits fast linear convergence in its final stage. We exploit this local linear convergence using a homotopy continuation strategy, and show that, under suitable assumptions for sparse recovery, the resulting method achieves a global geometric rate of convergence. We also present an accelerated proximal gradient method that automatically estimates the restricted strong convexity parameter of the objective function and adapts itself to achieve a faster convergence rate.
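
To make the continuation idea concrete, the sketch below shows one way such a method can be organized: an outer loop that geometrically decreases the regularization parameter toward its target value, with a fixed number of inner proximal-gradient (soft-thresholding) iterations at each stage. The step size, the continuation factor eta, and the inner stopping rule here are illustrative assumptions, not the specific choices analyzed in the talk.

    # Minimal sketch (not the authors' implementation) of proximal-gradient homotopy
    # for min_x 0.5*||A x - b||^2 + lam_tgt*||x||_1.
    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t*||.||_1 (componentwise soft-thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def prox_grad_homotopy(A, b, lam_tgt, eta=0.5, inner_iters=100):
        x = np.zeros(A.shape[1])
        L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
        lam = np.max(np.abs(A.T @ b))       # smallest lambda giving the zero solution
        while lam > lam_tgt:
            lam = max(eta * lam, lam_tgt)   # decrease the regularization parameter
            for _ in range(inner_iters):    # approximately solve at the current lambda
                grad = A.T @ (A @ x - b)
                x = soft_threshold(x - grad / L, lam / L)
        return x

Because each stage is warm-started from the previous one and keeps the iterates sparse, the inner solves remain cheap; the analysis in the talk makes this intuition precise under sparse-recovery assumptions.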

This talk is based on joint work with Tong Zhang and Qihang Lin.

