Accelerating first order methods for large-scale well-structured convex optimization

Arkadi Nemirovski
Georgia Institute of Technology

Utilizing a problem's structure in a computationally cheap fashion is key to solving
large-scale nonsmooth convex problems. In the approach proposed by Nesterov (2003), this is
achieved by using the problem's structure to reformulate it as a smooth (in many cases, simply
bilinear) convex-concave saddle point problem; the latter is then solved by computationally
cheap first order methods, which attain the best convergence rate known so far, O(1/t).
In this talk, we investigate possibilities to further accelerate this scheme under
favorable structural assumptions, in particular when (a) the cost function of the saddle
point problem is affine in the "convex" argument and strongly concave in the
"concave" argument, or (b) the saddle point problem is bilinear and its domain has "nice
geometry" (as is the case, e.g., in L1 minimization arising in sparsity-oriented Signal
Processing). We demonstrate that in case (a) the convergence rate can be improved to
O(1/t^2), while in case (b) acceleration can be achieved by appropriate randomization
of the matrix-vector multiplications.
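To give a flavor of the randomization in case (b): one standard device (a sketch, not necessarily the specific scheme of the talk) replaces an exact matrix-vector product Ax by a cheap unbiased estimate that reads a single column of A, sampled with probability proportional to |x_j|. The function name and sampling rule below are illustrative assumptions.

```python
import numpy as np

def randomized_matvec(A, x, rng):
    """Unbiased one-column estimate of A @ x.

    Samples column j with probability |x_j| / ||x||_1 and returns
    A[:, j] * sign(x_j) * ||x||_1; its expectation is exactly A @ x,
    while each call touches only one column of A instead of all of them.
    """
    norm1 = np.abs(x).sum()
    if norm1 == 0:
        return np.zeros(A.shape[0])
    p = np.abs(x) / norm1
    j = rng.choice(len(x), p=p)
    return A[:, j] * np.sign(x[j]) * norm1
```

Averaging many such estimates recovers Ax, and for vectors x with small L1 norm relative to their size (the "nice geometry" of L1-type domains) the variance of the estimate stays controlled, which is what makes this kind of randomization profitable inside a first order method.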


Workshop II: Numerical Methods for Continuous Optimization