Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Unified Framework

Dimitri Bertsekas
Massachusetts Institute of Technology

We survey incremental methods for minimizing a sum $\sum_{i=1}^m f_i(x)$, either unconstrained or subject to a set intersection constraint $X = \cap_{i=1}^m X_i$, where the $f_i$ and $X_i$ are convex cost and constraint components, respectively. Our methods consist of iterations applied to single cost components and, optionally, projection on single constraint components. We introduce a flexible unified algorithmic framework for a variety of such methods, some involving gradient, subgradient, and proximal iterations. For the constrained case, we provide a convergence analysis, based on a coupled supermartingale convergence theorem, which shows that convergence relies on an interplay between two coupled processes: progress towards feasibility and progress towards optimality. We also discuss the convergence rate properties of the methods, including the advantages offered by randomization in the selection of components. Based in part on joint work with Mengdi Wang.
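To make the scheme concrete, the Python sketch below alternates a subgradient step on one randomly chosen cost component with a projection on one randomly chosen constraint component. The specific components $f_i(x) = |a_i^T x - b_i|$ and halfspaces $X_i = \{x : c_i^T x \le d_i\}$, and all function names, are illustrative assumptions, not taken from the abstract; it is a minimal sketch of the randomized variant, not the author's implementation.

    import numpy as np

    # Assumed illustrative components (not from the abstract):
    # cost components f_i(x) = |a_i . x - b_i|, and constraint
    # components X_i = {x : c_i . x <= d_i} (halfspaces).

    def subgradient_fi(x, a_i, b_i):
        # A subgradient of f_i(x) = |a_i . x - b_i|.
        return np.sign(a_i @ x - b_i) * a_i

    def project_halfspace(x, c_i, d_i):
        # Euclidean projection onto the halfspace {x : c_i . x <= d_i}.
        violation = c_i @ x - d_i
        if violation <= 0:
            return x
        return x - (violation / (c_i @ c_i)) * c_i

    def incremental_method(A, b, C, d, x0, steps=10_000, seed=None):
        # Each iteration works on a single cost component and a
        # single constraint component, selected uniformly at random.
        rng = np.random.default_rng(seed)
        x = x0.astype(float)
        for t in range(1, steps + 1):
            alpha = 1.0 / t                      # diminishing stepsize
            i = rng.integers(A.shape[0])         # random cost component
            x = x - alpha * subgradient_fi(x, A[i], b[i])
            j = rng.integers(C.shape[0])         # random constraint component
            x = project_halfspace(x, C[j], d[j])
        return x

The uniform random selection of indices i and j corresponds to the randomized component selection whose rate advantages the abstract mentions; a cyclic order over the components is the deterministic alternative.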

