Accelerating optimal black-box schemes

Michel Baes
ETH Zürich
Operations Research

In huge-scale convex optimization, the sheer problem size can rule out the most efficient interior-point methods: although they converge in very few iterations, the cost of even the first one may already be prohibitive. As a result, only cheap first-order methods can be used on these problems. Among first-order methods, the estimate sequence schemes developed by Nesterov are theoretically the fastest for smooth convex problems. Owing to their conceptual complexity, these schemes were neither studied in depth nor widely used in practice until the emergence of smoothing techniques.
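
To fix ideas, here is a minimal sketch of an accelerated gradient method of estimate-sequence type for a smooth convex problem, which attains the optimal O(1/k^2) rate; the FISTA-style momentum rule and the least-squares test objective are illustrative choices, not the exact scheme analyzed in the talk.

    # Minimal sketch of an accelerated (estimate-sequence type) gradient
    # method for min_x f(x) with L-Lipschitz gradient. Illustrative only.
    import numpy as np

    def nesterov(grad, x0, L, iters=500):
        x, y, t = x0.copy(), x0.copy(), 1.0
        for _ in range(iters):
            x_new = y - grad(y) / L              # gradient step at extrapolated point
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
            x, t = x_new, t_new
        return x

    # Example: f(x) = 0.5 * ||A x - b||^2, whose gradient has constant L = ||A^T A||_2.
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((40, 20)), rng.standard_normal(40)
    L = np.linalg.norm(A.T @ A, 2)
    x_hat = nesterov(lambda x: A.T @ (A @ x - b), np.zeros(20), L)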
In this talk, we study, both theoretically and numerically, the sensitivity of these algorithms to various types of approximation, such as the use of approximate subgradients and the partial resolution of subproblems. We also describe several strategies for further accelerating estimate sequence schemes and illustrate them numerically. Finally, we apply our methods to a model predictive control problem.
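
As a hedged illustration of the kind of sensitivity question raised here, the sketch below reruns the accelerated scheme with a first-order oracle whose output is perturbed by additive noise of norm eps, a simple stand-in for approximate subgradients or partially solved subproblems; this error model and all names are assumptions for illustration, not the talk's analysis.

    # Accelerated scheme driven by an eps-inexact first-order oracle
    # (assumed additive error model; illustrative only).
    import numpy as np

    def accel_inexact(grad, x0, L, eps, iters=500, seed=1):
        rng = np.random.default_rng(seed)
        x, y, t = x0.copy(), x0.copy(), 1.0
        for _ in range(iters):
            d = rng.standard_normal(x.shape)
            g = grad(y) + eps * d / np.linalg.norm(d)  # perturbation of norm eps
            x_new = y - g / L
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)
            x, t = x_new, t_new
        return x

    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((40, 20)), rng.standard_normal(40)
    L = np.linalg.norm(A.T @ A, 2)
    grad = lambda x: A.T @ (A @ x - b)
    for eps in (0.0, 1e-3, 1e-1):
        x = accel_inexact(grad, np.zeros(20), L, eps)
        print(f"eps={eps:g}  f(x)={0.5 * np.linalg.norm(A @ x - b) ** 2:.6f}")

Running the loop for increasing eps shows the attained objective value degrading with the oracle error, which is the sensitivity phenomenon the talk quantifies.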
