Convexity Exploiting Newton-Type Optimization for Learning and Control

Moritz Diehl
University of Freiburg

This talk reviews and investigates a large class of Newton-type algorithms for nonlinear optimization that exploit convex-over-nonlinear substructures. We show how these algorithms can be used to solve optimal control problems arising in estimation and control, and how data-driven versions of these methods can be devised to achieve near-optimal performance in learning and control tasks.
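As a sketch of the structure in question (the symbols $x$, $F$, and $\phi$ are introduced here for illustration and are not part of the original abstract), problems with convex-over-nonlinear substructure can be written as

\[
  \min_{x} \; \phi\bigl(F(x)\bigr), \qquad \phi \text{ convex}, \quad F \text{ smooth but nonlinear},
\]

with nonlinear least squares, $\phi(r) = \tfrac{1}{2}\|r\|_2^2$, as the special case addressed by the classical Gauss-Newton method.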

In more detail: all of the considered algorithms are generalizations of the Gauss-Newton method, and all of them sequentially solve convex optimization problems that are based on linearizations of the nonlinear problem functions. We attempt to classify them into two major classes, one of which might be denoted by “Generalized Gauss-Newton (GGN)” and the other by “Sequential Convex Programming (SCP)”. We report on applications in real-time optimal control, notably nonlinear model predictive control (MPC) and moving horizon estimation (MHE). Finally, we discuss how “zero-order” variants of these methods can be used to solve optimization problems approximately, and show that, for MHE problems, the resulting error is of the same size as the inherent estimation error of MHE. Furthermore, these zero-order variants can also be used in a data-driven setting, where only experimental function evaluations are available.
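As an illustrative sketch of the common template (the notation follows the sketch above and is not taken from the abstract itself): at an iterate $x_k$, methods of this family keep the outer convex function and linearize only the inner nonlinear map, solving the convex subproblem

\[
  x_{k+1} \in \arg\min_{x} \; \phi\bigl(F(x_k) + J(x_k)\,(x - x_k)\bigr), \qquad J(x_k) = \frac{\partial F}{\partial x}(x_k).
\]

For $\phi(r) = \tfrac{1}{2}\|r\|_2^2$ this reduces to the classical Gauss-Newton step, while the “zero-order” variants mentioned above would, in this notation, replace $J(x_k)$ by a fixed approximation, so that only function evaluations of $F$ are required.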

