Towards Optimal Newton-Type Methods for Nonconvex Smooth Optimization

Coralia Cartis
University of Edinburgh
Mathematics

We show that the steepest-descent and Newton’s methods for unconstrained nonconvex smooth optimization may, under standard assumptions, both require a number of iterations and function evaluations arbitrarily close to steepest descent’s global worst-case complexity bound, namely O(ε^{-2}) evaluations to drive the gradient norm below a tolerance ε. This shows that the latter upper bound is essentially tight for steepest descent, and that Newton’s method may be as slow as steepest descent in the worst case. The cubic regularization of Newton’s method (Griewank (1981), Nesterov & Polyak (2006)) is then considered and extended to large-scale problems, while preserving the order of its improved worst-case complexity bound of O(ε^{-3/2}) evaluations (by comparison to that of steepest descent).
This improved worst-case bound is also shown to be essentially tight. We further address the optimality of cubic regularization, from a worst-case complexity point of view, amongst a class of second-order methods. Time permitting, an extension of cubic regularization to bound-constrained problems will be presented that satisfies the same function-evaluation complexity bound as in the unconstrained case. This is joint work with Nick Gould (Rutherford Appleton Laboratory, UK) and Philippe Toint (Namur University, Belgium).
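To illustrate the mechanism behind the improved bound, the sketch below computes one cubic-regularized Newton step by globally minimizing the model m(s) = gᵀs + ½ sᵀHs + (σ/3)‖s‖³, using the standard characterization (H + σ‖s‖I)s = -g with H + σ‖s‖I positive semidefinite. This is a minimal dense sketch, not the large-scale (approximate, Krylov-based) variant or adaptive σ-update of the talk; the function name, the test function, and the choice σ = 1 are illustrative assumptions.

```python
import numpy as np

def cubic_model_step(g, H, sigma):
    """Global minimizer of m(s) = g's + (1/2) s'Hs + (sigma/3)||s||^3,
    via the characterization (H + sigma*r*I) s = -g with r = ||s||
    and H + sigma*r*I positive semidefinite. Dense sketch only; the
    degenerate "hard case" (g orthogonal to the leftmost eigenvector
    of H) is not handled here.
    """
    n = len(g)
    lam_min = np.linalg.eigvalsh(H)[0]           # leftmost eigenvalue of H
    r_lo = max(0.0, -lam_min / sigma) + 1e-12    # keep H + sigma*r*I pos. def.

    def residual(r):
        # phi(r) = ||s(r)|| - r is strictly decreasing on (r_lo, infinity)
        s = np.linalg.solve(H + sigma * r * np.eye(n), -g)
        return np.linalg.norm(s) - r

    r_hi = r_lo + 1.0
    while residual(r_hi) > 0:                    # bracket the root of phi
        r_hi *= 2.0
    for _ in range(100):                         # bisection on phi(r) = 0
        r_mid = 0.5 * (r_lo + r_hi)
        if residual(r_mid) > 0:
            r_lo = r_mid
        else:
            r_hi = r_mid
    r = 0.5 * (r_lo + r_hi)
    return np.linalg.solve(H + sigma * r * np.eye(n), -g)

# Example: one step on f(x) = x0^4/4 + x0*x1 - x1^2, whose Hessian is
# indefinite at (1, 1), so a plain Newton step would not be a descent model.
x = np.array([1.0, 1.0])
g = np.array([x[0]**3 + x[1], x[0] - 2.0 * x[1]])
H = np.array([[3.0 * x[0]**2, 1.0], [1.0, -2.0]])
print("step:", cubic_model_step(g, H, sigma=1.0))
```

For large-scale problems, as in the talk, the exact eigenvalue computation and dense solves above would be replaced by approximate model minimization over Krylov (Lanczos) subspaces, which preserves the order of the O(ε^{-3/2}) worst-case bound.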

