How to Escape Saddle Points Efficiently?

Rong Ge
Duke University

The presence of saddle points is one of the key features of a non-convex objective function. In this talk, we will discuss why saddle points are ubiquitous in machine learning applications and how to handle them efficiently. We will show that for many natural problems, all local minima are also global minima, and all saddle points are strict (the Hessian has a negative eigenvalue). Given these geometric properties, we show how a simple modification of gradient descent can optimize such problems very efficiently. Based on joint work with Chi Jin, Praneeth Netrapalli, Sham M. Kakade, and Michael I. Jordan.
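The "simple modification of gradient descent" referred to here is perturbed gradient descent, from the joint paper of the same title (Jin, Ge, Netrapalli, Kakade, Jordan, ICML 2017): whenever the gradient is small, add a small random perturbation drawn uniformly from a ball, which lets the iterate escape any strict saddle along a negative-curvature direction. The following is a minimal NumPy sketch of that idea; the step size, thresholds, perturbation radius, and the toy objective are illustrative assumptions, not the tuned constants analyzed in the paper.

```python
import numpy as np

def perturbed_gradient_descent(grad, x0, eta=0.01, g_thresh=1e-3,
                               radius=1e-2, t_thresh=50, max_iters=10_000,
                               seed=0):
    """Gradient descent with occasional random perturbations.

    Illustrative sketch: when the gradient is small (a potential saddle
    point), add a perturbation sampled uniformly from a small ball; on
    strict-saddle functions this pushes the iterate onto a direction of
    negative curvature, after which plain gradient descent escapes.
    All hyperparameter defaults here are arbitrary choices for the demo.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    last_perturb = -t_thresh  # allow a perturbation immediately
    for t in range(max_iters):
        g = grad(x)
        # Perturb only near a stationary point, and not too often.
        if np.linalg.norm(g) <= g_thresh and t - last_perturb >= t_thresh:
            direction = rng.normal(size=x.shape)
            direction /= np.linalg.norm(direction)
            # Uniform sample from the radius-`radius` ball: scale by U^(1/d).
            x = x + radius * rng.uniform() ** (1 / x.size) * direction
            last_perturb = t
        else:
            x = x - eta * g  # ordinary gradient step
    return x

# Toy strict-saddle objective: f(x, y) = (x^2 - 1)^2 + y^2 has a strict
# saddle at the origin and global minima at (+1, 0) and (-1, 0).
grad = lambda v: np.array([4 * v[0] * (v[0] ** 2 - 1), 2 * v[1]])
x_star = perturbed_gradient_descent(grad, x0=[0.0, 0.0])
print(x_star)  # close to (+1, 0) or (-1, 0)
```

Started exactly at the saddle (0, 0), plain gradient descent never moves because the gradient there is zero; the single random perturbation is enough to break the symmetry, and the iterate then converges to one of the two global minima.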

Presentation (PowerPoint File)
