Leveraging "partial" smoothness for faster convergence in nonsmooth optimization

Damek Davis
Cornell University

First-order methods in nonsmooth optimization are often described as "slow." I will present two (locally) accelerated first-order methods that challenge this perception: a superlinearly convergent method for solving nonsmooth equations, and a linearly convergent method for solving "generic" nonsmooth optimization problems. The key insight in both cases is that nonsmooth functions are often "partially" smooth in useful ways.
