Proximal Stochastic Gradient Method with Variance Reduction

Lin Xiao
Microsoft Research

We consider the problem of minimizing the sum of two convex functions: one is the average of a large number of smooth component functions, and the other is a general convex function that admits a simple proximal mapping. We also assume that the overall objective function is strongly convex. Such problems arise often in machine learning and are known as regularized empirical risk minimization. We propose and analyze a new proximal stochastic gradient method, which uses a multi-stage scheme to progressively reduce the variance of the stochastic gradient. While each iteration of this algorithm has a cost similar to that of the classical stochastic gradient method, we show that the expected objective value enjoys a geometric rate of convergence. This is joint work with Tong Zhang.
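The multi-stage scheme described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' reference implementation: at each stage a full gradient is computed at a snapshot point, and the inner loop combines it with two component gradients to form a variance-reduced direction, followed by a proximal step. As an illustrative instance we use a least-squares loss with an l1 regularizer (whose proximal mapping is soft-thresholding); the parameter names (`eta`, `m`, `n_stages`) and default values are assumptions, not from the abstract.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal mapping of t * ||.||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_svrg(A, b, lam, eta=0.01, n_stages=100, m=None, seed=0):
    """Sketch of a proximal stochastic gradient method with variance
    reduction, for min_x (1/2n)||Ax - b||^2 + lam * ||x||_1.

    Each component is f_i(x) = 0.5 * (a_i^T x - b_i)^2.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    if m is None:
        m = 2 * n  # inner-loop length per stage (a common heuristic choice)
    x_tilde = np.zeros(d)  # snapshot point
    for _ in range(n_stages):
        # Full gradient of the smooth part at the snapshot.
        full_grad = A.T @ (A @ x_tilde - b) / n
        x = x_tilde.copy()
        iterates = np.zeros((m, d))
        for k in range(m):
            i = rng.integers(n)
            gi_x = A[i] * (A[i] @ x - b[i])            # grad f_i at x
            gi_tilde = A[i] * (A[i] @ x_tilde - b[i])  # grad f_i at snapshot
            # Variance-reduced stochastic gradient: unbiased, and its
            # variance shrinks as x and x_tilde approach the optimum.
            v = gi_x - gi_tilde + full_grad
            # Proximal step on the nonsmooth part.
            x = soft_threshold(x - eta * v, eta * lam)
            iterates[k] = x
        # One variant: restart the next stage from the average inner iterate.
        x_tilde = iterates.mean(axis=0)
    return x_tilde
```

On a small noiseless problem, the iterates approach the sparse generator up to the bias introduced by the regularizer, and each inner iteration touches only one data row, matching the per-iteration cost of plain stochastic gradient.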

Presentation (PDF File)
