A common modern approach to learning is to phrase training as an empirical optimization problem, and then invoke an optimization procedure in order to carry out the training. It is thus common to study machine learning problems as (deterministic) optimization problems, yielding optimization approaches with runtimes that grow with the size of the training set. But in this talk I will argue and demonstrate how, from a machine learning perspective, optimization runtime should only *decrease* as a function of training set size, and how this is often achieved by simple stochastic approaches, which outperform sophisticated convex optimization approaches.
Specifically, I will discuss stochastic optimization approaches for SVMs and for matrix factorization problems.
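To illustrate the kind of stochastic approach the abstract refers to, here is a minimal sketch of stochastic subgradient descent for a linear SVM (in the style of the Pegasos algorithm). The key property is that each update touches a single randomly sampled example, so the per-step cost does not grow with the training set size. The function name, step-size schedule, and toy data below are illustrative assumptions, not part of the talk.

```python
import numpy as np

def sgd_svm(X, y, lam=0.01, n_iters=2000, seed=0):
    """Stochastic subgradient descent for a regularized linear SVM
    (Pegasos-style sketch). Per-iteration cost is O(d), independent
    of the number of training examples n."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(n)           # sample one training example
        eta = 1.0 / (lam * t)         # decaying step size
        margin = y[i] * (X[i] @ w)
        w *= (1.0 - eta * lam)        # gradient step on the L2 regularizer
        if margin < 1:                # subgradient of the hinge loss
            w += eta * y[i] * X[i]
    return w

# Toy linearly separable data (illustrative only)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

w = sgd_svm(X, y)
acc = np.mean(np.sign(X @ w) == y)
```

Because each step samples one example, adding more training data does not slow the optimizer down; it only gives the sampler a richer distribution to draw from, which is the sense in which runtime can shrink as the data grows.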
Back to Workshop II: Numerical Methods for Continuous Optimization