We present a line search algorithm for constrained optimization in which the problem functions may be nonsmooth and/or nonconvex. The method is based on a sequential quadratic programming framework, where nonsmoothness is handled by sampling gradients randomly in an epsilon-neighborhood of the current iterate. Our approach is intended to serve as a basis for carrying algorithmic techniques for smooth problems into the realm of nonsmooth optimization. Global convergence of the algorithm is proved, and encouraging numerical results are presented for a set of test problems.
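To illustrate the gradient-sampling idea mentioned above, here is a minimal sketch of one unconstrained gradient-sampling step with a backtracking line search: gradients are sampled in an epsilon-ball around the iterate, the minimum-norm element of their convex hull gives an approximate steepest-descent direction, and a step is taken along it. This is an illustrative simplification, not the paper's SQP method; all function names, tolerances, and the test function are assumptions for the sketch.

```python
import numpy as np

def project_simplex(w):
    # Euclidean projection onto the probability simplex (sort-based rule)
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(w) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(w - theta, 0.0)

def min_norm_in_hull(G, iters=500):
    # Min-norm element of conv{g_1,...,g_m}: minimize ||G^T w||^2 over the simplex,
    # solved here by projected gradient descent (a small convex QP)
    m = G.shape[0]
    w = np.full(m, 1.0 / m)
    L = np.linalg.norm(G @ G.T, 2) + 1e-12  # Lipschitz constant of the QP gradient
    for _ in range(iters):
        grad = 2.0 * G @ (G.T @ w)
        w = project_simplex(w - grad / L)
    return G.T @ w

def gradient_sampling_step(f, grad_f, x, eps=0.1, m=20, seed=0):
    rng = np.random.default_rng(seed)
    # Sample gradients at x and at m random points in the eps-ball around x
    pts = x + eps * rng.uniform(-1.0, 1.0, size=(m, x.size))
    G = np.vstack([grad_f(x)] + [grad_f(p) for p in pts])
    d = -min_norm_in_hull(G)  # approximate steepest-descent direction
    # Backtracking (Armijo-type) line search along d
    t, fx = 1.0, f(x)
    while f(x + t * d) > fx - 1e-4 * t * np.dot(d, d) and t > 1e-12:
        t *= 0.5
    return x + t * d

# Nonsmooth, convex test function: f(x) = |x_0| + 0.5 * x_1^2, minimized at the origin
f = lambda x: abs(x[0]) + 0.5 * x[1] ** 2
grad_f = lambda x: np.array([np.sign(x[0]), x[1]])

x = np.array([2.0, 3.0])
for _ in range(100):
    x = gradient_sampling_step(f, grad_f, x, eps=0.1, m=20, seed=0)
```

Near the nonsmooth point the sampled gradients disagree in sign, so the minimum-norm hull element shrinks, which is what stabilizes the method where an ordinary gradient step would oscillate.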
Workshop II: Numerical Methods for Continuous Optimization