Optimal Nonlinear Control Using Hamilton–Jacobi–Bellman Viscosity Solutions on Quasi-Monte Carlo Grids

Christian Chilan
University of Illinois at Urbana-Champaign

The optimal control of nonlinear systems is traditionally obtained by applying the Pontryagin minimum principle. Although this methodology succeeds in finding optimal controls for complex systems, the resulting open-loop trajectory is guaranteed to be only locally optimal. Furthermore, computing open-loop solutions is computationally intensive, which rules out their use in feedback controllers and reduces robustness against disturbances. In principle, these issues can be addressed by solving the Hamilton–Jacobi–Bellman (HJB) partial differential equation (PDE). However, on uniform rectangular grids the space complexity of the problem grows exponentially with the system dimension. Moreover, the value function of the HJB equation may be nondifferentiable, which renders traditional PDE solution methods impractical. As a result, existing methods are suitable only for special problem classes, such as affine systems or problems whose value function is differentiable. To address these issues, this work introduces a methodology for solving the HJB equation for general nonlinear systems that combines PDE viscosity solutions, quasi–Monte Carlo grids, and kriging regression to implement globally optimal nonlinear feedback controllers for practical applications. The effectiveness of the method is illustrated on smooth and nondifferentiable problems with finite and infinite horizons.
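Two ingredients named in the abstract, quasi–Monte Carlo grids and kriging regression, can be sketched in a few lines. The following is a minimal illustration, not the paper's HJB solver: it samples a hypothetical nondifferentiable surrogate for a value function, V(x) = |x1 − 0.5| + |x2 − 0.5| (a placeholder chosen here, not taken from the paper), on a 2-D Halton quasi-Monte Carlo point set, then reconstructs it by simple kriging with a Gaussian covariance. All kernel parameters (`length`, `nugget`) are illustrative choices.

```python
import numpy as np

def halton(n, dim):
    """First n points of the Halton low-discrepancy sequence in [0, 1)^dim."""
    primes = [2, 3, 5, 7, 11, 13][:dim]
    pts = np.empty((n, dim))
    for d, base in enumerate(primes):
        for i in range(n):
            f, x, k = 1.0, 0.0, i + 1
            while k > 0:  # radical-inverse of k in the given prime base
                f /= base
                x += f * (k % base)
                k //= base
            pts[i, d] = x
    return pts

def rbf(A, B, length):
    """Gaussian (RBF) covariance between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * length ** 2))

def kriging_fit(X, y, length=0.1, nugget=1e-8):
    """Solve for simple-kriging weights; the nugget regularizes the Gram matrix."""
    K = rbf(X, X, length) + nugget * np.eye(len(X))
    return np.linalg.solve(K, y)

def kriging_predict(Xq, X, alpha, length=0.1):
    """Kriging prediction at query points Xq from training points X."""
    return rbf(Xq, X, length) @ alpha

# Placeholder value function (not the paper's HJB solution), sampled on a
# quasi-Monte Carlo grid instead of a uniform rectangular grid.
X = halton(256, 2)
y = np.abs(X - 0.5).sum(axis=1)
alpha = kriging_fit(X, y)

# Evaluate the regression at held-out random states.
rng = np.random.default_rng(0)
Xq = rng.random((200, 2))
err = np.abs(kriging_predict(Xq, X, alpha) - np.abs(Xq - 0.5).sum(axis=1))
```

The point of the sketch is the scaling argument from the abstract: a low-discrepancy point set covers the state space without the exponential node count of a uniform rectangular grid, and kriging supplies value-function estimates at arbitrary states between the samples, which is what a feedback controller would query online.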

Presentation (PDF File)

Workshop I: High Dimensional Hamilton–Jacobi Methods in Control and Differential Games