Our goal is to approximate a classical solution of the non-linear, high-dimensional Hamilton-Jacobi-Bellman equations arising from finite- and infinite-horizon feedback control problems, for deterministic as well as stochastic control. The differential operator of the related Hamilton-Jacobi equation can hardly be approximated in tensor product form. To this end we use a variational Monte Carlo approach: in the present framework, variational Monte Carlo methods are used to solve an inhomogeneous backward Kolmogorov equation inside each policy iteration step. In variational Monte Carlo one replaces the underlying objective functional by an empirical functional. A prototypical example is the treatment of regression problems in statistical learning by risk minimization over quadratic loss functions. For the actual optimization we need only computable gradients, or better Riemannian gradients, at the sample points.
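The empirical-functional idea can be illustrated by a minimal sketch (not the authors' implementation; the target function, sample sizes, and the monomial basis below are hypothetical choices): variational Monte Carlo replaces the exact objective by an empirical quadratic loss over sample points, which for a linear-in-parameters ansatz reduces to a least-squares regression.

```python
# Sketch: empirical risk minimization with a quadratic loss over Monte
# Carlo sample points, using a polynomial ansatz (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Sample points x_i and noisy observations y_i of an unknown target
# (cos(pi x) is a stand-in for the quantity being regressed).
x = rng.uniform(-1.0, 1.0, size=200)
y = np.cos(np.pi * x) + 0.05 * rng.standard_normal(x.size)

# Polynomial ansatz space: degree-6 monomial basis (hypothetical choice).
degree = 6
A = np.vander(x, degree + 1, increasing=True)  # design matrix Phi(x_i)

# Minimize the empirical quadratic loss (1/N) * sum_i (Phi(x_i) c - y_i)^2.
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Empirical risk at the minimizer.
risk = np.mean((A @ coeffs - y) ** 2)
```

The same structure carries over when the samples come from simulated trajectories of the controlled dynamics and the regression target stems from the backward Kolmogorov equation within a policy iteration step.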
We use multi-polynomial ansatz functions in hierarchical tensor (HT) product representation to represent the solution and to circumvent the curse of dimensionality (deep neural networks and other tools from machine learning may be used alternatively). First numerical results have been obtained by my PhD students Leon Sallandt and M. Oster.
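To make the dimension-scaling argument concrete, here is a minimal sketch (assumptions: a tensor-train format, which is a special case of the hierarchical tensor format, with a monomial basis per coordinate and arbitrary sizes): the coefficient tensor of a multi-polynomial ansatz in d variables has (degree+1)^d entries, while storing its cores costs only O(d · rank² · (degree+1)).

```python
# Sketch: evaluating a multi-polynomial ansatz stored in tensor-train
# form (a special case of the HT format); sizes are illustrative.
import numpy as np

rng = np.random.default_rng(1)
d, degree, rank = 10, 3, 2  # hypothetical dimension, degree, TT rank

# Random cores G_k of shape (r_{k-1}, degree+1, r_k) with r_0 = r_d = 1.
ranks = [1] + [rank] * (d - 1) + [1]
cores = [rng.standard_normal((ranks[k], degree + 1, ranks[k + 1]))
         for k in range(d)]

def evaluate(x):
    """Evaluate the TT-format polynomial at a point x in R^d."""
    v = np.ones((1,))
    for k in range(d):
        basis = x[k] ** np.arange(degree + 1)  # monomials 1, x, x^2, ...
        v = v @ np.einsum('rbs,b->rs', cores[k], basis)
    return v.item()

x = rng.uniform(-1.0, 1.0, size=d)
value = evaluate(x)

# Parameter count grows linearly in d, versus (degree+1)**d for the
# full coefficient tensor.
n_params = sum(c.size for c in cores)
```

The point of the sketch is the storage comparison: for d = 10 and degree 3 the full coefficient tensor has 4^10 ≈ 10^6 entries, while the cores hold only a few hundred parameters.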
Workshop II: Tensor Network States and Applications