Data-driven discretization of PDEs via tensor refinement

Vladimir Kazeev
Universität Wien

We consider the solution of PDE problems in several "physical" dimensions using a low-rank tensor decomposition (matrix product states, known as the tensor train decomposition in numerical mathematics) for the low-parametric representation of the data and of approximate solutions. Even for a problem posed in two or three dimensions, the uniform refinement of a standard, generic finite-element discretization increases the dimensionality of the data and of the approximate solution in this representation: each new dimension corresponds to a new scale resolved by the refined discretization. Adaptive compression by low-rank tensor approximation, however, renders such standard discretizations feasible and efficient: it constructs effective data-driven discretizations in the course of computation and operates directly on them. This approach has been implemented and analyzed for various classes of problems; two notable examples are problems with singularities and problems with high-frequency oscillations, both of which may require very fine meshes to achieve high accuracy. In this talk, we discuss the conditioning and stability of long MPS/TT discretizations of linear second-order elliptic PDEs. With due attention to these two aspects, we propose and analyze a generalized mixed finite-element formulation that simplifies the construction of tensor-structured PDE discretizations and reduces the tensor ranks of the data.
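The sketch below is a minimal illustration of the underlying tensorization idea (not of the specific formulation proposed in the talk): a function with a point singularity is sampled on a uniform dyadic grid of 2^L points, the resulting vector is reshaped into an L-dimensional 2×2×…×2 array (one binary dimension per scale) and compressed by sequential truncated SVDs, the standard TT-SVD procedure. The function tt_svd, the tolerance and the test function are illustrative assumptions, not part of the abstract.

```python
import numpy as np

def tt_svd(tensor, tol=1e-10):
    """Compress a d-dimensional array into tensor-train (MPS) cores
    by sequential truncated SVDs; return the cores and the TT ranks.
    (Illustrative helper, not code from the talk.)"""
    dims = tensor.shape
    d = len(dims)
    cores, ranks = [], [1]
    mat = tensor.reshape(1, -1)
    for k in range(d - 1):
        mat = mat.reshape(ranks[-1] * dims[k], -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, int(np.sum(s > tol * s[0])))      # truncated rank
        cores.append(u[:, :r].reshape(ranks[-1], dims[k], r))
        ranks.append(r)
        mat = s[:r, None] * vt[:r, :]                 # carry the remainder
    cores.append(mat.reshape(ranks[-1], dims[-1], 1))
    ranks.append(1)
    return cores, ranks

# Example: a singular function sampled on 2**L uniform grid points.
L = 16                                    # 65536 grid points
x = (np.arange(2**L) + 0.5) / 2**L
v = np.sqrt(np.abs(x - 1.0 / 3.0))        # illustrative point singularity
tensor = v.reshape((2,) * L)              # one binary dimension per scale
cores, ranks = tt_svd(tensor, tol=1e-8)
print("TT ranks:", ranks)                 # remain moderate despite 2**16 points
print("TT storage:", sum(c.size for c in cores), "vs full:", v.size)
```

In this quantized setting, the TT ranks rather than the number of grid points govern the cost, which is why very fine uniform meshes remain tractable once the representation is compressed adaptively.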
