Sampling for Hierarchical Approximation of Kernel Matrices

James Levitt
University of Texas at Austin
The Institute for Computational Engineering and Sciences (ICES)

A key challenge for rank-structured solvers and preconditioners is the construction of low-rank approximations of certain submatrices of the kernel matrix. The cost of computing or accessing all entries of such a submatrix grows quadratically with the problem size, which becomes prohibitively expensive for large problems. It is therefore necessary to construct the low-rank approximation, in our case an interpolative decomposition (ID), from only a small sample of the matrix entries. We compare a number of methods for sampling matrix entries, drawing inspiration from theoretical results in randomized linear algebra. Additionally, we present methods for estimating the error of a sampled ID approximation and for efficiently updating a sampled ID with additional samples. Together, these methods allow the number of samples to be chosen adaptively for each problem. We present results for real-world datasets.
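
The abstract does not spell out a particular sampling scheme or ID construction. As a rough illustration only, the minimal sketch below (Python with NumPy/SciPy; the uniform row sampling, the CPQR-based construction, the tolerance, and all names are assumptions, not the authors' method) shows the general idea: sample a subset of rows of a kernel block, compute a column-pivoted QR of the sample to pick skeleton columns and interpolation coefficients, and reuse them as a column ID of the full block.

    # Minimal sketch: column ID of a kernel block from sampled rows.
    # Assumes NumPy and SciPy; all choices here are illustrative.
    import numpy as np
    from scipy.linalg import qr, solve_triangular

    def sampled_column_id(A_rows, tol=1e-8):
        """Column ID built from a subset of rows of the full matrix.

        A_rows : (m_s, n) array holding the sampled rows.
        tol    : relative tolerance used to pick the rank from the CPQR.
        Returns skeleton column indices J and a (k, n) interpolation
        matrix T such that A[:, J] @ T approximates A.
        """
        # Column-pivoted QR of the sampled rows selects skeleton columns.
        _, R, piv = qr(A_rows, mode="economic", pivoting=True)
        d = np.abs(np.diag(R))
        k = max(1, int(np.sum(d > tol * d[0])))   # rank chosen from diagonal decay
        J = piv[:k]
        # Interpolation coefficients in pivoted order, then undo the pivoting.
        T_perm = solve_triangular(R[:k, :k], R[:k, :])
        T = np.empty_like(T_perm)
        T[:, piv] = T_perm
        return J, T

    # Toy usage: a Gaussian kernel block between two separated point sets.
    rng = np.random.default_rng(0)
    X = rng.uniform(size=(2000, 2))          # source points
    Y = rng.uniform(size=(300, 2)) + 1.5     # target points, separated from X

    def kernel_block(rows, cols):
        # Entries exp(-|y_i - x_j|^2) for the requested row/column indices.
        d2 = ((Y[rows][:, None, :] - X[cols][None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2)

    S = rng.choice(len(Y), size=100, replace=False)          # sampled row indices
    J, T = sampled_column_id(kernel_block(S, np.arange(len(X))))

    K = kernel_block(np.arange(len(Y)), np.arange(len(X)))   # full block, for checking only
    err = np.linalg.norm(K - K[:, J] @ T) / np.linalg.norm(K)
    print(f"relative error of sampled ID: {err:.2e}")

In this sketch the rank is picked from the decay of the CPQR diagonal, which is one simple way to let the sample itself drive the approximation; the talk's adaptive strategy, error estimators, and sample-update scheme are not reproduced here.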

