Sparse matrix operations are widely used in scientific simulation. As problems become more complex and as larger-scale, non-PDE-based applications expand, data irregularity is placing increased pressure on the resulting sparse kernel computations. The focus of this talk is on extending the limits of computation, particularly at the strong scaling limit.
For many sparse solvers, such as conjugate gradient or algebraic multigrid, sparse matrix-vector multiplication is a fundamental operation. Yet even this relatively straightforward computation encounters limitations on modern machines that offer both node-level and fine-grained parallelism. Large numbers of communicating cores can saturate the available bandwidth, and the algorithm often transfers duplicate information. We introduce strategies to mitigate these costs at both the method level and in the underlying communication algorithm.
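To fix ideas, a minimal sketch of the sparse matrix-vector product in compressed sparse row (CSR) format is shown below; the function name and array layout are illustrative, not taken from the talk. Each row's nonzeros are stored contiguously in `data`, with their column positions in `indices` and row boundaries in `indptr`.

```python
import numpy as np

def spmv_csr(data, indices, indptr, x):
    """Sequential CSR sparse matrix-vector product y = A @ x (illustrative sketch)."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        # Accumulate the nonzeros of row i: entries data[indptr[i]:indptr[i+1]].
        for jj in range(indptr[i], indptr[i + 1]):
            y[i] += data[jj] * x[indices[jj]]
    return y

# 3x3 example: A = [[4, 0, 1], [0, 3, 0], [2, 0, 5]]
data = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
indices = np.array([0, 2, 1, 0, 2])
indptr = np.array([0, 2, 3, 5])
x = np.array([1.0, 1.0, 1.0])
print(spmv_csr(data, indices, indptr, x))  # [5. 3. 7.]
```

In a distributed setting, the indirect accesses `x[indices[jj]]` are what drive communication: rows owned by one process reference vector entries owned by others, and naively each neighbor exchange can resend the same entries, which is the duplicated traffic the abstract refers to.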
Workshop III: HPC for Computationally and Data-Intensive Problems