Communication-Avoiding Sparse Matrix Algorithms for Large Graph and Machine Learning Problems

Aydin Buluc
Lawrence Berkeley National Laboratory

Machine learning (ML) and graph analytics have proven to be powerful tools for analyzing business, social, and scientific data sets. Data from the scientific domains where ML and graph methods excel are often sparse, with many missing entries. Many ML methods rely on numerical optimization algorithms, which are themselves built on sparse matrix operations. Sparse linear algebra software is therefore needed for ML, either because the datasets are sparse or because output (and model) sparsity must be enforced to avoid overfitting and increase interpretability.
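As a concrete illustration of how sparse matrix operations sit inside ML optimizers, the minimal sketch below runs plain gradient descent on a least-squares objective with a randomly generated sparse design matrix. The data, dimensions, and step size are illustrative assumptions rather than anything from the talk; the point is that each iteration's cost is dominated by two sparse matrix-vector products.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)

# Hypothetical sparse dataset: 1,000 samples, 500 features, ~1% nonzeros,
# standing in for the kind of sparse design matrices common in scientific ML.
X = sp.random(1000, 500, density=0.01, format="csr", random_state=rng)
y = rng.standard_normal(1000)

# Plain gradient descent on 0.5 * ||X w - y||^2. Each iteration is dominated
# by two sparse matrix-vector products (X @ w and X.T @ residual), which is
# exactly where sparse linear algebra (and, at scale, its communication cost)
# enters many ML optimization methods.
w = np.zeros(X.shape[1])
step = 1e-2
for _ in range(200):
    residual = X @ w - y
    grad = X.T @ residual
    w -= step * grad

print(np.linalg.norm(X @ w - y))  # residual norm after 200 iterations
```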

Additionally, a vast majority of graph algorithms can be cast in the language of sparse matrices, a connection exploited by the GraphBLAS effort. The challenges of applying ML and graph methods to science problems include extreme-scale data volumes and data rates, necessitating parallel algorithms that will run on exascale architectures. Due to sparsity, popular implementations of common ML and graph algorithms struggle to efficiently harness the capabilities of large-scale parallel computers. One prevalent problem is the increasingly dominant cost of communication. In this talk, I will describe a sample of our recent work on distributed-memory parallelization of prevalent ML and graph problems. I will highlight the importance of communication-avoiding sparse matrix primitives in achieving scalability on these problems.
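To illustrate the graphs-as-sparse-matrices view that GraphBLAS exploits, the sketch below expresses breadth-first search as repeated sparse matrix-vector products. It is a minimal example using SciPy on an assumed toy five-vertex graph, not the implementation discussed in the talk; the ordinary numeric product here merely stands in for the Boolean semiring a GraphBLAS library would use.

```python
import numpy as np
import scipy.sparse as sp

# Toy directed graph (an assumption for illustration): edge i -> j is stored
# as A[j, i] = 1 so that A @ frontier pushes the frontier one hop forward.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
n = 5
rows = [j for i, j in edges]
cols = [i for i, j in edges]
A = sp.csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))

source = 0
visited = np.zeros(n, dtype=bool)
frontier = np.zeros(n, dtype=bool)
frontier[source] = True
level = np.full(n, -1)   # BFS level of each vertex, -1 if unreached
level[source] = 0

d = 0
while frontier.any():
    visited |= frontier
    d += 1
    # One BFS step is one sparse matrix-vector product, masked by the set of
    # already-visited vertices (the mask plays the role of a GraphBLAS mask).
    frontier = np.asarray(A @ frontier).ravel() > 0
    frontier &= ~visited
    level[frontier] = d

print(level)  # expected BFS levels from vertex 0: [0 1 1 2 3]
```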

