Scalable kernel methods

George Biros
University of Texas at Austin

Kernel methods are ubiquitous in science and engineering. Roughly speaking, such algorithms accelerate computations that involve large, dense matrices. For example, multiplying an $N \times N$ dense matrix by a vector requires $O(N^2)$ work; kernel methods can reduce this cost to $O(N)$. State-of-the-art methods are based on low-rank approximations, but such methods do not scale as the dataset size grows. I will discuss algorithms that relax the low-rank assumption and enable both algorithmic and parallel scalability. I will also discuss applications of kernel methods to dimensionality reduction, preconditioning, supervised learning, and spectral clustering.
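
As a concrete illustration of the complexity claim above (not taken from the talk), the sketch below compares an exact Gaussian-kernel matrix-vector product, which costs $O(N^2)$ work, with a rank-$r$ Nystrom approximation whose matvec costs $O(Nr)$. The Gaussian kernel, the bandwidth $h$, the rank $r$, and the uniform landmark sampling are illustrative assumptions, not details of the speaker's method.

import numpy as np

def gaussian_kernel(X, Y, h=1.0):
    # K[i, j] = exp(-||x_i - y_j||^2 / (2 h^2)); forming K costs O(N*M) work.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * h * h))

rng = np.random.default_rng(0)
N, d, r = 1000, 3, 50                      # r is an assumed, illustrative rank
X = rng.standard_normal((N, d))
w = rng.standard_normal(N)

# Exact kernel matvec: O(N^2) work and memory.
u_exact = gaussian_kernel(X, X) @ w

# Nystrom low-rank approximation: pick r landmark points, so that
# K is approximated by K_Nr @ pinv(K_rr) @ K_Nr.T; the matvec is O(N*r).
idx = rng.choice(N, size=r, replace=False)
K_Nr = gaussian_kernel(X, X[idx])          # N x r
K_rr = gaussian_kernel(X[idx], X[idx])     # r x r
u_lr = K_Nr @ (np.linalg.pinv(K_rr) @ (K_Nr.T @ w))

print("relative error:", np.linalg.norm(u_exact - u_lr) / np.linalg.norm(u_exact))

When the kernel matrix is not well approximated by a fixed, small rank (e.g., for a small bandwidth $h$ or growing $N$), the error of such a scheme degrades, which is the scalability limitation the talk's algorithms aim to overcome.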


BIO
George Biros is the W. A. "Tex" Moncrief Chair in Simulation-Based Engineering Sciences in the Institute for Computational Engineering and Sciences and holds Full Professor appointments in the departments of Mechanical Engineering and Computer Science (by courtesy) at the University of Texas at Austin. From 2008 to 2011, he was an Associate Professor in the School of Computational Science and Engineering at Georgia Tech and in The Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University. From 2003 to 2008, he was an Assistant Professor in Mechanical Engineering and Applied Mechanics at the University of Pennsylvania. He received his BS in Mechanical Engineering from Aristotle University of Thessaloniki, Greece (1995), his MS in Biomedical Engineering from Carnegie Mellon University (1996), and his PhD in Computational Science and Engineering, also from Carnegie Mellon University (2000). He was a postdoctoral associate at the Courant Institute of Mathematical Sciences from 2000 to 2003. Biros was a member of two research groups that won the Gordon Bell Prize, at IEEE/ACM SC03 and SC10.
