Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation

Inderjit Dhillon
University of Texas at Austin
Computer Science

The L1-regularized Gaussian maximum likelihood estimator has been shown to have strong statistical guarantees for recovering a sparse inverse covariance matrix from very limited samples. In this talk, I will present an algorithm for solving the resulting optimization problem, which is a regularized log-determinant program. In contrast to other state-of-the-art methods that largely use first-order gradient information, our algorithm is based on Newton's method and employs a quadratic approximation, with substantial enhancements that leverage the structure of the sparse Gaussian MLE problem. A divide-and-conquer approach, combined with our quadratic approximation method, allows us to scale the algorithm to large instances. I will present experimental results on synthetic and real application data that demonstrate the considerable performance improvements of our method over competing approaches. This is joint work with Cho-Jui Hsieh, Matyas Sustik and Pradeep Ravikumar.
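To make the optimization problem concrete, the following is a minimal NumPy sketch of the regularized log-determinant objective the abstract refers to, minimized here with a generic first-order proximal-gradient loop. This is only an illustrative baseline on the same objective, not the Newton-based quadratic approximation method described in the talk; all variable names, the step size, and the regularization level are illustrative choices.

```python
import numpy as np

def objective(X, S, lam):
    # L1-regularized negative Gaussian log-likelihood:
    #   f(X) = -log det X + tr(S X) + lam * ||X||_1
    sign, logdet = np.linalg.slogdet(X)
    return -logdet + np.trace(S @ X) + lam * np.abs(X).sum()

# Toy data: samples from a Gaussian whose precision matrix is sparse (tridiagonal).
rng = np.random.default_rng(0)
p, n, lam = 5, 200, 0.1
Theta_true = (np.eye(p)
              + 0.3 * np.diag(np.ones(p - 1), 1)
              + 0.3 * np.diag(np.ones(p - 1), -1))
Sigma = np.linalg.inv(Theta_true)
samples = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = np.cov(samples, rowvar=False)  # empirical covariance

# Generic proximal-gradient baseline (NOT the talk's second-order method):
# the gradient of the smooth part is S - X^{-1}; the prox of lam*||.||_1
# is entrywise soft-thresholding.
X = np.eye(p)
step = 0.1
for _ in range(500):
    grad = S - np.linalg.inv(X)
    Y = X - step * grad
    X = np.sign(Y) * np.maximum(np.abs(Y) - step * lam, 0.0)
    X = (X + X.T) / 2  # keep the iterate symmetric

print(objective(X, S, lam))
```

The point of the sketch is the objective function itself: Newton-type methods like the one in the talk minimize a quadratic model of the smooth part (-log det X + tr(SX)) while handling the L1 term exactly, which is what makes exploiting sparsity structure so effective.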


Back to Structure and Randomness in System Identification and Learning