Gaussian Processes (I)

Raquel Urtasun
Toyota Technological Institute at Chicago

Gaussian processes I and II: This tutorial aims to provide the theoretical background for a good understanding of Gaussian processes, and to illustrate applications where Gaussian processes have been shown to work well, in some cases outperforming the state of the art. We will begin with an introduction to Gaussian processes, starting from parametric models and generalized linear models, and demonstrate how the number of basis functions can be increased to yield nonparametric regression models. We will then give an overview of how Gaussian process prediction is performed by conditioning in a joint Gaussian density, and demonstrate how the covariance parameters can be learned and what role the log determinant plays in the likelihood. Gaussian processes exhibit a natural trade-off between data fit and regularization; we will explain where this trade-off comes from. We will then extend Gaussian processes beyond the Gaussian noise model and show how to handle non-Gaussian likelihoods, including likelihoods for classification. Finally, we will show how to make Gaussian process models computationally efficient. The usefulness of these processes will be demonstrated in a wide variety of vision-related applications, including pose estimation and object recognition.

The second part of the tutorial will focus on how to use Gaussian processes for manifold modeling. In particular, we will review the Gaussian process latent variable model (GPLVM) as well as its variants that incorporate prior knowledge in the form of dynamics, labels, topologies, and physical constraints. We will finish by discussing a wide range of applications of these latent variable models, including character animation, articulated tracking, and deformable surface estimation.
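To make the prediction-by-conditioning step and the data-fit/log-determinant trade-off concrete, below is a minimal NumPy sketch of Gaussian process regression. The squared-exponential kernel, the noise level, and all function names here are illustrative assumptions, not material from the tutorial slides.

import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential covariance between two sets of 1-D inputs.
    sqdist = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale ** 2)

def gp_predict(X_train, y_train, X_test, noise=0.1):
    # Posterior mean and variance at X_test, obtained by conditioning
    # the joint Gaussian over training and test outputs.
    n = len(X_train)
    K = rbf_kernel(X_train, X_train) + noise ** 2 * np.eye(n)
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)                     # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha                          # posterior mean
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v ** 2, axis=0)  # posterior variance
    # Log marginal likelihood: a data-fit term plus a log-determinant
    # complexity penalty -- the trade-off discussed in the abstract.
    log_ml = (-0.5 * y_train @ alpha
              - np.sum(np.log(np.diag(L)))
              - 0.5 * n * np.log(2 * np.pi))
    return mean, var, log_ml

# Toy usage: regress noisy samples of sin(x).
X = np.linspace(0, 5, 10)
y = np.sin(X) + 0.1 * np.random.randn(10)
X_star = np.linspace(0, 5, 50)
mu, var, log_ml = gp_predict(X, y, X_star)

Maximizing log_ml with respect to the kernel parameters (lengthscale, variance, noise) is how the covariance parameters are learned; the log-determinant term penalizes overly flexible covariances, which is the source of the regularization effect.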

