Graduate Summer School: Deep Learning, Feature Learning

July 9 - 27, 2012

Overview

One of the central challenges for machine learning, AI, and computational neuroscience is learning representations of the perceptual world. This summer school will review recent developments in feature learning and representation learning, with a particular emphasis on “deep learning” methods, which learn multi-layer hierarchies of representations.

Topics will include unsupervised learning methods such as stacked restricted Boltzmann machines, sparse coding, denoising auto-encoders, and methods for learning over-complete representations; supervised methods for deep architectures and metric-learning criteria for vector-space embeddings; deep convolutional architectures and their applications to images, video, audio, and text; and compositional hierarchies and latent-variable models.
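As a concrete illustration of one topic on this list, the sketch below trains a single-layer denoising auto-encoder with tied weights in plain NumPy. It is a minimal sketch under illustrative assumptions: the toy data, layer sizes, and hyperparameters are chosen here for brevity and are not drawn from the course material.

    # Minimal denoising auto-encoder sketch (illustrative; all names and
    # hyperparameters are assumptions, not taken from the course material).
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy binary data: 200 examples, 20 features.
    X = (rng.random((200, 20)) < 0.3).astype(float)

    n_visible, n_hidden = X.shape[1], 10
    W = rng.normal(0.0, 0.1, (n_visible, n_hidden))  # tied encoder/decoder weights
    b_h = np.zeros(n_hidden)    # hidden bias
    b_v = np.zeros(n_visible)   # visible (reconstruction) bias
    lr, corruption = 0.1, 0.3

    for epoch in range(20):
        for x in X:
            # Corrupt the input by zeroing a random subset of its entries.
            x_tilde = x * (rng.random(n_visible) >= corruption)
            # Encode the corrupted input, decode with the transposed weights.
            h = sigmoid(x_tilde @ W + b_h)
            x_hat = sigmoid(h @ W.T + b_v)
            # Cross-entropy gradient w.r.t. the output pre-activation,
            # measured against the CLEAN input x (the denoising criterion).
            g_out = x_hat - x
            g_h = (g_out @ W) * h * (1.0 - h)  # backprop into the hidden layer
            W -= lr * (np.outer(x_tilde, g_h) + np.outer(g_out, h))
            b_v -= lr * g_out
            b_h -= lr * g_h

Stacking such layers, each trained on the hidden codes produced by the previous one, yields the layer-wise pre-training scheme behind stacked architectures of this kind.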

Mathematical issues will be addressed, particularly how to characterize the low-dimensional structure of natural data in high-dimensional spaces; training density models with intractable partition functions; the geometry of non-convex and ill-conditioned loss functions for deep learning; efficient optimization methods for inference and deep learning; the representational efficiency of deep architectures; and the advantages of high-dimensional and over-complete representations.
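To make the partition-function issue concrete, consider the textbook restricted Boltzmann machine formulation (a standard example, not tied to any particular lecture). The model assigns probabilities through an energy function, but the normalizer Z sums over every joint configuration of visible and hidden units:

    % Energy-based density model with an intractable normalizer Z
    % (standard RBM formulation; illustrative, not from the program).
    p(\mathbf{v}) = \frac{1}{Z} \sum_{\mathbf{h}} e^{-E(\mathbf{v}, \mathbf{h})},
    \qquad
    Z = \sum_{\mathbf{v}} \sum_{\mathbf{h}} e^{-E(\mathbf{v}, \mathbf{h})},
    \qquad
    E(\mathbf{v}, \mathbf{h}) = -\mathbf{b}^{\top}\mathbf{v} - \mathbf{c}^{\top}\mathbf{h} - \mathbf{v}^{\top} W \mathbf{h}.

For binary units, Z ranges over all 2^{n_v + n_h} states, so exact maximum-likelihood gradients are intractable; this is what motivates approximations such as contrastive divergence.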

The Canadian Institute for Advanced Research (CIFAR) is cosponsoring the program. Ten students associated with CIFAR’s Neural Computation and Adaptive Perception (NCAP) Program will participate with CIFAR support.

Organizing Committee

Yoshua Bengio (University of Montreal)
Geoffrey Hinton (University of Toronto)
Yann LeCun (New York University)
Andrew Ng (Stanford University)
Stanley Osher (University of California, Los Angeles)