This summer school is motivated by recent advances that offer the promise of building rigorous models of human cognition by applying the mathematical and computational tools developed for designing artificial systems. In turn, the complexity of human cognitive abilities offers challenges that test current theories and drive the development of more advanced tools. The goal is to develop a common mathematical framework for all aspects of cognition and to review how it explains empirical phenomena in the major areas of cognitive science, including vision, memory, reasoning, learning, planning, and language. The main theoretical theme is to model cognitive abilities as forms of probabilistic inference over structured relational systems such as graphs and generative grammars. We will focus on how the mind learns complex generative models of the world and how it inverts or conditions these models on observed data to infer world structure. We will place particular emphasis on vision, because this is currently an area of great activity, but we will address all aspects of cognition. Other important themes include the combination of logic with probability and the development of probabilistic programming languages.
The first week will introduce the basic concepts and techniques, including machine learning and artificial intelligence, and give applications to cognitive modeling. The second week will focus on more advanced methods, including stochastic grammars, with examples from natural language and vision. We will present technical material in the mornings and illustrate it with applications in the afternoons. There will be breakout sessions, opportunities to meet and talk with the speakers, and additional evening lectures on topics of interest.
Josh Tenenbaum (Massachusetts Institute of Technology)
Alan Yuille (University of California, Los Angeles (UCLA))