Probabilistic Models of Cognition: The Mathematics of Mind

January 24 - 28, 2005

Overview

The workshop involves leaders from Cognitive Science and experts from Computer Science, Mathematics and Statistics, who are interested in building bridges to Cognitive Science. The workshop is motivated by recent advances, reviewed below, which offer the promise of modeling human cognition mathematically. These advances have occurred largely because the mathematical and computational tools developed for designing artificial systems are beginning to make an impact on theoretical and empirical work in Cognitive Science. In turn, Cognitive Science offers an enormous range of complex problems which challenge and test these theories. The intention of the workshop is to stress themes and tools that are common to all aspects of Cognitive Science rather than concentrating on any specific area. The IPAM workshop will be the first meeting that brings together experts from across the major areas of cognitive science – including vision, memory, reasoning, learning, planning, and language – to discuss these new approaches and their potential to provide a unifying and rigorous theoretical framework for the field.

The main theoretical theme of the workshop is to model cognitive abilities as sophisticated forms of probabilistic inference. The approach is “sophisticated” in at least two respects. First, the knowledge and beliefs of cognitive agents are modeled using rich probability distributions defined over structured relational systems, such as graphs and generative grammars. Second, the learning and reasoning processes of cognitive agents are modeled using advanced mathematical techniques from statistical estimation, statistical physics, and stochastic differential equations.
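
To make the theme concrete: the computation at the heart of all of these models is Bayesian updating of beliefs over a space of hypotheses. The following minimal sketch (in Python, with an invented two-hypothesis coin example rather than any model discussed at the workshop) shows the form such an inference takes:

    # Minimal Bayesian updating over a discrete hypothesis space.
    # The hypotheses and likelihoods are illustrative placeholders.

    def posterior(prior, likelihood, data):
        """prior: {hypothesis: P(h)}; likelihood(data, h) returns P(data | h)."""
        unnorm = {h: p * likelihood(data, h) for h, p in prior.items()}
        z = sum(unnorm.values())
        return {h: u / z for h, u in unnorm.items()}

    # Which of two coins generated a sequence of flips?
    prior = {"fair": 0.5, "biased": 0.5}
    bias = {"fair": 0.5, "biased": 0.9}

    def likelihood(flips, h):
        p = 1.0
        for f in flips:
            p *= bias[h] if f == "H" else 1.0 - bias[h]
        return p

    print(posterior(prior, likelihood, ["H", "H", "H", "T"]))

The models discussed below replace this toy hypothesis space with structured objects such as graphs or grammars, and the exact sum over hypotheses with approximate inference algorithms.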

Early mathematical formulations of this approach include Grenander’s pattern theory (Grenander 1993) and Pearl’s work on probabilistic reasoning (Pearl 1988). This approach has been responsible for a broad revolution in the design of intelligent machines, with implications for computer vision, machine learning, speech and language processing, and planning and decision making (see Russell and Norvig, 2003, for a survey). Applying these ideas to modeling aspects of human cognition was not straightforward, despite pioneering work by Shepard (1987) and Anderson (1990). Indeed, classic work in cognitive psychology by Kahneman, Tversky, and their colleagues suggested that human cognition might be non-rational, non-optimal, and non-probabilistic in fundamental ways. Many recent advances in constructing probabilistic models of cognition are due largely to a better theoretical understanding of how to represent probability models for these types of problems and how to devise effective algorithms for learning and inference. It is starting to seem plausible that human cognition may in fact be amenable to analysis in terms of rational probabilistic inference, and that in core domains human cognition approaches an optimal level of performance given the statistical properties of the domain. Nevertheless, these new ideas remain unfamiliar to most cognitive scientists, and it is only in the last five years that they have started making a significant impact on the field. So far their greatest impact has been in vision but, as described below, the same theme is increasingly showing up in all aspects of cognitive science.

Vision is the subfield of cognitive science where these models are most advanced and where these ideas have had the most impact. Mumford (2002) and Kersten and Yuille (2003) give overviews of this approach from mathematical and cognitive science perspectives. Recent work (Liu, Knill and Kersten 1995; Weiss, Simoncelli, and Adelson 2001) has shown that these techniques, combined with Bayesian decision theory, allow cognitive scientists to extend the classic idea of an ideal observer (one who performs optimally) to complex stimuli, and that the resulting models can account for many aspects of human visual perception. There have also been successes in formulating the classic Gestalt laws of vision in terms of probabilistic models (Zhu 1999), which relate to cognitive science models of these laws (Shipley and Kellman 2001). Researchers have started exploring grammatical models of vision using compositionality (Geman, Potter, and Chi 2002) and have developed stochastic grammars for image parsing (Tu, Chen, Yuille and Zhu 2003). These provide links between vision and probabilistic approaches to language processing (Manning and Schutze 2000; Geman and Johnson 2003), which are becoming increasingly successful at modeling experiments in psycholinguistics (Jurafsky 2003).
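
As a rough illustration of what an ideal observer analysis involves, the sketch below infers a scene property from a noisy measurement and selects the optimal response under Bayesian decision theory. The discrete scene values, Gaussian noise model, and 0-1 loss are illustrative assumptions, not the stimuli of the studies cited:

    import math

    def gauss(x, mu, sigma):
        """Gaussian likelihood of measurement x given scene value mu."""
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    scenes = [-1.0, 0.0, 1.0]                       # candidate scene values (assumed)
    prior = {s: 1.0 / len(scenes) for s in scenes}  # flat prior (assumed)
    sigma_noise = 0.8                               # assumed measurement noise

    def ideal_observer(x):
        post = {s: prior[s] * gauss(x, s, sigma_noise) for s in scenes}
        z = sum(post.values())
        post = {s: p / z for s, p in post.items()}
        # Under 0-1 loss the optimal decision is the posterior mode (MAP).
        return max(post, key=post.get), post

    print(ideal_observer(0.6))   # a measurement of 0.6 is best explained by 1.0

Ideal-observer analyses then compare human performance against this computed optimum.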

An advantage of this probabilistic-inference approach is that it naturally leads to techniques for coupling different sensory modalities and for integrating perception with planning. Recent work (Ernst and Banks 2002, Schrater and Kersten 2000) has built on theoretical studies (Clark and Yuille 1990) to model the coupling of visual and haptic cues, and has shown a good fit to experimental data. Stankiewicz et al (Stankiewicz, Legge, Mansfield, and Schlicht 2003) have used the partially observable planning framework of Kaelbling et al (Kaelbling, Littman and Cassandra 1998) to design an ideal observer model of how humans navigate through mazes, and demonstrated that this model gives a good fit to data from subjects navigating mazes in virtual reality.
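
For cue coupling in particular, the optimal rule against which Ernst and Banks (2002) compared human visual-haptic judgments is a linear combination weighting each cue by its reliability (inverse variance). A small sketch, with invented cue values and noise levels:

    def fuse(est_v, var_v, est_h, var_h):
        """Combine a visual and a haptic estimate; weights are inverse variances."""
        w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_h)
        w_h = 1.0 - w_v
        fused = w_v * est_v + w_h * est_h
        fused_var = 1.0 / (1.0 / var_v + 1.0 / var_h)  # never worse than either cue
        return fused, fused_var

    # A reliable visual cue dominates a noisy haptic one:
    print(fuse(est_v=10.0, var_v=1.0, est_h=14.0, var_h=4.0))  # ≈ (10.8, 0.8)

A signature of this rule, borne out experimentally, is that the fused estimate is pulled toward the more reliable cue and has lower variance than either cue alone.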

More recently, these ideas have started making an impact on causal learning and inference. This work builds on probabilistic reasoning (Pearl 1988) and recent models of causality (Pearl 2000; Spirtes, Glymour, and Scheines 2000). Cheng’s power PC model (Cheng 1997) showed that models of this type give a good description of adult performance on experimental tasks. Tenenbaum and Griffiths (2001) extended this work by showing that human subjects use Bayesian model selection to compare competing hypothetical causal structures. Most recently, Gopnik et al (Gopnik, Glymour, Sobel, Schulz, Kushnir, and Danks 2004) showed that children’s learning can also be modeled in this way. Related work on concept learning by Tenenbaum (1999, 2000) showed that experimental data can be explained by a model in which humans maintain a set of hypotheses that are updated as more data are received.
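
Cheng’s power PC theory, for instance, gives a closed-form expression for the strength of a generative cause: the contingency ΔP = P(e|c) − P(e|¬c), rescaled by the room the background leaves for the cause to act. A brief sketch with invented contingency data:

    def delta_p(p_e_c, p_e_notc):
        """Contingency: P(effect | cause) − P(effect | no cause)."""
        return p_e_c - p_e_notc

    def causal_power(p_e_c, p_e_notc):
        """Cheng's generative causal power; defined when P(e | no cause) < 1."""
        return delta_p(p_e_c, p_e_notc) / (1.0 - p_e_notc)

    # Effect occurs on 16 of 20 trials with the cause, 8 of 20 without:
    p1, p0 = 16 / 20, 8 / 20
    print(delta_p(p1, p0))       # ≈ 0.4
    print(causal_power(p1, p0))  # ≈ 0.4 / 0.6 ≈ 0.67

Bayesian treatments such as Tenenbaum and Griffiths (2001) go further, scoring whole causal structures against one another rather than estimating a single strength parameter.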

There is a range of other cognitive abilities that appear to be explicable within this framework. How people learn the forms and meanings of words from linguistic and perceptual experience has been the subject of a number of recent models (Brent, 1999; Tenenbaum and Xu, 2000; Griffiths and Steyvers, 2003; Blei, Griffiths, Tenenbaum, and Jordan, in press), which draw on and advance state-of-the-art techniques from information retrieval, computational linguistics, and machine learning (Manning and Schutze, 2000). In earlier, related work, Anderson (1990) and Shiffrin and Steyvers (1997) analyzed how people form long-term memories, and prioritize the retrieval of memories, as a function of the statistics of their experience with the relevant events. Another example is work by Oaksford and Chater (1994), Chater and Oaksford (1999), McKenzie (2003; McKenzie and Mikkelsen, 2000), and Krauss, Martignon, and Hoffrage (1999), explaining the tendency of humans to use simple heuristics for certain reasoning and decision tasks as approximations to Bayesian inference. This approach relates to theoretical analyses (Pearl 1984, Coughlan and Yuille 2003) which prove that heuristics can be statistically very effective for inference.
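
As a flavor of the word-learning problem, the sketch below performs cross-situational inference for a single novel word: the hypotheses are candidate referents, and each scene in which the word is heard provides evidence. The scenes, referents, and noise rate are invented, and the cited models are considerably richer:

    candidate_referents = ["dog", "ball", "cup"]
    # Objects present on each occasion the novel word was heard:
    scenes = [["dog", "ball"], ["dog", "cup"], ["dog", "ball"]]
    noise = 0.05   # assumed chance of hearing the word without its referent

    belief = {r: 1.0 / len(candidate_referents) for r in candidate_referents}
    for scene in scenes:
        for r in belief:                # likelihood: the referent should be present
            belief[r] *= 1.0 if r in scene else noise
    z = sum(belief.values())
    belief = {r: p / z for r, p in belief.items()}

    print(belief)   # "dog" dominates: it alone appears in every scene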

Finally, studies of the temporal characteristics of human causal learning (Danks, Griffiths, and Tenenbaum 2003) suggest relationships to the study of stochastic differential equations. It has been shown that causal relationships can be learned by variants of the standard Rescorla-Wagner model, which has been widely used in cognitive science as a model of associative learning. The equilibria of this model have recently been classified (Danks 2003) and its convergence rates analyzed using stochastic approximation theory (Yuille 2003). Related techniques have been applied by Dayan and colleagues (Kakade and Dayan, 2002; Dayan, Kakade and Montague, 2000) to analyze the dynamics of animal learning behavior and to understand connections between basic human and animal learning processes.
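
The Rescorla-Wagner rule itself is a one-line update: on each trial, every present cue’s associative strength moves in proportion to the prediction error. The sketch below runs it on invented trials with an always-present context cue; consistent with the equilibrium analysis cited above (Danks 2003), the cue’s strength settles near the contingency ΔP. The learning rate and trial statistics are illustrative:

    import random

    random.seed(0)
    alpha = 0.05                     # learning rate (assumed)
    lam = 1.0                        # asymptote when the effect occurs
    v = {"cue": 0.0, "context": 0.0}

    for trial in range(20000):
        present = ["context"] + (["cue"] if random.random() < 0.5 else [])
        p_effect = 0.8 if "cue" in present else 0.2    # invented contingency
        outcome = lam if random.random() < p_effect else 0.0
        error = outcome - sum(v[c] for c in present)   # prediction error
        for c in present:                              # Rescorla-Wagner update
            v[c] += alpha * error

    print(v)   # near equilibrium: v["cue"] ≈ 0.8 − 0.2 = 0.6, v["context"] ≈ 0.2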

Organizing Committee

Josh Tenenbaum (Massachusetts Institute of Technology)
Alan Yuille, Chair (UCLA)