First-order probabilistic models in human cognition

Josh Tenenbaum
Massachusetts Institute of Technology
Brain and Cognitive Sciences, Computer Science, and Artificial Intelligence

Cognitive scientists have traditionally treated logical representation and statistical learning and inference as mutually incompatible approaches to modeling the mind. But properly understood, these concepts are in fact complementary. Logic provides rich frameworks for representing knowledge, while statistical methods provide powerful tools both for using these knowledge representations to guide uncertain reasoning and for learning them from data. I will sketch several examples of cognitive models based on probabilistic inference over knowledge representations defined in first-order logic, including models of human causal reasoning, semantic inference, kinship relations, and object tracking. I will also contrast these accounts with connectionist models, an alternative approach to probabilistic learning and reasoning that rejects explicitly logical representations.
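To make the idea of probabilistic inference over first-order representations concrete, here is a minimal sketch in the spirit of the kinship example: a prior over possible worlds of `parent` relations, a first-order rule defining `grandparent`, and posterior inference by rejection sampling. All names, probabilities, and observations are illustrative assumptions, not taken from the talk.

```python
import random

random.seed(0)

PEOPLE = ["ann", "bob", "cara", "dan"]

def sample_world():
    # Prior over worlds: each ordered pair (a, b) with a != b is
    # independently a parent relation with small probability.
    return {(a, b) for a in PEOPLE for b in PEOPLE
            if a != b and random.random() < 0.2}

def grandparent(world, x, z):
    # First-order rule: grandparent(X, Z) <- parent(X, Y) & parent(Y, Z)
    return any((x, y) in world and (y, z) in world for y in PEOPLE)

# Hypothetical noise-free observations of ground facts.
observed = [(("ann", "bob"), True), (("bob", "cara"), True)]

# Rejection sampling: keep only worlds consistent with the observations,
# then estimate P(grandparent(ann, dan) | observations).
hits = total = 0
for _ in range(20000):
    w = sample_world()
    if all((pair in w) == val for pair, val in observed):
        total += 1
        hits += grandparent(w, "ann", "dan")

p = hits / total
print(p)  # strictly between 0 and 1: the query is uncertain given the data
```

The point of the sketch is that the logical rule supplies the structured hypothesis space, while the sampler handles the uncertainty; richer versions replace rejection sampling with more efficient inference and put priors over the rules themselves.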


Presentation (PDF File)
Presentation (PowerPoint File)

Back to Graduate Summer School: Probabilistic Models of Cognition: The Mathematics of Mind