Reinforcement learning in human neuroimaging; uncertainty-based competition

Nathaniel Daw
New York University

I first review recent studies of reinforcement learning in humans using functional neuroimaging, and discuss how their results both parallel and complement what is known from animal work, particularly concerning prediction errors. In the second part of my talk, I consider evidence that instrumental learning is driven by multiple, behaviorally and neurally dissociable pathways. I propose an account under which these reflect different reinforcement learning strategies, model-based and model-free, and the reconciliation of their action preferences is treated as a Bayesian cue-combination problem based on uncertainty.
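The uncertainty-based reconciliation described above can be illustrated with a minimal sketch. This is not the speaker's implementation; it simply shows the standard Bayesian cue-combination rule (inverse-variance weighting) applied to two hypothetical value estimates, one from a model-free learner updated by a prediction error and one from a model-based learner. All function and variable names here are illustrative assumptions.

```python
def td_update(q, reward, alpha=0.1):
    """One model-free temporal-difference step: the cached value q moves
    toward the observed reward in proportion to the prediction error."""
    delta = reward - q  # prediction error
    return q + alpha * delta

def combine(mu_mb, var_mb, mu_mf, var_mf):
    """Bayesian cue combination of a model-based estimate (mu_mb, var_mb)
    and a model-free estimate (mu_mf, var_mf): each system is weighted by
    its precision (inverse variance), so the less uncertain system
    dominates the combined action value."""
    prec_mb, prec_mf = 1.0 / var_mb, 1.0 / var_mf
    w_mb = prec_mb / (prec_mb + prec_mf)
    mu = w_mb * mu_mb + (1.0 - w_mb) * mu_mf
    var = 1.0 / (prec_mb + prec_mf)
    return mu, var

# With equal uncertainty the estimates are averaged; the combined
# variance is smaller than either input's.
mu, var = combine(1.0, 0.5, 0.0, 0.5)
```

When one system becomes more uncertain (its variance grows), its weight shrinks smoothly, which is the sense in which the competition between strategies is resolved by uncertainty rather than by a fixed arbiter.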
