A great distance separates the raw inputs sensed at our retinas from what we experience as the contents of our percepts and thoughts: objects and scenes with 3D shapes and physical properties, predictions of how these scenes will unfold, and the plans we form toward them. What algorithms underlie these cognitive abilities in the brain, including the formats of neural object representations, how these representations are inferred across the sensory cortex, and how they are used to predict what will happen next? This talk will address these questions by introducing novel, integrative computational frameworks that operate natively at both the cognitive and neural levels. Drawing on these multilevel computational theories, I’ll provide evidence that neural populations and neural dynamics in humans and macaques are best understood as building and manipulating generative models of how scenes form, dynamically unfold, and project to sensory signals, during both spontaneous visual processing and goal-directed action.