Comparisons between human performance and model predictions in motion perception

Hongjing Lu
University of Hong Kong
Psychology

Humans perceive motion in visual scenes quickly and reliably, despite the notorious correspondence problem in motion stimuli. This talk will illustrate the use of a Bayesian framework in motion perception to investigate how humans make inferences from noisy visual data. I will present various motion phenomena that motivate the development of computational models to account for human performance. Ideal observer analysis based on conventional signal detection theory will be presented first, providing a benchmark for assessing how well humans perform a specific visual task. I will then show how to reformulate the ideal observer model within the Bayesian framework. The quantitative discrepancy between human performance and ideal observer predictions motivates further development of Bayesian models. I will show that Bayesian models incorporating generic priors provide close quantitative fits to patterns of human inference, whereas alternative Bayesian models lacking generic priors prove inadequate. I will also sketch how the Bayesian framework yields empirical predictions that can guide the design of experimental tests in motion perception.
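
To make the abstract's progression concrete, the sketch below is a minimal illustration rather than the speaker's actual models. It pairs a signal-detection-theory ideal observer (a d'-based accuracy benchmark for a binary motion-direction task) with a Bayesian velocity estimate that combines noisy local motion measurements with a generic "slowness" prior, contrasted against the prior-free maximum-likelihood estimate. The Gaussian likelihood, the zero-mean Gaussian prior, and all parameter values are illustrative assumptions.

```python
"""Toy sketch of two ideas from the abstract (illustrative assumptions only):
1. A signal-detection-theory ideal observer giving a benchmark percent correct.
2. A Bayesian (MAP) velocity estimate under a generic zero-mean "slowness" prior,
   compared with the prior-free maximum-likelihood estimate.
"""

import math
import numpy as np


def sdt_percent_correct(signal_separation: float, noise_sd: float) -> float:
    """Ideal-observer accuracy for an equal-variance Gaussian two-class task.

    d' = separation / noise_sd; with an unbiased criterion the predicted
    proportion correct is Phi(d' / 2), where Phi is the standard normal CDF.
    """
    d_prime = signal_separation / noise_sd
    return 0.5 * (1.0 + math.erf((d_prime / 2.0) / math.sqrt(2.0)))


def map_velocity(measurements: np.ndarray,
                 noise_sd: float,
                 prior_sd: float) -> np.ndarray:
    """MAP estimate of a 2D velocity under a zero-mean Gaussian slowness prior.

    measurements: (N, 2) array of noisy local velocity estimates, assumed
    i.i.d. Gaussian around the true velocity with std `noise_sd`.
    The conjugate Gaussian prior shrinks the sample mean toward zero velocity;
    as prior_sd grows, the estimate approaches the maximum-likelihood estimate.
    """
    n = measurements.shape[0]
    sample_mean = measurements.mean(axis=0)
    shrinkage = prior_sd**2 / (prior_sd**2 + noise_sd**2 / n)
    return shrinkage * sample_mean


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_v = np.array([0.5, 0.0])       # slow rightward motion (hypothetical units)
    noise_sd, prior_sd = 1.0, 0.3       # hypothetical noise and prior scales
    obs = true_v + rng.normal(0.0, noise_sd, size=(8, 2))

    print("ideal-observer accuracy:", sdt_percent_correct(1.0, noise_sd))
    print("ML estimate :", obs.mean(axis=0))
    print("MAP estimate:", map_velocity(obs, noise_sd, prior_sd))
```

Under these assumptions, the contrast between the MAP and maximum-likelihood estimates gives one simple way to see how predictions from a model with a generic prior and one without it can diverge on noisy stimuli.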

Presentation (PDF File)
