Bayesian Decision Theory and Sequential Sampling

Angela Yu
Princeton University

Three major aspects of cognitive processing lend themselves naturally to
Bayesian analysis: inference, learning, and decision-making. In this
lecture, I will focus on Bayesian Decision Theory. After introducing
the general framework, I will illustrate the ideas with some examples.
The first example shows how, depending on the cost function, the
"optimal" estimate of a hidden variable based on indirect and noisy
observations may be either the mean, median, or mode of the posterior
distribution.
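
As a quick numerical illustration (a minimal sketch of my own, not
material from the lecture; the skewed posterior below is an arbitrary
choice), minimizing posterior expected loss over a discretized posterior
recovers the mean under squared loss, the median under absolute loss,
and the mode under 0-1 loss:

    import numpy as np

    # Hypothetical skewed posterior over a grid of candidate values.
    x = np.linspace(0.0, 10.0, 1001)
    posterior = x * np.exp(-x)          # unnormalized Gamma(2,1)-shaped density
    posterior /= posterior.sum()

    def bayes_estimate(loss):
        """Grid point minimizing the posterior expected loss."""
        risks = [np.sum(posterior * loss(x, a)) for a in x]
        return x[np.argmin(risks)]

    est_sq = bayes_estimate(lambda v, a: (v - a) ** 2)            # squared loss
    est_abs = bayes_estimate(lambda v, a: np.abs(v - a))          # absolute loss
    est_01 = bayes_estimate(lambda v, a: (v != a).astype(float))  # 0-1 loss

    print(est_sq, np.sum(posterior * x))                             # the mean
    print(est_abs, x[np.searchsorted(np.cumsum(posterior), 0.5)])    # the median
    print(est_01, x[np.argmax(posterior)])                           # the mode
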
The second example focuses on 2-alternative decision-making based on
sequentially observed iid samples, with a particular emphasis on the
speed with which decisions are made, in addition to any accuracy
objectives. The optimal policy is realized by comparing the cumulative
posterior probability to a fixed threshold; this process is equivalent
to a 1-D random walk with absorbing boundaries.
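
The following is a minimal simulation sketch of such a policy (the
parameter choices are mine: Bernoulli observations, equal priors, and a
symmetric posterior threshold theta). The log posterior odds take one of
two fixed-size steps per sample, so the trajectory is exactly a biased
1-D random walk absorbed at +/- log(theta / (1 - theta)):

    import numpy as np

    rng = np.random.default_rng(0)
    p1, p0 = 0.6, 0.4      # P(x=1) under H1 and under H0
    theta = 0.95           # decision threshold on the cumulative posterior
    # P(H1|data) >= theta iff the log posterior odds reach log(theta/(1-theta)).
    bound = np.log(theta / (1 - theta))

    def sprt_trial(true_p, max_steps=10_000):
        """One sequential trial; returns (choice, number of samples used)."""
        log_odds = 0.0     # log P(H1|data)/P(H0|data); zero under equal priors
        for t in range(1, max_steps + 1):
            x = rng.random() < true_p
            # Each iid sample adds one of two fixed increments: a random walk.
            log_odds += np.log(p1 / p0) if x else np.log((1 - p1) / (1 - p0))
            if log_odds >= bound:
                return 1, t    # absorbed at the upper boundary: report H1
            if log_odds <= -bound:
                return 0, t    # absorbed at the lower boundary: report H0
        return int(log_odds > 0), max_steps

    choices, times = zip(*(sprt_trial(p1) for _ in range(1000)))
    print("accuracy:", np.mean(choices), " mean decision time:", np.mean(times))

Raising theta moves the absorbing boundaries apart, which slows
decisions and lowers the error rate: the speed-accuracy trade-off the
abstract refers to.
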
extends the 2-alternative sequential decision-making problem to the case
when there is a deadline for decision-making, which is implemented by
imposing an extra penalty whenever the deadline is exceeded. The
optimal policy in this case is to integrate the evidence up to a
"decaying" threshold, which implies that the decision criteria are gradually
relaxed as the deadline becomes more imminent. This "decay" is enhanced
when there is any stochasticity associated with the deadline, including,
for example, when the observer has internal timing uncertainty. These
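
A rough sketch of how such a decaying threshold can arise (the
formalization and all parameters here are my own choices, not the
lecture's: unit error cost, a per-sample cost c, and a penalty D for
reaching the deadline T undecided) is backward induction over belief and
time:

    import numpy as np

    p1, p0 = 0.6, 0.4           # Bernoulli likelihoods under H1 / H0
    c, D, T = 0.001, 1.0, 60    # per-sample cost, deadline penalty, deadline

    q = np.linspace(0.0, 1.0, 501)     # grid over the belief P(H1 | data)
    stop_cost = np.minimum(q, 1 - q)   # expected error cost of stopping now

    V = stop_cost + D        # at the deadline: forced to stop and pay D
    thresholds = []
    for t in reversed(range(T)):       # backward induction over time
        pred1 = q * p1 + (1 - q) * p0       # predictive probability of x=1
        post1 = q * p1 / pred1              # updated belief after x=1
        post0 = q * (1 - p1) / (1 - pred1)  # updated belief after x=0
        cont = (c + pred1 * np.interp(post1, q, V)
                  + (1 - pred1) * np.interp(post0, q, V))
        V = np.minimum(stop_cost, cont)
        # Smallest belief above 0.5 at which stopping is already optimal.
        stop_now = (q > 0.5) & (stop_cost <= cont)
        thresholds.append(q[stop_now][0] if stop_now.any() else 1.0)

    thresholds.reverse()               # thresholds[t] for t = 0 .. T-1
    print(thresholds[0], thresholds[T // 2], thresholds[-1])  # decays toward 0.5

In this sketch the stopping threshold on the posterior collapses toward
0.5 as t nears T; per the abstract, making the deadline itself
stochastic (e.g., through internal timing uncertainty) enhances this
decay.
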
These models can be used to make precise predictions about an ideal
observer's behavior, and also about the detailed dynamics of the
underlying neural substrate implementing the computations.


Presentation (PDF File)
