Configural conditioning experiments probe how animals discriminate and generalize between patterns of stimuli (such as tones and lights) that are differentially predictive of reinforcement. In
this talk, I will present a Bayesian account of configural conditioning. According to our theory, an organism's learning process approximates statistical inference over a family of latent variable
models in an attempt to recover the generative process that gave rise to the training data. In form, our model is reminiscent of the more phenomenological model of Pearce (1994); however, its normative grounding allows our theory to clarify seemingly arbitrary aspects of this and
other previous models. Issues thought to be central to explaining configural conditioning, such as the choice of stimulus representation and generalization from previous experience to novel stimulus patterns, will be shown to follow from sound statistical inference. I will also show how our theory provides a novel explanatory framework for conditioning phenomena, such as second-order conditioning and acquired equivalence effects, that traditionally fall outside the scope of configural conditioning theory.