Detecting simple patterns in featural data

Jacob Feldman
Rutgers University

The detection of simple patterns in featural data -- e.g. the discovery
of regularities and the formation of categories -- is a problem of long
standing in psychology, and has been approached in many ways. In this
talk I will discuss separately three key elements of the problem: what
is meant by a "pattern," what is meant by "simple," and what it means
to "detect" a simple pattern. I'll discuss how a featural pattern can
be algebraically decomposed into the constituent regularities that
jointly constitute it. Some of these regularities are simpler, and
others more complex, leading to a kind of "spectral breakdown" of the
pattern into its regular components at various levels of complexity.
Each of these regular components takes the form of a quasi-causal law or
rule; the spectral representation allows the observer to extract the
simplest regularities while discarding or ignoring the more complex or
accidental ones. In this framework, the "complexity" of a pattern is
just its average spectral power, indicating how much of its structure
can be explained by simple rules and how much only by more complex
ones. This number accurately predicts the difficulty human subjects
have learning the pattern. Finally, I'll discuss what it means to
"detect" a simple pattern, in the sense of distinguishing it from a
random process that whose apparent simplicity is accidental. To answer
this question we consider statistical properties of the complexity
measure, and in particular the "null distribution" of complexity, which
makes it possible to attach a measure of "significance" to a particular
degree of simplicity in the observed data.
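The significance test sketched above can be illustrated with a toy Monte Carlo example. The snippet below is not the talk's actual method: it uses a deliberately simple stand-in complexity measure (the number of features not fixed by a constant-value rule, a crude proxy for average spectral power), and estimates a "null distribution" p-value by comparing the observed pattern's complexity against random patterns of the same size. All function names and the example concept are hypothetical.

```python
import itertools
import random

def constant_features(concept):
    """Degree-1 regularities: features that take a constant value across
    all members of the concept, i.e. rules of the form 'x_i = v'."""
    n = len(next(iter(concept)))
    regs = []
    for i in range(n):
        vals = {member[i] for member in concept}
        if len(vals) == 1:
            regs.append((i, next(iter(vals))))
    return regs

def complexity(concept):
    """Toy complexity proxy (NOT the talk's spectral measure): the number
    of features left unexplained by any degree-1 rule. Lower = simpler."""
    n = len(next(iter(concept)))
    return n - len(constant_features(concept))

def simplicity_p_value(concept, trials=2000, rng=random):
    """Monte Carlo estimate of the null distribution of complexity:
    the fraction of random same-size concepts that come out at least
    as simple as the observed one. A small value suggests the observed
    simplicity is unlikely to be accidental."""
    n = len(next(iter(concept)))
    universe = list(itertools.product([0, 1], repeat=n))
    k = len(concept)
    c_obs = complexity(concept)
    hits = sum(1 for _ in range(trials)
               if complexity(frozenset(rng.sample(universe, k))) <= c_obs)
    return hits / trials

# Hypothetical example: a 4-feature concept in which two features are
# constant across members, so two degree-1 rules hold.
concept = frozenset({(1, 1, 0, 0), (1, 1, 0, 1), (1, 1, 1, 0)})
c = complexity(concept)      # 2 of the 4 features are unexplained
p = simplicity_p_value(concept, trials=2000, rng=random.Random(0))
print(c, p)
```

The p-value here plays the role of the "significance" attached to a degree of simplicity: it answers how often a purely random pattern of the same size would look at least this simple under the chosen complexity measure.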

Back to Probabilistic Models of Cognition: The Mathematics of Mind