Probabilistic data models specified in part by a labeled graph "structure" have assumed considerable importance as a foundation for pattern recognition, machine learning, and their applications to cognitive models. Such models include Bayes Nets, Markov Random Fields, and Factor Graphs. They do not, however, naturally express situations in which the number and nature of objects, or their relationships with one another (such as compositional or spatial relationships), can depend on the values of other variables or change over time. To fill this gap we propose (a) extensions of graphical model notations to new kinds of probabilistic dependency diagrams, and (b) stochastic parameterized grammars with operator algebra semantics. These interrelated frameworks are highly expressive in modeling such situations and are promising as substrates for statistical inference. They have well-defined semantics in terms of probability distributions, and can express a very wide range of relevant architectures for pattern recognition and learning in variable-structure systems.
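To make the idea of a variable-structure model concrete, here is a toy sketch (our own illustrative example, not the paper's formalism or notation) of a stochastic parameterized grammar: a rule stochastically rewrites parameterized terms in a pool, so the number of objects in the model state is itself a random variable governed by the rules.

```python
import random

# Toy stochastic parameterized grammar (illustrative assumption, not from the
# paper): a "cell(s)" term may divide into two half-sized cells, so the
# population size is random and rule-dependent rather than fixed in advance.

random.seed(0)  # for a reproducible sample derivation

def divide(cell):
    """Rule: cell(s) -> cell(s/2), cell(s/2)."""
    _, s = cell
    return [("cell", s / 2), ("cell", s / 2)]

def step(pool, p_divide=0.5):
    """Apply the division rule independently to each term with prob. p_divide."""
    new_pool = []
    for term in pool:
        if random.random() < p_divide:
            new_pool.extend(divide(term))
        else:
            new_pool.append(term)
    return new_pool

pool = [("cell", 1.0)]
for _ in range(5):
    pool = step(pool)

print(len(pool))                 # number of cells: a random variable
print(sum(s for _, s in pool))   # total size is conserved by the rule: 1.0
```

Because each rule firing replaces one term with two whose parameters sum to the original, the total size is conserved exactly while the structure (the number of objects) varies stochastically from run to run.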
Probabilistic Models of Cognition: The Mathematics of Mind