In thinking about how humans do probabilistic reasoning, we would like to talk not just about the models people use for particular sets of variables -- such as the heights of people in a particular family -- but about more general models that can be applied to many scenarios.
Relational probability models (RPMs) are a formal representation for such abstract models. A single RPM can be applied to scenarios with many different relational structures, such as families with different numbers of members, or ecologies with different relationships between animal species. The probabilistic dependencies among variables in an RPM are determined by the relations among objects. This lecture will introduce representations for RPMs, describe inference algorithms that use an RPM to answer queries about a particular scenario, and present algorithms for learning the relational dependency structure of an RPM from data. We will start with the case where the relational structure of the scenario is known at inference time, and move on to cases where the relations are unobserved and are modeled probabilistically.
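To make the idea concrete, here is a minimal sketch of grounding an RPM on a particular relational structure, using the family-heights example. All names and numbers here (`ground_and_sample`, the `parents` relation, the Gaussian parameters) are illustrative assumptions, not a standard RPM library; the point is only that one template of dependencies applies to families of any size and shape.

```python
import random

# Hypothetical RPM sketch for the "family heights" domain.
# The template says: a person's height depends on the heights of
# that person's parents; people with no known parents draw from a
# population prior. The parent-of relation supplied at grounding
# time determines the dependency structure of the resulting network.

MEAN_HEIGHT = 170.0   # assumed population mean height (cm)
PRIOR_SD = 10.0       # assumed spread for people with no known parents
NOISE_SD = 5.0        # assumed child-specific variation around the parental mean

def ground_and_sample(people, parents, rng=random):
    """Ground the RPM on one relational structure and draw one sample.

    people:  list of person names (the objects in the scenario)
    parents: dict mapping a person to a list of that person's parents;
             this relational structure induces the dependencies.
    Returns a dict mapping each person to a sampled height.
    """
    heights = {}

    def sample(person):
        if person in heights:
            return heights[person]
        pars = parents.get(person, [])
        if pars:
            # Dependency induced by the parent-of relation:
            # height ~ Normal(mean of parents' heights, NOISE_SD)
            mean = sum(sample(p) for p in pars) / len(pars)
            heights[person] = rng.gauss(mean, NOISE_SD)
        else:
            # Root object: fall back to the population prior.
            heights[person] = rng.gauss(MEAN_HEIGHT, PRIOR_SD)
        return heights[person]

    for person in people:
        sample(person)
    return heights

# The same RPM grounds onto structurally different scenarios:
couple = ground_and_sample(["ann", "bob"], {})
family = ground_and_sample(
    ["ann", "bob", "carol"],
    {"carol": ["ann", "bob"]},
)
```

Note that the conditional distributions live in the template, while the scenario only supplies objects and relations; this is the separation that lets one RPM cover arbitrarily many relational structures.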