Humans appear to believe that there are things in the world.
Probability theory per se is agnostic on the matter; graphical models deny it, dealing only with worlds defined by variables with values.
This lecture provides an introduction to probability models over possible worlds that contain things, where those things may be related to each other in various ways. Since there is already a well-developed mathematical theory of such worlds - namely first-order logic - I will spend some time covering the relevant aspects of first-order logic and model theory. Then I will explain how to combine probability theory and first-order logic, a task that has long been an important goal within artificial intelligence. I will note an important distinction between probability models in which the relationships among things are uncertain but the things themselves are known, and probability models for unknown worlds in which the very existence of things is something to be discovered. The lecture concludes with a discussion of the kinds of cognitive tasks that require the various kinds of probability models, and will be followed by two lectures by Brian Milch that go into detail on methods for representing, reasoning with, and learning such models.
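To make the distinction concrete, here is a minimal Python sketch (not from the lecture; the objects, the `Friend` relation, and all numeric priors are illustrative assumptions). It enumerates possible worlds explicitly: in the known-objects case, the objects are fixed and only the relations among them are uncertain; in the unknown-worlds case, a prior over the number of objects means the existence of things is itself uncertain.

```python
from itertools import product

def relation_worlds(objects, p_rel=0.3):
    """Enumerate possible worlds over a fixed set of objects:
    each ordered pair of distinct objects is independently
    related with probability p_rel. Yields (relation, probability)."""
    pairs = [(x, y) for x in objects for y in objects if x != y]
    for truth in product([False, True], repeat=len(pairs)):
        rel = dict(zip(pairs, truth))
        prob = 1.0
        for t in truth:
            prob *= p_rel if t else 1 - p_rel
        yield rel, prob

# Known-objects case: A and B definitely exist; only Friend(.,.) is uncertain.
# First-order query: P(exists x. Friend(x, A)).
objs = ["A", "B"]
p_someone_befriends_A = sum(
    p for rel, p in relation_worlds(objs)
    if any(rel[(x, "A")] for x in objs if x != "A")
)

# Unknown-worlds case: even how many objects exist is uncertain.
# Assumed prior over the number of objects, then relations among them.
n_prior = {1: 0.5, 2: 0.3, 3: 0.2}
p_some_relation = sum(
    pn * p
    for n, pn in n_prior.items()
    for rel, p in relation_worlds([f"obj{i}" for i in range(n)])
    if any(rel.values())
)
```

Note that in the second sum each possible world carries two sources of uncertainty: which objects exist (weighted by `n_prior`) and how they are related. Graphical models with a fixed set of variables capture only the first case; handling the second is what probability models for unknown worlds add.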