Virtual Talk: Individual Probabilities, The Reference Class Problem, Model Multiplicity, and Reconciling Beliefs

Aaron Roth
University of Pennsylvania

Dawid gives two conceptualizations of models of individual probabilities: "Group to Individual" and "Individual to Group". A classical concern about the "Group to Individual" view of probability is the reference class problem: given that we can empirically measure only averages over many individuals, which group or "reference class" do we choose to average over when estimating the probability for an individual? Machine learning, on the other hand, operates in the "Individual to Group" conceptualization: models purport to assign probabilities to individuals, which can then be aggregated to obtain group probabilities. Multicalibration gives us a way to obtain models whose individual probability predictions are consistent with an arbitrary number of reference classes. But (with finite data) it does not solve the "model multiplicity" problem: there may be multiple multicalibrated models that assign very different individual probabilities to many people. How are we to adjudicate between such models? We argue that if two parties agree about the data distribution, then they cannot agree to disagree about (very many) individual probabilities, even when each has access only to a small number of samples from the distribution.
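
For concreteness, the following is a sketch of one standard formulation of the multicalibration condition referenced above, where the groups in the collection G play the role of reference classes; the exact variant and parameters used in the talk may differ.

```latex
% A sketch of one standard formulation of approximate multicalibration
% (the exact variant and parameters used in the talk may differ).
% A predictor f : X -> [0,1] is alpha-multicalibrated with respect to a
% collection of reference classes G if, for every group g in G and every
% value v in the (discretized) range of f such that the event
% {x in g, f(x) = v} has non-negligible probability mass,
\[
  \Bigl|\, \mathbb{E}\bigl[\, y - f(x) \;\bigm|\; x \in g,\ f(x) = v \,\bigr] \Bigr| \;\le\; \alpha .
\]
```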

Joint work with Alexander Tolbert and Scott Weinstein
