Learn Now, Optimize Later: Algorithmic Fairness, Loss Minimization and Outcome Indistinguishability

Omer Reingold
Stanford University

Training a predictor to minimize a loss function fixed in advance is a dominant paradigm in machine learning. However, loss minimization by itself might fail to satisfy properties that come up naturally in the context of algorithmic fairness. To remedy this, multi-group fairness notions such as multicalibration have been proposed, which require the predictor to share certain statistical properties of the ground truth, even when conditioned on a rich family of subgroups. These notions can be understood from the perspective of computational indistinguishability through the notion of outcome indistinguishability, under which a predictor is viewed as a model of events that cannot be refuted by empirical evidence within given computational bounds. While motivated by fairness, this alternative paradigm of training an indistinguishable predictor is finding a growing number of appealing applications, where the same predictor can later be used to optimize any one of a large set of loss functions, under families of capacity constraints, fairness constraints, and instance distributions.
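For reference (this sketch is not part of the abstract and follows the standard formulation of multicalibration, with notation chosen here for illustration): a predictor $p : X \to [0,1]$ is approximately multicalibrated with respect to a collection $\mathcal{C}$ of subpopulations if it remains calibrated even when conditioned on membership in any $S \in \mathcal{C}$. Writing $y^*$ for the ground-truth outcome and $\alpha$ for an approximation parameter, one common way to state the requirement is roughly
\[
\bigl|\,\mathbb{E}\bigl[\, y^* - p(x) \;\bigm|\; x \in S,\ p(x) = v \,\bigr]\bigr| \;\le\; \alpha
\qquad \text{for all } S \in \mathcal{C} \text{ and all values } v \text{ in the range of } p.
\]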
