Beyond Worst-case Guarantees for Sequential Prediction: Robustness via Abstention

Surbhi Goel
University of Pennsylvania
Computer and Information Science

In this talk, we will focus on the problem of sequential prediction over a stochastic sequence, where an adversary is allowed to inject clean-label adversarial (or out-of-distribution) examples at any point of its choosing. Traditional algorithms designed for purely stochastic data tend to fail in the presence of such adversarial examples, leading to erroneous predictions. On the other hand, assuming fully adversarial data leads to very pessimistic bounds that are often vacuous in practice. To move beyond these pessimistic guarantees while allowing for arbitrarily many adversarial examples, we will propose a new model in which the learner may abstain from predicting on adversarial examples at no cost, so that it is asked to predict only when it is certain. In this model, we will design learners that can handle any number of adversarial examples while ensuring that their regret scales as in the purely stochastic setting. We will conclude with several exciting open questions raised by this new model.

This talk is based on joint work with Steve Hanneke, Shay Moran, and Abhishek Shetty.
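As a purely illustrative sketch of the flavor of guarantee such a model enables (the setup below, including the threshold class and the abstention rule, is an assumption for illustration and is not taken from the talk or paper): a learner that predicts only when every hypothesis consistent with the data so far agrees on the label, and abstains otherwise, never errs on adversarially injected clean-label examples, while its abstentions on stochastic rounds shrink as the version space narrows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting (illustration only): learn a 1-D threshold concept
# f(x) = 1[x >= theta] with theta = 0.5, from examples drawn uniformly on
# [0, 1]. An adversary occasionally injects clean-label points placed
# inside the learner's current region of uncertainty.
TRUE_THRESHOLD = 0.5
T = 2000

lo, hi = 0.0, 1.0          # current version space: theta lies in (lo, hi]
mistakes, abstentions = 0, 0

for t in range(T):
    adversarial = rng.random() < 0.2          # adversary strikes ~20% of rounds
    if adversarial:
        # Clean-label but adversarially placed: inside the uncertainty region.
        x = rng.uniform(lo, hi)
    else:
        x = rng.random()                      # stochastic example

    # Abstention rule: predict only if every threshold consistent with the
    # data so far agrees on the label of x; otherwise abstain. In the model
    # described above, abstaining on an adversarial round costs nothing.
    if x <= lo:
        pred = 0
    elif x >= hi:
        pred = 1
    else:
        pred = None                           # abstain
        abstentions += 1

    y = int(x >= TRUE_THRESHOLD)              # clean label is revealed
    if pred is not None and pred != y:
        mistakes += 1

    # Shrink the version space with the labeled example.
    if y == 1:
        hi = min(hi, x)                       # theta <= x
    else:
        lo = max(lo, x)                       # theta > x

print(f"mistakes={mistakes}, abstentions={abstentions}")
```

Because the sketch only commits to a prediction when all consistent hypotheses agree, every committed prediction is correct regardless of how many adversarial points are injected; the interesting quantity is how quickly abstentions on the stochastic rounds vanish, which is the role regret plays in the model discussed in the talk.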
