Secure Learning in Adversarial Environments

Bo Li
University of Illinois at Urbana-Champaign

Advances in machine learning have led to rapid and widespread deployment of software-based inference and decision making, enabling applications such as data analytics, autonomous systems, and security diagnostics. Current machine learning systems, however, assume that training and test data follow the same, or similar, distributions, and do not account for active adversaries manipulating either distribution. Recent work has demonstrated that motivated adversaries can circumvent anomaly detection or other machine learning models at test time through evasion attacks, or can inject well-crafted malicious instances into the training data to induce errors at inference time through poisoning attacks. In this talk, I will describe my recent research on evasion attacks, poisoning attacks, and privacy problems in machine learning systems. In particular, I will introduce examples of physical attacks and unrestricted (semantic) attacks, and discuss several potential defensive approaches and principles toward developing robust real-world learning systems.
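As background for the evasion-attack setting mentioned above, the sketch below shows the standard fast gradient sign method (FGSM) of Goodfellow et al., in which an input is perturbed within a small L-infinity ball so as to increase the classifier's loss. It is a minimal illustration assuming a differentiable PyTorch classifier `model`; it is not the specific attack or defense presented in the talk.

```python
# Illustrative FGSM-style evasion attack (generic example, not the speaker's method).
# Assumes `model` is a differentiable classifier mapping inputs to class logits.
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, y, epsilon=0.03):
    """Perturb input x within an L-infinity ball of radius epsilon so that the
    model's cross-entropy loss on the true label y increases."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that maximizes the loss, then clamp to valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```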

Presentation (PDF File)
