Adversarial Classification Games

John Musacchio
University of California, Santa Cruz (UC Santa Cruz)

We study attacker classification games in which intelligent adversaries have an incentive to manipulate their attacks to mislead the defender.
First we consider a game in which a single, strategic defender aims to distinguish a malicious attacker from a benign user while protecting assets of different valuations. The classification is based on the attack vector observed during a fixed window, which may have been generated by the attacker or by a normal user. We model the interaction as a game in which the defender must balance the expected costs of missed detections and false alarms, while the attacker trades off the larger potential reward of a more aggressive attack vector against his increased risk of being detected. We then extend this analysis to a network of defending nodes facing an attacker who seeks to stealthily capture and control these nodes for use in a further attack. The more aggressively the attacker utilizes his network of captured nodes, the more likely he is to be detected by the defenders of those nodes. Thus he must balance stealth and aggression in his strategic utilization of the network. Conversely, the defenders must decide how vigilant to be in trying to detect the presence of such an attacker.
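As a rough illustration of the aggression-versus-detection tradeoff, the following sketch sets up a small two-player game and brute-forces it on a grid. The payoff forms, parameter values, and detection probabilities here are illustrative assumptions of this sketch, not the model from the talk: the attacker picks an aggression level, the defender picks an alarm threshold, and each balances reward against detection risk or missed detections against false alarms.

```python
# Hypothetical sketch: payoff functions, parameters, and grids below are
# illustrative assumptions, not the model presented in the talk.
import itertools

AGGRESSION = [0.1, 0.3, 0.5, 0.7, 0.9]   # attacker's attack intensity
THRESHOLD  = [0.1, 0.3, 0.5, 0.7, 0.9]   # defender's alarm threshold

ASSET_VALUE    = 10.0   # value of the protected asset
FALSE_ALARM    = 2.0    # defender's cost of flagging a benign user
CAUGHT_PENALTY = 5.0    # attacker's loss when detected
P_BENIGN       = 0.5    # prior probability that the user is benign

def p_detect(a, t):
    """Chance that an attack of intensity a trips a detector with threshold t."""
    return max(0.0, min(1.0, a - t + 0.5))

def p_false_alarm(t):
    """Chance that a benign user trips the detector (falls as t rises)."""
    return max(0.0, 0.5 - t)

def attacker_payoff(a, t):
    # Reward grows with aggression but is lost (and penalized) on detection.
    pd = p_detect(a, t)
    return a * ASSET_VALUE * (1 - pd) - CAUGHT_PENALTY * pd

def defender_payoff(a, t):
    # Defender pays for missed detections and for false alarms on benign users.
    pd = p_detect(a, t)
    missed = (1 - P_BENIGN) * a * ASSET_VALUE * (1 - pd)
    alarms = P_BENIGN * FALSE_ALARM * p_false_alarm(t)
    return -(missed + alarms)

# Pure-strategy Nash equilibria: neither player gains by deviating unilaterally.
equilibria = [
    (a, t)
    for a, t in itertools.product(AGGRESSION, THRESHOLD)
    if attacker_payoff(a, t) >= max(attacker_payoff(a2, t) for a2 in AGGRESSION)
    and defender_payoff(a, t) >= max(defender_payoff(a, t2) for t2 in THRESHOLD)
]
print(equilibria)
```

On this particular grid the best responses cycle and no pure-strategy equilibrium exists, a familiar feature of inspection-style detection games, whose equilibria are often in mixed strategies.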

Presentation (PDF File)

Back to Graduate Summer School: Games and Contracts for Cyber-Physical Security