The safety of an autonomous vehicle depends heavily on what it assumes about other vehicles on the road and on how accurate those models are. On one hand, one can rely on an adversarial model of other vehicles, resulting in very safe but conservative strategies. On the other hand, one can obtain strong safety guarantees, but only under limiting assumptions about other agents. To address these challenges, there have been significant advances in building computational models of humans who interact with or operate autonomous and intelligent systems. Some of today's robots model humans as optimal, rational decision-makers. Other robots account for human limitations and relax this assumption, modeling the human as noisily rational. Both of these models make sense when the human receives deterministic rewards. But in real-world scenarios, rewards are rarely deterministic. Instead, we must make choices subject to risk and uncertainty, and in these settings humans exhibit a cognitive bias towards suboptimal behavior.
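As a brief illustrative sketch (not part of the abstract itself), the noisily rational model mentioned above is commonly formalized as a Boltzmann distribution over actions; the rationality coefficient $\beta$ and reward $R$ below are generic placeholders rather than quantities defined in this talk:

$$ P(a \mid s) \;=\; \frac{\exp\!\big(\beta\, R(s, a)\big)}{\sum_{a'} \exp\!\big(\beta\, R(s, a')\big)} $$

As $\beta \to \infty$ this recovers a perfectly rational decision-maker, while $\beta \to 0$ yields uniformly random behavior. When actions are lotteries rather than deterministic outcomes, $R(s,a)$ is typically replaced by an expected reward, which is exactly where risk-sensitive deviations from this model appear.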
In this talk, I will discuss how we can model suboptimal human behavior in risky scenarios, near the rational end of this spectrum, using ideas from prospect theory and behavioral economics. We demonstrate that our risk-aware models capture human suboptimalities that noisily rational human models cannot, and discuss how these models can be used by robots to act safely and efficiently in both risky and non-risky settings.
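As a hedged sketch of the prospect-theoretic ingredients referenced here (the standard Kahneman and Tversky forms, not necessarily the exact parameterization used in this work), a risky outcome $x$ with probability $p$ is evaluated through a reference-dependent value function and a probability weighting function:

$$ v(x) \;=\; \begin{cases} x^{\alpha} & x \ge 0 \\ -\lambda\, (-x)^{\beta} & x < 0 \end{cases} \qquad\qquad w(p) \;=\; \frac{p^{\gamma}}{\big(p^{\gamma} + (1-p)^{\gamma}\big)^{1/\gamma}} $$

Here $\lambda > 1$ captures loss aversion and $\gamma < 1$ overweights small probabilities; scoring a prospect by $\sum_i w(p_i)\, v(x_i)$ instead of expected reward is what lets such a model capture risk-seeking or risk-averse behavior that a noisily rational model built on expected reward cannot.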