The question of safety when integrating learning techniques into control systems has been recognized as a central challenge for the widespread success of these promising techniques. While different notions of safety exist, in this talk I will focus on the satisfaction of critical safety constraints (in probability), a common and intuitive way of specifying safety in many applications. Optimization-based control has been established as the main technique for systematically addressing constraint satisfaction in the control of complex systems. However, it suffers from the need for a mathematical representation of the problem, i.e., a model, constraints, and an objective. Reinforcement learning, in contrast, has demonstrated success on complex problems where such a mathematical representation is not available, by learning directly from interaction with the system, albeit at the cost of safety guarantees.
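To make the three ingredients of optimization-based control concrete, the following is a minimal sketch, assuming a scalar linear model, a quadratic objective, and a box constraint on the input. The dynamics, numbers, and function name are illustrative assumptions, not taken from the talk.

```python
# Minimal optimization-based control sketch: a one-step constrained
# optimal control problem for the scalar model x_{k+1} = a*x_k + b*u_k.

def one_step_optimal_input(x, a=1.0, b=0.5, r=0.1, u_max=1.0):
    """Minimize (a*x + b*u)**2 + r*u**2 subject to |u| <= u_max.

    The unconstrained minimizer of this scalar quadratic is analytic;
    the box constraint is then enforced by clipping, which is exact
    for a one-dimensional interval constraint.
    """
    u_unconstrained = -a * b * x / (b * b + r)
    return max(-u_max, min(u_max, u_unconstrained))

# Closed loop: the state is driven toward the origin while the
# requested input always respects the constraint.
x = 2.0
for _ in range(5):
    u = one_step_optimal_input(x)
    assert abs(u) <= 1.0          # input constraint satisfied
    x = 1.0 * x + 0.5 * u         # simulate the model forward
```

The point of the sketch is that every piece — model, objective, constraint — must be written down explicitly before the controller can be synthesized, which is exactly the representation that learning aims to supply when it is not available a priori.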
In this talk, I will discuss techniques that aim to bridge these two paradigms. We will investigate three ways in which learning can be combined with optimization-based concepts to generate high-performance controllers that are simple and time-efficient to design while offering a notion of constraint satisfaction, and thereby of safety. We will begin with techniques for inferring a model of the dynamics, the objective, or the constraints from data for integration into optimization-based control, and then discuss the safety filter as a modular approach for augmenting reinforcement learning with constraint satisfaction properties. I will show examples of using these techniques in robotics applications.
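The modularity of a safety filter can be illustrated with a minimal sketch: an arbitrary learned action is projected onto the set of actions that keep the next state inside the constraint set. The integrator dynamics, constraint bound, and all names below are illustrative assumptions, not the specific formulation from the talk.

```python
# Minimal safety-filter sketch for the scalar integrator
# x_{k+1} = x_k + u_k with state constraint |x| <= x_max.

def safety_filter(x, u_learned, x_max=1.0):
    """Return the admissible input closest to u_learned.

    The set of inputs keeping |x + u| <= x_max is the interval
    [-x_max - x, x_max - x]; projecting onto it is a simple clip.
    """
    u_lo, u_hi = -x_max - x, x_max - x
    return max(u_lo, min(u_hi, u_learned))

# An aggressive, possibly unsafe learned action is rendered safe
# without modifying the learning algorithm itself:
x = 0.5
u_rl = 5.0                      # raw action from an RL policy
u_safe = safety_filter(x, u_rl)
x_next = x + u_safe
assert abs(x_next) <= 1.0       # state constraint holds after filtering
```

Because the filter only post-processes actions, any reinforcement learning method can be wrapped this way, which is what makes the approach modular.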