Relationships between the areas of control, learning, and optimization have always been strong, but they have recently been expanding and deepening in surprising ways. Optimization formulations and algorithms have historically been vital to solving problems in control and learning, while conversely, control and learning have provided interesting perspectives on optimization methods. Intersections that have been explored recently include relationships between reinforcement learning and model predictive control, and the use of control techniques to analyze the convergence of optimization algorithms. This workshop will bring together researchers who work in control, learning, and optimization to discuss current areas of interaction and explore possibilities for future collaboration.
This workshop will include a poster session; a request for posters will be sent to registered participants in advance of the workshop.
This workshop is partially supported by the DOE-funded MACSER project.
Moritz Diehl
(University of Freiburg)
Ben Recht
(University of California, Berkeley)
Stephen Wright
(University of Wisconsin-Madison)
Melanie Zeilinger
(ETH Zurich)