Modern automated vehicle platforms fuse large volumes of data into driving decisions using techniques increasingly powered by artificial intelligence that learns and adapts over time. AI techniques have produced several breakthroughs toward automated vehicles that, in demonstrations, operate similarly to human drivers. Challenges include: 1) interpretability of decision making; 2) safety of data-driven systems (e.g., assuring the safety of systems composed of learning-enabled components; generalizing to rare and unsafe events); 3) robustness of machine learning algorithms, particularly the robustness of deep learning for perception and control to adversarial attacks and to extreme, real, but previously unseen environments; 4) development of reinforcement learning algorithms that resist reward hacking and do not require exploring dangerous or unsafe parts of the state space. Moreover, there is no established notion of what precisely constitutes safe, efficient, or even natural driving when immersed in highway traffic with other human drivers; and, depending on the level of autonomy, the AI will need to interact with the human in the vehicle.
This workshop will bring together researchers working on the theoretical side of deep learning techniques for perception and control of automated vehicles with researchers interested in assuring that these autonomous systems operate with safety guarantees. Experts in sensing and imaging technology will also be brought to the table, covering the full pipeline from data collection, through AI theory and development, to the software and actuation challenges. Additional themes addressed in this workshop include interactions between vehicle sensing and the infrastructure, and cybersecurity aspects of sensing and machine learning (e.g., how sensors and AI can be purposefully misled).
This workshop will include a poster session; a request for posters will be sent to registered participants in advance of the workshop.