In the last 10 years a robust industry has arisen around self-driving cars, and the field has seen a major injection of resources and capital. This has been a powerful force pushing the field forward, but it has also created something of a “crowding out” problem in academia: it is difficult to compete with the resources behind many of these efforts using traditional funding sources and grant schemes. This talk outlines a set of research questions and novel approaches to the robot perception problem that attempt to side-step both the brute-force computer vision techniques that rely on large sets of hand-labeled data and the computational arms race that has biased the field toward large players with seemingly unlimited resources. Three major themes are addressed: 1) reducing dependence on human labels through augmentation, simulation, and novel optimization frameworks; 2) adding the physics constraints of real sensors to more traditional end-to-end deep learning approaches; and 3) using alternative sensing technologies (such as thermal cameras) that reduce some of the challenges of both LIDAR and RGB cameras. These techniques are applied to pedestrian prediction, semantic segmentation, obstacle detection, and risk assessment for autonomous vehicles (AVs). We will show concrete examples of these techniques reducing our dependence on massive datasets while producing state-of-the-art performance on traditional benchmarks. We will also highlight several avenues of inquiry through which academics can continue to add value to the commercial AV ecosystem.
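The first theme, reducing dependence on human labels, can be illustrated with a minimal sketch of augmentation-based self-supervision: two randomly augmented views of the same unlabeled image are pushed toward the same representation, so the training signal comes from the data itself rather than from hand labels. This is a generic, hypothetical illustration (the `augment`, `embed`, and `consistency_loss` functions are invented for this sketch and do not come from the talk), not the speaker's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Label-free augmentation: random horizontal flip plus small pixel noise."""
    out = img[:, ::-1] if rng.random() < 0.5 else img
    return out + rng.normal(0.0, 0.01, size=out.shape)

def embed(img: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Stand-in 'model': a fixed linear projection of the flattened image."""
    return img.reshape(-1) @ w

def consistency_loss(img: np.ndarray, w: np.ndarray,
                     rng: np.random.Generator) -> float:
    """Squared distance between embeddings of two augmented views.

    Minimizing this over the model parameters w requires no human labels:
    the supervisory signal is agreement across augmentations.
    """
    view_a, view_b = augment(img, rng), augment(img, rng)
    za, zb = embed(view_a, w), embed(view_b, w)
    return float(np.sum((za - zb) ** 2))

# One unlabeled 8x8 'image' and a random linear model.
img = rng.random((8, 8))
w = rng.normal(size=(64, 4))
loss = consistency_loss(img, w, rng)
```

In practice `embed` would be a deep network and the loss would include terms (e.g., contrastive negatives) that prevent the trivial constant solution; the sketch only shows where the label-free signal comes from.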
Workshop I: Individual Vehicle Autonomy: Perception and Control