Learning with Few Labeled Data

Pratik Chaudhari
University of Pennsylvania

The relevant limit for machine learning is not N → ∞ but N → 0; the human visual system is proof that it is possible to learn categories from extremely few samples. This ability comes from having seen millions of _other_ objects. The first part of the talk will discuss algorithms to adapt representations of deep neural networks to new categories given only a few labeled samples. The second part will exploit a formal connection between thermodynamics and machine learning to characterize such adaptation and to build stochastic processes that help explore the fundamental limits of representation learning. This theory leads to algorithms for transfer learning that can guarantee classification performance on the target task.
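
To make the idea of adapting a pretrained representation to new categories with few labels concrete, here is a minimal sketch of one common baseline: freeze a pretrained backbone and fit only a new linear head on the small labeled support set. The backbone choice, hyperparameters, and the `adapt_to_new_classes` helper are illustrative assumptions, not the specific methods described in the talk or papers.

```python
# Minimal few-shot adaptation sketch: frozen pretrained backbone + new linear head.
import torch
import torch.nn as nn
import torchvision.models as models

def adapt_to_new_classes(support_x, support_y, num_classes, steps=100, lr=1e-2):
    """Fit a new linear classifier on top of frozen pretrained features."""
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = nn.Identity()          # expose penultimate features as the representation
    backbone.eval()
    for p in backbone.parameters():      # freeze the pretrained representation
        p.requires_grad = False

    with torch.no_grad():
        feats = backbone(support_x)      # (N, 512) features for the few labeled samples

    head = nn.Linear(feats.shape[1], num_classes)
    opt = torch.optim.SGD(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):               # fit the head on the support set only
        opt.zero_grad()
        loss = loss_fn(head(feats), support_y)
        loss.backward()
        opt.step()
    return backbone, head

# Usage with hypothetical data: 5 classes, 5 images each, 224x224 RGB tensors.
# x = torch.randn(25, 3, 224, 224); y = torch.arange(5).repeat_interleave(5)
# backbone, head = adapt_to_new_classes(x, y, num_classes=5)
```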

This talk will discuss results from https://arxiv.org/abs/1909.02729, https://arxiv.org/abs/1710.11029 and https://arxiv.org/abs/2002.12406.
