Semantic Labeling of Images

Martial Hebert
Carnegie Mellon University
The Robotics Institute

Semantic labeling, or semantic segmentation, is the task of assigning class labels to pixels. In broad terms, the goal is to assign to each pixel the label that is most consistent both with the local features at that pixel and with the labels estimated at pixels in its context, based on consistency models learned from training data. In this talk, we will review the learning and inference techniques currently used for semantic labeling. We will discuss the limitations of the different approaches with respect to the number of classes, inference time, learning efficiency, and the size of the training data. Based on this review, we will then examine recent approaches that address these limitations.
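The idea of balancing local evidence against label agreement with neighboring pixels can be sketched in miniature. The example below (not from the talk; the image, per-class intensity means, and smoothness weight are all illustrative assumptions) uses a quadratic unary cost, a Potts pairwise penalty, and iterated conditional modes (ICM) to smooth away a noisy pixel:

```python
# Illustrative sketch: per-pixel labeling that trades off local evidence
# (unary cost) against agreement with neighboring labels (Potts penalty),
# optimized with iterated conditional modes (ICM).
CLASS_MEANS = [0.0, 1.0]  # hypothetical per-class intensity models
LAM = 0.3                 # smoothness weight (illustrative assumption)

def unary(intensity, label):
    """Cost of assigning `label` to a pixel with this intensity."""
    return (intensity - CLASS_MEANS[label]) ** 2

def icm_segment(image, sweeps=10):
    h, w = len(image), len(image[0])
    # Initialize each pixel with the label that best fits local evidence alone.
    labels = [[min(range(len(CLASS_MEANS)), key=lambda c: unary(image[y][x], c))
               for x in range(w)] for y in range(h)]
    for _ in range(sweeps):
        changed = False
        for y in range(h):
            for x in range(w):
                neighbors = [labels[y + dy][x + dx]
                             for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                             if 0 <= y + dy < h and 0 <= x + dx < w]
                # Unary cost plus a penalty for each disagreeing 4-neighbor.
                cost = lambda c: (unary(image[y][x], c)
                                  + LAM * sum(n != c for n in neighbors))
                best = min(range(len(CLASS_MEANS)), key=cost)
                if best != labels[y][x]:
                    labels[y][x] = best
                    changed = True
        if not changed:  # converged
            break
    return labels

# Toy 4x4 image: dark region on the left, bright column on the right,
# with one noisy bright pixel inside the dark region.
img = [[0.1, 0.1, 0.1, 0.9],
       [0.1, 0.9, 0.1, 0.9],
       [0.1, 0.1, 0.1, 0.9],
       [0.1, 0.1, 0.1, 0.9]]
print(icm_segment(img))  # the noisy pixel is relabeled to match its context
```

With the smoothness weight set to zero, the noisy pixel would keep its locally preferred label; the pairwise term is what pulls it toward the label of its context.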

Presentation (PDF File)
