Exploring image data with quantization-based techniques

Svetlana Lazebnik
University of North Carolina

In many vision applications, we have to deal with high-dimensional vector space data, such as the space of fixed-size local image patches or their descriptors. This talk will focus on quantization-based methods for studying the structure of such data. Specifically, I will present techniques for estimating the intrinsic dimensionality of vector space data (an unsupervised learning task) and for creating informative quantizer codebooks (a supervised learning task).

In the first part of the talk, I will discuss a technique for intrinsic dimensionality estimation based on the theoretical notion of quantization dimension. This technique works by quantizing the dataset at increasing rates (in practice, we use k-means to learn the quantizer) and by fitting a parametric form to the plot of the empirical quantizer distortion as a function of rate. I will also present preliminary results for an extension of this technique that uses tree-structured vector quantization to partition heterogeneous datasets into subsets having different intrinsic dimensions.
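
For concreteness, below is a minimal sketch of the rate-distortion fitting step, assuming the standard high-rate scaling D(k) ~ c * k^(-2/d) for data of intrinsic dimension d quantized with k codevectors; the parametric form and fitting procedure used in the talk may differ, and the function names and synthetic example are illustrative only.

import numpy as np
from sklearn.cluster import KMeans

def estimate_intrinsic_dimension(X, codebook_sizes=(2, 4, 8, 16, 32, 64)):
    """Fit log D(k) = log c - (2/d) log k and return the slope-based estimate of d."""
    distortions = []
    for k in codebook_sizes:
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        # Empirical distortion: mean squared distance to the nearest codevector.
        distortions.append(km.inertia_ / X.shape[0])
    slope, _ = np.polyfit(np.log(codebook_sizes), np.log(distortions), 1)  # slope ~ -2/d
    return -2.0 / slope

# Example: a noisy 2-D plane embedded in 10 ambient dimensions should yield an estimate near 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2)) @ rng.normal(size=(2, 10)) + 0.01 * rng.normal(size=(2000, 10))
print(estimate_intrinsic_dimension(X))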

In the second part of the talk, I will discuss an information-theoretic method for learning a quantizer codebook from labeled feature vectors such that the index of the nearest codevector of a given feature approximates a sufficient statistic for its class label. I will demonstrate applications of this method to learning discriminative visual vocabularies for bag-of-features image classification and to image segmentation. Joint work with Maxim Raginsky.
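
As a rough illustration of the quantity such a codebook should preserve (not the learning method described in the talk), the sketch below quantizes labeled features with plain k-means and estimates the mutual information between the codeword index and the class label from their joint histogram; all names here are hypothetical.

import numpy as np
from sklearn.cluster import KMeans

def index_label_mutual_information(X, y, n_codewords=32):
    """Estimate I(K; Y) in bits, where K is the index of the nearest codevector."""
    codes = KMeans(n_clusters=n_codewords, n_init=10, random_state=0).fit_predict(X)
    labels = np.unique(y)
    joint = np.zeros((n_codewords, labels.size))
    for k, lab in zip(codes, y):
        joint[k, np.searchsorted(labels, lab)] += 1.0
    joint /= joint.sum()
    pk = joint.sum(axis=1, keepdims=True)   # P(K)
    py = joint.sum(axis=0, keepdims=True)   # P(Y)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (pk @ py)[nz])))

A codebook whose indices approximate a sufficient statistic for the label would keep this quantity close to the mutual information between the features and the labels, which an unsupervised k-means codebook generally does not.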

