Large image databases are a valuable resource in numerous domains, from video archives and personal photo collections to scientific image repositories. Efficient content-based search methods are key to exploiting such data. However, rich image representations are often non-vectorial and/or call for similarity measures that are not amenable to existing fast search algorithms. For instance, matching sets of local image features is a useful way to compare objects and scenes, but the correspondence itself is expensive to compute, and exhaustively searching large image databases with such measures is impractical. Similarly, metric learning can yield powerful distance functions specialized for a given task, but using them for similarity search requires heuristics that cannot guarantee performance better than a naive linear scan.
I will present techniques that enable scalable search with correspondence kernels and learned metrics. We show how to encode such metrics into randomized locality-sensitive hash functions so that sub-linear time search with known approximate nearest neighbor techniques becomes feasible. In addition, we formulate an indirect solution that allows metric learning and hashing for sparse input vector spaces whose high dimensionality makes it impossible to learn an explicit weighting (Mahalanobis parameterization) over the feature dimensions. We demonstrate our approach applied to image retrieval. This is joint work with Trevor Darrell at MIT, and with Prateek Jain and Brian Kulis at the University of Texas.
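To give a flavor of the idea, the sketch below shows one standard way a learned Mahalanobis metric can be folded into locality-sensitive hashing. It is a minimal illustration, not the method from the talk: it assumes the learned matrix is available explicitly as A = GᵀG, and uses the classic sign-of-random-projection hash (which is sensitive to the angle between points) applied in the transformed space Gx, so that inner products between codes reflect xᵀAy rather than the plain dot product. The matrix G here is randomly generated purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 10  # feature dimensionality (illustrative)

# Stand-in for a learned Mahalanobis parameterization A = G^T G.
# In practice G would come from a metric learning algorithm; here it
# is random, just to make the sketch self-contained.
G = rng.normal(size=(d, d))

nbits = 32
# Random hyperplanes for sign-based LSH. Hashing G @ x instead of x
# means two points collide with probability governed by their angle
# under the learned metric, since (Gx) . (Gy) = x^T A y.
hyperplanes = rng.normal(size=(nbits, d))

def hash_code(x):
    """Binary code: signs of random projections of the transformed point G @ x."""
    return (hyperplanes @ (G @ x) > 0).astype(np.uint8)

def hamming(a, b):
    """Number of differing bits between two binary codes."""
    return int(np.sum(a != b))
```

Because the hash depends only on the signs of the projections, positively scaled copies of a point receive identical codes, and nearby points under the learned metric disagree on few bits, which is what allows standard approximate nearest neighbor machinery to search the codes in sub-linear time.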