In real-world vision applications, the data observed during training can differ substantially from the data encountered at deployment: the distribution, and even the type and dimensionality, of features can change from one data set to the next. In this talk, I will discuss the emerging problem of visual domain adaptation for transferring object models from one data set or visual domain to another. I will present recent work on models for supervised learning of non-linear transformations between domains, based on novel theoretical results demonstrating that such transformations can be learned in kernel space. The asymmetric version of the model is restricted neither to symmetric transformations nor to features of the same type and dimensionality, making it applicable to a significantly wider set of adaptation scenarios than previous methods. Furthermore, the methods can be applied to categories that were not available during training. I will demonstrate the ability of our methods to adapt object recognition models under a variety of situations, such as differing imaging conditions, feature types, and codebooks. I will also discuss recent extensions, as well as the integration of fast similarity search techniques into the models.
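To make the idea of an asymmetric cross-domain transform concrete, here is a minimal illustrative sketch (not the talk's actual kernelized method): it fits a simple asymmetric *linear* map W from paired source/target examples by ridge regression, so the two domains may have features of different dimensionality. All names, dimensions, and the regularizer value are assumptions for illustration only.

```python
import numpy as np

# Illustrative sketch only: the talk describes kernelized, possibly
# non-linear transforms; here we fit an asymmetric linear map W of
# shape (d_tgt, d_src) from paired cross-domain examples, so source
# and target feature dimensionalities need not match.

rng = np.random.default_rng(0)
d_src, d_tgt, n = 20, 12, 100           # differing feature dimensionalities (assumed)
X_src = rng.normal(size=(n, d_src))     # source-domain features
W_true = rng.normal(size=(d_tgt, d_src))
X_tgt = X_src @ W_true.T + 0.01 * rng.normal(size=(n, d_tgt))  # synthetic target features

lam = 1e-3                              # ridge regularizer (assumed value)
# Solve min_W ||X_tgt - X_src W^T||_F^2 + lam ||W||_F^2 in closed form.
A = X_src.T @ X_src + lam * np.eye(d_src)
W = np.linalg.solve(A, X_src.T @ X_tgt).T   # shape (d_tgt, d_src)

# Map a new source-domain feature vector into the target feature space.
x_new = rng.normal(size=d_src)
x_mapped = W @ x_new                    # shape (d_tgt,) == (12,)
```

Because W is a general rectangular matrix rather than a symmetric one, the learned mapping is inherently one-directional, which is what allows the source and target representations to differ in type and dimensionality.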