Image segmentation is the art of describing image data, which tend to be highly complex, in terms of simpler entities, such as regions of homogeneous grey level, colour or texture. This amounts to attaching an integer label to each pixel in an image, representing its class. In the last decade or so, it has become widely accepted that the problem can be formalised as the maximisation of the a posteriori probability, based on a stochastic image model, such as a Markov Random Field (MRF). While this puts segmentation on a firm footing, it raises a significant computational issue: how can one possibly maximise the posterior probability over the huge number of possible segmentations of an image, given a set of data? In recent years, two methods have found widespread use: stochastic simulation, which samples from the posterior distribution of the image model; and multiresolution methods, which exploit the self-similarity of image data to solve a sequence of successively finer approximations to the problem. It has occurred to a number of authors that these two approaches might be usefully combined in a multiresolution MRF. The work we have done using these models has thrown up interesting results in both the theory and practice of image segmentation.
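To make the MAP/MRF formulation concrete, the following is a minimal sketch of MRF-based segmentation using iterated conditional modes (ICM), a simple deterministic relative of the stochastic-simulation approach mentioned above. It is an illustration only, not the method of the talk: the Potts smoothness prior, the Gaussian class likelihoods, and all parameter values (`beta`, `sigma`, the class means) are assumptions chosen for the example.

```python
import numpy as np

def icm_segment(image, means, beta=1.0, sigma=1.0, n_iters=5):
    """Approximate MAP segmentation of a grey-level image under an MRF
    with a Potts prior and Gaussian class likelihoods, via ICM.
    `means` holds one assumed grey-level mean per class (hypothetical model)."""
    # Initialise with the maximum-likelihood label at each pixel.
    labels = np.argmin((image[..., None] - means) ** 2, axis=-1)
    h, w = image.shape
    for _ in range(n_iters):
        for y in range(h):
            for x in range(w):
                best, best_cost = labels[y, x], np.inf
                for k in range(len(means)):
                    # Data term: negative log-likelihood of the Gaussian model.
                    data = (image[y, x] - means[k]) ** 2 / (2 * sigma ** 2)
                    # Prior term: Potts penalty on disagreeing 4-neighbours.
                    nbrs = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
                    prior = beta * sum(labels[ny, nx] != k
                                       for ny, nx in nbrs
                                       if 0 <= ny < h and 0 <= nx < w)
                    if data + prior < best_cost:
                        best, best_cost = k, data + prior
                labels[y, x] = best  # greedy local update, raster order
    return labels

# Synthetic two-region test image: left half near 0, right half near 1.
rng = np.random.default_rng(0)
clean = np.zeros((8, 8)); clean[:, 4:] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
seg = icm_segment(noisy, means=np.array([0.0, 1.0]), beta=0.5)
```

Stochastic simulation (e.g. a Gibbs sampler) replaces the greedy per-pixel minimisation with sampling from the local conditional distribution, avoiding the poor local optima to which ICM is prone; the multiresolution methods of the abstract instead solve coarse versions of this problem first and refine.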
In this talk, I will examine the segmentation problem and present some of our results.
MGA Workshop I: Multiscale Geometry in Image Processing and Coding