A primary advantage of image restoration methods based on statistical models
is the ability to quantify uncertainty in the restoration. Under a Bayesian
paradigm, uncertainty is quantified via the variability of the joint
posterior distribution of unobserved quantities, such as the pixel
intensities of the image. In this talk we discuss how Monte Carlo samples
from such posterior distributions can be used to decide among competing
source models, account for uncertainty in the model specification, and
quantify error in the resulting reconstruction.
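As a minimal illustration of the idea, the sketch below draws Monte Carlo samples from a pixelwise posterior in a toy conjugate model (a hypothetical Gamma prior with Poisson counts, not the model used in the talk) and summarizes the restoration uncertainty per pixel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): each pixel's true intensity has a
# Gamma(alpha, beta) prior, and the observed counts are Poisson(intensity).
alpha, beta = 1.0, 0.5                      # hypothetical hyperparameters
true_image = rng.gamma(2.0, 3.0, size=(8, 8))
counts = rng.poisson(true_image)

# With a conjugate Gamma prior, each pixel's posterior is
# Gamma(alpha + counts, beta + 1); draw Monte Carlo samples from it.
n_draws = 1000
samples = rng.gamma(alpha + counts, 1.0 / (beta + 1.0),
                    size=(n_draws,) + counts.shape)

# Posterior mean and a 95% credible interval quantify the uncertainty
# in the restoration, pixel by pixel.
post_mean = samples.mean(axis=0)
lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)
```

In a realistic image model the posterior is not conjugate and the samples would instead come from an MCMC run, but the downstream uncertainty summaries are computed the same way.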
Visible features in an astronomical image may have different associated
levels of certainty; understanding this uncertainty is crucial to making
scientific conclusions regarding celestial objects. Because of the high
volume of information in a Monte Carlo sample from a high-dimensional
posterior distribution, representing these uncertainties in an easily
digestible form is a research challenge. Various summaries of the Monte
Carlo output including a movie of the full sample will be illustrated and
discussed.
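The summaries mentioned above can be sketched in a few lines; here a hypothetical stand-in array plays the role of the Monte Carlo sample of restored images, and the "movie" is simply a list of draws played in sequence.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for a Monte Carlo sample of restored images:
# 500 posterior draws of a 16x16 image.
samples = rng.gamma(2.0, 1.0, size=(500, 16, 16))

# Two digestible summaries: a posterior-mean image and a pixelwise
# standard-deviation map highlighting where the restoration is least
# certain.
mean_image = samples.mean(axis=0)
std_image = samples.std(axis=0)

# A movie of the full sample is just the draws shown in sequence;
# here we collect every 50th draw as a frame list.
frames = [samples[i] for i in range(0, len(samples), 50)]
```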
One strategy for modeling a complex image is to use a mixture of model
components. For example, we might combine a flexible model for an extended
source that smooths on multiple scales with a number of point sources or
components with other shapes. Various model-checking techniques are
available to compare the choices of these model components and to guide the
final choice of model. We use the posterior predictive distribution to
evaluate posterior p-values for various hypotheses.
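A posterior predictive p-value can be sketched as follows: replicate data under each posterior draw, then compare a test statistic on the replicates with its observed value. The model here (constant Poisson mean with a conjugate Gamma posterior, maximum count as the statistic) is a hypothetical stand-in, not the image model of the talk.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy observed counts and posterior draws of a constant mean intensity
# (conjugate Gamma posterior under a flat-ish Gamma(1, 1) prior).
observed = rng.poisson(4.0, size=100)
post_mu = rng.gamma(observed.sum() + 1.0,
                    1.0 / (observed.size + 1.0),
                    size=2000)

# Posterior predictive check: simulate a replicate data set for each
# draw and record the test statistic (here the sample maximum).
rep_stats = np.array([rng.poisson(mu, size=observed.size).max()
                      for mu in post_mu])

# The posterior p-value is the fraction of replicates whose statistic
# is at least as extreme as the observed one.
p_value = np.mean(rep_stats >= observed.max())
```

Extreme p-values (near 0 or 1) flag a discrepancy between the fitted model component and the data, guiding the choice among competing source models.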
Reliable quantification of uncertainty depends on realistic assessment of
uncertainty in the various model components. For example, uncertainty in the
point spread function, variability in the point spread function across a
detector, and the effects of instrumental and cosmic background
contamination all have important effects on the overall uncertainty of the
image restoration. Thus, careful model construction must aim to account for
such effects.
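One way such effects enter the Monte Carlo machinery is to sample the uncertain instrument parameters along with the image. The sketch below propagates an assumed uncertainty in a Gaussian PSF width through a toy one-dimensional restoration; the width distribution and signal are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch of propagating point spread function (PSF) uncertainty:
# instead of fixing the PSF width, draw it from a distribution that
# reflects calibration uncertainty (assumed Gaussian PSF, width ~
# Normal(1.0, 0.1) in pixel units).
n_draws = 200
x = np.linspace(-3.0, 3.0, 31)
psf_widths = rng.normal(1.0, 0.1, size=n_draws)

true_signal = np.exp(-x**2)          # toy 1-D "source"
blurred_draws = []
for w in psf_widths:
    psf = np.exp(-0.5 * (x / w) ** 2)
    psf /= psf.sum()                 # normalize the PSF kernel
    blurred_draws.append(np.convolve(true_signal, psf, mode="same"))
blurred_draws = np.array(blurred_draws)

# The spread across draws now includes the PSF's contribution to the
# overall uncertainty of the restoration.
psf_component_std = blurred_draws.std(axis=0)
```

The same pattern extends to a PSF that varies across the detector or to an uncertain background level: each nuisance component gets sampled per draw, so its variability shows up in the final uncertainty summaries.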