Computational methods such as parallel tempering and replica exchange are designed to speed the convergence of slowly converging Markov processes (corresponding to lower temperatures for models from the physical sciences) by coupling them, through a Metropolis-type swap mechanism, with higher-temperature processes that explore the state space more quickly. The infinite swapping rate limit, which by certain measures is optimal, can be implemented in terms of a process that evolves under a symmetrized version of the original dynamics and then approximates the original problem by a weighted empirical measure. The weights are needed to transform samples of the symmetrized dynamics into distributionally correct samples for the original problem.
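As a concrete illustration of the weighted empirical measure (a minimal sketch, not the implementation discussed in the talk), the following samples the symmetrized two-temperature density pi(x, y) proportional to exp(-V(x)/tau1 - V(y)/tau2) + exp(-V(y)/tau1 - V(x)/tau2) with a plain random-walk Metropolis chain, for an assumed double-well potential V(x) = (x^2 - 1)^2, and recovers a low-temperature expectation by weighting the two coordinates with rho1 and 1 - rho1. The potential, temperatures, and step size are all illustrative choices.

```python
import math
import random

def V(x):
    """Assumed toy double-well potential: minima at x = +/-1, barrier height 1."""
    return (x * x - 1.0) ** 2

def log_pi_sym(x, y, tau1, tau2):
    """Log (up to a constant) of the symmetrized density:
    pi(x, y) ~ exp(-V(x)/tau1 - V(y)/tau2) + exp(-V(y)/tau1 - V(x)/tau2)."""
    a = -V(x) / tau1 - V(y) / tau2
    b = -V(y) / tau1 - V(x) / tau2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def rho1(x, y, tau1, tau2):
    """Weight assigning coordinate x to the low temperature tau1;
    the complementary weight 1 - rho1 assigns y to tau1 instead."""
    a = -V(x) / tau1 - V(y) / tau2
    b = -V(y) / tau1 - V(x) / tau2
    m = max(a, b)
    ea, eb = math.exp(a - m), math.exp(b - m)
    return ea / (ea + eb)

def estimate_low_temp_mean(f, tau1=0.2, tau2=1.0, n_steps=200_000,
                           step=0.5, seed=0):
    """Random-walk Metropolis on the symmetrized pair (x, y); the weighted
    empirical measure rho1*f(x) + (1 - rho1)*f(y) estimates E_{mu_tau1}[f]."""
    rng = random.Random(seed)
    x, y = 1.0, -1.0
    lp = log_pi_sym(x, y, tau1, tau2)
    total = 0.0
    for _ in range(n_steps):
        xp = x + rng.uniform(-step, step)
        yp = y + rng.uniform(-step, step)
        lpp = log_pi_sym(xp, yp, tau1, tau2)
        if rng.random() < math.exp(min(0.0, lpp - lp)):
            x, y, lp = xp, yp, lpp
        w = rho1(x, y, tau1, tau2)
        total += w * f(x) + (1.0 - w) * f(y)
    return total / n_steps
```

For example, `estimate_low_temp_mean(lambda x: x * x)` should return a value near 1, since the tau1 = 0.2 Gibbs measure for this potential concentrates near the two wells at x = +/-1; without the hot component, a low-temperature chain started in one well would rarely cross the barrier.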
After reviewing the construction of this "infinite swapping limit," we focus on the sources of variance reduction produced by coupling the different Markov processes. As will be discussed, one source is the effective lowering of energy barriers that results from coupling high- and low-temperature components, and the consequent improvement in communication properties. A second and less obvious source of variance reduction comes from the weights in the weighted empirical measure that appropriately transforms the samples of the symmetrized process. These weights are analogous to the likelihood ratios that appear in importance sampling, and they play much the same role in reducing the overall variance. A key question in the design of such algorithms is how to choose the ratios of the higher temperatures to the lowest one. As we will discuss, the two variance reduction mechanisms respond in opposite ways to changes in these ratios. One can characterize this tradeoff in precise terms and explicitly identify the optimal temperature selection for certain models when the lowest temperature is sent to zero, i.e., when sampling is most difficult.
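The opposing responses of the two mechanisms can be seen directly in the weights. In the two-temperature case the low-temperature weight reduces algebraically to a logistic function of the energy gap dV = V(x) - V(y), scaled by 1/tau1 - 1/tau2: as tau2 approaches tau1 the weights stay near 1/2 (benign, but little is gained from swapping), while a large tau2 drives them toward 0 or 1 (better communication, but degenerate weights, as with extreme likelihood ratios in importance sampling). A small sketch, with illustrative numbers that are assumptions rather than values from the talk:

```python
import math

def rho1(dV, tau1, tau2):
    """Low-temperature weight as a function of dV = V(x) - V(y).
    Algebraically, exp(-V(x)/tau1 - V(y)/tau2) divided by (itself plus the
    swapped term) equals 1 / (1 + exp(dV * (1/tau1 - 1/tau2)))."""
    return 1.0 / (1.0 + math.exp(dV * (1.0 / tau1 - 1.0 / tau2)))

# Assumed illustrative numbers: fixed energy gap dV = 0.5, lowest
# temperature tau1 = 0.2, and two choices of the higher temperature.
for tau2 in (0.25, 2.0):
    w = rho1(0.5, 0.2, tau2)
    print(f"tau2 = {tau2}: rho1 = {w:.3f}")  # nearer 0.5 for the small ratio
```

For the small temperature ratio the weight stays close to 1/2, while for the large ratio it is pushed toward 0, which is the weight-degeneracy side of the tradeoff described above.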
Workshop IV: Uncertainty Quantification for Stochastic Systems and Applications