Published December 23, 2012 | Published + Submitted
Journal Article | Open

Optimal scaling and diffusion limits for the Langevin algorithm in high dimensions

Abstract

The Metropolis-adjusted Langevin (MALA) algorithm is a sampling algorithm that makes local moves by incorporating information about the gradient of the logarithm of the target density. In this paper we study the efficiency of MALA on a natural class of target measures supported on an infinite-dimensional Hilbert space. These natural measures have density with respect to a Gaussian random field measure and arise in many applications such as Bayesian nonparametric statistics and the theory of conditioned diffusions. We prove that, started in stationarity, a suitably interpolated and scaled version of the Markov chain corresponding to MALA converges to an infinite-dimensional diffusion process. Our results imply that, in stationarity, the MALA algorithm applied to an N-dimensional approximation of the target will take O(N^(1/3)) steps to explore the invariant measure, comparing favorably with the Random Walk Metropolis algorithm, which was recently shown to require O(N) steps when applied to the same class of problems. As a by-product of the diffusion limit, it also follows that the MALA algorithm is optimized at an average acceptance probability of 0.574. Previous results were proved only for targets which are products of one-dimensional distributions, or for variants of this situation, limiting their applicability. The correlation in our target means that the rescaled MALA algorithm converges weakly to an infinite-dimensional, Hilbert-space-valued diffusion, and the limit cannot be described through analysis of scalar diffusions. The limit theorem is proved by showing that a drift-martingale decomposition of the Markov chain, suitably scaled, closely resembles a weak Euler–Maruyama discretization of the putative limit. An invariance principle is proved for the martingale, and a continuous mapping argument is used to complete the proof.
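
For readers unfamiliar with the algorithm named in the abstract, the following is a minimal Python sketch of a single MALA transition. It is not part of the original record; the function names, the standard-Gaussian test target, and the particular step-size constant are illustrative assumptions, not taken from the paper.

```python
import numpy as np


def mala_step(x, log_density, grad_log_density, step_size, rng):
    """One Metropolis-adjusted Langevin step targeting the density exp(log_density)."""
    h = step_size

    # Langevin proposal: half-step gradient drift plus Gaussian noise.
    y = x + 0.5 * h * grad_log_density(x) + np.sqrt(h) * rng.standard_normal(x.shape)

    # Log-density of the Gaussian proposal kernel q(to | frm), up to a constant.
    def log_q(to, frm):
        mean = frm + 0.5 * h * grad_log_density(frm)
        return -np.sum((to - mean) ** 2) / (2.0 * h)

    # Metropolis-Hastings acceptance ratio corrects for the asymmetric proposal.
    log_alpha = log_density(y) + log_q(x, y) - log_density(x) - log_q(y, x)
    if np.log(rng.uniform()) < log_alpha:
        return y, True
    return x, False


# Illustrative run on a standard Gaussian target in N dimensions (an assumed toy
# target, not the Hilbert-space measures of the paper). The step size proportional
# to N**(-1/3) mirrors the scaling regime discussed in the abstract.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N = 100
    log_pi = lambda x: -0.5 * np.dot(x, x)
    grad_log_pi = lambda x: -x
    h = 1.0 * N ** (-1.0 / 3.0)

    x = rng.standard_normal(N)
    accepted = 0
    n_steps = 5000
    for _ in range(n_steps):
        x, ok = mala_step(x, log_pi, grad_log_pi, h, rng)
        accepted += ok
    print("average acceptance rate:", accepted / n_steps)
```

In practice, the result quoted in the abstract suggests tuning the constant in front of N^(-1/3) so that the empirical acceptance rate sits near 0.574; the toy run above only illustrates that, with this scaling, the acceptance rate settles at a stable value rather than degenerating as the dimension grows.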

Additional Information

© 2012 Institute of Mathematical Statistics. Received March 2011; revised November 2011. [NSP] Supported by NSF Grant DMS-11-07070. [AMS] Supported by EPSRC and ERC. [AHT] Supported by (EPSRC-funded) CRISM.

Attached Files

Published - stuart97.pdf

Submitted - 1103.0542.pdf

Files (875.0 kB)

stuart97.pdf (454.3 kB) md5:3837140909351b5aa13b68cf90ce007a
1103.0542.pdf (420.8 kB) md5:3fa647696572713d2a648080d1cb2e2a

Additional details

Created:
August 19, 2023
Modified:
March 5, 2024