Optimal scaling for partially updating MCMC algorithms
Neal, Peter ; Roberts, Gareth
Ann. Appl. Probab., Volume 16 (2006) no. 1, pp. 475-515 / Harvested from Project Euclid
In this paper we shall consider optimal scaling problems for high-dimensional Metropolis–Hastings algorithms where updates can be chosen to be lower dimensional than the target density itself. We find that the optimal scaling rule for the Metropolis algorithm, which tunes the overall algorithm acceptance rate to be 0.234, holds for the so-called Metropolis-within-Gibbs algorithm as well. Furthermore, the optimal efficiency obtainable is independent of the dimensionality of the update rule. This has important implications for the MCMC practitioner since high-dimensional updates are generally computationally more demanding, so lower-dimensional updates are to be preferred. Similar results with rather different conclusions are given for so-called Langevin updates. In this case, it is found that high-dimensional updates are frequently most efficient, even taking into account computing costs.
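As an illustrative sketch only (not taken from the paper), the Python code below shows how the 0.234 acceptance-rate rule mentioned in the abstract might be applied in a random-walk Metropolis-within-Gibbs sampler. The standard-Gaussian target, the block size, and the diminishing-adaptation rule are hypothetical choices for illustration, not the authors' construction.

import numpy as np

# Illustrative sketch (not from the paper): random-walk Metropolis-within-Gibbs
# on a d-dimensional standard Gaussian target, updating a random block of c
# coordinates per iteration and adapting the proposal scale toward a 0.234
# acceptance rate. All names and parameter values are hypothetical.

def log_target(x):
    # Log-density of a standard multivariate normal, up to an additive constant.
    return -0.5 * np.sum(x ** 2)

def mwg_sampler(d=50, c=5, n_iter=20000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(d)
    scale = 2.38 / np.sqrt(c)          # classical optimal-scaling starting point
    accepted = 0
    for t in range(n_iter):
        block = rng.choice(d, size=c, replace=False)   # random block of c coordinates
        prop = x.copy()
        prop[block] = prop[block] + scale * rng.standard_normal(c)
        alpha = min(1.0, np.exp(log_target(prop) - log_target(x)))
        if rng.uniform() < alpha:
            x = prop
            accepted += 1
        # Diminishing (Robbins-Monro style) adaptation of the proposal scale toward 0.234.
        scale *= np.exp((alpha - 0.234) / (t + 1) ** 0.6)
    return x, accepted / n_iter, scale

if __name__ == "__main__":
    _, acc_rate, final_scale = mwg_sampler()
    print("acceptance rate ~ %.3f, final proposal scale ~ %.3f" % (acc_rate, final_scale))

The starting scale 2.38 / sqrt(c) follows the classical random-walk Metropolis scaling heuristic, and the adaptation is made diminishing so that tuning settles down over the run; in a careful implementation the adaptation would be frozen or explicitly controlled to preserve ergodicity.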
Published: 2006-05-14
Classification: Metropolis algorithm, Langevin algorithm, Markov chain Monte Carlo, weak convergence, optimal scaling, 60F05, 65C05
@article{1151592241,
     author = {Neal, Peter and Roberts, Gareth},
     title = {Optimal scaling for partially updating MCMC algorithms},
     journal = {Ann. Appl. Probab.},
     volume = {16},
     number = {1},
     year = {2006},
     pages = {475--515},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1151592241}
}
Neal, Peter; Roberts, Gareth. Optimal scaling for partially updating MCMC algorithms. Ann. Appl. Probab., Volume 16 (2006) no. 1, pp. 475-515. http://gdmltest.u-ga.fr/item/1151592241/