Convergence of adaptive mixtures of importance sampling schemes
Douc, R. ; Guillin, A. ; Marin, J.-M. ; Robert, C. P.
Ann. Statist., Vol. 35 (2007), no. 1, pp. 420–448 / Harvested from Project Euclid
In the design of efficient simulation algorithms, one is often beset with a poor choice of proposal distributions. Although the performance of a given simulation kernel can clarify a posteriori how adequate this kernel is for the problem at hand, a permanent on-line modification of kernels raises concerns about the validity of the resulting algorithm. While the issue is most often intractable for MCMC algorithms, the equivalent version for importance sampling algorithms can be validated quite precisely. We derive sufficient convergence conditions for adaptive mixtures of population Monte Carlo algorithms and show that Rao–Blackwellized versions asymptotically achieve an optimum in terms of a Kullback divergence criterion, while more rudimentary versions do not benefit from repeated updating.
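As a rough illustration of the class of algorithms the abstract describes (not the paper's exact scheme), the sketch below runs a few iterations of an adaptive mixture importance sampler: draws come from a weighted mixture of Gaussian proposals, importance weights use the full mixture density in the denominator, and the mixture weights are updated via a Rao–Blackwellized rule based on component responsibilities. The standard normal target, the proposal scales, and the function name `pmc_mixture` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Unnormalized standard normal target (illustrative choice)
    return -0.5 * x**2

def pmc_mixture(n=5000, iters=10, scales=(0.5, 1.0, 3.0)):
    """Adaptive mixture importance sampling with a
    Rao-Blackwellized mixture-weight update (sketch)."""
    scales = np.asarray(scales)
    D = len(scales)
    alpha = np.full(D, 1.0 / D)           # current mixture weights over proposals
    for _ in range(iters):
        # Sample from the mixture: pick a component, then draw from it
        comp = rng.choice(D, size=n, p=alpha)
        x = rng.normal(0.0, scales[comp])
        # Component densities q_d(x): N(0, scale_d^2), shape (n, D)
        q = np.exp(-0.5 * (x[:, None] / scales) ** 2) / (scales * np.sqrt(2 * np.pi))
        mix = q @ alpha                    # mixture density at each sample
        w = np.exp(log_target(x)) / mix    # importance weights (mixture denominator)
        w /= w.sum()
        # Rao-Blackwellized update: alpha_d <- sum_i w_i * P(component = d | x_i)
        resp = q * alpha / mix[:, None]    # component responsibilities
        alpha = w @ resp
        alpha /= alpha.sum()
    return alpha, x, w
```

Under these assumptions, the adapted weights concentrate on the proposal closest to the target (here, the unit-scale component), in the spirit of the Kullback-optimal mixture the paper establishes for the Rao–Blackwellized scheme.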
Published: 2007-02-14
Classification: Bayesian statistics, Kullback divergence, LLN, MCMC algorithm, population Monte Carlo, proposal distribution, Rao–Blackwellization, 60F05, 62L12, 65-04, 65C05, 65C40, 65C60
@article{1181100193,
     author = {Douc, R. and Guillin, A. and Marin, J.-M. and Robert, C. P.},
     title = {Convergence of adaptive mixtures of importance sampling schemes},
     journal = {Ann. Statist.},
     volume = {35},
     number = {1},
     year = {2007},
     pages = {420--448},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1181100193}
}
Douc, R.; Guillin, A.; Marin, J.-M.; Robert, C. P. Convergence of adaptive mixtures of importance sampling schemes. Ann. Statist., Vol. 35 (2007), no. 1, pp. 420–448. http://gdmltest.u-ga.fr/item/1181100193/