Accelerated randomized stochastic optimization
Dippon, Jürgen
Ann. Statist., Volume 31 (2003) no. 1, pp. 1260-1281 / Harvested from Project Euclid
We propose a general class of randomized gradient estimates to be employed in a recursive search for the minimum of an unknown multivariate regression function. Only two observations per iteration step are used. Special cases include random direction stochastic approximation (Kushner and Clark), simultaneous perturbation stochastic approximation (Spall) and a special kernel-based stochastic approximation method (Polyak and Tsybakov). If the unknown regression function is p-smooth ($p\ge 2$) at the point of minimum, these methods achieve the optimal rate of convergence $O(n^{-(p-1)/(2p)})$. For both the classical stochastic approximation scheme (Kiefer and Wolfowitz) and the averaging scheme (Ruppert and Polyak), the related asymptotic distributions are computed.
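To illustrate the kind of two-observation scheme covered by this class, the following is a minimal sketch of a Kiefer-Wolfowitz-type recursion with a simultaneous-perturbation (Spall) gradient estimate and Ruppert-Polyak averaging of the iterates. It is not the paper's general construction: the quadratic test function, the noise model, and the gain exponents `a`, `c`, `alpha`, `gamma` below are illustrative choices, not values taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_f(x):
    """Unknown regression function observed with additive noise.
    Here: a simple quadratic with minimum at (1, -2) -- an assumed test case."""
    return np.sum((x - np.array([1.0, -2.0])) ** 2) + rng.normal(scale=0.1)

def spsa_with_averaging(x0, n_iter=5000, a=0.5, c=0.1, alpha=0.7, gamma=0.25):
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for n in range(1, n_iter + 1):
        a_n = a / n ** alpha                 # step-size sequence
        c_n = c / n ** gamma                 # perturbation-size sequence
        delta = rng.choice([-1.0, 1.0], size=x.shape)  # random Rademacher directions
        # only two observations per iteration step
        y_plus = noisy_f(x + c_n * delta)
        y_minus = noisy_f(x - c_n * delta)
        grad_est = (y_plus - y_minus) / (2.0 * c_n) * (1.0 / delta)
        x = x - a_n * grad_est               # classical recursion (Kiefer-Wolfowitz type)
        avg += (x - avg) / n                 # running Ruppert-Polyak average
    return x, avg

x_last, x_avg = spsa_with_averaging(np.zeros(2))
print("last iterate:     ", x_last)
print("averaged iterate: ", x_avg)
```

The averaged iterate typically stabilizes faster than the last iterate; the paper's contribution concerns the asymptotic distributions and convergence rates of both schemes under p-smoothness of the regression function.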
Published: 2003-08-14
Classification: Stochastic approximation, stochastic optimization, gradient estimation, randomization, asymptotic normality, optimal rates of convergence, 62L20
@article{1059655913,
     author = {Dippon, J\"urgen},
     title = {Accelerated randomized stochastic optimization},
     journal = {Ann. Statist.},
     volume = {31},
     number = {1},
     year = {2003},
     pages = {1260-1281},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1059655913}
}
Dippon, Jürgen. Accelerated randomized stochastic optimization. Ann. Statist., Volume 31 (2003) no. 1, pp. 1260-1281. http://gdmltest.u-ga.fr/item/1059655913/