Stochastic Approximation Algorithms for Constrained Optimization Problems
Kushner, Harold J.
Ann. Statist., Tome 2 (1974) no. 1, p. 713-723 / Harvested from Project Euclid
The paper gives convergence theorems for several sequential Monte-Carlo or stochastic approximation algorithms for finding a local minimum of a function $f(\bullet)$ on a set $C$ defined by $C = \{x: q^i(x) \leqq 0, i = 1, 2, \cdots, s\}$. $f(\bullet)$ is unknown, but "noise-perturbed" values can be observed at any desired parameter $x \in C$. The algorithms generate a sequence of random variables $\{X_n\}$ such that (for a.a. $\omega$) any convergent subsequence of $\{X_n(\omega)\}$ converges to a point where a certain necessary condition for constrained optimality holds. The techniques are drawn from both stochastic approximation and non-linear programming.
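The flavor of such an algorithm can be illustrated with a minimal sketch (not the paper's actual scheme): a Kiefer-Wolfowitz-style iteration that estimates the gradient from noisy function evaluations and projects each iterate back onto $C$. The objective, noise level, step sizes, and the single constraint $q(x) = x - 1 \leqq 0$ below are all illustrative assumptions.

```python
import random

def noisy_f(x, sigma=0.1):
    # Hypothetical objective f(x) = (x - 2)^2, observable only through
    # additive Gaussian noise, as in the paper's setting.
    return (x - 2.0) ** 2 + random.gauss(0.0, sigma)

def project(x):
    # Projection onto C = {x : q(x) = x - 1 <= 0}, i.e. x <= 1.
    return min(x, 1.0)

def constrained_kw(x0=0.0, n_steps=5000):
    # Kiefer-Wolfowitz finite-difference iteration with projection.
    random.seed(0)
    x = project(x0)
    for n in range(1, n_steps + 1):
        a_n = 1.0 / n           # gains: a_n -> 0, sum a_n = infinity
        c_n = 1.0 / n ** 0.25   # difference widths: c_n -> 0
        grad = (noisy_f(x + c_n) - noisy_f(x - c_n)) / (2.0 * c_n)
        x = project(x - a_n * grad)
    return x
```

Here the unconstrained minimum $x = 2$ lies outside $C$, so the iterates accumulate at the boundary point $x = 1$, where the necessary condition for constrained optimality (a Kuhn-Tucker-type condition) holds.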
Publié le : 1974-07-14
Classification: 62-45, 90-58, 93-60, 93-70, Sequential Monte Carlo, constrained optimization, constrained stochastic approximation
@article{1176342759,
     author = {Kushner, Harold J.},
     title = {Stochastic Approximation Algorithms for Constrained Optimization Problems},
     journal = {Ann. Statist.},
     volume = {2},
     number = {1},
     year = {1974},
     pages = {713-723},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1176342759}
}
Kushner, Harold J. Stochastic Approximation Algorithms for Constrained Optimization Problems. Ann. Statist., Tome 2 (1974) no. 1, pp. 713-723. http://gdmltest.u-ga.fr/item/1176342759/