Estimation and control in finite Markov decision processes with the average reward criterion
Rolando Cavazos-Cadena; Raúl Montes-de-Oca
Applicationes Mathematicae, Volume 31 (2004), pp. 127-154 / Harvested from The Polish Digital Mathematics Library

This work concerns Markov decision chains with finite state and action sets. The transition law satisfies the simultaneous Doeblin condition but is unknown to the controller, and the problem of determining an optimal adaptive policy with respect to the average reward criterion is addressed. A subset of policies is identified such that, when the system evolves under a policy in this class, the frequency estimators of the transition law are consistent on an essential set of admissible state-action pairs; the non-stationary value iteration scheme is then used to select an optimal adaptive policy within this family.
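
The following Python sketch illustrates, under stated assumptions, the general certainty-equivalence machinery the abstract alludes to: frequency (maximum-likelihood) estimation of the unknown transition law from observed transitions, followed by a value iteration update driven by the current estimates. The array shapes, the uniform fallback for unvisited state-action pairs, and the function names are illustrative assumptions only; the paper's specific non-stationary value iteration scheme and its consistency analysis are not reproduced here.

import numpy as np


def frequency_estimates(counts):
    """Frequency (maximum-likelihood) estimator of the transition law.

    counts[x, a, y] = number of observed transitions from state x to state y
    under action a.  State-action pairs never tried fall back to the uniform
    distribution so the estimate is a valid stochastic kernel (an assumption
    made here for illustration, not the paper's construction).
    """
    totals = counts.sum(axis=2, keepdims=True)        # visits to each (x, a)
    n_states = counts.shape[2]
    uniform = np.full(counts.shape, 1.0 / n_states)
    estimated = counts / np.maximum(totals, 1)        # safe division
    return np.where(totals > 0, estimated, uniform)


def value_iteration_step(v, rewards, p_hat):
    """One value iteration update using the current estimate p_hat.

    rewards[x, a] is the one-step reward and p_hat[x, a, y] the estimated
    transition kernel.  Returns the updated value vector and a policy that
    is greedy for the estimated model.
    """
    q = rewards + np.einsum("xay,y->xa", p_hat, v)    # Bellman backup
    return q.max(axis=1), q.argmax(axis=1)

In an adaptive loop of this kind, the controller would act (nearly) greedily with respect to the estimated model at each stage while visiting the essential state-action pairs often enough for the frequency estimates to remain consistent, which is precisely the role of the restricted policy class identified in the paper.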

Published: 2004-01-01
EUDML-ID : urn:eudml:doc:279704
@article{bwmeta1.element.bwnjournal-article-doi-10_4064-am31-2-1,
     author = {Rolando Cavazos-Cadena and Ra\'ul Montes-de-Oca},
     title = {Estimation and control in finite Markov decision processes with the average reward criterion},
     journal = {Applicationes Mathematicae},
     volume = {31},
     year = {2004},
     pages = {127-154},
     zbl = {1080.90082},
     language = {en},
     url = {http://dml.mathdoc.fr/item/bwmeta1.element.bwnjournal-article-doi-10_4064-am31-2-1}
}
Rolando Cavazos-Cadena; Raúl Montes-de-Oca. Estimation and control in finite Markov decision processes with the average reward criterion. Applicationes Mathematicae, Volume 31 (2004), pp. 127-154. http://gdmltest.u-ga.fr/item/bwmeta1.element.bwnjournal-article-doi-10_4064-am31-2-1/