Bernoulli One-Armed Bandits--Arbitrary Discount Sequences
Berry, Donald A.; Fristedt, Bert
Ann. Statist., Volume 7 (1979) no. 5, pp. 1086-1105 / Harvested from Project Euclid
Each of two arms generates an infinite sequence of Bernoulli random variables. At each stage we choose which arm to observe based on past observations. The parameter of the left arm is known; that of the right arm is a random variable. There are two conflicting desiderata: to observe a success at the present stage and to obtain information useful for making future decisions. The payoff is $\alpha_m$ for a success at stage $m$. The objective is to maximize the expected total payoff. If the sequence $(\alpha_1, \alpha_2, \cdots)$ is regular, an observation of the left arm should always be followed by another observation of the left arm. A rather explicit characterization of optimal strategies for regular sequences follows from this result. This characterization generalizes results of Bradt, Johnson, and Karlin (1956), who considered $\alpha_m$ equal to 1 for $m \leqslant n$ and 0 for $m > n$, and of Bellman (1956), who considered $\alpha_m = \alpha^{m-1}$ for $0 \leqslant \alpha < 1$.
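The setup in the abstract lends itself to a small simulation. Below is a minimal Monte Carlo sketch, not taken from the paper, that estimates the expected discounted payoff of two illustrative strategies under a geometric discount sequence $\alpha_m = \alpha^{m-1}$ and a uniform prior on the right arm's parameter; all names and numerical choices (lam, the 0.9 discount rate, right_until_failure) are assumptions made here for illustration.

import numpy as np

rng = np.random.default_rng(0)

def simulate(strategy, lam, p_right, discounts, rng):
    """Play one bandit run and return the total discounted payoff.

    strategy(history) -> "L" or "R"; history is a list of (arm, success) pairs.
    """
    history, payoff = [], 0.0
    for alpha_m in discounts:
        arm = strategy(history)
        p = lam if arm == "L" else p_right
        success = rng.random() < p
        payoff += alpha_m * success          # payoff alpha_m for a success at stage m
        history.append((arm, success))
    return payoff

def left_forever(history):
    # Ignore the data and always observe the known left arm.
    return "L"

def right_until_failure(history):
    # Illustrative rule (an assumption, not the paper's optimal strategy):
    # observe the unknown right arm until its first failure, then switch to
    # the known left arm and never switch back, consistent with the result
    # that under regular discounting an observation of the left arm is
    # always followed by another observation of the left arm.
    if any(arm == "R" and not success for arm, success in history):
        return "L"
    return "R"

lam = 0.6                              # known success probability of the left arm (assumed)
discounts = 0.9 ** np.arange(50)       # geometric discounting, alpha_m = 0.9**(m-1)
n_rep = 20000

for name, strat in [("left forever", left_forever),
                    ("right until first failure", right_until_failure)]:
    total = 0.0
    for _ in range(n_rep):
        p_right = rng.beta(1.0, 1.0)   # uniform prior on the right arm's parameter (assumed)
        total += simulate(strat, lam, p_right, discounts, rng)
    print(f"{name}: estimated expected total payoff = {total / n_rep:.3f}")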
Published: 1979-09-14
Classification: One-armed bandit, sequential decisions, optimal stopping, two-armed bandit, regular discounting, Bernoulli bandit, 62L05, 62L15
@article{1176344792,
     author = {Berry, Donald A. and Fristedt, Bert},
     title = {Bernoulli One-Armed Bandits--Arbitrary Discount Sequences},
     journal = {Ann. Statist.},
     volume = {7},
     number = {5},
     year = {1979},
     pages = {1086-1105},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1176344792}
}
Berry, Donald A.; Fristedt, Bert. Bernoulli One-Armed Bandits--Arbitrary Discount Sequences. Ann. Statist., Volume 7 (1979) no. 5, pp. 1086-1105. http://gdmltest.u-ga.fr/item/1176344792/