Uniform convergence of exact large deviations for renewal reward processes
Chi, Zhiyi
Ann. Appl. Probab., Volume 17 (2007), no. 1, pp. 1019-1048
Let (X_n, Y_n) be i.i.d. random vectors. Let W(x) be the partial sum of the Y_n just before that of the X_n exceeds x > 0. Motivated by stochastic models for neural activity, uniform convergence of the form sup_{c∈I} |a(c, x) Pr{W(x) ≥ cx} − 1| = o(1), x → ∞, is established for probabilities of large deviations, with a(c, x) a deterministic function and I an open interval. To obtain this uniform exact large deviations principle (LDP), we first establish the exponentially fast uniform convergence of a family of renewal measures and then apply it to appropriately tilted distributions of X_n and the moment generating function of W(x). The uniform exact LDP is obtained for cases where X_n has a subcomponent with a smooth density and Y_n is not a linear transform of X_n. An extension is also made to the partial sum at the first exceedance time.
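For concreteness, the following is a minimal Monte Carlo sketch of the objects defined in the abstract, not the paper's method: the exponential law for X_n, the Gaussian law for Y_n, the independence of the two coordinates within a pair, and the values of (c, x) are all illustrative assumptions. Plain simulation of this kind degrades quickly as the event {W(x) ≥ cx} becomes rare; the paper's exact LDP instead gives the asymptotics Pr{W(x) ≥ cx} ≈ 1/a(c, x), uniformly over c ∈ I.

import numpy as np

rng = np.random.default_rng(0)

def sample_W(x, rng):
    # Draw one realization of W(x): the partial sum of the Y_n accumulated
    # strictly before the partial sum of the X_n first exceeds x.
    sx = 0.0  # running sum of the X_n
    w = 0.0   # running sum of the Y_n (the reward)
    while True:
        # A fresh pair (X_n, Y_n) each step; the coordinates are drawn
        # independently here purely for illustration (hypothetical laws).
        X = rng.exponential(1.0)
        Y = rng.normal(1.0, 1.0)
        if sx + X > x:
            return w  # stop just before the X-sum exceeds x
        sx += X
        w += Y

# Crude Monte Carlo estimate of Pr{W(x) >= c*x} at a single (c, x).
x, c, n = 20.0, 1.5, 100_000
hits = sum(sample_W(x, rng) >= c * x for _ in range(n))
print(f"estimated Pr{{W(x) >= c x}} at x={x}, c={c}: {hits / n:.4g}")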
Published: 2007-06-15
Classification: Large deviations, renewal reward process, point process, continuous-time random walk, 60F10, 60G51
@article{1179839181,
     author = {Chi, Zhiyi},
     title = {Uniform convergence of exact large deviations for renewal reward processes},
     journal = {Ann. Appl. Probab.},
     volume = {17},
     number = {1},
     year = {2007},
     pages = {1019--1048},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1179839181}
}
Chi, Zhiyi. Uniform convergence of exact large deviations for renewal reward processes. Ann. Appl. Probab., Volume 17 (2007), no. 1, pp. 1019-1048. http://gdmltest.u-ga.fr/item/1179839181/