Some Invariance Principles for Functionals of a Markov Chain
Freedman, David A.
Ann. Math. Statist., Vol. 38 (1967) no. 6, pp. 1-7 / Harvested from Project Euclid
The new results are (C), (D) and (F) below. Let $X_0, X_1, \cdots$ be a Markov chain with countable state space $I$ and stationary transitions. Suppose $I$ is a positive recurrent class, with stationary probability vector $p$. Let $f$ be a real-valued function on $I$. Fix a reference state $s \in I$, and let $0 \leqq t_1 < t_2 < \cdots$ be the times $n$ at which $X_n = s$. Let $Y_j = \sum \{f(X_n): t_j \leqq n < t_{j+1}\}$ and $U_j = \sum \{|f(X_n)|: t_j \leqq n < t_{j+1}\}$. Let $V_m = \sum_{j=1}^m Y_j$ and $S_n = \sum_{j=0}^n f(X_j)$.

For (C) and (D) below, assume (A) $\sum_{i \in I} p_i f(i) = 0$; and (B) $U_j^2$ has finite expectation. Then:

(C) Theorem. $n^{-\frac{1}{2}} \max \{|S_j - V_{jp_s}|: 1 \leqq j \leqq n\} \rightarrow 0$ in probability; and

(D) Theorem. $(n \log \log n)^{-\frac{1}{2}} \max \{|S_j - V_{jp_s}|: 1 \leqq j \leqq n\} \rightarrow 0$ almost everywhere.

For (F), do not assume (A) and (B), but assume (E) $Y_j$ differs from 0 with positive probability. Let $v_m$ (respectively, $s_n$) be 1 or 0 according as $V_m$ (respectively, $S_n$) is positive or non-positive. Then:

(F) Theorem. $n^{-1} \sum \{s_j: 1 \leqq j \leqq n\} - p_s^{-1} n^{-1} \sum \{v_j: 1 \leqq j \leqq n p_s\} \rightarrow 0$ almost everywhere.

I do not believe the convergence in (C) is a.e., but have no counter-example.
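The abstract's construction decomposes the partial sums $S_n$ into excursion blocks between successive visits to the reference state $s$. A minimal deterministic sketch of this bookkeeping (not from the paper; the sample path, the choice $f(i) = i - 1$, and the helper names `blocks`/`block_sums` are illustrative assumptions):

```python
# Sketch of the regeneration-block decomposition behind (C):
# S at the end of the m-th completed excursion equals the segment
# before the first visit to s plus the block partial sum V_m.

def blocks(path, s):
    """Visit times t_1 < t_2 < ... at which the chain is in state s."""
    return [n for n, x in enumerate(path) if x == s]

def block_sums(path, f, s):
    """Y_j = sum of f(X_n) over the j-th excursion t_j <= n < t_{j+1}."""
    t = blocks(path, s)
    return [sum(f(x) for x in path[t[j]:t[j + 1]]) for j in range(len(t) - 1)]

# A fixed walk on states {0, 1, 2}, with reference state s = 0.
path = [1, 0, 2, 2, 0, 1, 0, 2, 0, 1, 1, 0]
f = lambda i: i - 1          # an illustrative centered f
s = 0

t = blocks(path, s)          # visit times to s
Y = block_sums(path, f, s)   # one sum per completed excursion
V = [sum(Y[:m]) for m in range(len(Y) + 1)]   # V_0, V_1, ..., V_m

S, acc = [], 0
for x in path:
    acc += f(x)
    S.append(acc)            # S_n = f(X_0) + ... + f(X_n)

m = len(Y)
head = sum(f(x) for x in path[:t[0]])   # contribution before the first visit
assert S[t[m] - 1] == head + V[m]       # exact block decomposition
```

Theorems (C) and (D) quantify how small the discrepancy $S_j - V_{jp_s}$ is uniformly in $j$ once the visit frequency $p_s$ is used to match indices; the identity checked above is the exact version at excursion endpoints.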
Published: 1967-02-14
@article{1177699053,
     author = {Freedman, David A.},
     title = {Some Invariance Principles for Functionals of a Markov Chain},
     journal = {Ann. Math. Statist.},
     volume = {38},
     number = {6},
     year = {1967},
     pages = {1-7},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1177699053}
}