Optimal Rates of Convergence for Nonparametric Statistical Inverse Problems
Koo, Ja-Yong
Ann. Statist., Volume 21 (1993), no. 1, pp. 590-599 / Harvested from Project Euclid
Consider an unknown regression function $f$ of the response $Y$ on a $d$-dimensional measurement variable $X$. It is assumed that $f$ belongs to a class of functions having a smoothness measure $p$. Let $T$ denote a known linear operator of order $q$ which maps $f$ to another function $T(f)$ in a space $G$. Let $\hat{T}_n$ denote an estimator of $T(f)$ based on a random sample of size $n$ from the distribution of $(X, Y)$, and let $\|\hat{T}_n - T(f)\|_G$ be a norm of $\hat{T}_n - T(f)$. Under appropriate regularity conditions, it is shown that the optimal rate of convergence for $\|\hat{T}_n - T(f)\|_G$ is $n^{-(p - q)/(2p + d)}$. The result is applied to differentiation, fractional differentiation and deconvolution.
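As an illustrative sketch (not part of the article itself), the rate exponent $(p - q)/(2p + d)$ from the abstract can be evaluated for a few settings; with $q = 0$ (direct estimation of $f$) it reduces to the classical nonparametric regression exponent $p/(2p + d)$, while larger $q$ (e.g., differentiation) yields a slower rate. The function name `rate_exponent` is hypothetical and used only for illustration.

```python
# Illustrative sketch of the abstract's optimal-rate exponent (p - q) / (2p + d).
# `rate_exponent` is a hypothetical helper, not code from the paper.
from fractions import Fraction

def rate_exponent(p, q, d):
    """Return r such that the optimal rate of convergence is n**(-r)."""
    return Fraction(p - q, 2 * p + d)

# Direct regression (q = 0): recovers the classical exponent p / (2p + d).
print(rate_exponent(p=2, q=0, d=1))   # 2/5
# Estimating the first derivative (q = 1) gives a smaller exponent,
# i.e., a slower optimal rate.
print(rate_exponent(p=2, q=1, d=1))   # 1/5
```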
Published: 1993-06-14
Classification: Regression, inverse problems, method of presmoothing, optimal rate of convergence, 62G20, 62G05
@article{1176349138,
     author = {Koo, Ja-Yong},
     title = {Optimal Rates of Convergence for Nonparametric Statistical Inverse Problems},
     journal = {Ann. Statist.},
     volume = {21},
     number = {1},
     year = {1993},
     pages = {590-599},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1176349138}
}
Koo, Ja-Yong. Optimal Rates of Convergence for Nonparametric Statistical Inverse Problems. Ann. Statist., Volume 21 (1993), no. 1, pp. 590-599. http://gdmltest.u-ga.fr/item/1176349138/