We consider methods for kernel regression when the explanatory
and/or response variables are adjusted prior to substitution into a conven-
tional estimator.This “data-sharpening” procedure is designed to
preserve the advantages of relatively simple, low-order techniques, for
example, their robustness against design sparsity problems, yet attain the
sorts of bias reductions that are commonly associated only with high-order
methods. We consider Nadaraya–Watson and local-linear methods in detail,
although data sharpening is applicable more widely. One approach in particular
is found to give excellent performance. It involves adjusting both the
explanatory and the response variables prior to substitution into a local
linear estimator. The change to the explanatory variables enhances resistance
of the estimator to design sparsity, by increasing the density of design points
in places where the original density had been low. When combined with
adjustment of the response variables, it produces a reduction in bias by an
order of magnitude. Moreover, these advantages are available in multivariate
settings. The data-sharpening step is simple to implement, since it is
explicitly defined. It does not involve functional inversion, solution of
equations or use of pilot bandwidths.
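The sharpening idea can be illustrated with a minimal sketch. The example below is not the authors' exact transform; it shows the generic pattern of adjusting the response variables before substitution into a conventional estimator, here a Nadaraya–Watson estimator with a Gaussian kernel and a hypothetical response-sharpening step of the form Y_i → 2·Y_i − m̂(X_i) (re-using the same bandwidth, so no pilot bandwidth is needed):

```python
import numpy as np

def nw_estimate(x_eval, X, Y, h):
    """Nadaraya-Watson estimate at points x_eval, Gaussian kernel, bandwidth h."""
    W = np.exp(-0.5 * ((x_eval[:, None] - X[None, :]) / h) ** 2)
    return (W @ Y) / W.sum(axis=1)

def sharpened_nw(x_eval, X, Y, h):
    """Illustrative data sharpening (not the paper's exact rule):
    replace each response Y_i by 2*Y_i - m_hat(X_i), an explicitly
    defined adjustment, then feed the sharpened responses back into
    the same low-order estimator."""
    Y_sharp = 2.0 * Y - nw_estimate(X, X, Y, h)
    return nw_estimate(x_eval, X, Y_sharp, h)

# Example: noiseless data from a smooth curve on a uniform design.
X = np.linspace(0.0, 1.0, 100)
Y = np.sin(2.0 * np.pi * X)
fit = sharpened_nw(np.array([0.25, 0.5, 0.75]), X, Y, h=0.05)
```

The explicit definition of the sharpened responses is the point: no functional inversion or equation solving is required, only a second pass through the estimator.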
@article{1015957396,
author = {Choi, Edwin and Hall, Peter and Rousson, Valentin},
title = {Data sharpening methods for bias reduction in nonparametric
regression},
journal = {Ann. Statist.},
volume = {28},
number = {3},
year = {2000},
pages = {1339--1355},
language = {en},
url = {http://dml.mathdoc.fr/item/1015957396}
}