Nearly unbiased variable selection under minimax concave penalty
Zhang, Cun-Hui
Ann. Statist., Tome 38 (2010) no. 1, p. 894-942 / Harvested from Project Euclid
We propose MC+, a fast, continuous, nearly unbiased and accurate method of penalized variable selection in high-dimensional linear regression. The LASSO is fast and continuous, but biased. The bias of the LASSO may prevent consistent variable selection. Subset selection is unbiased but computationally costly. The MC+ has two elements: a minimax concave penalty (MCP) and a penalized linear unbiased selection (PLUS) algorithm. The MCP provides the convexity of the penalized loss in sparse regions to the greatest extent given certain thresholds for variable selection and unbiasedness. The PLUS algorithm computes multiple exact local minimizers of a possibly nonconvex penalized loss function in a certain main branch of the graph of critical points of the penalized loss. Its output is a continuous piecewise linear path extending from the origin (infinite penalty) to a least squares solution (zero penalty). We prove that at a universal penalty level, the MC+ has high probability of matching the signs of the unknowns, and thus correct selection, without assuming the strong irrepresentable condition required by the LASSO. This selection consistency applies to the case of p≫n, and is proved to hold for exactly the MC+ solution among possibly many local minimizers. We prove that the MC+ attains certain minimax convergence rates in probability for the estimation of regression coefficients in ℓr balls. We use the SURE method to derive degrees of freedom and Cp-type risk estimates for general penalized LSE, including the LASSO and MC+ estimators, and prove their unbiasedness. Based on the estimated degrees of freedom, we propose an estimator of the noise level for proper choice of the penalty level. For full-rank designs and general sub-quadratic penalties, we provide necessary and sufficient conditions for the continuity of the penalized LSE.
Simulation results overwhelmingly support our claim of superior variable selection properties and demonstrate the computational efficiency of the proposed method.
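The abstract does not spell out the penalty itself; the following sketch uses the standard form of the MCP from the literature, together with the resulting univariate ("firm shrinkage") thresholding rule for an orthonormal design. Function names and parameters (`lam` for the penalty level λ, `gamma` for the concavity parameter γ > 1) are illustrative assumptions, not the paper's PLUS implementation.

```python
import numpy as np

def mcp_penalty(t, lam, gamma):
    """Standard MCP: rho(t) = lam*|t| - t^2/(2*gamma) for |t| <= gamma*lam,
    and the constant gamma*lam^2/2 beyond that (a flat tail, hence the
    near-unbiasedness of large coefficients)."""
    t = np.abs(np.asarray(t, dtype=float))
    return np.where(t <= gamma * lam,
                    lam * t - t**2 / (2 * gamma),
                    gamma * lam**2 / 2)

def mcp_threshold(z, lam, gamma):
    """Minimizer of (z - b)^2/2 + mcp_penalty(b) in b, i.e. the univariate
    penalized LSE for an orthonormal design, assuming gamma > 1."""
    z = np.asarray(z, dtype=float)
    soft = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)  # soft thresholding
    return np.where(np.abs(z) <= gamma * lam,
                    soft * gamma / (gamma - 1),  # rescaled soft threshold
                    z)                           # unbiased beyond gamma*lam
```

For |z| > γλ the estimate equals z exactly, unlike the LASSO's soft thresholding, which shrinks every nonzero estimate by λ; as γ → ∞ the rule recovers the LASSO, and as γ → 1+ it approaches hard thresholding.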
Published: 2010-04-15
Classification:  Variable selection,  model selection,  penalized estimation,  least squares,  correct selection,  minimax,  unbiasedness,  mean squared error,  nonconvex minimization,  risk estimation,  degrees of freedom,  selection consistency,  sign consistency,  62J05,  62J07,  62H12,  62H25
@article{1266586618,
     author = {Zhang, Cun-Hui},
     title = {Nearly unbiased variable selection under minimax concave penalty},
     journal = {Ann. Statist.},
     volume = {38},
     number = {1},
     year = {2010},
     pages = {894-942},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1266586618}
}
Zhang, Cun-Hui. Nearly unbiased variable selection under minimax concave penalty. Ann. Statist., Tome 38 (2010) no. 1, pp. 894-942. http://gdmltest.u-ga.fr/item/1266586618/