We consider the model
y = Xθ∗ + ξ,
Z = X + Ξ,
where the random vector y ∈ ℝn and the random n × p matrix Z are observed, the n × p matrix X is unknown, Ξ is an n × p random noise matrix, ξ ∈ ℝn is a noise independent of Ξ, and θ∗ is a vector of unknown parameters to be estimated. The matrix uncertainty lies in the fact that X is observed with additive error. For dimensions p that can be much larger than the sample size n, we consider the estimation of sparse vectors θ∗. Under matrix uncertainty, the Lasso and Dantzig selector turn out to be extremely unstable in recovering the sparsity pattern (i.e., the set of nonzero components of θ∗), even if the noise level is very small. We suggest new estimators, called matrix uncertainty selectors (or, for short, MU-selectors), which are close to θ∗ in different norms and in the prediction risk if the restricted eigenvalue assumption on X is satisfied. We also show that, under somewhat stronger assumptions, these estimators correctly recover the sparsity pattern.
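The observation model above can be sketched in a short simulation. This is a minimal illustration, not part of the original text: the dimensions n, p, the sparsity level s, and the noise scales are all hypothetical choices made here for concreteness.

```python
# Hypothetical simulation of the observation model
#   y = X @ theta_star + xi,   Z = X + Xi,
# where only y and Z are observed. All numeric settings below
# (n, p, s, noise scales) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, p, s = 50, 200, 5                       # p much larger than n; theta_star is s-sparse

X = rng.standard_normal((n, p))            # unknown design matrix
theta_star = np.zeros(p)
theta_star[:s] = 1.0                       # s nonzero components (the sparsity pattern)

xi = 0.1 * rng.standard_normal(n)          # noise in the response, independent of Xi
Xi = 0.1 * rng.standard_normal((n, p))     # additive matrix noise on X

y = X @ theta_star + xi                    # observed response
Z = X + Xi                                 # observed noisy version of X

print(y.shape, Z.shape)                    # (50,) (50, 200)
```

Running a standard Lasso on (y, Z) in such a setup is exactly the situation the abstract describes: the estimator sees Z rather than X, which is the source of the instability in sparsity-pattern recovery.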