Let $Z^i=(Y^i,X_1^i,\dots,X_m^i)$, $i=1,\dots,n$, be independent and identically distributed random vectors, $Z^i \sim F,\; F \in {\cal F}$. It is desired to predict $Y$ by $\sum \beta_j X_j$, where $(\beta_1,\dots,\beta_m) \in B_n \subseteq \mathbb{R}^m$, under a prediction loss. Suppose that $m=n^\alpha$, $\alpha>1$, that is, there are many more explanatory variables than observations. We consider sets $B_n$ restricted by the maximal number of non-zero coefficients of their members, or by their $l_1$ radius. We study the following asymptotic question: how 'large' may the set $B_n$ be, so that it is still possible to select empirically a predictor whose risk under $F$ is close to that of the best predictor in the set? Sharp bounds for orders of magnitude are given under various assumptions on ${\cal F}$. The algorithmic complexity of the ensuing procedures is also studied. The main message of this paper and the implication of the orders derived is that, under various sparsity assumptions on the optimal predictor, there is 'asymptotically no harm' in introducing many more explanatory variables than observations. Furthermore, such practice can be beneficial in comparison with a procedure that screens in advance a small subset of explanatory variables. Another main result is that 'lasso' procedures, that is, optimization under $l_1$ constraints, could be efficient in finding optimal sparse predictors in high dimensions.
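As a minimal illustrative sketch (not taken from the paper), the following Python snippet mimics the setting described above: with $m=n^\alpha$, $\alpha>1$, a lasso fit (the penalized Lagrangian form of $l_1$-constrained least squares) is used to select a predictor empirically, and its out-of-sample prediction risk is compared with that of the best sparse predictor in the set. All numerical choices (n, alpha, the sparsity level, the penalty level) are hypothetical.

```python
# Illustrative sketch: empirical l1-constrained ("lasso") predictor selection
# when the number of explanatory variables m = n^alpha far exceeds the number
# of observations n. Parameter values are hypothetical, for demonstration only.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

n, alpha = 100, 1.5
m = int(n ** alpha)        # many more variables than observations
k = 5                      # number of non-zero coefficients of the best predictor

# Sparse "best" predictor: only k of the m coefficients are non-zero.
beta_best = np.zeros(m)
beta_best[:k] = rng.normal(size=k)

X = rng.normal(size=(n, m))
y = X @ beta_best + rng.normal(scale=0.5, size=n)

# Lasso solves the penalized form of the l1-constrained least-squares problem;
# the penalty level plays the role of the l1 radius of B_n.
fit = Lasso(alpha=0.1, max_iter=10_000).fit(X, y)

# Compare prediction risk of the empirically selected predictor with that of
# the best predictor in the set (here, the sparse coefficient vector itself).
X_new = rng.normal(size=(10_000, m))
y_new = X_new @ beta_best + rng.normal(scale=0.5, size=10_000)
risk_lasso = np.mean((y_new - fit.predict(X_new)) ** 2)
risk_best = np.mean((y_new - X_new @ beta_best) ** 2)
print(f"lasso risk: {risk_lasso:.3f}, best-in-set risk: {risk_best:.3f}")
```

Under this kind of sparsity, the gap between the two risks stays small even though $m$ grows polynomially faster than $n$, which is the sense in which overparametrization causes 'asymptotically no harm'.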