Meta-analysis consists of quantitative methods for combining evidence from different studies about a particular issue. A frequent criticism of meta-analysis is that it may be based on a biased sample of all the studies that were done. In this paper, we use selection models, or weighted distributions, to deal with one source of bias, namely, the failure to report studies that do not yield statistically significant results. We apply selection models to two approaches that have been suggested for correcting the bias. The fail-safe sample size approach calculates the minimum number of unpublished studies showing nonsignificant results that must have been carried out in order to overturn the conclusion reached from the published studies. The maximum likelihood approach uses a weighted distribution to model the selection bias in the generation of the data and estimates various parameters of interest. We suggest the use of families of weight functions to model plausible biasing mechanisms and thereby study the sensitivity of inferences about effect sizes. Using an example, we show that the maximum likelihood approach has several advantages over the fail-safe sample size approach.
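
As a point of reference, the following is a standard way of writing a selection model in terms of a weight function; it is a generic sketch rather than the specific models developed in the paper. If an effect-size estimate $X$ has density $f(x;\theta)$ before selection, and a study reporting $x$ is observed with probability proportional to $w(x)$, then the reported estimates follow the weighted density
\[
  f_w(x;\theta) \;=\; \frac{w(x)\, f(x;\theta)}{\int w(u)\, f(u;\theta)\, du}.
\]
For instance, taking $w(x)$ to be the indicator that the test based on $x$ is significant at level $\alpha$ describes the extreme case in which only significant studies are reported; letting $w$ range over a family of weight functions allows the sensitivity of the maximum likelihood estimate of $\theta$ to the assumed selection mechanism to be examined.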