When using the bootstrap in the presence of measurement error, we must first estimate the target distribution function; we cannot resample directly, since we do not have a sample from the target distribution. These and other considerations motivate the development of estimators of distributions, and of related quantities such as moments and quantiles, in errors-in-variables settings. We show that such estimators have curious and unexpected properties. For example, if the distributions of the variable of interest, W, say, and of the observation error are both centered at zero, then the rate of convergence of an estimator of the distribution function of W can be slower at the origin than away from the origin. This is an intrinsic characteristic of the problem, not a quirk of particular estimators; the property holds even for optimal estimators.
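To fix ideas, here is a minimal sketch of one standard route to such an estimator; the notation ($Y_j$, $U_j$, $\varphi_U$, $K$, $h$) is introduced purely for illustration and is not taken from the discussion above. Suppose we observe contaminated data $Y_j = W_j + U_j$, $j = 1, \dots, n$, where the additive error $U_j$ is independent of $W_j$ and has known characteristic function $\varphi_U$ with $\varphi_U(t) \neq 0$ for all $t$. A deconvolution kernel estimator of the density of $W$, built from the empirical characteristic function $\hat\varphi_Y(t) = n^{-1} \sum_{j=1}^{n} e^{\mathrm{i} t Y_j}$, a kernel $K$ with characteristic function $\varphi_K$, and a bandwidth $h > 0$, is
\[
\hat f_W(w) = \frac{1}{2\pi} \int e^{-\mathrm{i} t w}\, \varphi_K(ht)\, \frac{\hat\varphi_Y(t)}{\varphi_U(t)}\, dt,
\]
and the corresponding estimator of the distribution function is $\hat F_W(w) = \int_{-\infty}^{w} \hat f_W(u)\, du$. Bootstrap resamples of $W$ would then be drawn from $\hat F_W$, rather than from the contaminated data directly, which is what the first sentence above refers to.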