We consider the problem of estimating the distance from an unknown
signal, observed in a white-noise model, to convex cones of
positive/monotone/convex functions. We show that, when the unknown function
belongs to a Hölder class, the risk of estimating the $L_r$-distance, $1
\leq r < \infty$, from the signal to a cone is essentially the same (up to a
logarithmic factor) as that of estimating the signal itself. The same risk
bounds hold for testing positivity, monotonicity, and convexity of the
unknown signal.
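For concreteness, a minimal sketch of the setting in standard notation (the model scaling and the distance functional below are our assumptions; the abstract itself fixes neither):
\[
dY(t) = f(t)\,dt + \varepsilon\, dW(t), \qquad t \in [0,1],
\]
where $W$ is a standard Wiener process and $\varepsilon > 0$ is the noise level, and the target of estimation is
\[
d_r(f, \mathcal{K}) = \inf_{g \in \mathcal{K}} \|f - g\|_{L_r[0,1]}, \qquad 1 \leq r < \infty,
\]
for $\mathcal{K}$ one of the cones of positive, monotone, or convex functions.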
We also provide an estimate of the distance to the cone of
positive functions whose risk is smaller, by a logarithmic factor, than
that of the “plug-in” estimate.
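As a hedged illustration of the “plug-in” benchmark (our notation; not spelled out in the text): for the cone $\mathcal{K}_+$ of positive functions the distance admits the closed form $d_r(f, \mathcal{K}_+) = \|f_-\|_{L_r}$, where $f_- = \max(-f, 0)$ is the negative part of $f$, since the $L_r$-projection of $f$ onto $\mathcal{K}_+$ is $f_+ = \max(f, 0)$. The plug-in estimate then substitutes a signal estimate $\hat f$ for $f$:
\[
\widehat{d}_{\mathrm{plug}} = d_r\bigl(\hat f, \mathcal{K}_+\bigr)
= \Bigl( \int_0^1 \bigl( \max(-\hat f(t),\, 0) \bigr)^r \, dt \Bigr)^{1/r}.
\]
The estimate referred to above improves on this benchmark by a logarithmic factor.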