Let $\{X_n, n \geq 1\}$ be a sequence of independent, identically distributed random variables with mean $\mu$ and unknown variance $\sigma^2$. We want to estimate $\mu$ by $\bar{X}_n$ under the loss function $\sigma^{2\delta-2}(\bar{X}_n - \mu)^2 + \lambda n$, where $\delta > 0$ and $\lambda > 0$, and we study the limit $\lambda \rightarrow 0+$. For $n_\lambda = o(\lambda^{-1/2})$ and $n_\lambda(\log \lambda)^{-1} \rightarrow -\infty$ as $\lambda \rightarrow 0+$, set $T = \inf\{n \geq n_\lambda: n^{-1} \sum^n_{i=1} (X_i - \bar{X}_n)^2 + b_n \leq \lambda^{1/\delta} a_n\}$. If $a_n n^{-2/\delta} \rightarrow 1$ and $0 < b_n \rightarrow 0$ as $n \rightarrow \infty$, we prove that $T$ is asymptotically risk efficient; that is, as $\lambda \rightarrow 0+$, $E\left[(2\lambda^{1/2}\sigma^\delta)^{-1}\left(\sigma^{2\delta-2}(\bar{X}_T - \mu)^2 + \lambda T\right)\right] \rightarrow 1$. When the $X_n$'s are normal, the asymptotic risk efficiency of $T$ was established by Starr. By introducing the delay factor $n_\lambda$, we are able to drop the normality assumption on the $X_n$'s.
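The normalizing constant $2\lambda^{1/2}\sigma^\delta$ arises as follows: since $E(\bar{X}_n - \mu)^2 = \sigma^2/n$, the fixed-sample risk is $\sigma^{2\delta}/n + \lambda n$, minimized at $n^* = \lambda^{-1/2}\sigma^\delta$ with minimum value $2\lambda^{1/2}\sigma^\delta$, so asymptotic risk efficiency means the sequential risk matches the best fixed-sample risk. The following is a minimal Monte Carlo sketch of the stopping rule, not a definitive implementation; the concrete choices $a_n = n^{2/\delta}$, $b_n = 1/n$, the delay $n_\lambda = \lceil \lambda^{-1/4} \rceil$, and the $\mathrm{Exp}(1)$ data (non-normal, with $\mu = \sigma^2 = 1$) are illustrative assumptions that satisfy the stated conditions but are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def stopping_time(xs, lam, delta, n0):
    """T = inf{n >= n0 : n^{-1} sum_{i<=n} (X_i - Xbar_n)^2 + b_n <= lam^{1/delta} a_n},
    with the illustrative choices a_n = n^{2/delta} and b_n = 1/n."""
    s = s2 = 0.0
    for n, x in enumerate(xs, start=1):
        s += x
        s2 += x * x
        if n < n0:                                # enforce the delay factor n_lambda
            continue
        var_hat = s2 / n - (s / n) ** 2           # n^{-1} sum (X_i - Xbar_n)^2
        if var_hat + 1.0 / n <= lam ** (1.0 / delta) * n ** (2.0 / delta):
            return n
    return len(xs)                                # truncation safeguard

def normalized_risk(lam, delta=1.0, reps=2000):
    """Monte Carlo estimate of E[loss at T] / (2 lam^{1/2} sigma^delta) for Exp(1) data."""
    mu = sigma = 1.0                              # Exp(1): non-normal, mu = sigma^2 = 1
    n0 = max(2, int(np.ceil(lam ** -0.25)))       # n_lambda = o(lam^{-1/2}), grows faster than |log lam|
    horizon = int(10 * lam ** -0.5)               # well beyond n* = lam^{-1/2} sigma^delta
    losses = []
    for _ in range(reps):
        xs = rng.exponential(1.0, size=horizon)
        T = stopping_time(xs, lam, delta, n0)
        loss = sigma ** (2 * delta - 2) * (xs[:T].mean() - mu) ** 2 + lam * T
        losses.append(loss)
    return np.mean(losses) / (2 * np.sqrt(lam) * sigma ** delta)

for lam in (1e-2, 1e-3, 1e-4):
    print(f"lambda = {lam:g}: normalized risk ~ {normalized_risk(lam):.3f}")
```

As $\lambda$ decreases, the printed ratio should drift toward 1, in line with the asymptotic risk efficiency statement above.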