Let $(X_1, X_2, \cdots)$ be a sequence of random variables and let the p.d.f. of $\mathbf{X}_n = (X_1, \cdots, X_n)$ be $p(\mathbf{x}_n, \theta)$, where $\theta = (\theta_1, \theta_2)$. An estimating equation rule for $\theta_1$ is a sequence of functions $g(x_1, \theta_1), g(x_1, x_2, \theta_1), \cdots$. If the random sample size $N$ equals $n$, then $\theta_1$ is estimated by solving the estimating equation $g(\mathbf{X}_n, \theta_1) = 0$. In this paper, optimum estimation rules are obtained and, in particular, sufficient conditions for the optimality of the maximum conditional likelihood estimation rule are given. In addition, Bhapkar's concept of information in an estimating equation is used to discuss stopping criteria.
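
For illustration (under an assumption not stated above), suppose there is a statistic $T_n = T(\mathbf{X}_n)$ such that the conditional density of $\mathbf{X}_n$ given $T_n = t_n$ depends on $\theta$ only through $\theta_1$. The maximum conditional likelihood estimation rule then corresponds to the sequence of conditional score functions
$$
g(\mathbf{x}_n, \theta_1) = \frac{\partial}{\partial \theta_1} \log p(\mathbf{x}_n \mid t_n; \theta_1), \qquad n = 1, 2, \cdots,
$$
so that when $N = n$ the estimate of $\theta_1$ is a root of the conditional score equation $g(\mathbf{X}_n, \theta_1) = 0$.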