Sakrison showed that, under certain conditions, stochastic approximation procedures yield asymptotically efficient estimators when used to minimize the Kullback-Leibler information. Further results in this direction were obtained by Nevel'son and Has'minskij. This paper first gives alternative conditions for convergence and, second, shows that, under weaker conditions, asymptotic optimality is obtained by a modified stochastic approximation procedure. The modified procedure uses a consistent estimator which leads the approximating sequence to a proper local minimum of the Kullback-Leibler information. The conditions under which the procedure is asymptotically optimal are close to, or weaker than, those for asymptotic optimality of one-step-correction maximum likelihood methods.
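For orientation, the type of recursion at issue can be sketched as follows; the notation is illustrative and not the paper's own. Given independent observations $X_1, X_2, \ldots$ from a density $f(\cdot\,;\theta_0)$, recursions of the Sakrison type take the form
\[
\theta_{n+1} \;=\; \theta_n \;+\; \frac{1}{n}\, I(\theta_n)^{-1}\, \nabla_\theta \log f(X_{n+1};\,\theta_n),
\]
where $I(\theta)$ denotes the Fisher information matrix, so that, on average, each step moves against the gradient of the Kullback-Leibler information $K(\theta) = \mathbb{E}_{\theta_0}\log\bigl[f(X;\theta_0)/f(X;\theta)\bigr]$. The modification described above may be pictured as such a recursion supplemented by an auxiliary consistent estimator that keeps $\theta_n$ in a neighborhood of a proper local minimum of $K$.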