The Monte Carlo expectation maximization (MCEM) algorithm is a versatile tool for inference in incomplete-data models, especially when used in combination with Markov chain Monte Carlo simulation methods. In this contribution, the almost-sure convergence of the MCEM algorithm is established. It is shown, using uniform versions of ergodic theorems for Markov chains, that MCEM converges under weak conditions on the simulation kernel. Practical illustrations are presented, using a hybrid random walk Metropolis-Hastings sampler and an independence sampler. The rate of convergence is studied, showing the impact of the simulation schedule on the fluctuation of the parameter estimate at convergence. A novel averaging procedure is then proposed to reduce the simulation variance and increase the rate of convergence.
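For concreteness, a generic MCEM iteration can be sketched as follows; the notation here is illustrative and not necessarily that of the paper. At iteration $m$, given the current estimate $\theta_m$, one draws $N_m$ samples $z^{(1)},\dots,z^{(N_m)}$ from a Markov chain with stationary distribution $p(z \mid x; \theta_m)$ and updates
\[
\hat{Q}_m(\theta) \;=\; \frac{1}{N_m} \sum_{j=1}^{N_m} \log f\bigl(x, z^{(j)}; \theta\bigr),
\qquad
\theta_{m+1} \;=\; \operatorname*{arg\,max}_{\theta}\, \hat{Q}_m(\theta),
\]
where the simulation schedule $(N_m)_{m \ge 1}$ controls the Monte Carlo error injected at each step. Averaging schemes of Polyak type, for instance $\bar{\theta}_m = \frac{1}{m}\sum_{k=1}^{m} \theta_k$, illustrate the general idea of damping this simulation variance, although the specific averaging procedure proposed in the paper may differ.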