Countable state and action Markov decision processes are investigated, the objective being to maximize expected discounted reward. Well-known results of Maitra and Blackwell are generalized, their assumption of bounded rewards being replaced by weaker conditions, the most important of which requires that the expected reward to be received at time $n + 1$ minus the actual reward received at time $n$, viewed as a function of the state at time $n$, the action taken at time $n$, and the decision rule to be followed at time $n + 1$, be bounded. It is shown that there exists an $\varepsilon$-optimal stationary policy for every $\varepsilon > 0$ and that there exists an optimal stationary policy in the finite action case.
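One plausible formalization of this key condition, in notation not taken from the original (states $i, j$; actions $a$; one-step reward $r(i, a)$; transition law $p(j \mid i, a)$; decision rules $f$ mapping states to actions), is

$$\sup_{i,\, a,\, f}\ \Bigl|\, \sum_{j} p(j \mid i, a)\, r\bigl(j, f(j)\bigr) \;-\; r(i, a) \,\Bigr| \;<\; \infty,$$

that is, the one-step drift in expected reward is uniformly bounded over states, actions, and decision rules, even though the reward function itself need not be bounded.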