Let $S$ be the number of successes in $n$ independent trials, and let $p_j$ denote the probability of success in the $j$th trial, $j = 1, 2, \cdots, n$ (Poisson trials). We consider the problem of finding the maximum and the minimum of $Eg(S),$ the expected value of a given real-valued function of $S,$ when $ES = np$ is fixed. It is well known that the maximum of the variance of $S$ is attained when $p_1 = p_2 = \cdots = p_n = p.$ This can be interpreted as showing that the variability in the number of successes is highest when the successes are equally probable (Bernoulli trials). This interpretation is further supported by the following two theorems, proved in this paper. If $b$ and $c$ are two integers, $0 \leqq b \leqq np \leqq c \leqq n,$ the probability $P(b \leqq S \leqq c)$ attains its minimum if and only if $p_1 = p_2 = \cdots = p_n = p,$ unless $b = 0$ and $c = n$ (Theorem 5, a corollary of Theorem 4, which gives the maximum and the minimum of $P(S \leqq c)$). If $g$ is a strictly convex function, $Eg(S)$ attains its maximum if and only if $p_1 = p_2 = \cdots = p_n = p$ (Theorem 3). These results are obtained with the help of two theorems concerning the extrema of the expected value of an arbitrary function $g(S)$ under the condition $ES = np.$ Theorem 1 gives necessary conditions for the maximum and the minimum of $Eg(S).$ Theorem 2 gives a partial characterization of the set of points at which an extremum is attained. Corollary 2.1 states that the maximum and the minimum are attained when $p_1, p_2, \cdots, p_n$ take on, at most, three different values, only one of which is distinct from 0 and 1. Applications of Theorems 3 and 5 to problems of estimation and testing are pointed out in Section 5.
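As a brief sketch of the well-known variance fact cited above (an illustration only, not a reproduction of the paper's argument): since the trials are independent, the variance of $S$ is $\sum_{j=1}^{n} p_j(1 - p_j) = np - \sum_{j=1}^{n} p_j^2$ when $ES = \sum_{j=1}^{n} p_j = np.$ By the Cauchy-Schwarz inequality, $\sum_{j=1}^{n} p_j^2 \geqq \bigl(\sum_{j=1}^{n} p_j\bigr)^2 / n = np^2,$ with equality if and only if $p_1 = p_2 = \cdots = p_n = p.$ Hence the variance of $S$ is at most $np(1 - p),$ the binomial variance, and the bound is attained exactly in the Bernoulli case. This fact is the special case $g(x) = (x - np)^2$ of Theorem 3, since $g$ is strictly convex and $Eg(S) = E(S - np)^2$ is the variance of $S$ when $ES = np.$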