A number of articles concerned with the problem of finding minimum variance unbiased estimates (MVUE) of certain "non-standard" parametric functions have appeared in the statistical literature. When a complete sufficient statistic (c.s.s.) exists, the Rao--Blackwell Theorem gives a method of constructing the MVUE of an estimable parametric function based on any unbiased estimate. A parametric function $\varphi(\theta)$ is called estimable if there exists an unbiased estimate of it. As an example, let $X_1, \cdots, X_n$ be independent and identically distributed (i.i.d.), $X_i$ being distributed according to a $p$-dimensional multivariate normal distribution with mean $\mu \in R^p$ ($R^p = p$-dimensional Euclidean space) and covariance matrix $\Sigma: p \times p$. This is denoted by $\mathscr{L}(X_i) = \mathscr{N}_p(\mu, \Sigma)$, $\mathscr{L}(X)$ reading "the law of $X$." All vectors in this paper are column vectors. For $p = 1$, Kolmogorov (1950) solved the problem of finding the MVUE of the parametric function $\varphi_t(\mu, \sigma^2) = P_{\mu,\sigma^2}(X_1 \geqq t)$, $\mu$ and $\sigma^2$ both unknown. Kolmogorov's method, which was to calculate the conditional distribution of $X_1$ given the c.s.s. and apply the Rao--Blackwell Theorem to the indicator function $I_{\lbrack t,\infty)}(X_1)$, was sufficiently general to be applicable to other estimable parametric functions for this family of distributions. Later, Lieberman and Resnikoff (1955) gave an independent solution to this problem using the same method. Barton (1961) and A. P. Basu (1964) solved this and other problems using the same approach.

An alternative method for finding MVUE's, when the distribution of the c.s.s. is given, is the transform (Laplace, Mellin, etc.) method. When applicable, this method does not require an initial unbiased estimate. The transform method was used by Tate (1959) for distributions involving location and scale parameters and by Washio, Morimoto and Ikeda (1956) for the one-parameter Koopman--Darmois family. Olkin and Pratt (1958) also determined MVUE's of certain correlation coefficients using the Laplace transform. Neyman and Scott (1960) developed a third method, which they term "the expansion method," for producing the MVUE of certain parametric functions. Their applications were restricted to univariate normal distributions. An obvious alternative to the above methods is to exhibit a function of the c.s.s. and verify that it is unbiased. This method was used by Ghurye and Olkin (1969) to estimate density functions of the multivariate normal and the Wishart distribution under various parametric assumptions. As it is not constructive, this method may be hard to apply.

E. Lehmann (unpublished lecture notes prior to 1964) and Sathe and Varde (1969) used a theorem due to D. Basu (1955) to greatly simplify the application of the Rao--Blackwell Theorem. Our Theorem 2.1, which permits other (including multivariate) applications, is a generalization of their approach. Theorem 2.1 is applicable when a given unbiased estimate may be written as a function of the c.s.s. and an ancillary statistic; an ancillary statistic is one whose distribution does not depend on the parameter. (A schematic illustration of this device, in the univariate example above, is sketched at the end of this section.) The preponderance of applications in all the above references has been to families of distributions which are invariant under a group of transformations. These applications are in the domain of our Theorem 2.2, which gives conditions under which a group of transformations may be used to represent the data as a function of a c.s.s.
and an ancillary statistic determined by the group structure and the c.s.s. In most cases, this ancillary statistic will be a maximal invariant statistic. The group structure essentially provides an easy method for representing the conditional distribution of the data given the c.s.s. in terms of the marginal distribution of the ancillary statistic. This greatly simplifies the application of the Rao--Blackwell Theorem to a given unbiased estimate.

Section 3 illustrates the application of Theorem 2.2. In Example 1, the $\mathscr{N}_p(\mu, \Sigma)$ distribution is considered, $\mu$ and $\Sigma$ both unknown. The distribution of a maximal invariant is characterized and used to represent, in integral form, the MVUE of any estimable parametric function. Application is given to set probabilities and to estimating the $\mathscr{N}_p(\mu, \Sigma)$ density. Example 2 is concerned with an application to $U$-statistics.
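To indicate why the representation underlying Theorem 2.1 is useful, the following sketch works out the univariate ($p = 1$) example of Kolmogorov; the statistics $T$, $S$ and $W$ below are illustrative notation only, not that of the theorem. Take $T = (\bar{X}, S)$ with $\bar{X} = n^{-1} \sum_{i=1}^n X_i$ and $S^2 = \sum_{i=1}^n (X_i - \bar{X})^2$ (with $n \geqq 2$, so that $S > 0$ a.s.), so that $T$ is a c.s.s. for $(\mu, \sigma^2)$, and put $W = (X_1 - \bar{X})/S$. Then $W$ is ancillary and, by Basu's theorem, independent of $T$; moreover $I_{\lbrack t,\infty)}(X_1) = I\{W \geqq (t - \bar{X})/S\}$, so the Rao--Blackwell step reduces to a marginal computation:
$$E\bigl[I_{\lbrack t,\infty)}(X_1) \mid T = (\bar{x}, s)\bigr] = P\Bigl(W \geqq \frac{t - \bar{x}}{s}\Bigr),$$
a probability free of $(\mu, \sigma^2)$ obtainable from the known distribution of $W$. Evaluated at the observed $(\bar{X}, S)$, this conditional expectation is an unbiased function of the c.s.s. and hence, by the Lehmann--Scheffé Theorem, the MVUE of $\varphi_t(\mu, \sigma^2)$. Theorem 2.2 provides the analogous representation when the family is invariant under a group of transformations, with the ancillary statistic supplied by a maximal invariant.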