Let $\varepsilon(t, \mathbf{x})$ denote a stationary Gaussian process with $E\varepsilon(t, \mathbf{x}) = 0$, $E\varepsilon^2(t, \mathbf{x}) = \sigma^2$ for all $t \in \mathscr{J}$, $\mathbf{x} \in \mathscr{X}$, and $E\varepsilon(t_1, \mathbf{x}_1)\varepsilon(t_2, \mathbf{x}_2) = 0$ for all $t_1 \neq t_2$ or $\mathbf{x}_1 \neq \mathbf{x}_2$. Let $\mathscr{J}$ be the set of integers and $\mathscr{X}$ a subset of the $r$-dimensional Euclidean space $R^r$. Given a coordinate system in $R^r$ and a time origin, observe $y(t, \mathbf{x}) = s(t, \mathbf{x}) + \varepsilon(t, \mathbf{x})$, where $s(t, \mathbf{x}) = \sum_{j=0}^{T-1} A(\omega_j) \exp\{i[\omega_j t - \boldsymbol{\kappa}(\omega_j)'\mathbf{x}]\}$, $\omega_j = 2\pi j/T$, $j = 0, 1, \cdots, T - 1$, and $\boldsymbol{\kappa}(\omega_j)$ is a vector of parameters in $R^r$. If $\boldsymbol{\kappa}(\omega) = (\omega/\nu)\mathbf{e}$, where $\mathbf{e}'\mathbf{e} = 1$, then $s(t, \mathbf{x})$ is the $r$-dimensional generalization of a (discrete-time) plane wave propagating with phase velocity $\nu$ in a direction parallel to $\mathbf{e}$. For a finite time let the process $y(t, \mathbf{x})$ be simultaneously observed at each $\mathbf{x} \in \mathscr{X} = S_1 \times S_2 \times \cdots \times S_r$, $S_j = \{1, 2, \cdots, n\}$. The maximum likelihood estimators $\hat{A}(\omega_j)$ and $\hat{\boldsymbol{\kappa}}(\omega_j)$ of $A(\omega_j)$ and $\boldsymbol{\kappa}(\omega_j)$, respectively, have a joint limiting normal distribution in which the appropriately normalized estimators of the $r$ components of $\boldsymbol{\kappa}(\omega_j)$ are mutually independent, for each $j = 1, \cdots, T - 1$. The distributions of the estimators for different $\omega_j$'s are mutually independent. The analysis is generalized to the case where $s(t, \mathbf{x})$ is a sum of plane waves with separation between the phase velocities.
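As an informal illustration of the model (not part of the original analysis), the following Python sketch simulates $y(t, \mathbf{x}) = s(t, \mathbf{x}) + \varepsilon(t, \mathbf{x})$ for a single plane wave in $r = 2$ spatial dimensions and recovers $\boldsymbol{\kappa}(\omega_j)$ by maximizing the space-time periodogram over a wavenumber grid, which corresponds to the maximum likelihood criterion under the stated white Gaussian noise model. All numerical settings ($T$, $n$, $\nu$, $\mathbf{e}$, $\sigma$, grid resolution) are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout): simulate the observed field
# y(t, x) = s(t, x) + eps(t, x) for one complex plane wave in r = 2 dimensions on the
# sensor grid {1,...,n}^2, then estimate kappa(omega_j) by maximizing the magnitude of
# the space-time discrete Fourier transform over a wavenumber grid.

import numpy as np

rng = np.random.default_rng(0)

T, n = 64, 8                        # time samples and sensors per spatial axis
j = 5                               # index of the excited Fourier frequency
omega_j = 2.0 * np.pi * j / T
nu = 2.0                            # phase velocity
e = np.array([3.0, 4.0]) / 5.0      # unit direction vector, e'e = 1
kappa = (omega_j / nu) * e          # true wavenumber vector kappa(omega_j)
A, sigma = 1.0, 0.5                 # amplitude and noise standard deviation

# Sensor coordinates x in {1,...,n}^2 and time points t = 0,...,T-1.
grid = np.array([(x1, x2) for x1 in range(1, n + 1) for x2 in range(1, n + 1)])
t = np.arange(T)

# s(t, x) = A exp{i[omega_j t - kappa' x]}, observed with additive complex white noise.
phase = omega_j * t[:, None] - grid @ kappa          # shape (T, n*n)
s = A * np.exp(1j * phase)
eps = sigma * (rng.standard_normal(s.shape) + 1j * rng.standard_normal(s.shape)) / np.sqrt(2)
y = s + eps

# Temporal DFT at omega_j; for the signal part, Y(x) is approximately A exp(-i kappa' x).
Y = (y * np.exp(-1j * omega_j * t)[:, None]).sum(axis=0) / T   # shape (n*n,)

# Grid search: the estimate of kappa(omega_j) maximizes |sum_x Y(x) exp(i k' x)|.
k_axis = np.linspace(0, np.pi, 200)
best, k_hat = -np.inf, None
for k1 in k_axis:
    for k2 in k_axis:
        val = np.abs(Y @ np.exp(1j * (grid @ np.array([k1, k2]))))
        if val > best:
            best, k_hat = val, (k1, k2)

print("true kappa:     ", np.round(kappa, 3))
print("estimated kappa:", np.round(k_hat, 3))
```

The estimate is accurate only up to the resolution of the wavenumber search grid; in practice the grid maximizer would be refined by a local numerical optimization of the same criterion.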