Let $T$ be a compact metric space and let $D = \{t_1, t_2, \cdots\}$ be a countable dense subset. We propose to show that if for $x \in C(T)$ we define $x_n(t) = E(x(t) \mid x(t_1), \cdots, x(t_n))$ (conditional expectation), then $E\|x_n - x\|^p \rightarrow 0$, where $\|\cdot\|$ denotes the $\sup$ norm and $p \geqq 1$ is such that $E\|x\|^p < \infty$. Furthermore, if we measure the distance between $x_n$ and $x$ by $\int_T |x_n(t) - x(t)|^2 \, d\mu(t)$ for some finite measure $\mu$ on $T$, then $x_n$ is (in the least-squares sense) the optimal prediction for $x$ given $x(t_1), \cdots, x(t_n)$. We also consider an optimization problem in this same probabilistic setting. Roughly stated, we consider how, given $x(t_1), \cdots, x(t_n)$, one should choose $t \in T$ so as to maximize $E(x(t))$. The existence of an optimal policy is proved. If we let $S(x)$ denote the supremum of $x$ over $T$ and let $v_n(x)$ denote $x(t)$, where $t$ is the point chosen in accordance with the optimal policy, then it is shown that $E|S(x) - v_n(x)| \rightarrow 0$ as $n \rightarrow \infty$. This last result is obtained under the assumption that $E\|x\| < \infty$.
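
As a concrete illustration of the predictor $x_n$ (a special case, not required for the general results), suppose $x$ is standard Brownian motion on $T = [0, 1]$ and write $t_{(1)} < t_{(2)} < \cdots < t_{(n)}$ for the first $n$ sample points arranged in increasing order. The Markov property and the Brownian bridge formula give, for $t_{(i)} \leqq t \leqq t_{(i+1)}$,
$$
x_n(t) = E\bigl(x(t) \mid x(t_1), \cdots, x(t_n)\bigr)
= x(t_{(i)}) + \frac{t - t_{(i)}}{t_{(i+1)} - t_{(i)}}\bigl(x(t_{(i+1)}) - x(t_{(i)})\bigr),
$$
so that $x_n$ linearly interpolates the observed values (and $x_n(t) = x(t_{(n)})$ for $t \geqq t_{(n)}$ by the martingale property). Since $D$ is dense, the mesh of the sample points tends to zero, and the uniform continuity of the Brownian path on $[0, 1]$ then gives $\|x_n - x\| \rightarrow 0$ almost surely in this case, which makes the $L^p$ convergence asserted above concrete.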