The purpose of this paper is to prove Theorem 1, stated in Section 1 below, Theorem 2 of Section 6, and the results of Section 7. These theorems are the generalizations to vector chance variables of Theorems 4 and 5 and Section 6 of [1], and state that the sample distribution function (d.f.) is asymptotically minimax for the large class of weight functions of the type described below. The main difficulties are embodied in the proof of Theorem 1 (Sections 2 to 5), where the loss function is a function of the maximum difference between estimated and true d.f. The proof utilizes the results of [2] and is not a straightforward extension of the result of [1], because the sample d.f. is no longer "distribution free" (even in the limit); hence it is necessary to prove the uniformity of approach, to its limit, of the d.f. of the normalized maximum deviation between sample and population d.f.'s (for a certain class of d.f.'s). This uniformity essentially enables us to infer the existence of a d.f. that is approximately least favorable (to the statistician) uniformly in the sample size, by means of which the proof of the theorem is achieved. Theorem 2 (Section 6) considers loss functions of integral type, and more general loss functions are treated in Section 7.
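To fix ideas, the quantities referred to above may be written roughly as follows; the notation here is only illustrative, and the precise definitions and regularity conditions are those of Section 1. If $X_1, \dots, X_n$ are independent $m$-dimensional chance variables with common d.f. $F$, the sample d.f. and the normalized maximum deviation are
\[
F_n(x) \;=\; \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\{X_i \le x\}, \qquad
D_n \;=\; n^{1/2}\,\sup_{x}\,\lvert F_n(x) - F(x)\rvert ,
\]
where $X_i \le x$ is understood coordinatewise. The asymptotic minimax assertion of Theorem 1 is then, roughly, that for weight functions $W$ of the class described below,
\[
\lim_{n\to\infty}\;
\frac{\displaystyle\inf_{\phi_n}\;\sup_{F}\; E_F\, W\!\bigl(n^{1/2}\sup_x \lvert \phi_n(x) - F(x)\rvert\bigr)}
     {\displaystyle\sup_{F}\; E_F\, W\!\bigl(n^{1/2}\sup_x \lvert F_n(x) - F(x)\rvert\bigr)} \;=\; 1,
\]
the infimum being taken over estimators $\phi_n$ of $F$ based on $X_1, \dots, X_n$.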