In Part A (Sections 1-5) the canonical forms of experiments concerning two simple hypotheses, and their partial ordering, are discussed. It is proved that every such experiment is a mixture (in a probability sense) of simple experiments whose sample spaces contain only two points. In Part B (Sections 6-8) some general aspects of inference and decision problems are discussed in the usual theoretical framework, in which the overall mathematical model of an experiment is the frame of reference for all interpretations of outcomes. In Part C (Sections 9-16), attention is directed to that traditional function and basic problem of mathematical statistics, called here "informative inference," whose object is to recognize and report in appropriate objective terms those features of experimental outcomes which constitute statistical evidence relevant to hypotheses (or parameter values) of interest. The mathematical structure of statistical evidence and its qualitative and quantitative properties are analyzed by application of (1) the mathematical results of Part A, which show that conditional experimental frames of reference (in the mixture sense) exist and are recognizable much more widely than has previously been realized; and (2) a single extra-mathematical proposition which many statisticians seem inclined to accept as appropriate for purposes of informative inference, a "principle of conditionality" which asserts that any outcome of any experiment which is a mixture of component experiments should be interpreted in the same way as if it were an outcome of just a corresponding component experiment (with the overall mixture structure otherwise ignored). This analysis establishes the likelihood function as the appropriate basis from which statistical inferences can be made directly, without other reference to the structure of an experiment.
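The role of the principle of conditionality can be illustrated numerically. The following sketch (a hypothetical two-component mixture, with distributions invented for illustration and not taken from the paper) shows why likelihood-based interpretation is insensitive to the mixture structure: the mixing probability cancels from every likelihood ratio, so the outcome carries the same evidential content in the mixture frame of reference as in the component experiment alone.

```python
# Hypothetical mixture experiment: a fair coin chooses component experiment
# E1 or E2, and the chosen component is then performed. Under two simple
# hypotheses H1 and H2, each component outcome has the distributions below
# (all numbers are illustrative assumptions, not values from the paper).
p_e1 = {"x": (0.8, 0.3), "y": (0.2, 0.7)}  # E1: (P(outcome|H1), P(outcome|H2))
p_e2 = {"x": (0.6, 0.5), "y": (0.4, 0.5)}  # E2: (P(outcome|H1), P(outcome|H2))
components = {"E1": p_e1, "E2": p_e2}

def mixture_likelihoods(component, outcome, mix_prob=0.5):
    """Likelihoods of the compound outcome (component, outcome) under H1, H2
    when the full mixture experiment is taken as the frame of reference."""
    p1, p2 = components[component][outcome]
    return mix_prob * p1, mix_prob * p2

# Likelihood ratio computed in the overall mixture frame of reference ...
l1, l2 = mixture_likelihoods("E1", "x")
lr_mixture = l1 / l2

# ... equals the ratio computed in the component experiment alone: the
# mixing probability cancels, which is what allows the mixture structure
# to be ignored for purposes of informative inference.
lr_component = p_e1["x"][0] / p_e1["x"][1]
assert abs(lr_mixture - lr_component) < 1e-12
```

The cancellation holds for any mixing probability, so nothing in the example depends on the coin being fair.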
For the numerical values of the likelihood function, this analysis provides direct interpretations in terms of probabilities of errors. These probabilities admit frequency interpretations of the usual kind, but they are not in general defined with reference to the specific experiment from which an outcome is obtained: they express intrinsic objective properties of the likelihood function itself, which this analysis shows to be appropriately relevant and directly useful for purposes of informative inference. The relations of this analysis of problems of informative inference to problems of testing statistical hypotheses, decision-making, conclusions, and Bayesian treatments of inference problems are discussed briefly. Generalizations of these mathematical results and their interpretations for problems involving more than two simple hypotheses will be given in a following paper.
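One such interpretation can be sketched concretely. The correspondence coded below (stated here as an assumption for illustration, in the spirit of the "intrinsic" interpretation described above) matches an observed likelihood ratio lam >= 1 in favor of one hypothesis with a symmetric simple two-point experiment whose two error probabilities are each 1/(1 + lam); the frequency interpretation then attaches to the likelihood-ratio value itself rather than to the particular experiment that produced it.

```python
# Illustrative sketch (an assumed correspondence, not a formula quoted from
# this abstract): an outcome with likelihood ratio lam >= 1 in favor of H1
# is matched with the corresponding outcome of a symmetric simple binary
# experiment whose two error probabilities are both 1 / (1 + lam).
def intrinsic_error_probability(lam):
    """Error probability of the matching symmetric two-point experiment."""
    if lam < 1:
        raise ValueError("orient the ratio so that lam >= 1")
    return 1.0 / (1.0 + lam)

# A likelihood ratio of 19 corresponds to error probabilities of 0.05,
# and a ratio of 1 (no evidence either way) to error probabilities of 0.5.
assert abs(intrinsic_error_probability(19.0) - 0.05) < 1e-12
assert abs(intrinsic_error_probability(1.0) - 0.5) < 1e-12
```

Note that these error probabilities are properties of the likelihood-ratio value alone: two outcomes from entirely different experiments with the same ratio receive the same interpretation.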