On the Sample Information About Parameter and Prediction
Ebrahimi, Nader ; Soofi, Ehsan S. ; Soyer, Refik
Statist. Sci., Volume 25 (2010) no. 1, pp. 348-367 / Harvested from Project Euclid
The Bayesian measure of sample information about the parameter, known as Lindley’s measure, is widely used in various problems such as developing prior distributions, models for the likelihood function, and optimal designs. The predictive information is defined similarly and used for model selection and optimal designs, though to a lesser extent. The parameter and predictive information measures are proper utility functions and have also been used in combination. Yet the relationship between the two measures, and the effects of conditional dependence among the observable quantities on the Bayesian information measures, remain unexplored. We address both issues. The relationship between the two information measures is explored through the information provided by the sample about the parameter and prediction jointly. The role of dependence is explored along with the interplay between the information measures, the prior, and the sampling design. For a conditionally independent sequence of observable quantities, decompositions of the joint information characterize Lindley’s measure as the sample information about the parameter and prediction jointly, with the predictive information as part of it. For the conditionally dependent case, the joint information about parameter and prediction exceeds Lindley’s measure by an amount due to the dependence. More specific results are shown for the normal linear models and a broad subfamily of the exponential family. Conditionally independent samples provide relatively little information for prediction, and the gap between the parameter and predictive information measures grows rapidly with the sample size. Three dependence structures are studied: the intraclass (IC) and serially correlated (SC) normal models, and order statistics. For the IC and SC models, the information about the mean parameter decreases and the predictive information increases with the correlation, but the joint information is not monotone and has a unique minimum. Compensating for the loss of parameter information due to dependence requires larger samples. For the order statistics, the joint information exceeds Lindley’s measure by an amount that does not depend on the prior or the model for the data, but it is not monotone in the sample size and has a unique maximum.
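For reference, the two measures contrasted above have standard information-theoretic definitions; the sketch below states them in common notation (theta the parameter, X_n = (X_1, ..., X_n) the sample, Y a future observable), and the closing illustration is a routine conjugate computation, not a result quoted from the paper. Lindley's measure is the mutual information between the parameter and the sample, i.e., the expected Kullback-Leibler divergence of the posterior from the prior:

    I(\theta; X_n) = E_{X_n}\!\left[ \int \pi(\theta \mid x_n) \log \frac{\pi(\theta \mid x_n)}{\pi(\theta)} \, d\theta \right],

and the predictive information is the analogous mutual information between a future observable and the sample:

    I(Y; X_n) = E_{X_n}\!\left[ \int p(y \mid x_n) \log \frac{p(y \mid x_n)}{p(y)} \, dy \right].

For instance, when X_1, ..., X_n given theta are i.i.d. N(\theta, \sigma^2) with prior \theta \sim N(\mu_0, \tau^2),

    I(\theta; X_n) = \frac{1}{2} \log\!\left( 1 + \frac{n \tau^2}{\sigma^2} \right),
    \qquad
    I(Y; X_n) = \frac{1}{2} \log \frac{\sigma^2 + \tau^2}{\sigma^2 + \sigma^2 \tau^2 / (\sigma^2 + n \tau^2)},

so the parameter information grows like (1/2) log n while the predictive information remains bounded by (1/2) \log(1 + \tau^2/\sigma^2), consistent with the widening gap between the two measures described in the abstract.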
Published: 2010-08-15
Classification: Bayesian predictive distribution, entropy, mutual information, optimal design, reference prior, intraclass correlation, serial correlation, order statistics
@article{1294167964,
     author = {Ebrahimi, Nader and Soofi, Ehsan S. and Soyer, Refik},
     title = {On the Sample Information About Parameter and Prediction},
     journal = {Statist. Sci.},
     volume = {25},
     number = {1},
     year = {2010},
     pages = {348--367},
     language = {en},
     url = {http://dml.mathdoc.fr/item/1294167964}
}
Ebrahimi, Nader; Soofi, Ehsan S.; Soyer, Refik. On the Sample Information About Parameter and Prediction. Statist. Sci., Volume 25 (2010) no. 1, pp. 348-367. http://gdmltest.u-ga.fr/item/1294167964/