The so-called ϕ-divergence is an important characteristic describing "dissimilarity" of two probability distributions. Many traditional measures of separation used in mathematical statistics and information theory, some of which are mentioned in the note, correspond to particular choices of this divergence. An upper bound on a ϕ-divergence between two probability distributions is derived when the likelihood ratio is bounded. The usefulness of this sharp bound is illustrated by several examples of familiar ϕ-divergences. An extension of this inequality to ϕ-divergences between a finite number of probability distributions with pairwise bounded likelihood ratios is also given.
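For orientation, the ϕ-divergence mentioned in the abstract is standardly defined as follows; this is the usual textbook convention (conventions differ by constant factors across references), not a formula quoted from the paper itself. For a convex function ϕ with ϕ(1) = 0, the ϕ-divergence of a distribution P from a distribution Q is

```latex
D_{\phi}(P, Q) \;=\; \int \phi\!\left(\frac{dP}{dQ}\right) dQ .
```

Familiar special cases arise from particular choices of ϕ: the Kullback–Leibler divergence from ϕ(u) = u log u, the total variation distance from ϕ(u) = |u − 1|/2, the squared Hellinger distance from ϕ(u) = (√u − 1)², and the χ²-divergence from ϕ(u) = (u − 1)².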
@article{bwmeta1.element.bwnjournal-article-zmv24i4p415bwm, author = {Andrew Rukhin}, title = {Information-type divergence when the likelihood ratios are bounded}, journal = {Applicationes Mathematicae}, volume = {24}, year = {1997}, pages = {415-423}, zbl = {0893.60008}, language = {en}, url = {http://dml.mathdoc.fr/item/bwmeta1.element.bwnjournal-article-zmv24i4p415bwm} }
Rukhin, Andrew. Information-type divergence when the likelihood ratios are bounded. Applicationes Mathematicae, Tome 24 (1997) pp. 415-423. http://gdmltest.u-ga.fr/item/bwmeta1.element.bwnjournal-article-zmv24i4p415bwm/
[1] D. A. Bloch and L. E. Moses, Nonoptimally weighted least squares, Amer. Statist. 42 (1988), 50-53.
[2] T. M. Cover, M. A. Freedman and M. E. Hellman, Optimal finite memory learning algorithms for the finite sample problem, Information and Control 30 (1976), 49-85. | Zbl 0332.62017
[3] T. M. Cover and J. A. Thomas, Elements of Information Theory, Wiley, New York, 1991. | Zbl 0762.94001
[4] L. Devroye, Non-Uniform Random Variate Generation, Springer, New York, 1986.
[5] G. S. Fishman, Monte Carlo: Concepts, Algorithms and Applications, Springer, New York, 1996.
[6] L. Györfi and T. Nemetz, f-dissimilarity: A generalization of the affinity of several distributions, Ann. Inst. Statist. Math. 30 (1978), 105-113. | Zbl 0453.62014
[7] G. Pólya and G. Szegő, Problems and Theorems in Analysis. Volume 1: Series, Integral Calculus, Theory of Functions, Springer, New York, 1972. | Zbl 0236.00003
[8] A. L. Rukhin, Lower bound on the error probability for families with bounded likelihood ratios, Proc. Amer. Math. Soc. 119 (1993), 1307-1314. | Zbl 0816.62005
[9] A. L. Rukhin, Recursive testing of multiple hypotheses: Consistency and efficiency of the Bayes rule, Ann. Statist. 22 (1994), 616-633. | Zbl 0815.62009
[10] A. L. Rukhin, Change-point estimation: linear statistics and asymptotic Bayes risk, Math. Methods Statist. 5 (1996), 412-431. | Zbl 0884.62028
[11] J. W. Tukey, Approximate weights, Ann. Math. Statist. 19 (1948), 91-92. | Zbl 0041.25901
[12] I. Vajda, Theory of Statistical Inference and Information, Kluwer, Dordrecht, 1989.