Title :
Finding the Maximizers of the Information Divergence From an Exponential Family
Author_Institution :
Max Planck Inst. for Math. in the Sci., Leipzig, Germany
Date :
6/1/2011 12:00:00 AM
Abstract :
This paper investigates the maximizers of the information divergence from an exponential family ℰ. It is shown that the rI-projection of a maximizer P to ℰ is a convex combination of P and a probability measure P⁻ with disjoint support and the same value of the sufficient statistics A. This observation can be used to transform the original problem of maximizing D(·‖ℰ) over the set of all probability measures into the maximization of a function D̅r over a convex subset of ker A. The global maximizers of the two problems correspond to each other, and finding all local maximizers of D̅r yields all local maximizers of D(·‖ℰ). The paper also proposes two algorithms for finding the maximizers of D̅r and applies them to two examples in which the maximizers of D(·‖ℰ) were not previously known.
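As a concrete toy illustration of the structure described in the abstract (an assumed example, not taken from the paper): for the independence model ℰ on two binary variables, with sufficient statistics A given by the two marginals, the rI-projection of P onto ℰ is the product of its marginals, so D(P‖ℰ) is the mutual information of P. The known maximizer, uniform on the diagonal, exhibits exactly the claimed decomposition of its rI-projection into a convex combination of P and a measure P⁻ with disjoint support and the same marginals.

```python
import numpy as np

# Toy illustration (assumed example): E = independence model on {0,1}^2,
# sufficient statistics A = the two marginal distributions.

def ri_projection(p):
    """rI-projection onto the independence model: product of marginals."""
    return np.outer(p.sum(axis=1), p.sum(axis=0))

def divergence_from_independence(p):
    """D(P || E) for a 2x2 joint p, i.e. the mutual information of P."""
    q = ri_projection(p)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Known global maximizer for this family: uniform on the diagonal.
P       = np.array([[0.5, 0.0], [0.0, 0.5]])  # maximizer P
P_minus = np.array([[0.0, 0.5], [0.5, 0.0]])  # disjoint support, same marginals

# The rI-projection of P is a convex combination of P and P_minus,
# mirroring the decomposition stated in the abstract.
assert np.allclose(ri_projection(P), 0.5 * P + 0.5 * P_minus)

print(divergence_from_independence(P))  # log 2 ≈ 0.6931, the maximum here
```

Here the maximum value log 2 follows from the bound D(P‖ℰ) = I(X;Y) ≤ min(H(X), H(Y)) ≤ log 2 for binary variables, attained by the perfectly correlated uniform distribution.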
Keywords :
convex programming; entropy; convex combination; exponential family; information divergence; maximizers; Equations; Kernel; Loss measurement; Mathematical model; Optimization; Probability; Binomial equations; relative entropy;
Journal_Title :
IEEE Transactions on Information Theory
DOI :
10.1109/TIT.2011.2136230