Abstract:
The Kullback-Leibler (K-L) divergence rate is a natural extension of the familiar K-L divergence between probability vectors to the situation where one observes a sequence of dependent samples, such as the state sequence of a Markov chain or the output sequence of a hidden Markov model (HMM). In this paper, we study the problem of estimating the K-L divergence rate between two HMMs with a common output space, where the underlying Markov chains may evolve over different state spaces. At present, there is no closed-form formula for the K-L divergence rate between HMMs, though there is a closed-form formula for the K-L divergence rate between Markov chains over a common state space. In this paper, we give an alternate formulation of the K-L divergence rate between two stationary stochastic processes. Using this alternate formulation, we derive an upper bound on the K-L divergence rate between two HMMs. This estimate converges geometrically fast to the correct answer as the length of the observation sequence increases. However, it is not a particularly elegant estimate. It is hoped that future research will lead to tighter estimates, if not a closed-form formula, for the K-L divergence rate.
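The closed-form formula for Markov chains mentioned above is not reproduced in the abstract; for context, its standard statement (a sketch, assuming the first chain has transition matrix $P$ with stationary distribution $\pi$, the second has transition matrix $Q$ on the same state space, and $Q_{ij}=0$ implies $P_{ij}=0$) is:

```latex
% K-L divergence rate between two Markov chains on a common state space.
% Assumes P is irreducible with stationary distribution \pi, and
% absolute continuity: Q_{ij} = 0 implies P_{ij} = 0.
\[
  D(P \,\|\, Q) \;=\; \sum_{i} \pi_i \sum_{j} P_{ij} \log \frac{P_{ij}}{Q_{ij}}
\]
```

Each term is the K-L divergence between the rows $P_{i\cdot}$ and $Q_{i\cdot}$, weighted by how often state $i$ is visited; no analogous closed form is known once the chains are observed only through an HMM output map.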
Keywords:
hidden Markov models; matrix algebra; probability; Kullback-Leibler divergence rate; Markov chain; output sequence; probability vectors; state sequence; state space; state transition matrices; stationary stochastic process; closed-form solution; computational biology; information theory; probability distribution; state estimation; state-space methods; stochastic processes; upper bound