Title :
Comments on "Efficient training algorithms for HMMs using incremental estimation"
Author :
Byrne, William ; Gunawardana, Asela
Author_Institution :
Dept. of Electr. & Comput. Eng., Johns Hopkins Univ., Baltimore, MD, USA
Abstract :
The paper "Efficient training algorithms for HMMs using incremental estimation" by Gotoh et al. (IEEE Trans. Speech Audio Processing, vol. 6, p. 539-48, Nov. 1998) investigated expectation-maximization (EM) procedures that increase training speed. We show that the claim of Gotoh et al. that these procedures are generalized EM (GEM; Dempster et al., 1977) procedures is incorrect. We discuss why this is so, provide an example of nonmonotonic convergence to a local maximum in likelihood, and outline conditions that guarantee convergence to a local maximum.
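The property at issue is that every EM (and, by definition, GEM) iteration leaves the data log-likelihood nondecreasing; the comment argues that the accelerated procedures of Gotoh et al. do not inherit this guarantee. A minimal sketch of the monotone behavior of standard batch EM, using a two-component Gaussian mixture rather than an HMM (all names and the model choice here are illustrative, not taken from either paper):

```python
import numpy as np

def norm_pdf(x, mu, sigma):
    """Univariate Gaussian density, evaluated with broadcasting."""
    return np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))

def em_gmm(x, iters=50):
    """Batch EM for a 2-component 1-D Gaussian mixture.

    Returns the log-likelihood recorded at the start of each iteration;
    for true EM this sequence is nondecreasing.
    """
    mu = np.array([x.min(), x.max()])          # crude but adequate init
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    lls = []
    for _ in range(iters):
        # E-step: posterior responsibilities r[n, k]
        comp = pi * norm_pdf(x[:, None], mu, sigma)   # shape (N, 2)
        total = comp.sum(axis=1)
        lls.append(np.log(total).sum())               # log-likelihood before update
        r = comp / total[:, None]
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return np.array(lls)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])
lls = em_gmm(x)
```

Here `np.diff(lls)` stays nonnegative (up to floating-point tolerance); the point of the comment is that the incremental variants under discussion carry no such guarantee, so the analogous sequence for them can decrease between iterations.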
Keywords :
convergence of numerical methods; hidden Markov models; iterative methods; maximum likelihood estimation; speech processing; EM procedures; GEM; HMM; efficient training algorithms; expectation maximization algorithm; generalized EM methods; incremental estimation; local maximum likelihood; nonmonotonic convergence; training speed; Convergence; Hidden Markov models; Iterative algorithms; Iterative methods; Maximum likelihood estimation; Natural languages; Solids; Speech processing; Tomography; Training data;
Journal_Title :
IEEE Transactions on Speech and Audio Processing