Title :
LVQ as a feature transformation for HMMs
Author_Institution :
IDIAP, Martigny, Switzerland
Abstract :
Presents a new way to exploit the discriminative power of learning vector quantization (LVQ) in combination with continuous density hidden Markov models (HMMs). The approach is based on viewing LVQ as a non-linear feature transformation: the class-wise quantization errors of LVQ are modeled by continuous density HMMs, whereas the common practice in the literature on LVQ/HMM hybrids is to use LVQ codebooks as frame labelers and discrete observation HMMs to model the resulting stream of labels. Since decision making at the frame level is suboptimal for speech recognition, the presented method preserves more information for the HMM stage. Experiments on both speaker dependent and speaker independent phoneme spotting tasks suggest that significant improvements are attainable over plain continuous density HMMs and over the hybrid of LVQ and discrete HMMs.
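The transformation described in the abstract maps each acoustic frame to a vector of class-wise quantization errors: for every phoneme class, the distance from the frame to its nearest LVQ codebook vector of that class. A minimal sketch of this idea (function and variable names are assumptions, not the paper's notation; Euclidean distance is assumed as the quantization-error measure):

```python
import numpy as np

def lvq_error_features(frames, codebook, labels, n_classes):
    """Map each frame to a vector of class-wise quantization errors.

    frames:    (T, D) array of acoustic feature frames
    codebook:  (K, D) array of LVQ codebook vectors
    labels:    (K,)  class label of each codebook vector
    n_classes: number of phoneme classes

    Returns a (T, n_classes) array; entry [t, c] is the distance from
    frame t to its nearest class-c codebook vector. These continuous
    features would then be modeled by a continuous density HMM rather
    than reduced to a single discrete frame label.
    """
    feats = np.empty((len(frames), n_classes))
    for c in range(n_classes):
        cb = codebook[labels == c]                       # class-c prototypes
        # pairwise Euclidean distances: frames x class-c prototypes
        d = np.linalg.norm(frames[:, None, :] - cb[None, :, :], axis=2)
        feats[:, c] = d.min(axis=1)                      # nearest-prototype error
    return feats
```

In contrast, the discrete LVQ/HMM hybrid would keep only `feats.argmin(axis=1)` per frame, discarding the error magnitudes that the continuous density HMM can exploit.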
Keywords :
hidden Markov models; speech recognition; vector quantisation; LVQ-codebooks; class-wise quantization errors; continuous density hidden Markov models; decision making; discrete observation HMMs; discriminative power; feature transformation; frame labelers; learning vector quantization; speaker dependent phoneme spotting tasks; speaker independent phoneme spotting tasks; speech recognition; Artificial neural networks; Automatic speech recognition; Data mining; Decision making; Hidden Markov models; Maximum likelihood estimation; Speech recognition; Surface treatment; Vector quantization;
Conference_Title :
Neural Networks for Signal Processing IV: Proceedings of the 1994 IEEE Workshop
Conference_Location :
Ermioni
Print_ISBN :
0-7803-2026-3
DOI :
10.1109/NNSP.1994.366037