DocumentCode :
284738
Title :
Self-structuring hidden control neural model for speech recognition
Author :
Sorensen, Helge B D ; Hartmann, Uwe
Author_Institution :
Inst. of Electron. Syst., Aalborg Univ., Denmark
Volume :
2
fYear :
1992
fDate :
23-26 Mar 1992
Firstpage :
353
Abstract :
The majority of neural models for pattern recognition have a fixed architecture during training. A typical consequence is nonoptimal and often excessively large networks. A self-structuring hidden control (SHC) neural model for pattern recognition that establishes a near-optimal architecture during training is proposed. A network architecture reduction of approximately 80-90% in terms of the number of hidden processing elements (PEs) is typically achieved. The SHC model combines self-structuring architecture generation with nonlinear prediction and hidden Markov modeling. A theorem is presented showing that self-structuring neural models are universal approximators and thus relevant for real-world pattern recognition. Using SHC models containing as few as five hidden PEs each for an isolated word recognition task resulted in a recognition rate of 98.4%. SHC models can furthermore be applied to continuous speech recognition.
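Illustrative sketch (not from the paper): the abstract's combination of nonlinear prediction with hidden Markov modeling can be pictured as a small predictor network with a discrete hidden control input, scored by a Viterbi-style dynamic program over left-to-right control states. The NumPy code below is a hedged approximation under assumed sizes (12-dimensional frames, five control states, five hidden PEs); all names, weights, and parameters are hypothetical and merely stand in for a trained SHC word model.

# Minimal sketch (not the authors' code) of scoring an utterance with a
# hidden control neural predictor: an MLP predicts the next speech frame
# from the current frame plus a one-hot "hidden control" state, and a
# Viterbi-style dynamic program finds the left-to-right state sequence
# with the lowest accumulated prediction error.
import numpy as np

rng = np.random.default_rng(0)

FRAME_DIM = 12      # cepstral coefficients per frame (assumed)
NUM_STATES = 5      # hidden control states (assumed)
HIDDEN = 5          # hidden processing elements, as few as in the paper

# Randomly initialised weights stand in for a trained word model.
W1 = rng.normal(scale=0.1, size=(HIDDEN, FRAME_DIM + NUM_STATES))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=(FRAME_DIM, HIDDEN))
b2 = np.zeros(FRAME_DIM)

def predict_next(frame, state):
    """Predict the next frame from the current frame and control state."""
    control = np.eye(NUM_STATES)[state]            # one-hot hidden control input
    h = np.tanh(W1 @ np.concatenate([frame, control]) + b1)
    return W2 @ h + b2

def word_score(frames):
    """Accumulated squared prediction error along the best left-to-right
    state path, found by Viterbi-style dynamic programming."""
    T = len(frames) - 1
    # Local prediction error for every (time, state) pair.
    err = np.array([[np.sum((predict_next(frames[t], s) - frames[t + 1]) ** 2)
                     for s in range(NUM_STATES)] for t in range(T)])
    cost = np.full((T, NUM_STATES), np.inf)
    cost[0, 0] = err[0, 0]                          # path must start in state 0
    for t in range(1, T):
        for s in range(NUM_STATES):
            stay = cost[t - 1, s]
            move = cost[t - 1, s - 1] if s > 0 else np.inf
            cost[t, s] = err[t, s] + min(stay, move)
    return cost[T - 1, NUM_STATES - 1]              # and end in the last state

# Usage: score a dummy utterance against this word model.
utterance = rng.normal(size=(40, FRAME_DIM))
print("accumulated prediction error:", word_score(utterance))

In recognition, each vocabulary word would have its own model, and the word whose model accumulates the lowest prediction error along its best state path would be chosen; the paper's self-structuring step additionally prunes the hidden layer during training, which this sketch does not attempt.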
Keywords :
learning (artificial intelligence); neural nets; speech recognition; hidden processing elements; isolated word recognition; neural model; pattern recognition; self-structuring hidden control; speech recognition; training; Hidden Markov models; Multi-layer neural network; Neural networks; Nonlinear systems; Optimal control; Pattern recognition; Predictive models; Speech recognition; Time varying systems; Viterbi algorithm;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
1992 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-92)
Conference_Location :
San Francisco, CA
ISSN :
1520-6149
Print_ISBN :
0-7803-0532-9
Type :
conf
DOI :
10.1109/ICASSP.1992.226047
Filename :
226047