DocumentCode
2965261
Title
Strategies for reducing the complexity of a RNN based speech recognizer
Author
Kasper, Klaus ; Reininger, H. ; Wüst, H.
Author_Institution
Inst. für Angewandte Phys., Frankfurt Univ., Germany
Volume
6
fYear
1996
fDate
7-10 May 1996
Firstpage
3354
Abstract
Recurrent neural networks (RNN) offer a route to low-cost speech recognition systems (SRS) for mass products, or for products with energy constraints, provided their inherent parallelism can be exploited in a hardware realization. At present, the computational complexity of SRS based on fully recurrent neural networks (FRNN), e.g. the large number of connections, prevents such a hardware realization. We introduce locally recurrent neural networks (LRNN) in order to preserve the properties of RNN on the one hand and to reduce the connectivity density of the network on the other. Simulation experiments show that the recognition capability of LRNN is equivalent to that of FRNN and superior to other proposed network architectures. Furthermore, it is shown that, with an appropriate representation of the network parameters and a retraining of the network, 5-bit quantization of the weights and activities is possible without significant loss in recognition performance.
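A minimal sketch of the two ideas the abstract names: restricting recurrence so each unit feeds back only onto itself, which shrinks the recurrent weight count from N^2 to N, and uniform 5-bit quantization of the weights. This is an illustration under assumptions, not the authors' implementation; all names, the layer sizes, and the diagonal-feedback reading of "locally recurrent" are assumed for the sketch.

    import numpy as np

    rng = np.random.default_rng(0)
    N_IN, N_HID = 12, 16                 # hypothetical layer sizes

    W_in   = 0.1 * rng.standard_normal((N_HID, N_IN))
    W_full = 0.1 * rng.standard_normal((N_HID, N_HID))  # FRNN: N_HID**2 recurrent weights
    w_self = 0.1 * rng.standard_normal(N_HID)           # LRNN: N_HID self-feedback weights

    def frnn_step(h, x):
        # Fully recurrent update: every unit sees every unit's past output.
        return np.tanh(W_in @ x + W_full @ h)

    def lrnn_step(h, x):
        # Locally recurrent update: each unit feeds back only onto itself,
        # cutting the recurrent connections from N_HID**2 to N_HID.
        return np.tanh(W_in @ x + w_self * h)

    def quantize(w, bits=5):
        # Uniform quantization to 2**(bits-1) - 1 signed levels per side,
        # a stand-in for the paper's 5-bit weight/activity representation.
        scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
        return np.round(w / scale) * scale

    x, h = rng.standard_normal(N_IN), np.zeros(N_HID)
    print("recurrent weights, FRNN vs LRNN:", W_full.size, "vs", w_self.size)
    print("max 5-bit rounding error:", np.abs(w_self - quantize(w_self)).max())

The retraining the abstract mentions would, in a sketch like this, amount to repeating the quantize step inside the training loop so the network adapts to the coarsened weights.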
Keywords
computational complexity; recurrent neural nets; speech recognition; connectivity density; locally recurrent neural networks; low cost speech recognition systems; recognition performance; speech recognizer; Computer networks; Costs; Hardware; Hidden Markov models; Network topology; Neurons; Parallel processing; Recurrent neural networks; Speech recognition
fLanguage
English
Publisher
ieee
Conference_Titel
1996 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-96), Conference Proceedings
Conference_Location
Atlanta, GA
ISSN
1520-6149
Print_ISBN
0-7803-3192-3
Type
conf
DOI
10.1109/ICASSP.1996.550596
Filename
550596