Title :
Improving deep neural networks by using sparse dropout strategy
Author :
Hao Zheng ; Mingming Chen ; Wenju Liu ; Zhanlei Yang ; Shan Liang
Author_Institution :
Nat. Lab. of Pattern Recognition, Inst. of Autom., Beijing, China
Abstract :
Recently, deep neural networks (DNNs) have achieved excellent results on acoustic modeling benchmarks for speech recognition. By randomly discarding network units, a strategy known as dropout can improve the performance of DNNs by reducing over-fitting. However, random dropout treats all units indiscriminately, which may discard information about the distribution of unit outputs. In this paper, we improve the dropout strategy by treating units differentially according to their outputs. Only minor changes to an existing neural network system are needed to achieve this improvement. Phone recognition experiments on TIMIT show that sparse dropout fine-tuning yields a significant performance gain.
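The abstract describes the method only at a high level: units are treated differentially according to their outputs rather than being dropped uniformly at random. Below is a minimal NumPy sketch of one plausible reading, assuming that high-output units are protected from dropout while the remaining units are dropped at random; the function names, the protection rule, and the rescaling are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def standard_dropout(h, p=0.5, rng=np.random):
    """Classic dropout: zero each unit with probability p, indiscriminately."""
    mask = rng.random(h.shape) >= p
    return h * mask / (1.0 - p)  # inverted-dropout rescaling

def sparse_dropout(h, p=0.5, rng=np.random):
    """Hypothetical sketch (assumed rule, not the paper's exact algorithm):
    protect the highest-output units and apply random dropout only to the
    rest, so low-output units are discarded preferentially.
    `h` is a 1-D vector of hidden-layer activations; apply at training time only."""
    k = int(round((1.0 - p) * h.size))                 # units kept deterministically
    keep = np.zeros(h.size, dtype=bool)
    keep[np.argsort(np.abs(h))[h.size - k:]] = True    # top-k outputs always survive
    mask = keep | (rng.random(h.shape) >= p)           # remainder dropped at random
    return h * mask * (h.size / mask.sum())            # rescale kept activations

rng = np.random.default_rng(0)
h = rng.standard_normal(512)            # example hidden-layer outputs
h_train = sparse_dropout(h, p=0.5, rng=rng)
```

As in standard inverted dropout, the surviving activations are rescaled so that the expected layer output stays roughly constant between training and test time; at test time no units would be dropped.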
Keywords :
learning (artificial intelligence); neural nets; speech recognition; DNN; TIMIT; acoustic modeling; deep neural networks; differential treatment; network units; neural network system; phone recognition; random dropout strategy; sparse dropout fine-tuning; sparse dropout strategy; speech recognition; Computational modeling; Data models; Hidden Markov models; Neural networks; Speech; Speech recognition; Training; deep learning; deep neural networks; dropout; sparse dropout;
Conference_Title :
2014 IEEE China Summit & International Conference on Signal and Information Processing (ChinaSIP)
Conference_Location :
Xi'an
Print_ISBN :
978-1-4799-5401-8
DOI :
10.1109/ChinaSIP.2014.6889194