Title :
Robust feature learning by stacked autoencoder with maximum correntropy criterion
Author :
Yu Qi ; Yueming Wang ; Xiaoxiang Zheng ; Zhaohui Wu
Author_Institution :
Qiushi Acad. for Adv. Studies, Zhejiang Univ., Hangzhou, China
Abstract :
Unsupervised feature learning with deep networks has been widely studied in recent years. Despite this progress, most existing models are fragile to non-Gaussian noise and outliers because they rely on the mean square error (MSE) criterion. In this paper, we propose a robust stacked autoencoder (R-SAE) based on the maximum correntropy criterion (MCC) to handle data containing non-Gaussian noise and outliers. By replacing MSE with MCC, the noise robustness of the stacked autoencoder is improved. The proposed method is evaluated on the MNIST benchmark dataset. Experimental results show that, compared with the ordinary stacked autoencoder, the R-SAE improves classification accuracy by 14% and reduces the reconstruction error by 39%, demonstrating that the R-SAE is capable of learning robust features from noisy data.
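Example :
The core idea is to train each autoencoder layer by maximizing the correntropy between the input and its reconstruction (equivalently, minimizing the negative empirical correntropy) instead of minimizing MSE. Below is a minimal sketch, not the authors' implementation: the PyTorch framework, the single 784-256 layer, the kernel width sigma, and all hyperparameters are illustrative assumptions; only the use of a Gaussian-kernel correntropy loss in place of MSE follows the paper.

```python
import torch
import torch.nn as nn

def correntropy_loss(x_hat, x, sigma=1.0):
    """Negative empirical correntropy under a Gaussian kernel.

    Maximizing correntropy (MCC) is equivalent to minimizing this value.
    The exponential saturates for large errors, so outliers contribute
    little gradient, unlike the quadratic growth of the MSE penalty.
    """
    sq_err = (x_hat - x) ** 2
    return -torch.exp(-sq_err / (2.0 * sigma ** 2)).mean()

# One autoencoder layer for 784-dimensional (28x28, MNIST-style) inputs.
# Layer sizes, optimizer, and learning rate are assumptions for illustration.
encoder = nn.Sequential(nn.Linear(784, 256), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(256, 784), nn.Sigmoid())
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

x = torch.rand(64, 784)                 # stand-in for a mini-batch of noisy images
x_hat = decoder(encoder(x))
loss = correntropy_loss(x_hat, x, sigma=0.5)
loss.backward()
optimizer.step()
```

In a stacked setting, layers trained this way would be greedily pretrained one at a time and then composed, with the MCC-based loss replacing MSE at each reconstruction step.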
Keywords :
belief networks; data compression; encoding; maximum entropy methods; mean square error methods; unsupervised learning; MCC; MNIST benchmark dataset; MSE criterion; R-SAE; anti-noise ability; deep networks; maximum correntropy criterion; mean square error criterion; non-Gaussian noises; outliers; robust feature learning; robust stacked autoencoder; unsupervised feature learning; Accuracy; Feature extraction; Image reconstruction; Mean square error methods; Noise; Noise reduction; Robustness; Unsupervised feature learning; correntropy; deep learning; stacked autoencoder
Conference_Titel :
2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Conference_Location :
Florence
DOI :
10.1109/ICASSP.2014.6854900