DocumentCode :
3748556
Title :
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
Author :
Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun
Year :
2015
Firstpage :
1026
Lastpage :
1034
Abstract :
Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on the learnable activation and advanced initialization, we achieve 4.94% top-5 test error on the ImageNet 2012 classification dataset. This is a 26% relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66% [33]). To our knowledge, our result is the first to surpass the reported human-level performance (5.1%, [26]) on this dataset.
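The abstract's two contributions, the PReLU activation and the rectifier-aware weight initialization, can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' code; the function names are my own, and the initialization follows the paper's variance rule std = sqrt(2 / ((1 + a²) · fan_in)), which reduces to the ReLU case when a = 0:

```python
import numpy as np

def prelu(x, a):
    # Parametric ReLU: identity for positive inputs, learned slope `a`
    # for negative inputs (a = 0 recovers the plain ReLU).
    return np.where(x > 0, x, a * x)

def he_init(fan_in, fan_out, a=0.0, rng=None):
    # Rectifier-aware Gaussian initialization from the paper:
    # std = sqrt(2 / ((1 + a^2) * fan_in)). With a = 0 this is the
    # ReLU case, std = sqrt(2 / fan_in).
    rng = rng or np.random.default_rng(0)
    std = np.sqrt(2.0 / ((1.0 + a**2) * fan_in))
    return rng.normal(0.0, std, size=(fan_in, fan_out))
```

For example, `prelu(np.array([-2.0, 3.0]), 0.25)` yields `[-0.5, 3.0]`, and `he_init(1024, 256, a=0.25)` draws a 1024×256 weight matrix whose standard deviation matches the formula above.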
Keywords :
Training; Computational modeling; Adaptation models; Testing; Gaussian distribution; Biological neural networks
Publisher :
IEEE
Conference_Titel :
Computer Vision (ICCV), 2015 IEEE International Conference on
Electronic_ISSN :
2380-7504
Type :
conf
DOI :
10.1109/ICCV.2015.123
Filename :
7410480