DocumentCode
3673893
Title
Channel-Max, Channel-Drop and Stochastic Max-pooling
Author
Yuchi Huang; Xiuyu Sun; Ming Lu; Ming Xu
Author_Institution
NEC Labs, Beijing, China
fYear
2015
fDate
6/1/2015
Firstpage
9
Lastpage
17
Abstract
We propose three regularization techniques to overcome drawbacks of the local winner-take-all methods used in deep convolutional networks. Channel-Max inherits the max activation unit from Maxout networks, but adopts complementary subsets of the input, convolved with filters of different kernel sizes, as better companions to the max function. To balance training across the different pathways, Channel-Drop randomly discards half of the pathways before their inputs are convolved. Stochastic Max-pooling is defined to reduce the overfitting caused by conventional max-pooling: during training, half of the activations in each pooling region are randomly dropped, and during testing, the largest activations in each region are probabilistically averaged. Using Channel-Max, Channel-Drop and Stochastic Max-pooling, we demonstrate state-of-the-art performance on four benchmark datasets: CIFAR-10, CIFAR-100, STL-10 and SVHN.
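Illustrative_Sketch
The abstract describes each operation concretely enough for a rough illustration. Below is a minimal NumPy sketch of the three techniques as read from the abstract alone; the function names, the 2x2 pooling size, the omitted convolution step, and the softmax weighting used for the test-time "probabilistic average" are assumptions for illustration, not the paper's exact formulation.

import numpy as np

def channel_drop(pathways, rng=None):
    """Randomly discard half of the pathways before they would be
    convolved (hypothetical reading of Channel-Drop; the convolution
    itself is omitted in this sketch)."""
    rng = rng or np.random.default_rng()
    keep = sorted(rng.choice(len(pathways), size=len(pathways) // 2,
                             replace=False))
    return [pathways[i] for i in keep]

def channel_max(pathway_outputs):
    """Element-wise max across pathway outputs, in the Maxout-style
    spirit of Channel-Max."""
    return np.maximum.reduce(pathway_outputs)

def stochastic_max_pool(x, pool=2, training=True, rng=None):
    """Stochastic Max-pooling on a single (H, W) feature map.

    Training: in each pooling region, randomly drop half of the
    activations and take the max of the survivors.
    Testing: average the top half of the activations; the softmax
    weighting here is an assumption, since the abstract only says they
    are "probabilistically averaged".
    """
    rng = rng or np.random.default_rng()
    H, W = x.shape
    k = (pool * pool) // 2                      # keep half of each region
    out = np.empty((H // pool, W // pool), dtype=float)
    for i in range(0, H - pool + 1, pool):
        for j in range(0, W - pool + 1, pool):
            region = x[i:i + pool, j:j + pool].ravel()
            if training:
                keep = rng.choice(region.size, size=k, replace=False)
                out[i // pool, j // pool] = region[keep].max()
            else:
                top = np.sort(region)[-k:]      # top-k activations
                w = np.exp(top - top.max())     # softmax weights (assumption)
                out[i // pool, j // pool] = (w / w.sum() * top).sum()
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    maps = [rng.standard_normal((4, 4)) for _ in range(4)]  # four pathways
    merged = channel_max(channel_drop(maps, rng))           # drop, then max
    print(stochastic_max_pool(merged, training=True, rng=rng))
    print(stochastic_max_pool(merged, training=False))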
Keywords
"Training","Kernel","Stochastic processes","Convolution","Testing","Mathematical model","Color"
Publisher
IEEE
Conference_Titel
2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Electronic_ISSN
2160-7516
Type
conf
DOI
10.1109/CVPRW.2015.7301267
Filename
7301267