Title :
Learning in deep architectures with folding transformations
Author :
Szymanski, Lech; McCane, Brendan
Author_Institution :
Dept. of Comput. Sci., Univ. of Otago, Dunedin, New Zealand
Abstract :
We propose a folding transformation paradigm for supervised layer-wise learning in deep neural networks by introducing the concepts of internal decision making, mapping complexity and shatter complexity. These concepts aid in the analysis of an individual hidden transformation in a deep architecture and help to characterise the capabilities of the proposed folding transformations. We justify the increase in VC-dimension due to depth by showing that the extra model complexity is needed to resolve large variability in the input data for complex problems. We provide an implementation and test the architecture's performance on a classification task.
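Illustrative sketch (not from the paper): one way to read a single folding transformation is as a reflection of the negative half-space of a hyperplane onto the positive side, so that mirror-image regions of the input are merged before the next layer. The function name `fold` and the parameters `w`, `b` below are hypothetical; the paper's layer-wise supervised training procedure and exact formulation are not reproduced here.

```python
import numpy as np

def fold(X, w, b):
    """Fold the input space along the hyperplane w.x + b = 0.

    Points on the negative side of the hyperplane are reflected onto the
    positive side; points already on the positive side are left unchanged.
    Illustrative reading of a 'folding transformation' only.
    """
    d = (X @ w + b) / np.dot(w, w)        # signed distance, scaled by ||w||^2
    mask = d < 0                          # points to be folded over
    X_folded = X.copy()
    X_folded[mask] -= 2.0 * d[mask, None] * w   # reflect across the hyperplane
    return X_folded

# Toy example: fold 2-D points across the line x1 = 0, merging mirror-image
# clusters so that a later, simpler decision boundary can separate the classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))
print(fold(X, w=np.array([1.0, 0.0]), b=0.0))
```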
Keywords :
computational complexity; decision making; learning (artificial intelligence); pattern classification; VC-dimension; classification task; deep architectures; deep neural networks; extra model complexity; folding transformations; internal decision making; mapping complexity; shatter complexity; supervised layer-wise learning; Biological neural networks; Complexity theory; Geometry; Neurons; Support vector machines; Training; Vectors;
Conference_Titel :
The 2013 International Joint Conference on Neural Networks (IJCNN)
Conference_Location :
Dallas, TX
Print_ISBN :
978-1-4673-6128-6
DOI :
10.1109/IJCNN.2013.6706945