DocumentCode :
1930131
Title :
Networks of width one are universal classifiers
Author :
Rojas, Raul
Author_Institution :
Dept. of Electr. & Syst. Eng., Pennsylvania Univ., Philadelphia, PA, USA
Volume :
4
fYear :
2003
fDate :
20-24 July 2003
Firstpage :
3124
Abstract :
It is well known that a three-layered neural network can perfectly separate (classify) two classes, that is, two types of data points in n-dimensional space. The number of hidden units is adjusted according to the requirements of the classification problem and can be very high for data sets which are difficult to separate. This paper shows that a neural network of width one, i.e., one containing at most one computing element in every layer, can perfectly separate two point sets. The network is slender, but it can be long. This shows that there is a trade-off between the length and the width of a neural network. The computing elements considered here are perceptrons, and the topology of the network is best described as a stack of perceptrons.
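A minimal sketch of the idea in the abstract (not the paper's actual construction): a "stack" of two perceptrons, one computing unit per layer, each unit seeing the raw input plus the previous unit's output. With hand-chosen weights this width-one, depth-two network classifies XOR, a point set that no single perceptron can separate. All weights and thresholds here are illustrative assumptions.

```python
def step(z):
    """Perceptron activation: hard threshold at zero."""
    return 1 if z >= 0 else 0

def width_one_xor(x1, x2):
    # Layer 1 (one unit): fires only on input (1, 1), i.e. AND.
    u1 = step(x1 + x2 - 1.5)
    # Layer 2 (one unit): OR of the inputs, vetoed by u1 -> XOR.
    # The unit receives the original inputs plus the previous
    # unit's output, which is what makes the stack "width one".
    u2 = step(x1 + x2 - 2 * u1 - 0.5)
    return u2

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, width_one_xor(*x))
```

The depth of the stack stands in for hidden-layer width: rather than adding units side by side, each new perceptron is placed on top of the previous ones, which is the length-for-width trade-off the paper formalizes.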
Keywords :
neural nets; pattern classification; perceptrons; classification problem; computing elements; data sets; hidden units; perceptron stack; universal classifiers; width one neural networks; Computer architecture; Computer networks; Data engineering; Multilayer perceptrons; Network topology; Neural networks; Systems engineering and theory;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Proceedings of the International Joint Conference on Neural Networks, 2003
ISSN :
1098-7576
Print_ISBN :
0-7803-7898-9
Type :
conf
DOI :
10.1109/IJCNN.2003.1224071
Filename :
1224071