DocumentCode :
1341539
Title :
Direct Parallel Perceptrons (DPPs): Fast Analytical Calculation of the Parallel Perceptrons Weights With Margin Control for Classification Tasks
Author :
Fernández-Delgado, Manuel ; Ribeiro, Jorge ; Cernadas, Eva ; Barro Ameneiro, Senén
Author_Institution :
Intell. Syst. Group, Univ. of Santiago de Compostela, Santiago de Compostela, Spain
Volume :
22
Issue :
11
fYear :
2011
Firstpage :
1837
Lastpage :
1848
Abstract :
Parallel perceptrons (PPs) are very simple and efficient committee machines (a single layer of perceptrons with threshold activation functions and binary outputs, and a majority-voting decision scheme) which nevertheless behave as universal approximators. The parallel delta (P-Delta) rule is an effective training algorithm which, following the ideas of statistical learning theory used by the support vector machine (SVM), raises their generalization ability by maximizing the difference between the perceptron activations for the training patterns and the activation threshold (which corresponds to the separating hyperplane). In this paper, we propose an analytical closed-form expression to calculate the PPs' weights for classification tasks. Our method, called Direct Parallel Perceptrons (DPPs), directly calculates the weights, without iterations, from the training patterns and their desired outputs, and without any search or numeric function optimization. The calculated weights globally minimize an error function that simultaneously takes into account the training error and the classification margin. Given their analytical and noniterative nature, DPPs are computationally much more efficient than related approaches such as P-Delta and SVM, and their computational complexity is linear in the input dimensionality. DPPs are therefore very appealing in terms of time complexity and memory consumption, and are very easy to use for high-dimensional classification tasks. On real benchmark datasets with two and multiple classes, DPPs are competitive with SVM and other approaches, while also allowing online learning and, unlike most of them, requiring no tunable parameters.
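As a rough illustration of the committee architecture described in the abstract (a single layer of perceptrons with threshold outputs combined by majority vote) and of the idea of computing the weights in closed form rather than iteratively, the following minimal Python sketch fits each perceptron with a regularized least-squares solution. This is only a hedged stand-in: the exact DPP weight expression and its margin-control term are defined in the paper and are not reproduced here, and all names (fit_parallel_perceptrons, n_perceptrons, ridge) are illustrative assumptions.

# Minimal sketch of a parallel perceptron (PP) committee for binary
# classification, with weights obtained by a closed-form fit.
# NOTE: illustrative approximation only -- the paper's exact DPP weight
# formula and margin control are NOT reproduced here; ridge regression on
# bootstrap resamples is used as a hypothetical non-iterative stand-in.
import numpy as np

def fit_parallel_perceptrons(X, y, n_perceptrons=3, ridge=1e-3, rng=None):
    """Fit each perceptron's weights analytically (regularized least squares
    on a bootstrap resample), mimicking the non-iterative spirit of DPPs.
    Targets y are expected in {-1, +1}."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])           # append bias term
    W = np.zeros((n_perceptrons, d + 1))
    for k in range(n_perceptrons):
        idx = rng.choice(n, size=n, replace=True)  # resample to diversify the committee
        A, t = Xb[idx], y[idx].astype(float)
        # Closed-form ridge solution: (A^T A + ridge * I)^{-1} A^T t
        W[k] = np.linalg.solve(A.T @ A + ridge * np.eye(d + 1), A.T @ t)
    return W

def predict_parallel_perceptrons(W, X):
    """Majority vote over the perceptrons' binary (threshold) outputs."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    votes = np.sign(Xb @ W.T)                      # one column per perceptron
    return np.sign(votes.sum(axis=1) + 1e-12)      # committee decision in {-1, +1}

if __name__ == "__main__":
    # Tiny usage example on a linearly separable toy problem.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
    W = fit_parallel_perceptrons(X, y, rng=1)
    acc = (predict_parallel_perceptrons(W, X) == y).mean()
    print(f"training accuracy: {acc:.2f}")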
Keywords :
approximation theory; computational complexity; generalisation (artificial intelligence); learning (artificial intelligence); optimisation; pattern classification; perceptrons; statistical analysis; support vector machines; weighing; P-Delta rule; analytical closed form expression; binary outputs; classification margin; committee machines; computational complexity; direct parallel perceptron weight; generalization ability; high-dimensional classification tasks; input dimensionality; majority voting decision scheme; margin control; memory consumption; parallel delta rule; perceptron activation threshold; statistical learning theory; support vector machine; threshold activation function; time complexity; training algorithm; training error; training pattern; universal approximators; Accuracy; Closed-form solutions; Kernel; Linear approximation; Statistical learning; Support vector machines; Training; Analytical closed-form weight calculation; linear computational complexity; margin maximization; online learning; parallel delta rule; parallel perceptrons; pattern classification; Algorithms; Artificial Intelligence; Classification; Databases, Factual; Linear Models; Neural Networks (Computer); Reproducibility of Results;
fLanguage :
English
Journal_Title :
Neural Networks, IEEE Transactions on
Publisher :
IEEE
ISSN :
1045-9227
Type :
jour
DOI :
10.1109/TNN.2011.2169086
Filename :
6035789