DocumentCode :
285256
Title :
Derivation of learning vector quantization algorithms
Author :
Lo, Zhen-Ping ; Yu, Yaoqi ; Bavarian, Behnam
Author_Institution :
Dept. of Electr. & Comput. Eng., California Univ., Irvine, CA, USA
Volume :
3
fYear :
1992
fDate :
7-11 Jun 1992
Firstpage :
561
Abstract :
A formal derivation of three learning rules for adapting the synaptic weight vectors of neurons that represent the prototype vectors of the class distributions in a classifier is presented. A decision surface function and a set of adaptation algorithms for adjusting this surface are derived using a gradient-descent approach to minimize the classification error. This also provides a formal analysis of the Kohonen learning vector quantization (LVQ1 and LVQ2) algorithms. In particular, it is shown that, to minimize the classification error, one of the learning equations in the LVQ1 algorithm is not required. An application of the learning algorithms to the design of a neural network classifier is presented. The performance of the classifier was tested and compared to the K-NN decision rule on the real Iris data set.
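For context, the standard Kohonen LVQ1 rule the abstract analyzes updates only the winning prototype: toward the input when their class labels agree, away otherwise. Below is a minimal sketch of that rule (the function name `lvq1_step` and the learning rate `lr` are illustrative choices, not from the paper; the paper's own contribution is a gradient-descent derivation showing one of these update equations is unnecessary).

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, x, y, lr=0.1):
    """One LVQ1 update (sketch): move the nearest prototype toward x
    if its class label matches y, otherwise push it away from x."""
    dists = np.linalg.norm(prototypes - x, axis=1)  # distance to each prototype
    c = int(np.argmin(dists))                       # index of the winning prototype
    sign = 1.0 if proto_labels[c] == y else -1.0    # attract on match, repel on mismatch
    prototypes[c] += sign * lr * (x - prototypes[c])
    return c

# Toy example: two prototypes, one per class.
protos = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = [0, 1]
winner = lvq1_step(protos, labels, np.array([0.2, 0.1]), y=0)
```

In this toy call the first prototype wins and, since the labels match, moves a fraction `lr` of the way toward the input.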
Keywords :
learning (artificial intelligence); neural nets; pattern recognition; vector quantisation; Iris real data set; K-NN decision rule; Kohonen learning vector quantization; LVQ1; LVQ2; adaptation algorithms; classification error; decision surface function; formal analysis; gradient-descent approach; learning vector quantization algorithms; neurons; prototype vectors; synaptic weight vectors; Algorithm design and analysis; Classification algorithms; Equations; Iris; Neural networks; Neurons; Pattern classification; Prototypes; Testing; Vector quantization;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
International Joint Conference on Neural Networks (IJCNN), 1992
Conference_Location :
Baltimore, MD
Print_ISBN :
0-7803-0559-0
Type :
conf
DOI :
10.1109/IJCNN.1992.227115
Filename :
227115