DocumentCode
1367147
Title
Repairs to GLVQ: a new family of competitive learning schemes
Author
Karayiannis, Nicolaos B. ; Bezdek, James C. ; Pal, Nikhil R. ; Hathaway, Richard J. ; Pai, Pin-I
Author_Institution
Dept. of Electr. & Comput. Eng., Houston Univ., TX, USA
Volume
7
Issue
5
fYear
1996
fDate
9/1/1996 12:00:00 AM
Firstpage
1062
Lastpage
1071
Abstract
First, we identify an algorithmic defect of the generalized learning vector quantization (GLVQ) scheme that causes it to behave erratically for a certain scaling of the input data. We show that GLVQ can behave incorrectly because its learning rates are reciprocally dependent on the sum of squared distances from an input vector to the node weight vectors. Finally, we propose a new family of models, the GLVQ-F family, that remedies the problem. We derive competitive learning algorithms for each member of the GLVQ-F family and prove that they are invariant to all scalings of the data. We show that GLVQ-F offers a wide range of learning models, since it reduces to LVQ as its weighting exponent (a parameter of the algorithm) approaches one from above. As this parameter increases, GLVQ-F then transitions to a model in which either all nodes may be excited according to their (inverse) distances from an input, or the winner is excited while losers are penalized. As this parameter increases without limit, GLVQ-F updates all nodes equally. We illustrate the failure of GLVQ and the success of GLVQ-F with the IRIS data.
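The limiting behavior described in the abstract can be illustrated with a small NumPy sketch. This is an assumption-laden reconstruction, not the paper's exact GLVQ-F update rule: it uses FCM-style weights built from ratios of squared distances, which exhibit the properties the abstract claims, namely scale invariance (the weights depend only on distance ratios), winner-take-all behavior as the weighting exponent `m` approaches one from above, and equal updates to all nodes as `m` grows without limit. The function name `glvqf_step` and all parameter names are hypothetical.

```python
import numpy as np

def glvqf_step(x, V, m=2.0, alpha=0.1, eps=1e-12):
    """One GLVQ-F-style competitive update (illustrative sketch only).

    x     : (n,) input vector
    V     : (c, n) matrix of c prototype (node weight) vectors
    m > 1 : weighting exponent
    alpha : learning rate
    """
    # Squared distances from the input to every prototype (eps avoids 0/0).
    d = np.sum((V - x) ** 2, axis=1) + eps
    # FCM-style weights depend only on distance *ratios*, so they are
    # unchanged under any uniform rescaling of the input data.
    u = 1.0 / np.sum((d[:, None] / d[None, :]) ** (1.0 / (m - 1.0)), axis=1)
    # m -> 1+  : the nearest prototype gets weight ~1 (reduces to LVQ)
    # m -> inf : every prototype gets weight ~1/c (all nodes updated equally)
    return V + alpha * u[:, None] * (x - V), u
```

Because only ratios of distances enter the weights, rescaling both the data and the prototypes by any positive constant leaves the update proportions unchanged, which is the scale-invariance property the paper proves for GLVQ-F.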
Keywords
unsupervised learning; vector quantisation; GLVQ-F; IRIS data; competitive learning schemes; generalized learning vector quantization; Clustering algorithms; Computer networks; Computer science; Iris; Machine intelligence; Mathematics; Prototypes; Vector quantization
fLanguage
English
Journal_Title
IEEE Transactions on Neural Networks
Publisher
ieee
ISSN
1045-9227
Type
jour
DOI
10.1109/72.536304
Filename
536304
Link To Document