Title :
Fuzzy algorithms for learning vector quantization
Author :
Karayiannis, Nicolaos B. ; Pai, Pin-I
Author_Institution :
Dept. of Electr. & Comput. Eng., Houston Univ., TX, USA
Date :
9/1/1996
Abstract :
This paper presents the development of fuzzy algorithms for learning vector quantization (FALVQ). These algorithms are derived by minimizing the weighted sum of the squared Euclidean distances between an input vector, which represents a feature vector, and the weight vectors of a competitive learning vector quantization (LVQ) network, which represent the prototypes. This formulation leads to competitive algorithms in which each input vector attracts all prototypes. The strength of attraction between each input and the prototypes is determined by a set of membership functions, which can be selected on the basis of specific criteria. A gradient-descent-based learning rule is derived for a general class of admissible membership functions that satisfy certain properties. The FALVQ 1, FALVQ 2, and FALVQ 3 families of algorithms are developed by selecting admissible membership functions with different properties. The proposed algorithms are tested and evaluated using the IRIS data set. Their efficiency is also illustrated by their use in the codebook design required for image compression based on vector quantization.
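The following is a minimal sketch of the kind of fuzzy competitive update the abstract describes: every prototype is attracted to each input, with the attraction strength set by a membership value derived from the squared Euclidean distances. The membership form used here (the ratio of the winner's squared distance to each prototype's squared distance) and the function and parameter names are illustrative assumptions, not the paper's actual FALVQ 1, FALVQ 2, or FALVQ 3 rules.

```python
# Illustrative sketch of a fuzzy competitive LVQ-style update.
# Assumptions: the winner is the closest prototype, and non-winners
# receive a membership value based on the ratio of the winner's squared
# distance to their own squared distance (a stand-in for the paper's
# admissible membership functions).
import numpy as np

def fuzzy_lvq_epoch(data, prototypes, lr=0.05):
    """One pass over the data: every input attracts all prototypes,
    with attraction strength given by a membership function."""
    protos = prototypes.copy()
    for x in data:
        d2 = np.sum((protos - x) ** 2, axis=1)   # squared Euclidean distances
        win = int(np.argmin(d2))                 # winning prototype
        # Hypothetical membership: 1 for the winner, decreasing with
        # distance for the others.
        u = d2[win] / np.maximum(d2, 1e-12)
        u[win] = 1.0
        # Gradient-descent-style update: each prototype moves toward x
        # in proportion to its membership value.
        protos += lr * u[:, None] * (x - protos)
    return protos

# Example: 3 prototypes on 4-dimensional data (e.g. IRIS-sized features).
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 4))
W = X[rng.choice(len(X), size=3, replace=False)]
for _ in range(20):
    W = fuzzy_lvq_epoch(X, W, lr=0.05)
print(W)
```

In this sketch the learning rate and number of epochs are arbitrary; the paper instead derives the update rule by gradient descent on the weighted sum of squared distances for a general class of admissible membership functions.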
Keywords :
competitive algorithms; fuzzy set theory; image processing; minimisation; neural nets; unsupervised learning; vector quantisation; FALVQ; competitive learning; feature vector; fuzzy algorithms; gradient-descent method; image compression; learning vector quantization; membership functions; squared Euclidean distances; weight vectors; Algorithm design and analysis; Artificial neural networks; Image coding; Iris; Lattices; Organizing; Prototypes; Testing; Unsupervised learning; Vector quantization;
Journal_Title :
IEEE Transactions on Neural Networks