DocumentCode :
3493062
Title :
Magnification in divergence based neural maps
Author :
Villmann, T. ; Haase, S.
Author_Institution :
Dept. for Math./Natural & Comput. Sci., Univ. of Appl. Sci. Mittweida, Mittweida, Germany
fYear :
2011
fDate :
July 31, 2011 - Aug. 5, 2011
Firstpage :
437
Lastpage :
441
Abstract :
In this paper, we consider the magnification behavior of neural maps using several (parametrized) divergences as the dissimilarity measure instead of the Euclidean distance. We show experimentally that optimal magnification, i.e., information-optimal data coding by the prototypes, can be achieved for properly chosen divergence parameters. The divergences considered here represent all main classes of divergences. Hence, we conclude that information-optimal vector quantization can be achieved independently of the divergence class by an appropriate parameter setting.
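(The following sketch is illustrative only and not the authors' implementation. It shows the basic idea of the abstract: replacing the Euclidean distance by a parametrized divergence in winner-takes-all prototype selection for a neural map. The beta-divergence used here is one common parametrized divergence family; the function names, the choice of beta-divergence, and the parameter values are assumptions for illustration. Note that beta = 2 recovers half the squared Euclidean distance, so the Euclidean case is included as a special parameter setting.)

import numpy as np

def beta_divergence(x, y, beta=2.0):
    # Beta-divergence D_beta(x || y) for nonnegative vectors,
    # defined for beta not in {0, 1}; beta = 2 yields half the
    # squared Euclidean distance (1/2)*||x - y||^2.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.sum(x**beta + (beta - 1.0) * y**beta
                  - beta * x * y**(beta - 1.0)) / (beta * (beta - 1.0))

def winner(prototypes, sample, beta=2.0):
    # Winner-takes-all selection: the prototype with minimal
    # divergence to the sample, replacing the usual Euclidean winner.
    dists = [beta_divergence(sample, w, beta) for w in prototypes]
    return int(np.argmin(dists))

# Usage: three random prototypes, one random nonnegative sample.
rng = np.random.default_rng(0)
W = rng.random((3, 5))
x = rng.random(5)
print(winner(W, x, beta=2.0))   # Euclidean-equivalent winner
print(winner(W, x, beta=0.5))   # winner under a different divergence parameter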
Keywords :
self-organising feature maps; vector quantisation; Euclidean distance; divergence based neural map magnification behavior; information optimal vector quantization; information optimum data coding; Entropy; Prototypes; Self organizing feature maps; Vector quantization;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
The 2011 International Joint Conference on Neural Networks (IJCNN)
Conference_Location :
San Jose, CA
ISSN :
2161-4393
Print_ISBN :
978-1-4244-9635-8
Type :
conf
DOI :
10.1109/IJCNN.2011.6033254
Filename :
6033254