DocumentCode :
131940
Title :
Cross-modal associative memory by MultiSOM
Author :
Zhongwan Liu ; Xiaojie Wang
Author_Institution :
Center of Intell. Sci. & Technol. Res., Beijing Univ. of Posts & Telecommun., Beijing, China
fYear :
2014
fDate :
11-14 May 2014
Firstpage :
1
Lastpage :
5
Abstract :
This paper proposes a novel associative memory model based on the Self-Organizing Map (SOM), called MultiSOM. The model can learn associative relationships between data from different sources, typically in different modalities. However, the data and the relationships between them are not entered into the network and trained directly; instead, each modality is trained together with the same semantic data, so that all modalities come to share one topological map. Cross-modally, this paper trains the MultiSOM model to learn associative memory between images and human voices of Chinese characters, with their meanings as the semantic data, and the experimental results suggest that the MultiSOM model can learn the bidirectional associative relationship.
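The mechanism the abstract describes (per-modality training against shared semantic data, converging on one topological map) can be illustrated with a minimal sketch. The Python code below is an assumption-laden illustration, not the authors' implementation: the grid size, vector dimensions, learning rate, neighborhood width, and the names MultiSOM, train_step, and recall are all hypothetical choices made here for clarity.

import numpy as np

class MultiSOM:
    def __init__(self, grid=(10, 10), dims=(64, 32), sem_dim=16, seed=0):
        rng = np.random.default_rng(seed)
        n = grid[0] * grid[1]
        # One weight matrix per modality (e.g. image, voice) ...
        self.weights = [rng.random((n, d)) for d in dims]
        # ... plus a shared semantic codebook that ties the maps together.
        self.sem = rng.random((n, sem_dim))
        self.coords = np.array([(i, j) for i in range(grid[0])
                                       for j in range(grid[1])])

    def _bmu(self, codebook, x):
        # Best-matching unit: node whose codebook vector is closest to x.
        return np.argmin(np.linalg.norm(codebook - x, axis=1))

    def train_step(self, xs, sem_vec, lr=0.1, sigma=2.0):
        # The winner is chosen on the shared semantic vector, so both
        # modalities are pulled toward the same map location.
        win = self._bmu(self.sem, sem_vec)
        d = np.linalg.norm(self.coords - self.coords[win], axis=1)
        h = np.exp(-(d ** 2) / (2 * sigma ** 2))[:, None]
        self.sem += lr * h * (sem_vec - self.sem)
        for w, x in zip(self.weights, xs):
            w += lr * h * (x - w)

    def recall(self, src, dst, x):
        # Bidirectional association: find the winner in modality src,
        # then read out modality dst's codebook vector at that node.
        win = self._bmu(self.weights[src], x)
        return self.weights[dst][win]

In this sketch, recall(0, 1, image_vec) would return the voice codebook vector stored at the image's best-matching node, giving the bidirectional image-to-voice association the abstract reports; recall(1, 0, voice_vec) runs the other direction.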
Keywords :
content-addressable storage; image processing; natural language processing; self-organising feature maps; Chinese characters; MultiSOM; cross-modal associative memory; image processing; self-organization map; semantic data; Associative memory; Biological neural networks; Neurons; Organizations; Semantics; Training; Vectors; Associative Memory; Chinese processing; SOM; cross-modal;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Wireless Communications, Vehicular Technology, Information Theory and Aerospace & Electronic Systems (VITAE), 2014 4th International Conference on
Conference_Location :
Aalborg
Print_ISBN :
978-1-4799-4626-6
Type :
conf
DOI :
10.1109/VITAE.2014.6934441
Filename :
6934441