DocumentCode
477033
Title
A self-organizing neural model for multimedia information fusion
Author
Nguyen, Luong-Dong; Woon, Kia-Van; Tan, Ah-Hwee
Author_Institution
School of Computer Engineering, Nanyang Technological University, Singapore
fYear
2008
fDate
June 30 - July 3, 2008
Firstpage
1
Lastpage
7
Abstract
This paper presents a self-organizing neural network model for the fusion of multimedia information. By synchronizing the encoding of information across multiple media channels, the neural model, known as fusion adaptive resonance theory (fusion ART), generates clusters that encode associative mappings across multimedia information in a real-time, continuous manner. In addition, by incorporating a semantic category channel, fusion ART further enables multimedia information to be fused into predefined themes or semantic categories. We illustrate fusion ART's functionality through experiments on two multimedia data sets in the terrorist domain and show the viability of the proposed approach.
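To make the multi-channel clustering idea in the abstract concrete, the following is a minimal sketch of a fusion-ART-style learner that applies the standard fuzzy ART choice, vigilance, and template-learning rules per channel, with one channel optionally reserved for a semantic category as the abstract describes. The FusionART class, its parameter names, and the example data are illustrative assumptions, not the authors' implementation.

import numpy as np

class FusionART:
    # Illustrative multi-channel ART sketch; not the paper's code.
    def __init__(self, channel_dims, gamma, alpha, rho, beta):
        self.dims = list(channel_dims)  # raw dimension of each channel (before complement coding)
        self.gamma = gamma              # per-channel contribution weights
        self.alpha = alpha              # per-channel choice parameters
        self.rho = rho                  # per-channel vigilance thresholds
        self.beta = beta                # per-channel learning rates
        self.w = []                     # committed category templates (one list of channel vectors each)

    def _complement_code(self, xs):
        # x -> (x, 1 - x), so templates can encode both presence and absence of features
        return [np.concatenate([x, 1.0 - x]) for x in xs]

    def learn(self, xs):
        """Present one multi-channel pattern xs (list of arrays with values in [0, 1])."""
        assert [len(x) for x in xs] == self.dims
        xs = self._complement_code(xs)
        # Choice function: weighted sum of per-channel fuzzy ART choice values
        scores = []
        for wj in self.w:
            T = sum(g * np.minimum(x, w).sum() / (a + w.sum())
                    for g, a, x, w in zip(self.gamma, self.alpha, xs, wj))
            scores.append(T)
        # Search committed categories in order of decreasing choice value
        for j in np.argsort(scores)[::-1]:
            wj = self.w[j]
            # Resonance requires the match to exceed vigilance in every channel
            if all(np.minimum(x, w).sum() / x.sum() >= r
                   for x, w, r in zip(xs, wj, self.rho)):
                # Template learning: move each channel template toward x AND w
                self.w[j] = [(1 - b) * w + b * np.minimum(x, w)
                             for b, x, w in zip(self.beta, xs, wj)]
                return j
        # No committed category resonates: commit a new one encoding xs directly
        self.w.append(list(xs))
        return len(self.w) - 1

A hypothetical usage, pairing a 3-feature media channel with a 2-class semantic category channel:

net = FusionART(channel_dims=[3, 2],
                gamma=[0.5, 0.5], alpha=[0.01, 0.01],
                rho=[0.7, 0.7], beta=[1.0, 1.0])
cat = net.learn([np.array([0.9, 0.1, 0.0]),   # media-channel features
                 np.array([1.0, 0.0])])       # one-hot semantic category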
Keywords
ART neural nets; encoding; learning (artificial intelligence); multimedia computing; self-organising feature maps; sensor fusion; associative mapping encoding; fusion ART; fusion adaptive resonance theory; information encoding; machine learning; media channel; multimedia information fusion; self-organizing neural network model; semantic category channel; terrorist domain;
fLanguage
English
Publisher
ieee
Conference_Title
2008 11th International Conference on Information Fusion
Conference_Location
Cologne
Print_ISBN
978-3-8007-3092-6
Electronic_ISBN
978-3-00-024883-2
Type
conf
Filename
4632421
Link To Document