DocumentCode :
2936270
Title :
Multimodal signal fusion
Author :
Sun, Ming-Ting
Author_Institution :
Dept. of Electr. Eng., Univ. of Washington, Seattle, WA, USA
fYear :
2009
fDate :
June 28 2009-July 3 2009
Firstpage :
1556
Lastpage :
1557
Abstract :
Our daily life involves multimodal signals (e.g., visual, audio, text, and signals from various sensors). Multimodal signals are highly correlated; for example, researchers have used audio activity for video summarization of ball games and for violence detection in movies. Multimodal signals are also complementary to each other; for example, in multimodal surveillance, audio may carry important information not available in video. Usually, people don't hire a blind or deaf person for surveillance duties, since a human naturally fuses multimodal signals for the best results.
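The abstract's point that modalities are complementary is often illustrated with late fusion, i.e., combining per-modality classifier scores. The sketch below is a generic baseline for context, not the specific method of this paper; the scores, class names, and weights are hypothetical.

```python
# Minimal late-fusion sketch: weighted average of per-class scores
# from independent modality classifiers. All values are illustrative.

def late_fusion(scores_by_modality, weights):
    """Fuse per-modality class-score dicts by weighted averaging."""
    classes = next(iter(scores_by_modality.values())).keys()
    total_w = sum(weights.values())
    return {
        c: sum(weights[m] * scores_by_modality[m][c]
               for m in scores_by_modality) / total_w
        for c in classes
    }

# Hypothetical example: video alone is ambiguous, audio disambiguates,
# so the fused decision differs from the video-only one.
scores = {
    "video": {"normal": 0.55, "violence": 0.45},
    "audio": {"normal": 0.20, "violence": 0.80},
}
fused = late_fusion(scores, weights={"video": 1.0, "audio": 1.0})
decision = max(fused, key=fused.get)  # fused scores favor "violence"
```

With equal weights the fused "violence" score is (0.45 + 0.80) / 2 = 0.625, overriding the video-only preference for "normal" — the complementarity the abstract describes.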
Keywords :
hidden Markov models; learning (artificial intelligence); signal processing; activity recognition system; asynchronous HMM; audio activities; audio sensor; automatic human activity detection; automatic learning; ball games; coupled hidden Markov model; cross-modality correlation; heart-rate sensors; machine learning; motion sensors; multimedia applications; multimodal representation approach; multimodal sensors; multimodal signal fusion; multimodal signal processing; multimodal surveillance; semantic concepts; sleep monitoring; supervised learning; video summarization; violence detection; visual concepts; Circuits and systems; Hidden Markov models; Humans; Labeling; Multimodal sensors; Sensor systems; Supervised learning; Surveillance; Training data
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Multimedia and Expo, 2009. ICME 2009. IEEE International Conference on
Conference_Location :
New York, NY
ISSN :
1945-7871
Print_ISBN :
978-1-4244-4290-4
Electronic_ISBN :
1945-7871
Type :
conf
DOI :
10.1109/ICME.2009.5202805
Filename :
5202805