DocumentCode :
2553920
Title :
Analyzing Image-Text Relations for Semantic Media Adaptation and Personalization
Author :
Hughes, Mark ; Salway, Andrew ; Jones, Gareth ; O'Connor, Noel
Author_Institution :
Dublin City Univ., Dublin
fYear :
2007
fDate :
17-18 Dec. 2007
Firstpage :
181
Lastpage :
186
Abstract :
Progress in semantic media adaptation and personalisation requires that we know more about how different media types, such as texts and images, work together in multimedia communication. To this end, we present our ongoing investigation into image-text relations. Our idea is that the ways in which the meanings of images and texts relate in multimodal documents, such as web pages, can be classified on the basis of low-level media features and that this classification should be an early processing step in systems targeting semantic multimedia analysis. In this paper we present the first empirical evidence that humans can predict something about the main theme of a text from an accompanying image, and that this prediction can be emulated by a machine via analysis of low-level image features. We close by discussing how these findings could impact on applications for news adaptation and personalisation, and how they may generalise to other kinds of multimodal documents and to applications for semantic media retrieval, browsing, adaptation and creation.
Keywords :
multimedia computing; text analysis; image-text relations; multimedia communication; multimodal documents; semantic media adaptation; semantic media personalisation; semantic multimedia analysis; Data analysis; Face detection; Focusing; Humans; Image analysis; Image recognition; Multimedia communication; Multimedia systems; Text recognition; Web pages;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Second International Workshop on Semantic Media Adaptation and Personalization
Conference_Location :
Uxbridge
Print_ISBN :
0-7695-3040-0
Type :
conf
DOI :
10.1109/SMAP.2007.43
Filename :
4414407