Title :
Conspicuity-based visual scene semantic similarity computing for video
Author :
Wei, Wei ; Yan, Tian-Yun ; Zhang, Yuan-mao
Author_Institution :
Dept. of Comput. Sci. & Technol., Chengdu Univ. of Inf. Technol., Chengdu, China
Abstract :
Based on a saliency-region representation of the visual scene, this paper proposes a framework for quantifying the semantic similarity of two video scenes. A frame-segment key-frame strategy concisely represents video content in the temporal domain. A spatio-temporal conspicuity model for basic visual semantics, a neuromorphic model that simulates the human visual system, selects dynamic and static spatial salient areas. Pattern-classification techniques then recognize the basic visual semantics, and the similarity of two visual scenes is calculated according to information-theoretic similarity principles and Tversky's set-theoretic similarity. Experimental results demonstrate that the framework can compute a quantitative semantic similarity between two video scenes.
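The final step of the abstract's pipeline, scoring two scenes with Tversky's set-theoretic similarity, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the concept labels and the alpha/beta weights are assumed for the example, and each scene is modeled simply as the set of basic visual semantics recognized in its key frames.

```python
def tversky_similarity(scene_a, scene_b, alpha=0.5, beta=0.5):
    """Tversky's ratio model:
    S(A, B) = |A ∩ B| / (|A ∩ B| + alpha*|A - B| + beta*|B - A|).
    scene_a, scene_b: iterables of recognized concept labels (one per scene).
    alpha, beta: weights on the distinctive features of A and B (assumed values).
    """
    a, b = set(scene_a), set(scene_b)
    common = len(a & b)          # features shared by both scenes
    only_a = len(a - b)          # features distinctive to scene A
    only_b = len(b - a)          # features distinctive to scene B
    denom = common + alpha * only_a + beta * only_b
    return common / denom if denom else 0.0

# Hypothetical concept sets extracted from the key frames of two scenes.
scene1 = {"sky", "road", "car", "pedestrian"}
scene2 = {"sky", "road", "building"}
score = tversky_similarity(scene1, scene2)
```

With alpha == beta the measure is symmetric; unequal weights recover Tversky's asymmetric matching, where one scene serves as the referent.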
Keywords :
image classification; image representation; image segmentation; information theory; set theory; video signal processing; conspicuity-based visual scene semantic similarity computing; dynamic spatial salient area; frame-segment key-frame strategy; human visual system; information theoretic similarity principle; neuromorphic model; pattern classification; saliency region representation; set-theoretic similarity; spatio-temporal conspicuity model; static spatial salient area; temporal domain; video content representation; video scene; Computational modeling; Cybernetics; Machine learning; Pixel; Semantics; Streaming media; Visualization; Conspicuity region; Semantic Gap; Semantic Similarity; Video Semantic Analysis; Video Semantic Concept;
Conference_Title :
2010 International Conference on Machine Learning and Cybernetics (ICMLC)
Conference_Location :
Qingdao
Print_ISBN :
978-1-4244-6526-2
DOI :
10.1109/ICMLC.2010.5580490