Title :
An Interactive Video Annotation Framework with Multiple Modalities
Author :
Meng Wang ; Xian-Sheng Hua ; Yan Song ; Li-Rong Dai ; Ren-Hua Wang
Author_Institution :
Univ. of Sci. & Technol. of China, Hefei, China
Abstract :
Active learning and semi-supervised learning are frequently applied to multimedia annotation tasks to reduce human labeling effort. However, most of these methods exploit only a single modality. This paper presents an interactive video annotation framework based on semi-supervised learning and active learning with multiple modalities. In the proposed framework, unlabeled samples are iteratively selected for manual annotation according to a strategy that takes the potential of each modality into account, and a graph-based semi-supervised learning algorithm is then conducted on each modality. This process is repeated for several rounds, and the results obtained from the multiple modalities are fused to generate the final output. The proposed framework is computationally efficient, and experimental results on the TRECVID 2005 benchmark show that it considerably outperforms previous approaches.
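The loop described in the abstract (per-modality graph-based label propagation, confidence-driven selection of the next sample to annotate, and late fusion of modality scores) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's exact algorithm: the RBF affinity construction, the Zhou-style propagation iteration, and the "least-confident fused score" selection criterion are all assumptions filled in for the sketch.

```python
import numpy as np

def rbf_affinity(feats, gamma=10.0):
    """RBF affinity matrix for one modality (zero diagonal)."""
    d2 = (feats[:, None] - feats[None, :]) ** 2
    W = np.exp(-gamma * d2)
    np.fill_diagonal(W, 0.0)
    return W

def propagate(W, y_labeled, labeled_idx, alpha=0.99, iters=50):
    """Graph-based semi-supervised label propagation on one modality:
    iterate f <- alpha*S*f + (1-alpha)*y with S the symmetrically
    normalized affinity (an assumed instantiation of the graph-based
    SSL step; the paper's own formulation may differ)."""
    n = W.shape[0]
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))      # D^{-1/2} W D^{-1/2}
    y = np.zeros(n)
    y[labeled_idx] = y_labeled           # +1 / -1 seed labels
    f = np.zeros(n)
    for _ in range(iters):
        f = alpha * S @ f + (1 - alpha) * y
    return f                             # soft relevance scores

def select_sample(scores_per_modality, labeled_idx):
    """Pick the unlabeled sample whose fused score is least confident
    (a hypothetical selection strategy standing in for the paper's)."""
    fused = np.mean(scores_per_modality, axis=0)
    conf = np.abs(fused)
    conf[list(labeled_idx)] = np.inf     # never re-select labeled samples
    return int(np.argmin(conf))

# Toy round: two 1-D "modalities", two clusters, seeds at samples 0 and 5.
feats1 = np.array([0.0, 0.1, 0.2, 1.0, 1.1, 1.2])
feats2 = np.array([0.0, 0.05, 0.15, 1.0, 1.05, 1.15])
labeled, y = [0, 5], np.array([1.0, -1.0])
scores = [propagate(rbf_affinity(f), y, labeled) for f in (feats1, feats2)]
fused = np.mean(scores, axis=0)          # late fusion across modalities
next_query = select_sample(scores, set(labeled))  # sample to annotate next
```

In a full interactive round, `next_query` would be shown to the annotator, its label added to the seed set, and propagation re-run; after several such rounds the fused scores form the final output.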
Keywords :
graph theory; learning (artificial intelligence); video signal processing; TRECVID 2005 benchmark; active learning; interactive video annotation framework; multiple modalities; semisupervised learning methods; Content based retrieval; Degradation; Humans; Iterative algorithms; Labeling; Learning systems; Semisupervised learning; Training data; Video sequences; Video annotation; active learning; multimodality;
Conference_Titel :
2007 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2007)
Print_ISBN :
1-4244-0727-3
DOI :
10.1109/ICASSP.2007.366068