DocumentCode :
2388603
Title :
Generating text description from content-based annotated image
Author :
Zhu, Yan ; Xiang, Hui ; Feng, Wenjuan
Author_Institution :
Sch. of Comput. Sci. & Technol., Shandong Univ., Jinan, China
fYear :
2012
fDate :
19-20 May 2012
Firstpage :
805
Lastpage :
809
Abstract :
This paper proposes a statistical generative model that generates sentences from an annotated picture. Images are segmented into regions using a graph-based algorithm, and features are computed over each region. Given a training set of annotated images, we parse each image to obtain position information. We use an SVM to estimate the probabilities of combinations of labels and prepositions, producing a data-to-text set. A standard semantic representation expresses the image content, and a sentence is finally generated from the XML report. Focusing on landscape pictures, we ran experiments on a dataset that we collected and annotated ourselves, and obtained good results.
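The pipeline summarized in the abstract ends with turning (label, preposition, label) relations into sentences. A minimal illustrative sketch of that final template-based step (the function name, templates, and example triples are hypothetical, not taken from the paper):

```python
# Hypothetical sketch: render (subject_label, preposition, object_label)
# triples, as might be produced by the SVM step described in the abstract,
# into a simple English description of the image.

def describe(triples):
    """triples: list of (subject_label, preposition, object_label) tuples."""
    clauses = [f"a {s} {p} a {o}" for s, p, o in triples]
    return "The image shows " + ", and ".join(clauses) + "."

print(describe([("tree", "beside", "lake"), ("mountain", "behind", "lake")]))
# -> "The image shows a tree beside a lake, and a mountain behind a lake."
```

A real system would draw the prepositions from the SVM's probability estimates and realize the semantic representation more carefully; this only illustrates the surface-generation idea.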
Keywords :
XML; content-based retrieval; graph theory; image retrieval; image segmentation; probability; support vector machines; text analysis; SVM; XML report; annotated picture; content-based annotated image; graph-based algorithms; image message; image segmentation; labels; landscape pictures; position information; prepositions; probabilities; sentence generation; standard semantic representation; statistical generative model; text description generation; Accuracy; Educational institutions; Image segmentation; Probability; Semantics; Training; XML; cross-media retrieval; image annotation; machine learning; text generation;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Systems and Informatics (ICSAI), 2012 International Conference on
Conference_Location :
Yantai
Print_ISBN :
978-1-4673-0198-5
Type :
conf
DOI :
10.1109/ICSAI.2012.6223132
Filename :
6223132