DocumentCode :
319919
Title :
Model-based synthetic view generation from a monocular video sequence
Author :
Tsai, Chun-Jen ; Eisert, Peter ; Girod, Bernd ; Katsaggelos, Aggelos K.
Author_Institution :
Dept. of Electr. & Comput. Eng., Northwestern Univ., Evanston, IL, USA
Volume :
1
fYear :
1997
fDate :
26-29 Oct 1997
Firstpage :
444
Abstract :
In this paper, a model-based multi-view image generation system for video conferencing is presented. The system assumes that a 3-D model of the person in front of the camera is available. It extracts texture from images of the speaking person and maps it onto the static 3-D model during the videoconferencing session. Since only incrementally updated texture information is transmitted throughout the session, the bandwidth requirement is very small. Based on the experimental results, one can conclude that the proposed system is very promising for practical applications.
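The core bandwidth saving described above comes from transmitting only the texture regions that changed between frames, rather than whole images. A minimal sketch of that idea follows; the block grid, threshold, and function names are illustrative assumptions, not the authors' actual coding scheme.

```python
import numpy as np

def incremental_texture_update(prev_tex, new_tex, block=16, threshold=5.0):
    """Sender side: return (row, col, block_data) triples for texture blocks
    whose mean absolute difference from the previous texture exceeds
    `threshold`. Block size and threshold are assumed parameters for
    illustration only."""
    updates = []
    h, w = prev_tex.shape[:2]
    for r in range(0, h, block):
        for c in range(0, w, block):
            old = prev_tex[r:r + block, c:c + block].astype(float)
            new = new_tex[r:r + block, c:c + block].astype(float)
            if np.mean(np.abs(new - old)) > threshold:
                updates.append((r, c, new_tex[r:r + block, c:c + block].copy()))
    return updates

def apply_updates(tex, updates):
    """Receiver side: patch the stored texture map with transmitted blocks."""
    out = tex.copy()
    for r, c, data in updates:
        out[r:r + data.shape[0], c:c + data.shape[1]] = data
    return out
```

When only a small part of the texture changes between frames, the update list contains just a few blocks, so far less data is sent than a full image per frame; the receiver keeps a persistent texture map and re-renders the static 3-D model with it.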
Keywords :
feature extraction; image sequences; image texture; teleconferencing; video signal processing; 3-D model; bandwidth requirement; model-based multi-view image generation system; model-based synthetic view generation; monocular video sequence; speaking person sequence images; static 3-D model; texture; videoconferencing; Cameras; Communication channels; Displays; Image generation; Image sequences; Layout; Production systems; Video coding; Video sequences; Videoconference;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Proceedings of the International Conference on Image Processing, 1997
Conference_Location :
Santa Barbara, CA
Print_ISBN :
0-8186-8183-7
Type :
conf
DOI :
10.1109/ICIP.1997.647802
Filename :
647802