DocumentCode
2180749
Title
VideoPlus
Author
Taylor, Camillo J.
Author_Institution
Dept. of Computer & Information Science, University of Pennsylvania, Philadelphia, PA, USA
fYear
2000
fDate
2000
Firstpage
3
Lastpage
10
Abstract
This paper describes an approach to capturing the appearance and structure of immersive environments based on video imagery obtained with an omnidirectional camera system. The scheme proceeds by recovering the 3D positions of a set of point and line features in the world from image correspondences in a small set of key frames in the image sequence. Once the locations of these features have been recovered, the position of the camera during every frame in the sequence can be determined by using the recovered features as fiducials and estimating the camera pose from the locations of the corresponding image features in each frame. The end result of the procedure is an omnidirectional video sequence in which every frame is augmented with its pose with respect to an absolute reference frame, together with a 3D model of the environment composed of point and line features in the scene. By augmenting the video clip with pose information, we provide the viewer with the ability to navigate the image sequence in new and interesting ways. More specifically, the user can exploit the pose information to travel through the video sequence along a trajectory different from the one taken by the original camera operator. This freedom gives the end user an opportunity to immerse themselves in a remote environment and to control what they see.
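As an illustration of the per-frame pose estimation step summarized above, the sketch below recovers a camera pose by minimizing the reprojection error between recovered 3D fiducial points and their observed viewing directions, assuming a central omnidirectional camera whose image features have already been converted to unit bearing vectors. The function names, the rotation-vector parameterization, and the use of SciPy's least_squares solver are illustrative assumptions, not details taken from the paper.

    # Illustrative sketch only: per-frame pose estimation against recovered fiducials.
    # Assumes image features are given as unit bearing vectors in the camera frame.
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def pose_residuals(params, points_3d, bearings):
        """Residuals between predicted and observed bearing vectors for one frame.

        params: 6-vector (rotation vector, translation) describing the camera pose.
        points_3d: (N, 3) recovered fiducial points in the world frame.
        bearings: (N, 3) observed unit viewing directions in the camera frame.
        """
        rotvec, t = params[:3], params[3:]
        R = Rotation.from_rotvec(rotvec).as_matrix()
        p_cam = points_3d @ R.T + t                      # world points in the camera frame
        predicted = p_cam / np.linalg.norm(p_cam, axis=1, keepdims=True)
        return (predicted - bearings).ravel()

    def estimate_frame_pose(points_3d, bearings, initial_pose=None):
        """Estimate the camera pose for a single frame by nonlinear least squares."""
        x0 = np.zeros(6) if initial_pose is None else initial_pose
        result = least_squares(pose_residuals, x0, args=(points_3d, bearings))
        R = Rotation.from_rotvec(result.x[:3]).as_matrix()
        t = result.x[3:]
        return R, t

    # Usage (hypothetical variable names): running the estimator independently for
    # each frame against the same fiducials yields the pose-annotated sequence.
    # R, t = estimate_frame_pose(fiducials_xyz, frame_bearings)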
Keywords
image reconstruction; image sequences; video signal processing; virtual reality; VideoPlus; image correspondences; image sequence; immersive environments; omnidirectional camera; recovered features; video imagery
fLanguage
English
Publisher
ieee
Conference_Titel
Proceedings of the IEEE Workshop on Omnidirectional Vision, 2000
Conference_Location
Hilton Head Island, SC
Print_ISBN
0-7695-0704-2
Type
conf
DOI
10.1109/OMNVIS.2000.853795
Filename
853795