DocumentCode :
2261386
Title :
A direct visual-inertial sensor fusion approach in multi-state constraint Kalman filter
Author :
Gui, Jianjun ; Gu, Dongbing
Author_Institution :
School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester CO4 3SQ, U.K.
fYear :
2015
fDate :
28-30 July 2015
Firstpage :
6105
Lastpage :
6110
Abstract :
Pose estimation using only a monocular camera and an inertial sensor has gained increasing popularity in recent years. In this paper, we propose a method that tightly combines direct image information (intensity and gradient) from a monocular camera with inertial information from a three-axis gyroscope and accelerometer in a multi-state constraint Kalman filter (MSCKF) based framework to perform effective pose estimation. In contrast to other vision-based pose estimation methods, our solution avoids traditional feature extraction and description, instead using image patches with distinct gradients to represent visual measurements of the environment. We use sequential inertial information and the poses between two consecutive keyframes to construct the state vector, imposing constraints on the poses and marginalising out expired ones, which keeps the computational complexity linear in the number of selected patches. Furthermore, we treat the inertial sensor data as intrinsic information used in filter propagation, providing high-rate estimation of the state. Our method has been tested on real flight data from a micro aerial vehicle in indoor and outdoor environments.
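The abstract describes an MSCKF-style pipeline: IMU data drive high-rate filter propagation, and camera poses are cloned into a sliding-window state vector whose expired entries are marginalised out. The sketch below illustrates these two steps only in simplified form; it is not the authors' implementation, and the Euler integration, bias-free IMU model, fixed window size, and all names are assumptions made for illustration.

```python
# Minimal sketch of MSCKF-style IMU propagation and pose cloning.
# Illustrative assumptions: Hamilton quaternions [w, x, y, z], no IMU biases,
# simple Euler integration, and a fixed-size sliding window of pose clones.
import numpy as np

def quat_mult(q, r):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0*w1 - x0*x1 - y0*y1 - z0*z1,
        w0*x1 + x0*w1 + y0*z1 - z0*y1,
        w0*y1 - x0*z1 + y0*w1 + z0*x1,
        w0*z1 + x0*y1 - y0*x1 + z0*w1,
    ])

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion [w, x, y, z]."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def propagate_imu(q, v, p, gyro, accel, dt, g=np.array([0.0, 0.0, -9.81])):
    """One strapdown integration step: gyro updates attitude, accelerometer
    (rotated to the world frame, plus gravity) updates velocity and position."""
    dq = np.concatenate(([1.0], 0.5 * gyro * dt))   # small-angle rotation
    q = quat_mult(q, dq)
    q /= np.linalg.norm(q)
    a_world = quat_to_rot(q) @ accel + g
    p = p + v * dt + 0.5 * a_world * dt * dt
    v = v + a_world * dt
    return q, v, p

def clone_pose(window, q, p, max_clones=10):
    """Augment the sliding window with the current pose; once the window is
    full, the oldest (expired) clone is dropped, mimicking marginalisation."""
    window.append((q.copy(), p.copy()))
    if len(window) > max_clones:
        window.pop(0)
    return window
```

In a full MSCKF the filter also propagates a covariance matrix and applies measurement updates from the image patches; the snippet above only shows the state-mean propagation and window management that the abstract refers to.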
Keywords :
Cameras; Covariance matrices; Estimation; Quaternions; Robot sensing systems; Trajectory; Visualization; Multi-Sensor Fusion; Pose Estimation; Visual-Inertial Odometry;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Control Conference (CCC), 2015 34th Chinese
Conference_Location :
Hangzhou, China
Type :
conf
DOI :
10.1109/ChiCC.2015.7260595
Filename :
7260595