DocumentCode :
174928
Title :
Vision-based context and height estimation for 3D indoor location
Author :
Kazemipur, Bashir ; Syed, Zahid ; Georgy, Jacques ; El-Sheimy, N.
Author_Institution :
Trusted Positioning Inc., Calgary, AB, Canada
fYear :
2014
fDate :
5-8 May 2014
Firstpage :
1336
Lastpage :
1342
Abstract :
Today's smartphones are powerful devices whose continually increasing processing power and wide array of sensors make them well suited for use as personal navigation devices. In the absence of information from the Global Navigation Satellite System (GNSS), the onboard inertial sensors can be used to provide a relative navigation solution. However, these onboard inertial sensors suffer from the effects of different sensor errors, which cause the inertial-only solution to deteriorate rapidly. As such, there is a need to constrain the inertial positioning solution when long-term navigation is needed. GNSS positions and velocities, and WiFi positions, are the most important forms of updates available for the inertial solution. However, updates from these two sources depend on external signals and may not always be available. A rich source of information about the outside world can be obtained using the device's camera. Nearly all devices have at least one camera, which has thus far been largely neglected as a navigation aid for these mobile devices. There are many indoor scenarios that require accurate height estimates. Traditionally, barometers have been used to provide height information. However, not all mobile devices that are equipped with inertial sensors are also equipped with a barometer. As nearly all devices are equipped with at least one camera, it is our aim to use information from the camera to aid the inertial-only solution with appropriate height estimates. Different pattern analysis techniques are used to identify the different scenarios. The results are presented for the following common use cases: (1) single floor texting mode, (2) stairs texting mode, (3) single floor calling mode, (4) stairs calling mode, and (5) fidgeting the phone while standing still on a single floor (i.e. "fidgeting"). For each of these use cases, first the context will be determined and then the relevant information will be used to calculate the height accordingly.
This work is patent pending.
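The abstract does not disclose the authors' actual pattern-analysis or height-computation methods (the work is patent pending), but the pipeline it outlines can be sketched: classify the motion context from camera-derived vertical image motion, then apply a height update only in the stairs context. The following is a minimal illustrative sketch; all thresholds, the assumed riser height, and the `classify_context` / `estimate_height_change` helpers are hypothetical and not from the paper.

```python
# Hypothetical sketch of "context first, then height" from the abstract.
# Thresholds and the fixed riser height below are illustrative assumptions.
from statistics import mean, stdev

STAIR_FLOW_THRESHOLD = 0.5   # hypothetical mean vertical flow (px/frame)
FIDGET_STD_THRESHOLD = 2.0   # hypothetical flow-variability threshold
STEP_HEIGHT_M = 0.17         # assumed riser height per detected step (m)

def classify_context(vertical_flow):
    """Classify motion context from per-frame vertical image-motion values."""
    m, s = mean(vertical_flow), stdev(vertical_flow)
    if s > FIDGET_STD_THRESHOLD:
        return "fidgeting"          # erratic motion while standing still
    if abs(m) > STAIR_FLOW_THRESHOLD:
        return "stairs"             # sustained vertical image motion
    return "single_floor"

def estimate_height_change(vertical_flow, steps_detected):
    """Return (context, height update in metres); nonzero only on stairs."""
    context = classify_context(vertical_flow)
    if context != "stairs":
        return context, 0.0
    direction = 1.0 if mean(vertical_flow) > 0 else -1.0
    return context, direction * steps_detected * STEP_HEIGHT_M

# Example: steady vertical image flow while 12 steps are counted
ctx, dh = estimate_height_change([0.9, 1.1, 1.0, 0.8, 1.2], steps_detected=12)
# ctx == "stairs", dh == 2.04
```

In a real system the per-frame vertical motion would come from optical flow on the camera images, and the resulting height update would be fused with the inertial solution rather than used standalone.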
Keywords :
barometers; cameras; computer vision; indoor radio; inertial navigation; satellite navigation; telecommunication computing; 3D indoor location; GNSS positions; GNSS velocity; WiFi positions; barometer; device camera; global navigation satellite system; height estimation; height information; inertial positioning solution; mobile devices; onboard inertial sensors; pattern analysis techniques; personal navigation devices; sensor array; sensor errors; single floor calling mode; single floor texting mode; smartphones; stairs calling mode; stairs texting mode; vision-based context estimation; Cameras; Computer vision; Context; Floors; Image motion analysis; Optical imaging; Sensors; computer vision; indoor navigation; sensor fusion;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Position, Location and Navigation Symposium - PLANS 2014, 2014 IEEE/ION
Conference_Location :
Monterey, CA
Print_ISBN :
978-1-4799-3319-8
Type :
conf
DOI :
10.1109/PLANS.2014.6851508
Filename :
6851508