Authors:
Shogo Hanada; Kanji Tanaka; Yuuto Chokushi
Abstract:
Map matching is a critical problem in robotic mapping and localization that has attracted broad interest in the robot vision community. Despite its accuracy and efficiency, the popular RANSAC-based algorithm suffers from large memory requirements, which grow in proportion to the number and size of the maps. In this paper, our goal is to realize fast, succinct map matching by introducing a part-based scene modeling approach. Intuitively, a part-based model is a powerful discriminative model, and it can also be compact when a scene is explained by fewer, larger parts. The main difficulty is that part-based modeling of environment maps is a novel task for which no accepted definition exists in the literature. To overcome this limitation, we exploit the fact that typical environments (e.g., indoor, street, forest, and suburban scenes) contain highly repetitive patterns, and we take a repetitiveness-based scene compression approach. Our method consists of two key steps: (1) we mine common parts that well explain an input map from a known reference map, which we call the dictionary map, via common pattern discovery (CPD) between the input and reference maps; (2) we efficiently match the part-based maps, each part of which can be compactly encoded as a pair of bounding boxes (BBs), a keypoint BB and a descriptor BB. Our CPD-based approach is unsupervised and does not require pre-trained part detectors, which enables a robot to learn a compact map model without human intervention. Evaluations via challenging map matching experiments on the publicly available radish dataset show that the proposed approach achieves successful map matching with significant speedup and a tens-of-times more compact description of the map data.
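To make the bounding-box encoding mentioned in step (2) concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes a mined part is given as a set of 2D keypoints with associated descriptors, and summarizes it as a keypoint BB over map coordinates and a descriptor BB over descriptor space. All names and data layouts here are hypothetical.

```python
# Hypothetical sketch of a part-based map entry encoded as two bounding boxes,
# loosely following the abstract's description (keypoint BB + descriptor BB).
from dataclasses import dataclass
from typing import Tuple

import numpy as np


@dataclass
class PartEncoding:
    """One mined part, compactly summarized by two axis-aligned bounding boxes."""
    keypoint_bb: Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) in map coordinates
    descriptor_bb: np.ndarray                       # shape (2, D): per-dimension min/max over member descriptors


def encode_part(keypoints: np.ndarray, descriptors: np.ndarray) -> PartEncoding:
    """Summarize keypoints of shape (N, 2) and descriptors of shape (N, D) as two BBs."""
    x_min, y_min = keypoints.min(axis=0)
    x_max, y_max = keypoints.max(axis=0)
    desc_bb = np.stack([descriptors.min(axis=0), descriptors.max(axis=0)])
    return PartEncoding((float(x_min), float(y_min), float(x_max), float(y_max)), desc_bb)


# Usage: compress 100 synthetic keypoint/descriptor pairs into one compact part entry.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    part = encode_part(rng.uniform(0, 10, (100, 2)), rng.uniform(-1, 1, (100, 64)))
    print(part.keypoint_bb, part.descriptor_bb.shape)
```

Under this kind of encoding, a part covering many keypoints is stored at a fixed, small cost regardless of how many features it explains, which is the intuition behind the tens-of-times compression reported above.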