Title :
Online self-supervised segmentation of dynamic objects
Author :
Guizilini, Vitor; Ramos, Fabio
Author_Institution :
Australian Centre for Field Robotics, University of Sydney, Sydney, NSW, Australia
Abstract :
We address the problem of automatically segmenting dynamic objects in an urban environment from a moving camera, without manual labelling, in an online, self-supervised manner. The input images are obtained from a single uncalibrated camera mounted on top of a moving vehicle, from which we extract and match pairs of sparse features that represent the optical flow between frames. This optical flow information is initially divided into two classes, static and dynamic: the static class contains features that comply with the constraints imposed by the camera motion, and the dynamic class contains those that do not. This initial classification is used to incrementally train a Gaussian Process (GP) classifier that segments dynamic objects in new images. The hyperparameters of the GP covariance function are optimized online during navigation, and the self-supervised dataset is updated as new relevant data are added and redundant data are removed, resulting in near-constant computation time even after long periods of navigation. The output is a vector containing, for each pixel in the image, the probability (ranging from 0 to 1) that it belongs to the static or the dynamic class, along with the corresponding uncertainty estimate of the classification. Experiments conducted in an urban environment, with cars and pedestrians as dynamic objects and no prior knowledge or additional sensors, show promising results even when the vehicle moves at considerable speeds (up to 50 km/h), a scenario that produces large numbers of featureless regions and false matches and is very challenging for conventional approaches. Results obtained with a portable camera device further testify to the algorithm's ability to generalize over different environments and configurations without any fine-tuning of parameters.
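The core loop the abstract describes — self-labelled optical-flow features incrementally feeding a GP classifier that outputs a per-point probability plus an uncertainty estimate — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class name `GPSegmenter`, the squared-exponential kernel with fixed hyperparameters, and the use of GP regression on ±1 labels squashed through a sigmoid (a common stand-in for full GP classification) are all assumptions; the paper's self-labelling step (testing features against camera-motion constraints) is assumed to have already produced the static/dynamic labels.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, signal_var=1.0):
    """Squared-exponential covariance between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * d2 / length_scale ** 2)

class GPSegmenter:
    """Toy GP segmenter (hypothetical): regression on -1 (static) /
    +1 (dynamic) labels, with the predictive mean squashed to [0, 1]."""

    def __init__(self, noise=1e-2, length_scale=1.0):
        self.noise = noise
        self.length_scale = length_scale
        self.X = np.empty((0, 2))   # feature descriptors (2-D here for brevity)
        self.y = np.empty(0)        # self-supervised labels in {-1, +1}

    def add_data(self, X_new, y_new):
        # Incremental update: append newly self-labelled flow features.
        # (The paper also prunes redundant data; omitted here.)
        self.X = np.vstack([self.X, X_new])
        self.y = np.concatenate([self.y, y_new])

    def predict(self, X_star):
        """Return (probability of 'dynamic', predictive variance)."""
        K = rbf_kernel(self.X, self.X, self.length_scale)
        K += self.noise * np.eye(len(self.X))
        Ks = rbf_kernel(X_star, self.X, self.length_scale)
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, self.y))
        mean = Ks @ alpha
        v = np.linalg.solve(L, Ks.T)
        var = rbf_kernel(X_star, X_star, self.length_scale).diagonal() \
              - (v ** 2).sum(axis=0)
        prob_dynamic = 1.0 / (1.0 + np.exp(-mean))  # squash mean to a probability
        return prob_dynamic, var

# Usage: two synthetic feature clusters standing in for motion-consistent
# (static) and motion-violating (dynamic) flow features.
rng = np.random.default_rng(0)
gp = GPSegmenter()
gp.add_data(rng.normal(0.0, 0.3, (20, 2)), -np.ones(20))  # static
gp.add_data(rng.normal(5.0, 0.3, (20, 2)), np.ones(20))   # dynamic
probs, variances = gp.predict(np.array([[0.0, 0.0], [5.0, 5.0]]))
```

The predictive variance is what gives the per-pixel uncertainty estimate mentioned in the abstract: far from the training data, `var` grows toward the prior signal variance, flagging low-confidence classifications. Online hyperparameter optimization and dataset pruning, also described above, are omitted for brevity.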
Keywords :
Gaussian processes; feature extraction; image classification; image matching; image segmentation; image sensors; image sequences; learning (artificial intelligence); mobile robots; path planning; robot vision; Gaussian process classifier; autonomous robot; moving camera; near-constant computing time; online self-supervised dynamic object segmentation; optical flow information; self-supervised learning manner; single uncalibrated camera; sparse feature pair extraction; sparse feature pair matching; Cameras; Covariance matrices; Data models; Feature extraction; Heuristic algorithms; Optical imaging; Vehicle dynamics;
Conference_Title :
2013 IEEE International Conference on Robotics and Automation (ICRA)
Conference_Location :
Karlsruhe, Germany
Print_ISBN :
978-1-4673-5641-1
DOI :
10.1109/ICRA.2013.6631249