Abstract:
In this paper, we investigate what can be inferred from several silhouette probability maps, in multiview silhouette cue fusion. To this aim, we propose a new framework for multiview silhouette cue fusion. This framework uses a space occupancy grid as a probabilistic 3D representation of scene contents. Such a representation is of great interest for various computer vision applications, in perception or localization for instance. Our main contribution is to introduce the occupancy grid concept, popular in robotics, to multicamera environments. The idea is to consider each camera pixel as a statistical occupancy sensor. All pixel observations are then used jointly to infer where, and how likely, matter is present in the scene. As our results illustrate, this simple model has several advantages. Most sources of uncertainty are explicitly modeled, and no premature decisions about pixel labeling occur, thus preserving pixel knowledge. Consequently, optimal scene object localization and robust volume reconstruction can be achieved, with no constraint on camera placement and object visibility. In addition, this representation makes it possible to improve silhouette extraction in images.
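To make the idea concrete, here is a minimal sketch of multiview silhouette cue fusion into an occupancy grid. It is a hypothetical illustration, not the paper's exact sensor model: it assumes pinhole projection matrices, treats each pixel's silhouette probability as a noisy occupancy measurement with assumed detection and false-alarm rates (`p_detect`, `p_false`), and sums per-view log-odds as in classical robotics occupancy grids.

```python
import numpy as np

def fuse_silhouette_cues(voxel_centers, proj_matrices, prob_maps,
                         p_detect=0.9, p_false=0.1):
    """Fuse per-view silhouette probability maps into voxel occupancy.

    Hypothetical sketch: each pixel acts as a statistical occupancy
    sensor; per-view log-odds are accumulated per voxel, and the
    posterior occupancy probability is returned in [0, 1].
    """
    n_voxels = voxel_centers.shape[0]
    log_odds = np.zeros(n_voxels)
    homog = np.hstack([voxel_centers, np.ones((n_voxels, 1))])  # homogeneous coords

    for P, probs in zip(proj_matrices, prob_maps):
        h, w = probs.shape
        pix = homog @ P.T                               # project voxels: (n, 3)
        u = (pix[:, 0] / pix[:, 2]).round().astype(int)  # pixel column
        v = (pix[:, 1] / pix[:, 2]).round().astype(int)  # pixel row
        visible = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (pix[:, 2] > 0)
        # silhouette probability observed at each visible voxel's pixel;
        # 0.5 (uninformative) where the voxel falls outside the image
        s = np.where(visible,
                     probs[np.clip(v, 0, h - 1), np.clip(u, 0, w - 1)],
                     0.5)
        # likelihood of the observation given an occupied vs. empty voxel
        p_occ = s * p_detect + (1 - s) * (1 - p_detect)
        p_emp = s * p_false + (1 - s) * (1 - p_false)
        log_odds += np.log(p_occ / p_emp)

    return 1.0 / (1.0 + np.exp(-log_odds))  # posterior occupancy
```

Because no per-view hard silhouette decision is taken, a voxel seen inside a confident silhouette in most views but occluded or ambiguous in others still accumulates consistent evidence, which is the key benefit of deferring pixel labeling.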
Keywords:
computer vision; feature extraction; image reconstruction; image representation; image silhouette extraction; multiview silhouette cues; optimal scene object localization; probabilistic 3D scene representation; silhouette probability maps; space occupancy grid; statistical occupancy sensor; volume reconstruction; application software; cameras; labeling; layout; robot sensing systems; robot vision systems; robustness; uncertainty