Title :
Joint learning of visual attributes, object classes and visual saliency
Author :
Wang, Gang ; Forsyth, David
Author_Institution :
Dept. of Electr. & Comput. Eng., Univ. of Illinois at Urbana-Champaign, Urbana, IL, USA
Date :
Sept. 29 - Oct. 2, 2009
Abstract :
We present a method to learn visual attributes (e.g., "red", "metal", "spotted") and object classes (e.g., "car", "dress", "umbrella") together. We assume images are labeled with the category, but not the location, of an instance. We estimate models with an iterative procedure: the current model is used to produce a saliency score, which, together with a homogeneity cue, identifies likely locations for the object (resp. attribute); those locations are then used to produce better models with multiple instance learning. Crucially, the object and attribute models must agree on the potential locations of an object, which means the more accurate of the two models can guide the improvement of the less accurate one. We evaluate our method on two data sets of images of real scenes, one in which the attribute is color and the other in which it is material. We show that joint learning produces improved detectors, and we demonstrate generalization by detecting attribute-object pairs that do not appear in our training data. The iteration yields a significant improvement in performance.
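To make the iterative procedure concrete, the following is a minimal sketch of the alternating scheme the abstract describes. All names and details here are assumptions, not the paper's actual implementation: candidate windows are given as precomputed feature vectors, both the object and attribute models are linear scorers, their summed response stands in for the saliency score (the homogeneity cue is omitted), and the multiple-instance update is a simple perceptron step on the jointly selected window.

```python
import numpy as np

def joint_learning(images, labels, n_iters=5, lr=0.1):
    """Hypothetical sketch of iterative joint object/attribute learning.

    images : list of (n_windows, d) arrays, candidate window features
             per image (assumed precomputed).
    labels : +1/-1 per image for the (object, attribute) pair; only the
             image-level label is known, not the instance location.
    """
    d = images[0].shape[1]
    w_obj = np.zeros(d)  # linear object model (assumption)
    w_att = np.zeros(d)  # linear attribute model (assumption)
    for _ in range(n_iters):
        for x, y in zip(images, labels):
            # The two models must agree on the location: score every
            # window with the sum of both models and keep the best one.
            scores = x @ w_obj + x @ w_att
            best = x[np.argmax(scores)]
            # Perceptron-style multiple-instance update on that window;
            # each model is corrected only when it misclassifies it.
            if y * (best @ w_obj) <= 0:
                w_obj += lr * y * best
            if y * (best @ w_att) <= 0:
                w_att += lr * y * best
    return w_obj, w_att
```

Because the window is selected by the joint score, a confident model pulls the selection toward good locations and thereby supplies cleaner training windows to the weaker model, mirroring the mutual-guidance idea in the abstract.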
Keywords :
image processing; iterative methods; learning (artificial intelligence); object detection; attribute object pairs detection; homogeneity cue; iterative procedure; multiple instance learning; object class; saliency score; visual attributes joint learning; visual saliency; Computer science; Computer vision; Detectors; Layout; Learning systems; Object detection; Training data
Conference_Title :
2009 IEEE 12th International Conference on Computer Vision
Conference_Location :
Kyoto
Print_ISBN :
978-1-4244-4420-5
Electronic_ISSN :
1550-5499
DOI :
10.1109/ICCV.2009.5459194