Title :
Predicting Sufficient Annotation Strength for Interactive Foreground Segmentation
Author :
Jain, Suyog Dutt ; Grauman, Kristen
Author_Institution :
Univ. of Texas at Austin, Austin, TX, USA
Abstract :
The mode of manual annotation used in an interactive segmentation algorithm affects both its accuracy and ease-of-use. For example, bounding boxes are fast to supply yet may be too coarse to get good results on difficult images; freehand outlines are slower to supply and more specific, yet may be overkill for simple images. Whereas existing methods assume a fixed form of input no matter the image, we propose to predict the tradeoff between accuracy and effort. Our approach learns whether a graph cuts segmentation will succeed if initialized with a given annotation mode, based on the image's visual separability and foreground uncertainty. Using these predictions, we optimize the mode of input requested on new images a user wants segmented. Whether given a single image that should be segmented as quickly as possible, or a batch of images that must be segmented within a specified time budget, we show how to select the easiest modality that will be sufficiently strong to yield high-quality segmentations. Extensive results with real users and three datasets demonstrate the impact.
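The batch setting described above (pick the cheapest annotation mode per image so that a whole batch fits a time budget) can be illustrated with a minimal sketch. Everything here is an assumption for illustration only: the mode names, the per-mode annotation times, and the `pred_quality` inputs are hypothetical stand-ins for the paper's learned success predictions; the greedy upgrade rule is one simple way to spend a budget, not the authors' actual optimization.

```python
# Hypothetical annotation modes with assumed average annotation times (seconds).
# These numbers are illustrative, not from the paper.
MODES = [("bounding_box", 7.0), ("sloppy_contour", 20.0), ("tight_polygon", 54.0)]

def select_modes(pred_quality, budget):
    """Choose an annotation mode per image under a total time budget.

    pred_quality: list of dicts mapping mode name -> predicted segmentation
    quality in [0, 1] (standing in for the learned success predictor).
    Start every image at the cheapest mode, then greedily upgrade whichever
    image offers the largest predicted quality gain per extra second, until
    no affordable upgrade remains.
    """
    choice = [0] * len(pred_quality)           # index into MODES per image
    spent = len(pred_quality) * MODES[0][1]    # everyone starts at cheapest mode
    while True:
        best_gain, best_i = 0.0, None
        for i, q in enumerate(pred_quality):
            j = choice[i]
            if j + 1 < len(MODES):
                extra = MODES[j + 1][1] - MODES[j][1]
                if spent + extra <= budget:
                    gain = (q[MODES[j + 1][0]] - q[MODES[j][0]]) / extra
                    if gain > best_gain:
                        best_gain, best_i = gain, i
        if best_i is None:                     # no affordable upgrade helps
            break
        spent += MODES[choice[best_i] + 1][1] - MODES[choice[best_i]][1]
        choice[best_i] += 1
    return [MODES[j][0] for j in choice]

# Example: an "easy" image (bounding box already suffices) and a "hard" one.
preds = [
    {"bounding_box": 0.90, "sloppy_contour": 0.92, "tight_polygon": 0.95},
    {"bounding_box": 0.40, "sloppy_contour": 0.80, "tight_polygon": 0.95},
]
print(select_modes(preds, budget=40.0))
```

Under this toy predictor and a 40-second budget, the hard image is upgraded first (largest gain per second), after which the remaining budget covers one more upgrade for the easy image; neither image can afford the most expensive mode.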
Keywords :
graph theory; image segmentation; accuracy-effort tradeoff prediction; bounding boxes; foreground uncertainty; freehand outlines; graph cut segmentation; image visual separability; interactive foreground segmentation; manual annotation mode; segmentation quality; sufficient annotation strength prediction; Accuracy; Image color analysis; Image segmentation; Prediction algorithms; Shape; Training; Uncertainty
Conference_Titel :
Computer Vision (ICCV), 2013 IEEE International Conference on
Conference_Location :
Sydney, NSW, Australia
DOI :
10.1109/ICCV.2013.166