DocumentCode :
3672122
Title :
Building a bird recognition app and large scale dataset with citizen scientists: The fine print in fine-grained dataset collection
Author :
Grant Van Horn;Steve Branson;Ryan Farrell;Scott Haber;Jessie Barry;Panos Ipeirotis;Pietro Perona;Serge Belongie
Author_Institution :
Caltech, USA
fYear :
2015
fDate :
6/1/2015
Firstpage :
595
Lastpage :
604
Abstract :
We introduce tools and methodologies to collect high quality, large scale fine-grained computer vision datasets using citizen scientists - crowd annotators who are passionate and knowledgeable about specific domains such as birds or airplanes. We worked with citizen scientists and domain experts to collect NABirds, a new high quality dataset containing 48,562 images of North American birds with 555 categories, part annotations and bounding boxes. We find that citizen scientists are significantly more accurate than Mechanical Turkers at zero cost. We worked with bird experts to measure the quality of popular datasets like CUB-200-2011 and ImageNet and found class label error rates of at least 4%. Nevertheless, we found that learning algorithms are surprisingly robust to annotation errors and this level of training data corruption can lead to an acceptably small increase in test error if the training set has sufficient size. At the same time, we found that an expert-curated high quality test set like NABirds is necessary to accurately measure the performance of fine-grained computer vision systems. We used NABirds to train a publicly available bird recognition service deployed on the web site of the Cornell Lab of Ornithology.
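Note: The abstract's claim that learning algorithms tolerate a few percent of corrupted training labels can be illustrated with a small simulation. The sketch below is not from the paper; it uses a synthetic classification task and a linear classifier purely as hypothetical stand-ins to show how one might measure test error as a function of training label noise.

```python
# Illustrative sketch (assumptions: synthetic data, logistic regression stand-in),
# comparing test error with 0%, 4%, and 10% of training labels flipped at random.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=20000, n_features=50, n_informative=20,
                           n_classes=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

def test_error(noise_rate):
    """Corrupt a fraction of training labels uniformly at random and
    report the resulting test error on clean held-out data."""
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise_rate
    y_noisy[flip] = rng.integers(0, 5, size=flip.sum())
    clf = LogisticRegression(max_iter=2000).fit(X_train, y_noisy)
    return 1.0 - clf.score(X_test, y_test)

for rate in (0.0, 0.04, 0.10):
    print(f"label noise {rate:.0%}: test error {test_error(rate):.3f}")
```

With a sufficiently large training set, the gap between the clean and mildly corrupted runs is typically small, which is the qualitative effect the abstract describes; measuring it reliably still requires a clean, expert-curated test set.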
Keywords :
"Birds","Graphical user interfaces","Taxonomy","Insects","Computer vision","Classification algorithms","Visualization"
Publisher :
ieee
Conference_Titel :
2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
ISSN :
1063-6919
Type :
conf
DOI :
10.1109/CVPR.2015.7298658
Filename :
7298658
Link To Document :
https://doi.org/10.1109/CVPR.2015.7298658