DocumentCode
3724352
Title
Learning Curves for Automating Content Analysis: How Much Human Annotation is Needed?
Author
Emi Ishita;Douglas W. Oard;Kenneth R. Fleischmann;Yoichi Tomiura;Yasuhiro Takayama;An-Shou Cheng
Author_Institution
Univ. Libr., R&
fYear
2015
fDate
7/1/2015
Firstpage
171
Lastpage
176
Abstract
In this paper, we explore the potential for reducing human effort when coding text segments for use in content analysis. The key idea is to do some coding by hand, to use the results of that initial effort as training data, and then to code the remainder of the content automatically. The test collection includes 102 written prepared statements about Net neutrality from public hearings held by the U.S. Congress and the U.S. Federal Communications Commission (FCC). Six categories were used in this analysis: wealth, social order, justice, freedom, innovation, and honor. A support vector machine (SVM) classifier and a Naïve Bayes (NB) classifier were trained on manually annotated sentences from between one and 51 documents and tested on a held-out set of 51 documents. The results show that the inflection point for a standard measure of classifier accuracy (F1) occurs early: with only 30 training documents, the SVM classifier reaches at least 85% of its best achievable result, and the NB classifier reaches at least 88%. With the exception of honor, the results suggest that machine classification could reasonably be scaled up to larger collections of similar documents without additional human annotation effort.
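The experimental design described in the abstract (per-sentence binary classification for each value category, with F1 on a fixed held-out document set measured as the number of hand-coded training documents grows) can be sketched roughly as follows. This is an illustrative sketch, not the authors' implementation: the input data format, the TF-IDF features, and the scikit-learn classifiers (LinearSVC, MultinomialNB) are assumptions.

# Sketch of a learning-curve experiment for sentence-level content coding.
# Assumption: each document is a list of (sentence_text, set_of_category_labels) pairs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score


def flatten(docs, category):
    """Turn coded documents into sentence texts and binary labels for one category."""
    texts = [sent for doc in docs for sent, codes in doc]
    labels = [int(category in codes) for doc in docs for sent, codes in doc]
    return texts, labels


def learning_curve(train_docs, test_docs, category,
                   sizes=(1, 5, 10, 20, 30, 40, 51)):
    """F1 on the held-out documents as a function of the number of training documents."""
    test_texts, test_labels = flatten(test_docs, category)
    scores = {}
    for k in sizes:
        train_texts, train_labels = flatten(train_docs[:k], category)
        if len(set(train_labels)) < 2:
            continue  # need at least one positive and one negative sentence to train
        vec = TfidfVectorizer()
        X_train = vec.fit_transform(train_texts)
        X_test = vec.transform(test_texts)
        for name, clf in (("SVM", LinearSVC()), ("NB", MultinomialNB())):
            clf.fit(X_train, train_labels)
            scores[(name, k)] = f1_score(test_labels, clf.predict(X_test))
    return scores

Plotting the resulting F1 scores against the training-set sizes would yield the kind of learning curve the paper uses to locate the inflection point around 30 training documents.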
Keywords
"Support vector machines","Training","Encoding","Niobium","Training data","Network neutrality","FCC"
Publisher
ieee
Conference_Title
2015 IIAI 4th International Congress on Advanced Applied Informatics (IIAI-AAI)
Print_ISBN
978-1-4799-9957-6
Type
conf
DOI
10.1109/IIAI-AAI.2015.295
Filename
7373896