Title :
Learning Curves for Automating Content Analysis: How Much Human Annotation is Needed?
Author :
Emi Ishita;Douglas W. Oard;Kenneth R. Fleischmann;Yoichi Tomiura;Yasuhiro Takayama;An-Shou Cheng
Author_Institution :
Univ. Libr., R&
Date :
7/1/2015 12:00:00 AM
Abstract :
In this paper, we explore the potential for reducing human effort when coding text segments for use in content analysis. The key idea is to do some coding by hand, to use the results of that initial effort as training data, and then to code the remainder of the content automatically. The test collection includes 102 written prepared statements about Net neutrality from public hearings held by the U.S. Congress and the U.S. Federal Communications Commission (FCC). Six categories were used in this analysis: wealth, social order, justice, freedom, innovation, and honor. A support vector machine (SVM) classifier and a Naïve Bayes (NB) classifier were trained on manually annotated sentences from between one and 51 documents and tested on a held-out set of 51 documents. The results show that the inflection point for a standard measure of classifier accuracy (F1) occurs early: with only 30 training documents, the SVM classifier reaches at least 85% of its best achievable result, and the NB classifier reaches at least 88% of its best achievable result. With the exception of honor, the results suggest that machine classification could reasonably be scaled up to larger collections of similar documents without additional human annotation effort.
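The learning-curve setup described in the abstract (training SVM and NB classifiers on sentences from an increasing number of annotated documents and measuring F1 on a held-out set) can be sketched as follows. This is a minimal illustration, not the authors' code; the data structures, the TF-IDF features, and the scikit-learn classifiers are assumptions made for the example.

```python
# Minimal sketch of the learning-curve experiment described in the abstract:
# train SVM and Naive Bayes classifiers on sentences drawn from the first n
# annotated documents, then measure F1 on a held-out set of documents.
# Feature choice (TF-IDF) and classifiers (LinearSVC, MultinomialNB) are
# illustrative assumptions, not taken from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import f1_score


def learning_curve(train_docs, test_sentences, test_labels, sizes):
    """train_docs: list of (sentences, labels) pairs, one per annotated document.
    test_sentences/test_labels: sentences and binary labels from held-out documents.
    sizes: numbers of training documents to try, e.g. range(1, 52)."""
    results = {}
    for n in sizes:
        sents, labels = [], []
        for doc_sents, doc_labels in train_docs[:n]:
            sents.extend(doc_sents)
            labels.extend(doc_labels)  # 1 if the sentence expresses the category
        vec = TfidfVectorizer()
        X_train = vec.fit_transform(sents)
        X_test = vec.transform(test_sentences)
        for name, clf in (("SVM", LinearSVC()), ("NB", MultinomialNB())):
            clf.fit(X_train, labels)
            results[(name, n)] = f1_score(test_labels, clf.predict(X_test))
    return results
```

Plotting F1 against n for each classifier would show the inflection point reported in the paper, with most of the achievable accuracy reached by roughly 30 training documents.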
Keywords :
"Support vector machines","Training","Encoding","Niobium","Training data","Network neutrality","FCC"
Conference_Title :
2015 IIAI 4th International Congress on Advanced Applied Informatics (IIAI-AAI)
Print_ISBN :
978-1-4799-9957-6
DOI :
10.1109/IIAI-AAI.2015.295