DocumentCode :
1068727
Title :
Exploring Correlation Between ROUGE and Human Evaluation on Meeting Summaries
Author :
Liu, Feifan ; Liu, Yang
Author_Institution :
Dept. of Comput. Sci., Univ. of Texas at Dallas, Richardson, TX, USA
Volume :
18
Issue :
1
Year :
2010
Firstpage :
187
Lastpage :
196
Abstract :
Automatic summarization evaluation is very important to the development of summarization systems. In text summarization, ROUGE has been shown to correlate well with human evaluation when measuring the match of content units. However, the multiparty meeting domain has many characteristics that may pose problems for ROUGE. The goal of this paper is to examine how well ROUGE scores correlate with human evaluation for extractive meeting summarization, and to explore the meeting-domain-specific factors that affect this correlation. This study extends the analysis conducted in our previous work. Our experiments show that, in general, the correlation between ROUGE and human evaluation is weak; however, when several unique meeting characteristics, such as disfluencies, speaker information, and stopwords, are accounted for in the ROUGE setting, better correlation can be achieved, especially on the system summaries. We also found that these factors affect human and system summaries differently. In addition, we contrast the ROUGE results with other automatic summarization evaluation metrics, such as Kappa and Pyramid, and show the appropriateness of using ROUGE for this study.
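For illustration only (not taken from the paper): a minimal Python sketch of how correlation between ROUGE scores and human evaluation might be computed, using hypothetical per-summary scores and standard correlation statistics from SciPy.

```python
# Hypothetical example: correlate per-summary ROUGE-1 F-scores with
# human content ratings. The score lists below are made-up placeholders,
# not data from the paper.
from scipy.stats import pearsonr, spearmanr

rouge_1_f   = [0.42, 0.35, 0.51, 0.28, 0.46]  # hypothetical ROUGE-1 F-scores
human_score = [3.8, 3.1, 4.2, 2.6, 3.9]       # hypothetical human ratings

r, p_r = pearsonr(rouge_1_f, human_score)      # linear correlation
rho, p_s = spearmanr(rouge_1_f, human_score)   # rank correlation
print(f"Pearson r = {r:.3f} (p = {p_r:.3f}), Spearman rho = {rho:.3f} (p = {p_s:.3f})")
```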
Keywords :
document handling; Kappa; Pyramid; ROUGE; automatic summarization evaluation; human evaluation; text summarization; correlation; disfluencies; meeting summarization
Language :
English
Journal_Title :
IEEE Transactions on Audio, Speech, and Language Processing
Publisher :
IEEE
ISSN :
1558-7916
Type :
jour
DOI :
10.1109/TASL.2009.2025096
Filename :
5071230