Evaluation Measures for Text Summarization
Josef Steinberger; Karel Ježek
Computing and Informatics, Volume 28 (2012), No. 1, pp. 251-275
We explain the ideas behind automatic text summarization approaches and the taxonomy of summary evaluation methods. Moreover, we propose a new evaluation measure for assessing the quality of a summary. The core of the measure is based on Latent Semantic Analysis (LSA), which can capture the main topics of a document. Summarization systems are ranked according to the similarity between the main topics of their summaries and those of the reference documents. Results show a high correlation between human rankings and the LSA-based evaluation measure. The measure is designed to compare a summary with its full text. It can compare a summary with a human-written abstract as well; however, in this case the standard ROUGE measure gives more precise results. Nevertheless, if abstracts are not available for a given corpus, the LSA-based measure is an appropriate choice.
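The abstract only outlines the measure, so the following is a minimal sketch of the underlying idea, not the authors' implementation: build a term-frequency term-by-sentence matrix for the full text and for the summary, take the first left singular vector of each matrix as the dominant "main topic", and score the summary by the cosine similarity of the two topic vectors. The function names, the raw-frequency weighting, and the restriction to a single topic are assumptions made for illustration; the paper's measure may use several topics and a different term-weighting scheme.

```python
# Illustrative sketch only (assumed names and weighting, not the paper's code):
# score a summary against its full text by comparing dominant LSA topics.
import numpy as np


def term_sentence_matrix(sentences, vocabulary):
    """Build a plain term-frequency term-by-sentence matrix."""
    index = {term: i for i, term in enumerate(vocabulary)}
    matrix = np.zeros((len(vocabulary), len(sentences)))
    for j, sentence in enumerate(sentences):
        for token in sentence.lower().split():
            if token in index:
                matrix[index[token], j] += 1
    return matrix


def main_topic(matrix):
    """Return the first left singular vector, i.e. the dominant LSA topic."""
    u, _, _ = np.linalg.svd(matrix, full_matrices=False)
    return u[:, 0]


def lsa_similarity(full_text_sentences, summary_sentences):
    """Cosine similarity between the main topics of the full text and the summary."""
    # Vocabulary is taken from the full text; summary terms outside it are ignored.
    vocabulary = sorted({t for s in full_text_sentences for t in s.lower().split()})
    topic_full = main_topic(term_sentence_matrix(full_text_sentences, vocabulary))
    topic_summary = main_topic(term_sentence_matrix(summary_sentences, vocabulary))
    return abs(np.dot(topic_full, topic_summary)) / (
        np.linalg.norm(topic_full) * np.linalg.norm(topic_summary)
    )


if __name__ == "__main__":
    full_text = [
        "the cat sat on the mat",
        "the dog barked at the cat",
        "the mat was comfortable",
    ]
    summary = ["the cat sat on the mat"]
    print(round(lsa_similarity(full_text, summary), 3))
```

Under this reading, a higher cosine score means the summary preserves more of the document's dominant topic; ranking several systems by this score is what the abstract describes as ranking by the similarity of main topics.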
Published: 2012-01-26
@article{cai37,
     author = {Josef Steinberger and Karel Je\v zek},
     title = {Evaluation Measures for Text Summarization},
     journal = {Computing and Informatics},
     volume = {28},
     number = {1},
     year = {2012},
     pages = {251--275},
     language = {en},
     url = {http://dml.mathdoc.fr/item/cai37}
}
Josef Steinberger; Karel Ježek. Evaluation Measures for Text Summarization. Computing and Informatics, Volume 28 (2012), No. 1, pp. 251-275. http://gdmltest.u-ga.fr/item/cai37/