SUPERT: Towards New Frontiers in Unsupervised Evaluation Metrics for Multi-Document Summarization
May 07, 2020 · Entered Twilight · Annual Meeting of the Association for Computational Linguistics
"Last commit was 5.0 years ago (≥5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: README.md, data, evaluate_summary.py, generate_summary_ga.py, generate_summary_rl.py, ref_free_metrics, requirements.txt, resources.py, rouge, sentence_transformers, summ_eval, summariser, utils
Authors
Yang Gao, Wei Zhao, Steffen Eger
arXiv ID
2005.03724
Category
cs.CL: Computation & Language
Cross-listed
cs.IR
Citations
136
Venue
Annual Meeting of the Association for Computational Linguistics
Repository
https://github.com/yg211/acl20-ref-free-eval
⭐ 96
Last Checked
1 month ago
Abstract
We study unsupervised multi-document summarization evaluation metrics, which require neither human-written reference summaries nor human annotations (e.g. preferences, ratings, etc.). We propose SUPERT, which rates the quality of a summary by measuring its semantic similarity with a pseudo reference summary, i.e. selected salient sentences from the source documents, using contextualized embeddings and soft token alignment techniques. Compared to the state-of-the-art unsupervised evaluation metrics, SUPERT correlates better with human ratings by 18-39%. Furthermore, we use SUPERT as rewards to guide a neural-based reinforcement learning summarizer, yielding favorable performance compared to the state-of-the-art unsupervised summarizers. All source code is available at https://github.com/yg211/acl20-ref-free-eval.
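The abstract describes the core recipe: build a pseudo reference from salient source sentences, embed both the pseudo reference and the candidate summary with a contextualized encoder, and score the summary by soft alignment against that pseudo reference. Below is a minimal, hypothetical sketch of that idea using the sentence_transformers package listed in the repository. The model name, the "first-k sentences" salience heuristic, and the sentence-level (rather than token-level) alignment are illustrative assumptions, not the repository's actual implementation in ref_free_metrics.

```python
# Sketch of a SUPERT-like reference-free score (assumptions noted above).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice

def pseudo_reference(source_docs, k=10):
    """Heuristic salience: take the first k sentences of each source document."""
    sents = []
    for doc in source_docs:
        sents.extend(doc[:k])
    return sents

def supert_like_score(source_docs, summary_sents):
    """Recall-style soft alignment: match each pseudo-reference sentence to its
    most similar summary sentence and average those maxima."""
    ref_sents = pseudo_reference(source_docs)
    ref_emb = model.encode(ref_sents, convert_to_numpy=True, normalize_embeddings=True)
    sum_emb = model.encode(summary_sents, convert_to_numpy=True, normalize_embeddings=True)
    sim = ref_emb @ sum_emb.T             # cosine similarities (unit-normalized embeddings)
    return float(sim.max(axis=1).mean())  # average best match per reference sentence

# Usage: documents and summaries are lists of sentence strings.
docs = [["The service was launched in 2020.", "It indexes research code."],
        ["Users report stale repositories.", "Maintenance stopped years ago."]]
summary = ["A 2020 service indexes research code, but its repositories have gone stale."]
print(supert_like_score(docs, summary))
```

Because the score needs no human-written reference, it can also serve as a reward signal for a reinforcement-learning summarizer, as the abstract notes.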
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt – Computation & Language
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding · R.I.P. · 👻 Ghosted
Language Models are Few-Shot Learners · R.I.P. · 👻 Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach · R.I.P. · 👻 Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension · R.I.P. · 👻 Ghosted