Understanding Neural Abstractive Summarization Models via Uncertainty

October 15, 2020 · Entered Twilight · 🏛 Conference on Empirical Methods in Natural Language Processing

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, .idea, LICENSE, README.md, analyze_attention.py, analyze_entropy.py, analyze_prob_attn.py, attention_layer.py, attention_y_entropy.py, attn_viz.py, configurations.py, convert_old_dump.py, data_collection.py, install.sh, pegasus_example.py, plot_fig_together.py, plot_figures.py, run_model_pegasus.py, slide.pdf, test.py, util.py, viz_prediction.py

Authors: Jiacheng Xu, Shrey Desai, Greg Durrett
arXiv ID: 2010.07882
Category: cs.CL: Computation & Language
Citations: 53
Venue: Conference on Empirical Methods in Natural Language Processing
Repository: https://github.com/jiacheng-xu/text-sum-uncertainty ⭐ 30
Last Checked: 1 month ago
Abstract
An advantage of seq2seq abstractive summarization models is that they generate text in a free-form manner, but this flexibility makes it difficult to interpret model behavior. In this work, we analyze summarization decoders in both blackbox and whitebox ways by studying the entropy, or uncertainty, of the model's token-level predictions. For two strong pre-trained models, PEGASUS and BART, on two summarization datasets, we find a strong correlation between low prediction entropy and where the model copies tokens rather than generating novel text. The decoder's uncertainty also connects to factors like sentence position and syntactic distance between adjacent pairs of tokens, giving a sense of what factors make a context particularly selective for the model's next output token. Finally, we study the relationship of decoder uncertainty and attention behavior to understand how attention gives rise to these observed effects in the model. We show that uncertainty is a useful perspective for analyzing summarization and text generation models more broadly.
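The entropy quantity the abstract refers to is the Shannon entropy of the decoder's next-token distribution at each generation step. Below is a minimal, self-contained sketch of that computation from raw logits; the function name and example logits are illustrative and not taken from the paper's repository, which computes this over real PEGASUS/BART outputs.

```python
import math

def token_entropy(logits):
    """Shannon entropy (in nats) of the next-token distribution
    obtained by applying softmax to raw decoder logits."""
    m = max(logits)                                # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    return -sum(p * math.log(p) for p in probs if p > 0)

# A peaked distribution (as when the model confidently copies a source
# token) has low entropy; a flat one (free-form generation among many
# plausible continuations) approaches log(vocab_size).
peaked = token_entropy([10.0, 0.0, 0.0, 0.0])
flat = token_entropy([1.0, 1.0, 1.0, 1.0])   # = log(4) ≈ 1.386 nats
```

In the paper's analysis, low values of this entropy at a decoding step correlate with copy-like behavior, while higher values accompany novel text generation.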
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Computation & Language

🌅 🌅 Old Age

Attention Is All You Need

Ashish Vaswani, Noam Shazeer, ... (+6 more)

cs.CL ๐Ÿ› NeurIPS ๐Ÿ“š 166.0K cites 8 years ago