Do Sentence Interactions Matter? Leveraging Sentence Level Representations for Fake News Classification
October 27, 2019 · Entered Twilight · Conference on Empirical Methods in Natural Language Processing
"Last commit was 5.0 years ago (โฅ5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: EMNLP2019_TextGraphs_Supplementary.pdf, README.md, bert_classifier.py, data_loader.py, datasets.py, evaluator.py, layers.py, lib_semscore, main.py, model.py, models, plots, trainer.py, util.py
Authors
Vaibhav Vaibhav, Raghuram Mandyam Annasamy, Eduard Hovy
arXiv ID
1910.12203
Category
cs.CL: Computation & Language
Cross-listed
cs.LG, stat.ML
Citations
75
Venue
Conference on Empirical Methods in Natural Language Processing
Repository
https://github.com/MysteryVaibhav/fake_news_semantics
⭐ 24
Last Checked
1 month ago
Abstract
The rapid growth of fake news and misleading information through online media outlets demands an automatic method for detecting such news articles. Of the few existing works that differentiate between trusted vs. other types of news articles (satire, propaganda, hoax), none model sentence interactions within a document. We observe an interesting pattern in the way sentences interact with each other across different kinds of news articles. To capture this kind of information for long news articles, we propose a graph neural network-based model which does away with the need for feature engineering for fine-grained fake news classification. Through experiments, we show that our proposed method beats strong neural baselines and achieves state-of-the-art accuracy on existing datasets. Moreover, we establish the generalizability of our model by evaluating its performance in out-of-domain scenarios. Code is available at https://github.com/MysteryVaibhav/fake_news_semantics
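The sketch below illustrates the general idea described in the abstract, not the repository's actual implementation: each sentence of a document becomes a node, sentences attend to one another through graph-attention layers over a fully connected sentence graph, and the pooled document representation is classified into {trusted, satire, hoax, propaganda}. The layer sizes, the BiLSTM sentence encoder, the single-head attention, and the mean pooling are all illustrative assumptions.

```python
# Hedged sketch of a sentence-interaction graph classifier (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphAttentionLayer(nn.Module):
    """One self-attention pass over a fully connected sentence graph."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (num_sentences, in_dim) node features for one document
        z = self.proj(h)                                      # (N, out_dim)
        n = z.size(0)
        # Score every ordered pair of sentence nodes.
        pairs = torch.cat(
            [z.unsqueeze(1).expand(n, n, -1),
             z.unsqueeze(0).expand(n, n, -1)], dim=-1)        # (N, N, 2*out_dim)
        scores = F.leaky_relu(self.attn(pairs).squeeze(-1))   # (N, N)
        alpha = torch.softmax(scores, dim=-1)                 # attention weights
        return F.elu(alpha @ z)                               # aggregated nodes


class SentenceGraphClassifier(nn.Module):
    """Sentence encoder -> sentence-interaction graph -> document label."""

    def __init__(self, vocab_size: int, emb_dim: int = 100,
                 hid_dim: int = 100, num_classes: int = 4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # A cheap BiLSTM sentence encoder; stronger encoders could be plugged in.
        self.sent_encoder = nn.LSTM(emb_dim, hid_dim // 2, batch_first=True,
                                    bidirectional=True)
        self.gat1 = GraphAttentionLayer(hid_dim, hid_dim)
        self.gat2 = GraphAttentionLayer(hid_dim, hid_dim)
        self.out = nn.Linear(hid_dim, num_classes)

    def forward(self, doc: torch.Tensor) -> torch.Tensor:
        # doc: (num_sentences, max_sentence_len) word ids for one document
        emb = self.embedding(doc)                             # (N, L, emb_dim)
        _, (h_n, _) = self.sent_encoder(emb)                  # (2, N, hid_dim/2)
        sent_repr = torch.cat([h_n[0], h_n[1]], dim=-1)       # (N, hid_dim)
        h = self.gat2(self.gat1(sent_repr))                   # sentence interactions
        doc_repr = h.mean(dim=0)                              # pool nodes -> document
        return self.out(doc_repr)                             # logits over 4 classes


# Toy usage: one "document" with 3 sentences of up to 6 tokens each.
model = SentenceGraphClassifier(vocab_size=5000)
doc = torch.randint(1, 5000, (3, 6))
print(model(doc).shape)  # torch.Size([4])
```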
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Computation & Language
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding · R.I.P. · 👻 Ghosted
Language Models are Few-Shot Learners · R.I.P. · 👻 Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach · R.I.P. · 👻 Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension · R.I.P. · 👻 Ghosted