Performance Analysis of Transformer Based Models (BERT, ALBERT and RoBERTa) in Fake News Detection

August 09, 2023 · Entered Twilight · 🏛 2023 6th International Conference on Information and Communications Technology (ICOIACT)

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: 12RoBERTa,_adam,_2E_5,_bs_16,.ipynb, 12mult_bs_16_BERT_2e_5_Adam_fake_news_50_epoch.ipynb, IndoBERT_12.ipynb, Preprocessing.ipynb, README.md, requirements.txt, seed_12_bs_16_ALBERT_2e_5.ipynb
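The notebook filenames above encode the training setup used across models (Adam optimizer, learning rate 2e-5, batch size 16, seed 12, up to 50 epochs). A minimal sketch of that configuration as a plain dictionary — an illustrative reconstruction from the filenames, not code taken from the notebooks themselves:

```python
# Hyperparameters implied by the notebook filenames, e.g.
# "seed_12_bs_16_ALBERT_2e_5.ipynb" and
# "12mult_bs_16_BERT_2e_5_Adam_fake_news_50_epoch.ipynb".
# Illustrative reconstruction only; the notebooks are the source of truth.
CONFIG = {
    "optimizer": "Adam",
    "learning_rate": 2e-5,
    "batch_size": 16,
    "random_seed": 12,
    "max_epochs": 50,  # from the "..._50_epoch" notebook
    "models": ["BERT", "ALBERT", "RoBERTa", "IndoBERT"],
}

print(CONFIG["learning_rate"])  # 2e-05
```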

Authors: Shafna Fitria Nur Azizah, Hasan Dwi Cahyono, Sari Widya Sihwi, Wisnu Widiarto
arXiv ID: 2308.04950
Category: cs.CL (Computation & Language)
Cross-listed: cs.CR, cs.LG
Citations: 28
Venue: 2023 6th International Conference on Information and Communications Technology (ICOIACT)
Repository: https://github.com/Shafna81/fakenewsdetection.git ⭐ 1
Last Checked: 1 month ago
Abstract
Fake news is fabricated material presented in a news-media format that has not gone through the editorial processes of legitimate news agencies. Such material can provoke or defame significant entities or individuals, or serve the personal interests of its creators, causing problems for society. Distinguishing fake news from real news is challenging due to limited domain knowledge and time constraints. According to the survey, the top three areas in which residents are most exposed to hoaxes and misinformation are Banten, DKI Jakarta, and West Java. Transformer models refer to an approach in artificial intelligence (AI) for natural language processing that utilizes deep learning architectures. Transformers exercise a powerful attention mechanism to process text in parallel and produce rich, contextual word representations. A previous study indicates the superior performance of a transformer model known as BERT over non-transformer approaches. However, some studies suggest that performance can be improved further with the improved BERT variants known as ALBERT and RoBERTa, which remain under-explored for detecting fake news in Bahasa Indonesia. In this research, we explore those transformer models and find that ALBERT outperformed the other models with 87.6% accuracy, 86.9% precision, 86.9% F1-score, and a run time of 174.5 s/epoch. Source code available at: https://github.com/Shafna81/fakenewsdetection.git
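The abstract's claim that transformers "exercise a powerful attention mechanism to process text in parallel" can be sketched as scaled dot-product attention, the core operation shared by BERT, ALBERT, and RoBERTa. The shapes and toy values below are illustrative only and are not drawn from the paper's code:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: every token attends to every other
    token in parallel, producing context-aware representations."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # (seq, seq) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V                                # weighted mix of values

# Toy example: 3 tokens with 4-dimensional embeddings (values are arbitrary)
rng = np.random.default_rng(12)
Q = K = V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one contextual vector per token
```

In the full models this operation is repeated across multiple heads and layers; ALBERT reduces the parameter count by sharing layer weights, while RoBERTa changes the pre-training recipe rather than the architecture.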

📜 Similar Papers

In the same crypt – Computation & Language

🌅 🌅 Old Age

Attention Is All You Need

Ashish Vaswani, Noam Shazeer, ... (+6 more)

cs.CL ๐Ÿ› NeurIPS ๐Ÿ“š 166.0K cites 8 years ago