YNU-HPCC at SemEval-2020 Task 11: LSTM Network for Detection of Propaganda Techniques in News Articles

August 24, 2020 · Entered Twilight · 🏛 International Workshop on Semantic Evaluation

🌅 TWILIGHT: Old Age
Predates the code-sharing era; a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, KMP.py, LICENSE, README.md, Untitled.ipynb, Word_Maps.py, Word_Maps_TC.py, bert.py, bert_bilstm_softmax.py, creat_model.py, creat_model_bert_lstm.py, creat_model_crf.py, creat_model_glove_bilstm.py, creat_model_tc.py, creat_model_tc_1.py, dev-task-TC-template.out, final_1.txt, get_model.py, get_model_0378.py, get_model_tc.py, get_model_test_1.py, get_model_try.py, get_wordVector_byGlove.py, glove_bilstm.h5, glove_bilstm.py, mapping.xlsx, mapping111.xlsx, mapping_TC.xlsx, mapping_TC_word.xlsx, new_pre_deal.py, new_pre_deal_bert.py, new_sigmoid_mode_1.h5, new_sigmoid_mode_2.h5, new_sigmoid_mode_3.h5, pre_deal.py, pre_deal_bert.py, propaganda-techniques-names-semeval2020task11.txt, sigmoid_mode_1.h5, sigmoid_mode_10.h5, sigmoid_mode_11.h5, sigmoid_mode_12.h5, sigmoid_mode_3.h5, sigmoid_mode_6.h5, sigmoid_mode_7.h5, sigmoid_mode_9.h5, softmax_mode_1.h5, test-mapping.xlsx, test-new.labels, test.py, test_all_vector.py, test_random.py, train-task1-SI.labels, train-task2-TC.labels, train_all_vector.py, train_dev_labels.txt, train_labels.txt, utils.py, wan.py, wan1.py, 预处理.ipynb

Authors: Jiaxu Dao, Jin Wang, Xuejie Zhang
arXiv ID: 2008.10166
Category: cs.CL (Computation & Language)
Cross-listed: cs.IR
Citations: 7
Venue: International Workshop on Semantic Evaluation
Repository: https://github.com/daojiaxu/semeval_11 (⭐ 5)
Last checked: 1 month ago
Abstract
This paper summarizes our work on detecting propaganda techniques in news articles for SemEval-2020 Task 11, which is divided into the span identification (SI) and technique classification (TC) subtasks. We combined GloVe word representations, the pre-trained BERT model, and an LSTM model architecture to accomplish the task. Our approach achieved good results on both subtasks: a macro-F1-score of 0.406 for SI and a micro-F1-score of 0.505 for TC. Our method significantly outperforms the officially released baseline, and our submissions ranked 17th (SI) and 22nd (TC) on the test set. This paper also compares the performance of different deep learning architectures, such as Bi-LSTM, LSTM, BERT, and XGBoost, on the detection of propaganda techniques. The code for this paper is available at: https://github.com/daojiaxu/semeval_11.
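Going by the repository contents above (glove_bilstm.py and the sigmoid_mode_*.h5 checkpoints), the SI model is plausibly a Keras Bi-LSTM token tagger over frozen GloVe embeddings with a per-token sigmoid output. The following is a minimal sketch under that assumption, not the authors' actual code; the vocabulary size, sequence length, and layer widths are illustrative placeholders.

```python
# Minimal sketch of a GloVe + Bi-LSTM span-identification (SI) tagger,
# assuming the Keras setup that files like glove_bilstm.py suggest.
# NOT the authors' code: all sizes and hyperparameters are placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000  # assumed vocabulary size (index 0 reserved for padding)
EMBED_DIM = 100     # e.g. GloVe-100d vectors
MAX_LEN = 128       # assumed maximum token-sequence length

# Stand-in for a real GloVe matrix loaded from a glove.*.txt file.
embedding_matrix = np.random.normal(size=(VOCAB_SIZE, EMBED_DIM)).astype("float32")

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(
        VOCAB_SIZE,
        EMBED_DIM,
        embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
        trainable=False,  # keep the pretrained GloVe vectors frozen
        mask_zero=True,   # treat index 0 as padding
    ),
    # Bidirectional LSTM reads each sentence in both directions.
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    # Per-token sigmoid: probability the token lies inside a propaganda span.
    layers.TimeDistributed(layers.Dense(1, activation="sigmoid")),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

At inference time, runs of tokens whose predicted probability exceeds a threshold would be merged into character-level spans for the SI output format; a TC variant would swap the sigmoid head for a softmax over the task's 14 technique labels (cf. softmax_mode_1.h5).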
Community shame:
Not yet rated

📜 Similar Papers

In the same crypt: Computation & Language

🌅 Old Age

Attention Is All You Need

Ashish Vaswani, Noam Shazeer, ... (+6 more)

cs.CL ๐Ÿ› NeurIPS ๐Ÿ“š 166.0K cites 8 years ago