YNU-HPCC at SemEval-2020 Task 8: Using a Parallel-Channel Model for Memotion Analysis
July 28, 2020 · Entered Twilight · International Workshop on Semantic Evaluation
"Last commit was 5.0 years ago (≥5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: README.md, pre_a_b_c, taska_token, taskb_operation
Authors
Li Yuan, Jin Wang, Xuejie Zhang
arXiv ID
2007.13968
Category
cs.CL: Computation & Language
Citations
6
Venue
International Workshop on Semantic Evaluation
Repository
https://github.com/YuanLi95/Semveal2020-Task8-emotion-analysis
⭐ 4
Last Checked
2 months ago
Abstract
In recent years, the growing ubiquity of Internet memes on social media platforms such as Facebook, Instagram, and Twitter has become a topic of immense interest. However, the classification and recognition of memes are much more complicated than for social text, since they involve both visual cues and language understanding. To address this issue, this paper proposes a parallel-channel model to process the textual and visual information in memes and then analyze their sentiment polarity. For the shared task of identifying and categorizing memes, we preprocess the dataset according to language behaviors on social media. We then adapt and fine-tune Bidirectional Encoder Representations from Transformers (BERT) for the text, and use two types of convolutional neural network (CNN) models to extract features from the images. We apply an ensemble model that combines BiLSTM, BiGRU, and Attention models to perform cross-domain suggestion mining. The officially released results show that our system performs better than the baseline algorithm. Our team placed nineteenth in subtask A (Sentiment Classification). The code for this paper is available at: https://github.com/YuanLi95/Semveal2020-Task8-emotion-analysis.
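The abstract describes a late-fusion, two-channel design: BERT plus a recurrent/attention stack for the meme text, and CNNs for the image, with the two feature vectors combined for classification. The sketch below illustrates that idea in PyTorch. It is a minimal reconstruction under stated assumptions, not the authors' implementation: the hidden sizes, the use of a single ResNet-18 as the image CNN (the paper mentions two CNN types), the additive-attention pooling, and fusion by concatenation are all illustrative choices; the actual architecture is in the linked repository.

```python
# A minimal sketch of a parallel-channel meme classifier, assuming:
# BERT + BiLSTM + attention pooling for text, a pretrained ResNet-18
# for images, and late fusion by concatenation. Layer sizes and the
# fusion strategy are assumptions, not the paper's exact design.
import torch
import torch.nn as nn
from torchvision import models
from transformers import BertModel

class ParallelChannelModel(nn.Module):
    def __init__(self, num_classes=3, hidden=256):
        super().__init__()
        # Text channel: fine-tuned BERT, then a BiLSTM over token states.
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.bilstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                              batch_first=True, bidirectional=True)
        # Simple additive attention for pooling the BiLSTM states (assumption).
        self.attn = nn.Linear(2 * hidden, 1)
        # Image channel: pretrained CNN with its classifier head removed.
        cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.cnn = nn.Sequential(*list(cnn.children())[:-1])  # -> (B, 512, 1, 1)
        # Fuse the two channels by concatenation, then classify.
        self.classifier = nn.Linear(2 * hidden + 512, num_classes)

    def forward(self, input_ids, attention_mask, images):
        # Text channel: token embeddings -> BiLSTM -> attention pooling.
        tokens = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        states, _ = self.bilstm(tokens)               # (B, T, 2*hidden)
        weights = torch.softmax(self.attn(states), dim=1)
        text_vec = (weights * states).sum(dim=1)      # (B, 2*hidden)
        # Image channel: CNN features flattened to a vector.
        img_vec = self.cnn(images).flatten(1)         # (B, 512)
        # Late fusion of the two parallel channels.
        return self.classifier(torch.cat([text_vec, img_vec], dim=1))
```

A forward pass would take tokenized meme text (`input_ids`, `attention_mask`) and a batch of normalized 224×224 images; an ensemble in the spirit of the abstract could average logits from variants of this model (for example, swapping the BiLSTM for a BiGRU).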
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Computation & Language
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding · R.I.P. · 👻 Ghosted
Language Models are Few-Shot Learners · R.I.P. · 👻 Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach · R.I.P. · 👻 Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension · R.I.P. · 👻 Ghosted