Emoji Prediction: Extensions and Benchmarking

July 14, 2020 · Declared Dead · 🏛 arXiv.org

🦴 CAUSE OF DEATH: Skeleton Repo
Boilerplate only, no real code

Repo contents: MC_100_dev.csv, MC_100_test.csv, MC_100_train.csv.tar.bz2, MC_150_dev.csv, MC_150_test.csv, MC_150_train.csv.tar.bz2, MC_200_dev.csv, MC_200_test.csv, MC_200_train.csv.tar.bz2, MC_20_dev.csv, MC_20_test.csv, MC_20_train.csv.tar.bz2, MC_250_dev.csv, MC_250_test.csv, MC_250_train.csv.tar.bz2, MC_300_dev.csv, MC_300_test.csv, MC_300_train.csv.tar.bz2, MC_50_dev.csv, MC_50_test.csv, MC_50_train.csv.tar.bz2, MC_64_dev.csv, MC_64_test.csv, MC_64_train.csv.tar.bz2, ML_dev.csv, ML_test.csv, README.md
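The repo holds only dataset splits: the `MC_*` files appear to be the multi-class datasets (the number being the size of the emoji label set), `ML_*` the multi-label ones, with dev/test as plain CSVs and the larger train splits compressed as `.tar.bz2`. A minimal sketch of unpacking and reading them with the standard library — the column layout is not documented here, so the demo uses a synthetic archive with assumed `text,label` columns:

```python
import csv
import os
import tarfile
import tempfile

def load_split(csv_path):
    """Read one dev/test CSV split into a list of rows."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        return list(csv.reader(f))

def extract_train(archive_path, dest_dir):
    """Unpack a *_train.csv.tar.bz2 archive; return the extracted file paths."""
    with tarfile.open(archive_path, "r:bz2") as tar:
        tar.extractall(dest_dir)
        return [os.path.join(dest_dir, m.name) for m in tar.getmembers()]

# Demo on a synthetic archive (real files and columns may differ).
tmp = tempfile.mkdtemp()
csv_path = os.path.join(tmp, "MC_20_train.csv")
with open(csv_path, "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows([["text", "label"], ["good morning", "3"]])
arc = os.path.join(tmp, "MC_20_train.csv.tar.bz2")
with tarfile.open(arc, "w:bz2") as tar:
    tar.add(csv_path, arcname="MC_20_train.csv")

paths = extract_train(arc, os.path.join(tmp, "out"))
rows = load_split(paths[0])
print(rows)  # [['text', 'label'], ['good morning', '3']]
```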

Authors: Weicheng Ma, Ruibo Liu, Lili Wang, Soroush Vosoughi
arXiv ID: 2007.07389
Category: cs.CL (Computation & Language)
Citations: 21
Venue: arXiv.org
Repository: https://github.com/hikari-NYU/Emoji_Prediction_Datasets_MMS ⭐ 19
Last Checked: 1 month ago
Abstract
Emojis are a succinct form of language which can express concrete meanings, emotions, and intentions. Emojis also carry signals that can be used to better understand communicative intent. They have become a ubiquitous part of our daily lives, making them an important part of understanding user-generated content. The emoji prediction task aims at predicting the proper set of emojis associated with a piece of text. Through emoji prediction, models can learn rich representations of the communicative intent of the written text. While existing research on the emoji prediction task focuses on a small subset of emoji types closely related to certain emotions, this setting oversimplifies the task and wastes the expressive power of emojis. In this paper, we extend the existing setting of the emoji prediction task to include a richer set of emojis and to allow multi-label classification on the task. We propose novel models for multi-class and multi-label emoji prediction based on Transformer networks. We also construct multiple emoji prediction datasets from Twitter using heuristics. The BERT models achieve state-of-the-art performance on all our datasets under all the settings, with relative improvements of 27.21% to 236.36% in accuracy, 2.01% to 88.28% in top-5 accuracy and 65.19% to 346.79% in F1 score, compared to the prior state-of-the-art. Our results demonstrate the efficacy of deep Transformer-based models on the emoji prediction task. We also release our datasets at https://github.com/hikari-NYU/Emoji_Prediction_Datasets_MMS for future researchers.
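The abstract's two settings differ only in the output head on top of the Transformer encoder: multi-class (the `MC_*` datasets) picks exactly one emoji via softmax/argmax, while multi-label (the `ML` dataset) scores each emoji independently with a sigmoid and keeps those above a threshold. A minimal pure-Python sketch of the two decision rules — the logits would come from a BERT-style encoder in the paper, and the 0.5 threshold here is an assumption, not the authors' value:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_multiclass(logits, emojis):
    """MC setting: exactly one emoji per text (argmax over softmax)."""
    probs = softmax(logits)
    return emojis[probs.index(max(probs))]

def predict_multilabel(logits, emojis, threshold=0.5):
    """ML setting: an independent sigmoid per emoji; keep all above threshold."""
    sigm = [1.0 / (1.0 + math.exp(-x)) for x in logits]
    return [e for e, p in zip(emojis, sigm) if p >= threshold]

emojis = ["😂", "❤️", "🔥"]
logits = [2.0, 1.5, -1.0]   # hypothetical encoder outputs
print(predict_multiclass(logits, emojis))   # 😂
print(predict_multilabel(logits, emojis))   # ['😂', '❤️']
```

The reported accuracy metrics apply to the multi-class rule; F1 is the natural metric for the multi-label rule, since a text can carry several emojis at once.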
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Computation & Language

🌅 Old Age

Attention Is All You Need

Ashish Vaswani, Noam Shazeer, ... (+6 more)

cs.CL ๐Ÿ› NeurIPS ๐Ÿ“š 166.0K cites 8 years ago

Died the same way — 🦴 Skeleton Repo

R.I.P. 🦴 Skeleton Repo

Neural Style Transfer: A Review

Yongcheng Jing, Yezhou Yang, ... (+4 more)

cs.CV ๐Ÿ› IEEE TVCG ๐Ÿ“š 828 cites 8 years ago