ANDES at SemEval-2020 Task 12: A jointly-trained BERT multilingual model for offensive language detection
August 13, 2020 · Entered Twilight · International Workshop on Semantic Evaluation
"Last commit was 5.0 years ago (β₯5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: .gitignore, .gitmodules, Pipfile, Pipfile.lock, README.md, bin, captum, data, models, notebooks, offenseval, setup.py, submissions, tests
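The staleness rule quoted in the evidence above can be checked against the public GitHub API. Below is a minimal sketch of that kind of check; the PWNC Scanner's internals are not shown on this page, so the endpoint choice and date handling here are assumptions, not the scanner's actual code.

```python
from datetime import datetime, timezone
from urllib.request import urlopen
import json

THRESHOLD_YEARS = 5  # the ">=5 year threshold" quoted in the evidence line

def years_since_last_commit(owner: str, repo: str) -> float:
    """Age in years of the most recent commit on the default branch."""
    url = f"https://api.github.com/repos/{owner}/{repo}/commits?per_page=1"
    with urlopen(url) as resp:
        latest = json.load(resp)[0]
    # GitHub returns ISO 8601 timestamps like "2020-08-13T12:34:56Z".
    committed = datetime.fromisoformat(
        latest["commit"]["committer"]["date"].replace("Z", "+00:00"))
    return (datetime.now(timezone.utc) - committed).days / 365.25

age = years_since_last_commit("finiteautomata", "offenseval2020")
if age >= THRESHOLD_YEARS:
    print(f"Last commit was {age:.1f} years ago "
          f"(>={THRESHOLD_YEARS} year threshold)")
```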
Authors
Juan Manuel Pérez, Aymé Arango, Franco Luque
arXiv ID
2008.06408
Category
cs.CL: Computation & Language
Citations
4
Venue
International Workshop on Semantic Evaluation
Repository
https://github.com/finiteautomata/offenseval2020
⭐ 3
Last Checked
1 month ago
Abstract
This paper describes our participation in SemEval-2020 Task 12: Multilingual Offensive Language Detection. We jointly trained a single model by fine-tuning Multilingual BERT to tackle the task across all the proposed languages: English, Danish, Turkish, Greek, and Arabic. Our single model achieved competitive results, with performance close to the top-performing systems despite sharing the same parameters across all languages. Zero-shot and few-shot experiments were also conducted to analyze the transfer performance among these languages. We make our code public for further research.
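As a concrete illustration of the setup the abstract describes, here is a minimal sketch of jointly fine-tuning a single multilingual BERT classifier on data pooled from all five languages, using the Hugging Face transformers API. The example texts and labels are placeholders rather than OffensEval data, and the training loop is deliberately bare; the authors' actual pipeline lives in the linked repository.

```python
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import BertForSequenceClassification, BertTokenizerFast

class PooledOffenseDataset(Dataset):
    """Pools (text, label) pairs from every language into one dataset,
    so a single set of weights sees all five languages during training."""
    def __init__(self, texts, labels, tokenizer, max_len=128):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)  # OFF vs. NOT

# Placeholder examples; in practice these would come from the English,
# Danish, Turkish, Greek, and Arabic OffensEval 2020 corpora.
texts = ["you are awful", "du er sød", "harika bir gün", "καλημέρα"]
labels = [1, 0, 0, 0]

loader = DataLoader(PooledOffenseDataset(texts, labels, tokenizer),
                    batch_size=2, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for batch in loader:  # one shared model, mixed-language batches
    optimizer.zero_grad()
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
```

A zero-shot probe in the spirit of the paper's experiments would then evaluate this single model on a language held out of the pooled training texts.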
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Computation & Language
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding · R.I.P. 👻 Ghosted
Language Models are Few-Shot Learners · R.I.P. 👻 Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach · R.I.P. 👻 Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension · R.I.P. 👻 Ghosted