ANDES at SemEval-2020 Task 12: A jointly-trained BERT multilingual model for offensive language detection

August 13, 2020 · Entered Twilight · 🏛 International Workshop on Semantic Evaluation

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 5.0 years ago (β‰₯5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, .gitmodules, Pipfile, Pipfile.lock, README.md, bin, captum, data, models, notebooks, offenseval, setup.py, submissions, tests

Authors: Juan Manuel Pérez, Aymé Arango, Franco Luque
arXiv ID: 2008.06408
Category: cs.CL (Computation & Language)
Citations: 4
Venue: International Workshop on Semantic Evaluation
Repository: https://github.com/finiteautomata/offenseval2020 ⭐ 3
Last Checked: 1 month ago
Abstract
This paper describes our participation in SemEval-2020 Task 12: Multilingual Offensive Language Detection. We jointly trained a single model by fine-tuning Multilingual BERT to tackle the task across all the proposed languages: English, Danish, Turkish, Greek, and Arabic. Our single model achieved competitive results, performing close to the top systems despite sharing the same parameters across all languages. We also conducted zero-shot and few-shot experiments to analyze transfer performance among these languages. We make our code public for further research.
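The joint-training setup the abstract describes, one shared model fine-tuned on all five languages at once, amounts to merging the per-language datasets into a single shuffled training stream. Below is a minimal sketch of that data-preparation step; the function name and toy examples are hypothetical and not taken from the authors' repository.

```python
import random

def make_joint_stream(datasets, seed=0):
    """Merge {lang: [(text, label), ...]} into one shuffled list.

    Joint training means a single model with shared parameters, so
    instead of training one classifier per language we interleave
    all languages into one stream. Each example keeps its language
    tag, which is what enables per-language (and zero-shot/few-shot)
    evaluation later.
    """
    examples = [
        (lang, text, label)
        for lang, pairs in datasets.items()
        for text, label in pairs
    ]
    random.Random(seed).shuffle(examples)  # deterministic shuffle
    return examples

# Toy data for the five OffensEval 2020 languages (labels: 1 = offensive).
datasets = {
    "en": [("you are awful", 1), ("what a nice day", 0)],
    "da": [("god morgen", 0)],
    "tr": [("iyi günler", 0)],
    "el": [("καλημέρα", 0)],
    "ar": [("صباح الخير", 0)],
}
stream = make_joint_stream(datasets)
```

A zero-shot variant of the experiment would simply hold one language's entries out of `datasets` before calling `make_joint_stream` and evaluate on the held-out language.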
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt β€” Computation & Language

🌅 Old Age

Attention Is All You Need

Ashish Vaswani, Noam Shazeer, ... (+6 more)

cs.CL πŸ› NeurIPS πŸ“š 166.0K cites 8 years ago