Old Age
Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering
October 04, 2022 · Entered Twilight · International Conference on Computational Linguistics
Repo contents: CODE_OF_CONDUCT.md, CONTRIBUTING.md, LICENSE.md, README.md, data, evaluate
Authors
Priyanka Sen, Alham Fikri Aji, Amir Saffari
arXiv ID
2210.01613
Category
cs.CL: Computation & Language
Citations
96
Venue
International Conference on Computational Linguistics
Repository
https://github.com/amazon-research/mintaka
⭐ 118
Last Checked
1 month ago
Abstract
We introduce Mintaka, a complex, natural, and multilingual dataset designed for experimenting with end-to-end question-answering models. Mintaka is composed of 20,000 question-answer pairs collected in English, annotated with Wikidata entities, and translated into Arabic, French, German, Hindi, Italian, Japanese, Portuguese, and Spanish for a total of 180,000 samples. Mintaka includes 8 types of complex questions, including superlative, intersection, and multi-hop questions, which were naturally elicited from crowd workers. We run baselines over Mintaka, the best of which achieves 38% hits@1 in English and 31% hits@1 multilingually, showing that existing models have room for improvement. We release Mintaka at https://github.com/amazon-research/mintaka.
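The abstract notes that each question is annotated with one of 8 complexity types (superlative, intersection, multi-hop, and so on). As a minimal sketch of working with such annotations, the snippet below tallies complexity types over a few hand-written records. The field names (`complexityType`, `answerType`) and the sample questions are assumptions for illustration, not taken from the released files; the actual schema is documented in the repository's README.

```python
import json
from collections import Counter

# Hypothetical records in a schema assumed for illustration; the real
# Mintaka JSON files in the repo's data/ directory may differ.
sample = json.loads("""
[
  {"id": "q1", "question": "Who was the first person to win two Nobel Prizes?",
   "answer": {"answerType": "entity"}, "complexityType": "ordinal"},
  {"id": "q2", "question": "Which actor starred in both Titanic and Inception?",
   "answer": {"answerType": "entity"}, "complexityType": "intersection"},
  {"id": "q3", "question": "How many official languages does Switzerland have?",
   "answer": {"answerType": "numerical"}, "complexityType": "count"}
]
""")

def count_complexity_types(records):
    """Tally how many questions fall under each complexity type."""
    return Counter(r["complexityType"] for r in records)

counts = count_complexity_types(sample)
print(counts)  # Counter({'ordinal': 1, 'intersection': 1, 'count': 1})
```

The same one-liner scales to the full 20,000-question file once loaded with `json.load`, giving a quick view of how the 8 complexity types are distributed.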
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Computation & Language
- Old Age · BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
- R.I.P. 👻 Ghosted · Language Models are Few-Shot Learners
- R.I.P. 👻 Ghosted · RoBERTa: A Robustly Optimized BERT Pretraining Approach
- R.I.P. 👻 Ghosted · BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension