Longformer for MS MARCO Document Re-ranking Task

September 20, 2020 · Entered Twilight · Text Retrieval Conference

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 5.0 years ago (β‰₯5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, README.md, data, src

Authors: Ivan Sekulić, Amir Soleimani, Mohammad Aliannejadi, Fabio Crestani
arXiv ID: 2009.09392
Category: cs.IR (Information Retrieval)
Citations: 13
Venue: Text Retrieval Conference
Repository: https://github.com/isekulic/longformer-marco ⭐ 20
Last checked: 1 month ago
Abstract
Two-step document ranking, in which an initial retrieval by a classical information retrieval method is followed by a neural re-ranking model, is the new standard. The best performance is achieved by using transformer-based models, e.g., BERT, as re-rankers. We employ Longformer, a BERT-like model for long documents, on the MS MARCO document re-ranking task. The complete code used for training the model can be found at: https://github.com/isekulic/longformer-marco
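The two-step pipeline from the abstract can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: a term-overlap score stands in for the classical first-stage retriever (e.g., BM25), and a placeholder scoring function stands in for the Longformer relevance model; all function names here are assumptions for illustration.

```python
from collections import Counter

def first_stage_score(query: str, doc: str) -> float:
    """Cheap lexical score (stand-in for a classical retriever like BM25):
    count of query terms matched in the document."""
    q_terms = Counter(query.lower().split())
    d_terms = Counter(doc.lower().split())
    return float(sum(min(q_terms[t], d_terms[t]) for t in q_terms))

def rerank(query, docs, neural_score, k=10):
    """Step 1: retrieve top-k candidates with the cheap score.
    Step 2: re-rank those candidates with the (expensive) neural scorer."""
    candidates = sorted(docs, key=lambda d: first_stage_score(query, d),
                        reverse=True)[:k]
    return sorted(candidates, key=lambda d: neural_score(query, d),
                  reverse=True)

# Toy corpus and query; the "neural" scorer below is a dummy stand-in
# for Longformer's query-document relevance score.
docs = [
    "longformer handles long documents with sparse attention",
    "bert truncates documents at 512 tokens",
    "classical retrieval uses bm25",
]
query = "long documents attention"
ranked = rerank(query, docs, neural_score=lambda q, d: len(d.split()), k=2)
print(ranked[0])  # most relevant document after re-ranking
```

The point of the split is cost: the cheap first stage prunes the corpus to k candidates so that the transformer only scores a handful of query-document pairs instead of the whole collection.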

📜 Similar Papers

In the same crypt: Information Retrieval