LongCoder: A Long-Range Pre-trained Language Model for Code Completion

June 26, 2023 · Entered Twilight · 🏛 International Conference on Machine Learning

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: .gitignore, CODE_OF_CONDUCT.md, CONTRIBUTING.md, CodeBERT, CodeExecutor, CodeReviewer, GraphCodeBERT, LICENSE, LongCoder, NOTICE.md, README.md, SECURITY.md, UniXcoder

Authors: Daya Guo, Canwen Xu, Nan Duan, Jian Yin, Julian McAuley
arXiv ID: 2306.14893
Category: cs.SE: Software Engineering
Cross-listed: cs.AI, cs.CL, cs.LG
Citations: 139
Venue: International Conference on Machine Learning
Repository: https://github.com/microsoft/CodeBERT ⭐ 2734
Last Checked: 1 month ago
Abstract
In this paper, we introduce a new task for code completion that focuses on handling long code input and propose a sparse Transformer model, called LongCoder, to address this task. LongCoder employs a sliding window mechanism for self-attention and introduces two types of globally accessible tokens - bridge tokens and memory tokens - to improve performance and efficiency. Bridge tokens are inserted throughout the input sequence to aggregate local information and facilitate global interaction, while memory tokens are included to highlight important statements that may be invoked later and need to be memorized, such as package imports and definitions of classes, functions, or structures. We conduct experiments on a newly constructed dataset that contains longer code context and the publicly available CodeXGLUE benchmark. Experimental results demonstrate that LongCoder achieves superior performance on code completion tasks compared to previous models while maintaining comparable efficiency in terms of computational resources during inference. All the codes and data are available at https://github.com/microsoft/CodeBERT.
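The attention pattern the abstract describes (a causal sliding window plus globally accessible bridge and memory tokens) can be sketched as a boolean mask. This is a minimal illustration of that pattern, not the released LongCoder code; the function name, the fixed bridge stride, and the `memory_positions` parameter are assumptions for the sketch:

```python
import numpy as np

def sparse_causal_mask(seq_len, window, bridge_stride, memory_positions=()):
    """Boolean attention mask: entry [i, j] is True if token i may attend to j.

    Combines three components from the LongCoder-style pattern:
      1. a causal sliding window of local attention,
      2. bridge tokens at a fixed stride that every token can attend to,
      3. memory tokens (e.g. imports, class/function definitions) that
         remain globally attendable.
    """
    causal = np.tril(np.ones((seq_len, seq_len), dtype=bool))

    allowed = np.zeros((seq_len, seq_len), dtype=bool)
    # 1. Local sliding window: each token sees itself and the
    #    previous `window - 1` positions.
    for i in range(seq_len):
        allowed[i, max(0, i - window + 1): i + 1] = True

    # 2. Bridge tokens inserted periodically are visible to all tokens,
    #    letting local information flow globally through them.
    bridge_cols = np.arange(0, seq_len, bridge_stride)
    allowed[:, bridge_cols] = True

    # 3. Memory tokens marking important statements stay visible forever.
    for p in memory_positions:
        allowed[:, p] = True

    # Causality still applies: no attending to future positions.
    return allowed & causal
```

With a window of 4 and a bridge every 8 positions, a token near the end of the sequence can still attend to an early import statement if it was marked as a memory token, while plain distant tokens outside the window remain masked; this keeps per-token attention cost roughly constant instead of linear in the full context length.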
Community shame:
Not yet rated
