When Federated Learning Meets Pre-trained Language Models' Parameter-Efficient Tuning Methods

December 20, 2022 · Declared Dead · 🏛 Annual Meeting of the Association for Computational Linguistics

📜 CAUSE OF DEATH: Death by README
Repo has only a README

Repo contents: .gitignore, LICENSE, README.md

Authors: Zhuo Zhang, Yuanhang Yang, Yong Dai, Lizhen Qu, Zenglin Xu
arXiv ID: 2212.10025
Category: cs.LG (Machine Learning)
Cross-listed: cs.CL
Citations: 118
Venue: Annual Meeting of the Association for Computational Linguistics
Repository: https://github.com/iezhuozhuo/FedETuning/tree/deltaTuning
Stars: ⭐ 12
Last Checked: 1 month ago
Abstract
With increasing privacy concerns over data, recent studies have made significant progress in applying federated learning (FL) to privacy-sensitive natural language processing (NLP) tasks. Much of this literature suggests that fully fine-tuning pre-trained language models (PLMs) in the FL paradigm can mitigate the data heterogeneity problem and close the performance gap with centralized training. However, large PLMs bring the curse of prohibitive communication overhead and local model adaptation costs to the FL system. To this end, we introduce various parameter-efficient tuning (PETuning) methods into federated learning. Specifically, we provide a holistic empirical study of representative PLM tuning methods in FL. The experimental results cover the analysis of data heterogeneity levels, data scales, and different FL scenarios. The overall communication overhead can be significantly reduced by locally tuning and globally aggregating lightweight model parameters while maintaining acceptable performance in various FL settings. To facilitate research on PETuning in FL, we also develop a federated tuning framework, FedPETuning, which allows practitioners to conveniently exploit different PETuning methods under the FL training paradigm. The source code is available at https://github.com/iezhuozhuo/FedETuning/tree/deltaTuning.
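The abstract's central mechanism is that each client fine-tunes only lightweight PETuning modules (e.g., adapters or LoRA matrices) on a frozen PLM and exchanges just those parameters with the server, so per-round communication shrinks from the full model to a small subset. Since the repository contains no code, here is a minimal sketch of that idea, not the authors' FedPETuning implementation: the name-based parameter filter and the client interface (`receive`, `local_train`, `num_examples`) are illustrative assumptions.

```python
# Minimal sketch of federated averaging restricted to
# parameter-efficient (PETuning) weights. Hypothetical client
# interface; not the FedPETuning framework itself.

from collections import OrderedDict
import torch

def is_petuning_param(name: str) -> bool:
    # Assumption: lightweight modules are identifiable by name,
    # as is common for LoRA/adapter layers.
    return "lora" in name or "adapter" in name

def extract_petuning_state(model) -> OrderedDict:
    # Only the small tunable subset leaves the client; this is
    # where the communication savings come from.
    return OrderedDict(
        (name, param.detach().clone())
        for name, param in model.named_parameters()
        if is_petuning_param(name)
    )

def fedavg(client_states, client_sizes):
    # FedAvg: weight each client's lightweight parameters by its
    # local dataset size, then sum.
    total = sum(client_sizes)
    averaged = OrderedDict()
    for name in client_states[0]:
        averaged[name] = sum(
            state[name] * (size / total)
            for state, size in zip(client_states, client_sizes)
        )
    return averaged

def federated_round(global_model, clients):
    # One communication round: broadcast, local tuning, aggregate.
    states, sizes = [], []
    for client in clients:
        model = client.receive(global_model)   # frozen PLM + light modules
        client.local_train(model)              # updates PETuning params only
        states.append(extract_petuning_state(model))
        sizes.append(client.num_examples)
    # Load only the aggregated lightweight parameters back into the
    # global model; strict=False leaves the frozen backbone untouched.
    global_model.load_state_dict(fedavg(states, sizes), strict=False)
```

With the backbone frozen, only the adapter/LoRA tensors cross the network each round, which is the communication reduction the abstract reports.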
Community shame:
Not yet rated

📜 Similar Papers

In the same crypt — Machine Learning

Died the same way — 📜 Death by README