Efficient Federated Finetuning of Tiny Transformers with Resource-Constrained Devices

Bibliographic Details
Title: Efficient Federated Finetuning of Tiny Transformers with Resource-Constrained Devices
Authors: Pfeiffer, Kilian; Ahmed, Mohamed Aboelenien; Khalili, Ramin; Henkel, Jörg
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Distributed, Parallel, and Cluster Computing
Description: In recent years, Large Language Models (LLMs) built on Transformer structures have dominated many machine learning tasks, especially text processing. However, these models require massive amounts of data for training and impose high resource requirements, particularly in terms of the large number of Floating Point Operations (FLOPs) and the high amounts of memory needed. To fine-tune such a model in a parameter-efficient way, techniques like Adapter or LoRA have been developed. However, we observe that LoRA, when used in federated learning (FL), while still being parameter-efficient, is memory- and FLOP-inefficient. Based on that observation, we develop a novel layer finetuning scheme that allows devices in cross-device FL to make use of pretrained neural networks (NNs) while adhering to given resource constraints. We show that our presented scheme outperforms the current state of the art when dealing with homogeneous or heterogeneous computation and memory constraints and is on par with LoRA regarding limited communication, thereby achieving significantly higher accuracies in FL training.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2411.07826
Accession Number: edsarx.2411.07826
Database: arXiv
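
To illustrate the distinction the abstract draws between parameter efficiency and memory/FLOP efficiency, the following is a minimal, self-contained PyTorch sketch of a generic LoRA-style adapter (not the paper's proposed layer-finetuning scheme); the class name, rank, and scaling values are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update (illustrative sketch)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay frozen
        # Trainable low-rank factors: effective weight is W + (alpha/rank) * B @ A
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Only A and B are trained, so few parameters must be communicated in FL;
        # however, the forward and backward pass still traverses the full frozen
        # model, so per-device activation memory and FLOPs are barely reduced.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```

This makes the abstract's point concrete: the number of trainable (and thus communicated) parameters scales with the rank, but the on-device compute and activation footprint remains close to that of full fine-tuning, which is the inefficiency the paper's layer-finetuning scheme targets.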