Report
Parameter-efficient transfer learning of pre-trained Transformer models for speaker verification using adapters
Title: Parameter-efficient transfer learning of pre-trained Transformer models for speaker verification using adapters
Authors: Peng, Junyi; Stafylakis, Themos; Gu, Rongzhi; Plchot, Oldřich; Mošner, Ladislav; Burget, Lukáš; Černocký, Jan
Publication Year: 2022
Collection: Computer Science
Subject Terms: Electrical Engineering and Systems Science - Audio and Speech Processing; Computer Science - Sound; Electrical Engineering and Systems Science - Signal Processing
Description: Recently, pre-trained Transformer models have received rising interest in the field of speech processing thanks to their great success on various downstream tasks. However, most fine-tuning approaches update all the parameters of the pre-trained model, which becomes prohibitive as the model size grows and can lead to overfitting on small datasets. In this paper, we conduct a comprehensive analysis of applying parameter-efficient transfer learning (PETL) methods to reduce the number of learnable parameters required to adapt to speaker verification tasks. Specifically, during fine-tuning the pre-trained model is frozen, and only lightweight modules inserted into each Transformer block are trainable (a method known as adapters). Moreover, to boost performance in a cross-language low-resource scenario, the Transformer model is further tuned on a large intermediate dataset before being fine-tuned on the small target dataset. While updating fewer than 4% of the parameters, our proposed PETL-based methods achieve performance comparable to full fine-tuning (Vox1-O: 0.55%, Vox1-E: 0.82%, Vox1-H: 1.73%). Comment: submitted to ICASSP 2023.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2210.16032
Accession Number: edsarx.2210.16032
Database: arXiv
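
The description above explains the core PETL idea: the pre-trained backbone is frozen and only small adapter modules inserted into each Transformer block are trained. Below is a minimal, illustrative PyTorch sketch of that idea, not the authors' implementation; the class names (`Adapter`, `BlockWithAdapter`), the bottleneck dimension, and the stand-in blocks in the demo are assumptions made for illustration only.

```python
# Illustrative sketch of bottleneck adapters for a frozen Transformer backbone.
# This is NOT the paper's code; names and dimensions are assumed for the example.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual add."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        # Start near identity so the frozen model's behaviour is preserved
        # at the beginning of fine-tuning.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


class BlockWithAdapter(nn.Module):
    """Wraps a (frozen) Transformer block and applies an adapter to its output."""

    def __init__(self, block: nn.Module, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.block = block
        self.adapter = Adapter(hidden_dim, bottleneck_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(x))


def freeze_backbone_and_add_adapters(blocks: nn.ModuleList, hidden_dim: int) -> nn.ModuleList:
    """Freeze every pre-trained parameter; only the adapter weights stay trainable."""
    for block in blocks:
        for p in block.parameters():
            p.requires_grad = False
    return nn.ModuleList(BlockWithAdapter(b, hidden_dim) for b in blocks)


if __name__ == "__main__":
    # Tiny self-contained demo; linear layers stand in for real Transformer blocks.
    hidden_dim = 768
    dummy_blocks = nn.ModuleList(nn.Linear(hidden_dim, hidden_dim) for _ in range(2))
    blocks = freeze_backbone_and_add_adapters(dummy_blocks, hidden_dim)
    x = torch.randn(4, 50, hidden_dim)  # (batch, frames, features)
    for blk in blocks:
        x = blk(x)
    trainable = sum(p.numel() for p in blocks.parameters() if p.requires_grad)
    total = sum(p.numel() for p in blocks.parameters())
    print(f"trainable fraction: {trainable / total:.2%}")
```

With a full pre-trained speech Transformer (where each block contains attention and feed-forward layers), the adapter parameters amount to only a few percent of the total, which is the effect the abstract reports; the exact fraction depends on the adapter bottleneck size and the backbone.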