APrompt: Attention Prompt Tuning for Efficient Adaptation of Pre-trained Language Models

Qifan Wang, Yuning Mao, Jingang Wang, Hanchao Yu, Shaoliang Nie, Sinong Wang, Fuli Feng, Lifu Huang, Xiaojun Quan, Zenglin Xu, Dongfang Liu. APrompt: Attention Prompt Tuning for Efficient Adaptation of Pre-trained Language Models. In Houda Bouamor, Juan Pino, Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), Singapore, December 6-10, 2023, pages 9147-9160. Association for Computational Linguistics, 2023.
