The following publications may be variants of this publication:
- LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models. Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu 0001, Soujanya Poria, Roy Ka-Wei Lee. EMNLP 2023: 5254-5276 [doi]
- Mixture-of-Linguistic-Experts Adapters for Improving and Interpreting Pre-trained Language Models. Raymond Li, Gabriel Murray, Giuseppe Carenini. EMNLP 2023: 9456-9469 [doi]
- Experience Adapter: Adapting Pre-trained Language Models for Continual Task Planning. Jiatao Zhang, Jianfeng Liao, Tuocheng Hu, Tian Zhou, Haofu Qian, Haoyang Zhang, Han Li, Lanling Tang, Qiwei Meng, Wei Song 0008, Shiqiang Zhu. ICIRA 2023: 389-400 [doi]
- K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters. Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, Ming Zhou 0001. ACL 2021: 1405-1418 [doi]
- One Adapter for All Programming Languages? Adapter Tuning for Code Search and Summarization. Deze Wang, Boxing Chen, Shanshan Li 0001, Wei Luo, Shaoliang Peng, Wei Dong 0006, Xiangke Liao. ICSE 2023: 5-16 [doi]
- Atten-Adapter: A Unified Attention-Based Adapter for Efficient Tuning. Kaiwen Li, Wenzhe Gu, Maixuan Xue, Jiahua Xiao, Dahu Shi, Xing Wei. ICIP 2023: 1265-1269 [doi]