The following publications are possibly variants of this publication:
- Hub-Pathway: Transfer Learning from A Hub of Pre-trained Models. Yang Shu, Zhangjie Cao, Ziyang Zhang, Jianmin Wang 0001, Mingsheng Long. NeurIPS 2022: [doi]
- DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models. Xuxi Chen, Tianlong Chen, Weizhu Chen, Ahmed Hassan Awadallah, Zhangyang Wang, Yu Cheng 0001. ACL 2023: 8208-8222 [doi]
- Point-PEFT: Parameter-Efficient Fine-Tuning for 3D Pre-trained Models. Yiwen Tang, Ray Zhang 0002, Zoey Guo, Xianzheng Ma, Bin Zhao, Zhigang Wang, Dong Wang, Xuelong Li. AAAI 2024: 5171-5179 [doi]
- APrompt: Attention Prompt Tuning for Efficient Adaptation of Pre-trained Language Models. Qifan Wang, Yuning Mao, Jingang Wang, Hanchao Yu, Shaoliang Nie, Sinong Wang, Fuli Feng, Lifu Huang, Xiaojun Quan, Zenglin Xu, Dongfang Liu. EMNLP 2023: 9147-9160 [doi]
- Parameter-efficient fine-tuning of large-scale pre-trained language models. Ning Ding 0002, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, Jing Yi, Weilin Zhao, Xiaozhi Wang, Zhiyuan Liu 0001, Hai-Tao Zheng 0002, Jianfei Chen, Yang Liu, Jie Tang 0001, Juanzi Li, Maosong Sun. Nature Machine Intelligence, 5(3):220-235, March 2023. [doi]