The following publications are possibly variants of this publication:
- Prompt Tuning for Discriminative Pre-trained Language Models. Yuan Yao, Bowen Dong, Ao Zhang, Zhengyan Zhang, Ruobing Xie, Zhiyuan Liu, Leyu Lin, Maosong Sun, Jianyong Wang 0001. acl 2022: 3468-3473 [doi]
- CoCoOpter: Pre-train, prompt, and fine-tune the vision-language model for few-shot image classification. Jie Yan, Yuxiang Xie, Yanming Guo, Yingmei Wei, Xiaoping Zhang, Xidao Luan. ijmir, 12(2):27, December 2023. [doi]
- Dual Modality Prompt Tuning for Vision-Language Pre-Trained Model. Yinghui Xing, Qirui Wu, De Cheng, Shizhou Zhang, Guoqiang Liang, Peng Wang 0015, Yanning Zhang. tmm, 26:2056-2068, 2024. [doi]
- POUF: Prompt-Oriented Unsupervised Fine-tuning for Large Pre-trained Models. Korawat Tanwisuth, Shujian Zhang, Huangjie Zheng, Pengcheng He, Mingyuan Zhou. icml 2023: 33816-33832 [doi]
- Span Fine-tuning for Pre-trained Language Models. Rongzhou Bao, Zhuosheng Zhang 0001, Hai Zhao. emnlp 2021: 1970-1979 [doi]
- How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness? Xinshuai Dong, Anh Tuan Luu, Min Lin, Shuicheng Yan, Hanwang Zhang. nips 2021: 4356-4369 [doi]