The following publications are possibly variants of this publication:
- Robust Lottery Tickets for Pre-trained Language Models. Rui Zheng, Bao Rong, Yuhao Zhou, Di Liang, Sirui Wang, Wei Wu, Tao Gui, Qi Zhang, Xuanjing Huang. ACL 2022: 2211-2224 [doi]
- Prompt Tuning for Discriminative Pre-trained Language Models. Yuan Yao, Bowen Dong, Ao Zhang, Zhengyan Zhang, Ruobing Xie, Zhiyuan Liu, Leyu Lin, Maosong Sun, Jianyong Wang 0001. ACL 2022: 3468-3473 [doi]
- Prompting or Fine-tuning? A Comparative Study of Large Language Models for Taxonomy Construction. Boqi Chen, Fandi Yi, Dániel Varró. MoDELS 2023: 588-596 [doi]
- Knowledge Prompting in Pre-trained Language Model for Natural Language Understanding. Jianing Wang, Wenkang Huang, Minghui Qiu, Qiuhui Shi, Hongbin Wang, Xiang Li, Ming Gao. EMNLP 2022: 3164-3177 [doi]
- Iteratively Prompt Pre-trained Language Models for Chain of Thought. Boshi Wang, Xiang Deng 0001, Huan Sun. EMNLP 2022: 2714-2730 [doi]
- Pre-trained Language Model with Prompts for Temporal Knowledge Graph Completion. Wenjie Xu, Ben Liu, Miao Peng, Xu Jia, Min Peng. ACL 2023: 7790-7803 [doi]
- APrompt: Attention Prompt Tuning for Efficient Adaptation of Pre-trained Language Models. Qifan Wang, Yuning Mao, Jingang Wang, Hanchao Yu, Shaoliang Nie, Sinong Wang, Fuli Feng, Lifu Huang, Xiaojun Quan, Zenglin Xu, Dongfang Liu. EMNLP 2023: 9147-9160 [doi]