Towards Anytime Fine-tuning: Continually Pre-trained Language Models with Hypernetwork Prompts

Gangwei Jiang, Caigao Jiang, Siqiao Xue, James Zhang, Jun Zhou, Defu Lian, Ying Wei. Towards Anytime Fine-tuning: Continually Pre-trained Language Models with Hypernetwork Prompts. In Houda Bouamor, Juan Pino, Kalika Bali, editors, Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 12081-12095. Association for Computational Linguistics, 2023.