ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves Zero-Shot Generalization

Hanwei Xu, Yujun Chen, Yulun Du, Nan Shao, Yanggang Wang, Haiyu Li, Zhilin Yang. ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves Zero-Shot Generalization. In Yoav Goldberg, Zornitsa Kozareva, Yue Zhang, editors, Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 4235-4252. Association for Computational Linguistics, 2022.

Authors

Hanwei Xu

Yujun Chen

Yulun Du

Nan Shao

Yanggang Wang

Haiyu Li

Zhilin Yang