ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves Zero-Shot Generalization

Hanwei Xu, Yujun Chen, Yulun Du, Nan Shao, Yanggang Wang, Haiyu Li, Zhilin Yang. ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves Zero-Shot Generalization. In Yoav Goldberg, Zornitsa Kozareva, Yue Zhang, editors, Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022. pages 4235-4252, Association for Computational Linguistics, 2022.

@inproceedings{XuCDSWLY22a,
  title = {ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves Zero-Shot Generalization},
  author = {Hanwei Xu and Yujun Chen and Yulun Du and Nan Shao and Yanggang Wang and Haiyu Li and Zhilin Yang},
  year = {2022},
  url = {https://aclanthology.org/2022.findings-emnlp.312},
  researchr = {https://researchr.org/publication/XuCDSWLY22a},
  pages = {4235-4252},
  booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022},
  editor = {Yoav Goldberg and Zornitsa Kozareva and Yue Zhang},
  publisher = {Association for Computational Linguistics},
}