GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model

Shicheng Tan, Weng Lam Tam, Yuanchun Wang, Wenwen Gong, Shu Zhao, Peng Zhang, Jie Tang. GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model. In Sunayana Sitaram, Beata Beigman Klebanov, Jason D. Williams, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics: Industry Track, ACL 2023, Toronto, Canada, July 9-14, 2023, pages 134-148. Association for Computational Linguistics, 2023.
