A Study on Knowledge Distillation from Weak Teacher for Scaling Up Pre-trained Language Models

Hayeon Lee, Rui Hou, Jongpil Kim, Davis Liang, Sung Ju Hwang, Alexander Min. A Study on Knowledge Distillation from Weak Teacher for Scaling Up Pre-trained Language Models. In Anna Rogers, Jordan L. Boyd-Graber, Naoaki Okazaki, editors, Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, pages 11239-11246. Association for Computational Linguistics, 2023.

@inproceedings{Lee0KLHM23,
  title = {A Study on Knowledge Distillation from Weak Teacher for Scaling Up Pre-trained Language Models},
  author = {Hayeon Lee and Rui Hou and Jongpil Kim and Davis Liang and Sung Ju Hwang and Alexander Min},
  year = {2023},
  url = {https://aclanthology.org/2023.findings-acl.714},
  researchr = {https://researchr.org/publication/Lee0KLHM23},
  pages = {11239--11246},
  booktitle = {Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023},
  editor = {Anna Rogers and Jordan L. Boyd-Graber and Naoaki Okazaki},
  publisher = {Association for Computational Linguistics},
  isbn = {978-1-959429-62-3},
}