Multi-Granularity Structural Knowledge Distillation for Language Model Compression

Chang Liu, Chongyang Tao, Jiazhan Feng, Dongyan Zhao. Multi-Granularity Structural Knowledge Distillation for Language Model Compression. In Smaranda Muresan, Preslav Nakov, Aline Villavicencio, editors, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 1001-1011. Association for Computational Linguistics, 2022.

@inproceedings{LiuTF022,
  title = {Multi-Granularity Structural Knowledge Distillation for Language Model Compression},
  author = {Chang Liu and Chongyang Tao and Jiazhan Feng and Dongyan Zhao},
  year = {2022},
  url = {https://aclanthology.org/2022.acl-long.71},
  pages = {1001--1011},
  booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022},
  editor = {Smaranda Muresan and Preslav Nakov and Aline Villavicencio},
  publisher = {Association for Computational Linguistics},
  isbn = {978-1-955917-21-6},
}