Self-Distillation Based on High-level Information Supervision for Compressing End-to-End ASR Model

Qiang Xu, Tongtong Song, Longbiao Wang, Hao Shi, Yuqin Lin, Yongjie Lv, Meng Ge, Qiang Yu, Jianwu Dang. Self-Distillation Based on High-level Information Supervision for Compressing End-to-End ASR Model. In Hanseok Ko, John H. L. Hansen, editors, Interspeech 2022, 23rd Annual Conference of the International Speech Communication Association, Incheon, Korea, 18-22 September 2022, pages 1716-1720. ISCA, 2022.

Abstract

No abstract is available for this entry.