Compressing Transformer-Based ASR Model by Task-Driven Loss and Attention-Based Multi-Level Feature Distillation

Yongjie Lv, Longbiao Wang, Meng Ge, Sheng Li 0010, Chenchen Ding, Lixin Pan, Yuguang Wang, Jianwu Dang, Kiyoshi Honda. Compressing Transformer-Based ASR Model by Task-Driven Loss and Attention-Based Multi-Level Feature Distillation. In IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2022, Virtual and Singapore, 23-27 May 2022, pages 7992-7996. IEEE, 2022. doi: 10.1109/ICASSP43922.2022.9746113

@inproceedings{LvWGLDPWDH22,
  title = {Compressing Transformer-Based ASR Model by Task-Driven Loss and Attention-Based Multi-Level Feature Distillation},
  author = {Yongjie Lv and Longbiao Wang and Meng Ge and Sheng Li 0010 and Chenchen Ding and Lixin Pan and Yuguang Wang and Jianwu Dang and Kiyoshi Honda},
  year = {2022},
  doi = {10.1109/ICASSP43922.2022.9746113},
  url = {https://doi.org/10.1109/ICASSP43922.2022.9746113},
  researchr = {https://researchr.org/publication/LvWGLDPWDH22},
  cites = {0},
  citedby = {0},
  pages = {7992-7996},
  booktitle = {IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2022, Virtual and Singapore, 23-27 May 2022},
  publisher = {IEEE},
  isbn = {978-1-6654-0540-9},
}