Knowledge Distillation for Recurrent Neural Network Language Modeling with Trust Regularization

Yangyang Shi, Mei-Yuh Hwang, Xin Lei, Haoyu Sheng. Knowledge Distillation for Recurrent Neural Network Language Modeling with Trust Regularization. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2019), Brighton, United Kingdom, May 12-17, 2019, pages 7230-7234. IEEE, 2019.
