Pre-Training Transformer Decoder for End-to-End ASR Model with Unpaired Text Data

Changfeng Gao, Gaofeng Cheng, Runyan Yang, Han Zhu, Pengyuan Zhang, Yonghong Yan 0002. Pre-Training Transformer Decoder for End-to-End ASR Model with Unpaired Text Data. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021), Toronto, ON, Canada, June 6-11, 2021, pages 6543-6547. IEEE, 2021. [doi]

Authors

Changfeng Gao
Gaofeng Cheng
Runyan Yang
Han Zhu
Pengyuan Zhang
Yonghong Yan 0002