Pre-Training Transformer Decoder for End-to-End ASR Model with Unpaired Text Data

Changfeng Gao, Gaofeng Cheng, Runyan Yang, Han Zhu, Pengyuan Zhang, Yonghong Yan 0002. Pre-Training Transformer Decoder for End-to-End ASR Model with Unpaired Text Data. In IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2021, Toronto, ON, Canada, June 6-11, 2021. pages 6543-6547, IEEE, 2021. doi: 10.1109/ICASSP39728.2021.9414080

@inproceedings{GaoCYZZ021,
  title = {Pre-Training Transformer Decoder for End-to-End ASR Model with Unpaired Text Data},
  author = {Changfeng Gao and Gaofeng Cheng and Runyan Yang and Han Zhu and Pengyuan Zhang and Yonghong Yan 0002},
  year = {2021},
  doi = {10.1109/ICASSP39728.2021.9414080},
  url = {https://doi.org/10.1109/ICASSP39728.2021.9414080},
  researchr = {https://researchr.org/publication/GaoCYZZ021},
  pages = {6543-6547},
  booktitle = {IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2021, Toronto, ON, Canada, June 6-11, 2021},
  publisher = {IEEE},
  isbn = {978-1-7281-7605-5},
}