WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing

Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei. WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing. IEEE Journal of Selected Topics in Signal Processing, 16(6):1505-1518, 2022.
