Tokenwise Contrastive Pretraining for Finer Speech-to-BERT Alignment in End-to-End Speech-to-Intent Systems

Vishal Sunder, Eric Fosler-Lussier, Samuel Thomas, Hong-Kwang Kuo, Brian Kingsbury. Tokenwise Contrastive Pretraining for Finer Speech-to-BERT Alignment in End-to-End Speech-to-Intent Systems. In Hanseok Ko, John H. L. Hansen, editors, Interspeech 2022, 23rd Annual Conference of the International Speech Communication Association, Incheon, Korea, 18-22 September 2022, pages 2683-2687. ISCA, 2022.

Authors

Vishal Sunder

Eric Fosler-Lussier

Samuel Thomas

Hong-Kwang Kuo

Brian Kingsbury
