Autoregressive Knowledge Distillation through Imitation Learning

Alexander Lin, Jeremy Wohlwend, Howard Chen, Tao Lei 0001. Autoregressive Knowledge Distillation through Imitation Learning. In Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu, editors, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), Online, November 16-20, 2020, pages 6121-6133. Association for Computational Linguistics, 2020.

Authors

Alexander Lin

Jeremy Wohlwend

Howard Chen

Tao Lei 0001
