MixCE: Training Autoregressive Language Models by Mixing Forward and Reverse Cross-Entropies

Shiyue Zhang, Shijie Wu, Ozan Irsoy, Steven Lu, Mohit Bansal, Mark Dredze, David S. Rosenberg. MixCE: Training Autoregressive Language Models by Mixing Forward and Reverse Cross-Entropies. In Anna Rogers, Jordan L. Boyd-Graber, Naoaki Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023. pages 9027-9050, Association for Computational Linguistics, 2023.

Abstract

Abstract is missing.
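
The title names the paper's core objective: a mixture of the forward cross-entropy -E_p[log q] (the standard maximum-likelihood loss) and the reverse cross-entropy -E_q[log p]. The sketch below is a minimal illustration of such a mixed loss on toy categorical distributions, not the paper's implementation; the function name mix_ce, the mixing weight eta, and the assumption that the data distribution p is directly available are all illustrative (in actual LM training p is only observed through samples, so the reverse term must be approximated).

import torch

def mix_ce(p: torch.Tensor, q_logits: torch.Tensor, eta: float = 0.5) -> torch.Tensor:
    """Convex mix of forward and reverse cross-entropy between two
    categorical distributions over a vocabulary.

    p        : (..., V) reference (data) distribution; rows sum to 1.
               Assumed known here for illustration only.
    q_logits : (..., V) model logits defining q = softmax(q_logits).
    eta      : mixing weight; eta = 1 recovers plain MLE (forward CE only).
    """
    log_q = torch.log_softmax(q_logits, dim=-1)
    q = log_q.exp()
    log_p = torch.log(p.clamp_min(1e-12))        # avoid log(0) on zero-mass tokens
    forward_ce = -(p * log_q).sum(dim=-1)        # -E_p[log q], the standard MLE loss
    reverse_ce = -(q * log_p).sum(dim=-1)        # -E_q[log p]
    return eta * forward_ce + (1.0 - eta) * reverse_ce

# Toy example: a vocabulary of 4 tokens and random model logits.
p = torch.tensor([0.7, 0.2, 0.1, 0.0])
q_logits = torch.randn(4)
loss = mix_ce(p, q_logits, eta=0.5)

Setting eta = 1 reduces the loss to ordinary maximum-likelihood training, while eta < 1 adds weight to the reverse term, which penalizes the model for placing probability mass where the reference distribution has little.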