Fine-tuning Language Models over Slow Networks using Activation Quantization with Guarantees

Jue Wang, Binhang Yuan, Luka Rimanic, Yongjun He, Tri Dao, Beidi Chen, Christopher Ré, Ce Zhang. Fine-tuning Language Models over Slow Networks using Activation Quantization with Guarantees. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, A. Oh, editors, Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022.

Abstract

Abstract is missing.