Fine-tuning Language Models over Slow Networks using Activation Quantization with Guarantees

Jue Wang, Binhang Yuan, Luka Rimanic, Yongjun He, Tri Dao, Beidi Chen, Christopher Ré, Ce Zhang. Fine-tuning Language Models over Slow Networks using Activation Quantization with Guarantees. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, A. Oh, editors, Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022. 2022.

@inproceedings{WangYRHDCR022,
  title = {Fine-tuning Language Models over Slow Networks using Activation Quantization with Guarantees},
  author = {Jue Wang and Binhang Yuan and Luka Rimanic and Yongjun He and Tri Dao and Beidi Chen and Christopher Ré and Ce Zhang 0001},
  year = {2022},
  url = {http://papers.nips.cc/paper_files/paper/2022/hash/7a43b8eb92cd5f652b78eeee3fb6f910-Abstract-Conference.html},
  researchr = {https://researchr.org/publication/WangYRHDCR022},
  cites = {0},
  citedby = {0},
  booktitle = {Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022},
  editor = {Sanmi Koyejo and S. Mohamed and A. Agarwal and Danielle Belgrave and K. Cho and A. Oh},
  isbn = {9781713871088},
}