Near-optimal sparse allreduce for distributed deep learning

Shigang Li, Torsten Hoefler. Near-optimal sparse allreduce for distributed deep learning. In Jaejin Lee, Kunal Agrawal, Michael F. Spear, editors, PPoPP '22: 27th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, Seoul, Republic of Korea, April 2 - 6, 2022. pages 135-149, ACM, 2022. doi: 10.1145/3503221.3508399

@inproceedings{0002H22-0,
  title = {Near-optimal sparse allreduce for distributed deep learning},
  author = {Shigang Li and Torsten Hoefler},
  year = {2022},
  doi = {10.1145/3503221.3508399},
  url = {https://doi.org/10.1145/3503221.3508399},
  researchr = {https://researchr.org/publication/0002H22-0},
  pages = {135--149},
  booktitle = {PPoPP '22: 27th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, Seoul, Republic of Korea, April 2 - 6, 2022},
  editor = {Jaejin Lee and Kunal Agrawal and Michael F. Spear},
  publisher = {ACM},
  isbn = {978-1-4503-9204-4},
}