The following publications are possible variants of this publication:
- Adaptive gradient sparsification with layer and stage-wise for accelerating distributed DNN training. Waixi Liu 0001, Jun Cai 0002, Yue Yin, Zhen-xin Zhang, Kongyang Chen, Jian-Tao Fu, Wen-Li Shang. Computer Networks, 276:111983, 2026.
- MiCRO: Near-Zero Cost Gradient Sparsification for Scaling and Accelerating Distributed DNN Training. Daegun Yoon, Sangyoon Oh 0001. HiPC 2023: 87-96.
- Prediction Confidence based Low Complexity Gradient Computation for Accelerating DNN Training. Dongyeob Shin, Geonho Kim, Joongho Jo, Jongsun Park 0001. DAC 2020: 1-6.
- A Unified Architecture for Accelerating Distributed DNN Training in Heterogeneous GPU/CPU Clusters. Yimin Jiang, Yibo Zhu, Chang Lan, Bairen Yi, Yong Cui 0001, Chuanxiong Guo. OSDI 2020: 463-479.
- AutoCCL: Automated Collective Communication Tuning for Accelerating Distributed and Parallel DNN Training. Guanbin Xu, Zhihao Le, Yinhe Chen, Zhiqi Lin, Zewen Jin, Youshan Miao, Cheng Li. NSDI 2025: 667-683.
- PipePar: A Pipelined Hybrid Parallel Approach for Accelerating Distributed DNN Training. Jiange Li, Yuchen Wang, Jinghui Zhang, Jiahui Jin, Fang Dong 0001, Lei Qian. CSCWD 2021: 470-475.