The following publications are possibly variants of this publication:
- Towards an optimized distributed deep learning framework for a heterogeneous multi-GPU cluster. Youngrang Kim, Hyeonseong Choi, Jaehwan Lee 0001, Jik-Soo Kim, Hyunseung Jei, Hongchan Roh. Cluster Computing, 23(3):2287-2300, 2020. [doi]
- Comprehensive techniques of multi-GPU memory optimization for deep learning acceleration. Youngrang Kim, Jaehwan Lee 0001, Jik-Soo Kim, Hyunseung Jei, Hongchan Roh. Cluster Computing, 23(3):2193-2204, 2020. [doi]
- 2PGraph: Accelerating GNN Training over Large Graphs on GPU Clusters. LiZhi Zhang, Zhiquan Lai, Shengwei Li, Yu Tang, Feng Liu, Dongsheng Li. CLUSTER 2021: 103-113. [doi]
- HPH: Hybrid Parallelism on Heterogeneous Clusters for Accelerating Large-scale DNNs Training. Yabo Duan, Zhiquan Lai, Shengwei Li, Weijie Liu, Keshi Ge, Peng Liang, Dongsheng Li. CLUSTER 2022: 313-323. [doi]