The following publications may be variants of this publication:
- Auto-Divide GNN: Accelerating GNN Training with Subgraph Division. Hongyu Chen, Zhejiang Ran, Keshi Ge, Zhiquan Lai, Jingfei Jiang, Dongsheng Li 0001. europar 2023: 367-382 [doi]
- SCGraph: Accelerating Sample-based GNN Training by Staged Caching of Features on GPUs. Yuqi He, Zhiquan Lai, Zhejiang Ran, LiZhi Zhang, Dongsheng Li 0001. bdcloud 2022: 106-113 [doi]
- Accelerating GNN Training by Adapting Large Graphs to Distributed Heterogeneous Architectures. LiZhi Zhang, Kai Lu, Zhiquan Lai, Yongquan Fu, Yu Tang, Dongsheng Li 0001. TC, 72(12):3473-3488, December 2023. [doi]
- XGNN: Boosting Multi-GPU GNN Training via Global GNN Memory Store. Dahai Tang, Jiali Wang, Rong Chen 0001, Lei Wang, Wenyuan Yu, Jingren Zhou, Kenli Li 0001. pvldb, 17(5):1105-1118, January 2024. [doi]
- Accelerating Distributed GNN Training by Codes. Yanhong Wang, Tianchan Guan, Dimin Niu, Qiaosha Zou, Hongzhong Zheng, C.-J. Richard Shi, Yuan Xie 0001. tpds, 34(9):2598-2614, September 2023. [doi]
- TurboMGNN: Improving Concurrent GNN Training Tasks on GPU With Fine-Grained Kernel Fusion. Wenchao Wu, Xuanhua Shi, Ligang He, Hai Jin 0001. tpds, 34(6):1968-1981, June 2023. [doi]
- BLAD: Adaptive Load Balanced Scheduling and Operator Overlap Pipeline For Accelerating The Dynamic GNN Training. Kaihua Fu, Quan Chen 0002, Yuzhuo Yang, Jiuchen Shi, Chao Li, Minyi Guo. sc 2023: [doi]