The following publications are possible variants of this publication:
- Adversarial robustness of deep neural networks: A survey from a formal verification perspective. Mark Huasong Meng, Guangdong Bai, Sin Gee Teo, Zhe Hou, Yan Xiao, Yun Lin, Jin Song Dong. IEEE Transactions on Dependable and Secure Computing, 2022.
- Communication-Efficient Distributed Deep Learning with Merged Gradient Sparsification on GPUs. Shaohuai Shi, Qiang Wang, Xiaowen Chu, Bo Li, Yang Qin, Ruihao Liu, Xinxiao Zhao. infocom 2020: 406-415 [doi]
- A Convergence Analysis of Distributed SGD with Communication-Efficient Gradient Sparsification. Shaohuai Shi, Kaiyong Zhao, Qiang Wang, Zhenheng Tang, Xiaowen Chu. ijcai 2019: 3411-3417 [doi]
- A Hierarchical Communication Algorithm for Distributed Deep Learning Training. Jiayu Zhang, Shaojun Cheng, Feng Dong, Ke Chen, Yong Qiao, Zhigang Mao, Jianfei Jiang 0001. mwscas 2023: 526-530 [doi]
- FFT-based Gradient Sparsification for the Distributed Training of Deep Neural Networks. Linnan Wang, Wei Wu 0016, Junyu Zhang, Hang Liu, George Bosilca, Maurice Herlihy, Rodrigo Fonseca. hpdc 2020: 113-124 [doi]
- Communication Usage Optimization of Gradient Sparsification with Aggregation in Deep Learning. Sheng-Ping Wang, Pangfeng Liu, Jan-Jan Wu. icncc 2018: 22-26 [doi]
- DFS: Joint data formatting and sparsification for efficient communication in Distributed Machine Learning. Cheng Yang, Yangming Zhao, Gongming Zhao, Hongli Xu. cn, 229:109777, June 2023. [doi]