The following publications are possibly variants of this publication:
- Towards Understanding How Self-training Tolerates Data Backdoor Poisoning. Soumyadeep Pal, Ren Wang 0008, Yuguang Yao, Sijia Liu 0001. AAAI 2023. [doi]
- Reverse engineering imperceptible backdoor attacks on deep neural networks for detection and training set cleansing. Zhen Xiang, David J. Miller 0001, George Kesidis. Computers & Security, 106:102280, 2021. [doi]
- Unlabeled backdoor poisoning on trained-from-scratch semi-supervised learning. Le Feng, Zhenxing Qian, Xinpeng Zhang 0001, Sheng Li 0006. Information Sciences, 647:119453, November 2023. [doi]
- Chronic Poisoning: Backdoor Attack against Split Learning. Fangchao Yu, Bo Zeng, Kai Zhao, Zhi Pang, Lina Wang. AAAI 2024: 16531-16538. [doi]
- Progressive Poisoned Data Isolation for Training-Time Backdoor Defense. Yiming Chen, Haiwei Wu, Jiantao Zhou 0001. AAAI 2024: 11425-11433. [doi]