The following publications are possibly variants of this publication:
- Defending against Membership Inference Attacks in Federated Learning via Adversarial Example. Yuanyuan Xie, Bing Chen, Jiale Zhang, Di Wu. MSN 2021: 153-160 [doi]
- Defense against backdoor attack in federated learning. Shiwei Lu, Ruihu Li, Wenbin Liu, Xuan Chen. Computers & Security, 121:102819, 2022. [doi]
- DBA: Distributed Backdoor Attacks against Federated Learning. Chulin Xie, Keli Huang, Pin-Yu Chen, Bo Li. ICLR 2020 [doi]
- Against Backdoor Attacks In Federated Learning With Differential Privacy. Lu Miao, Wei Yang, Rong Hu, Lu Li, Liusheng Huang. ICASSP 2022: 2999-3003 [doi]
- A3FL: Adversarially Adaptive Backdoor Attacks to Federated Learning. Hangfan Zhang, Jinyuan Jia, Jinghui Chen, Lu Lin, Dinghao Wu. NeurIPS 2023 [doi]
- A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples. Guanxiong Liu, Issa Khalil, Abdallah Khreishah, NhatHai Phan. IEEE BigData 2021: 834-846 [doi]
- RPFL: Robust and Privacy Federated Learning against Backdoor and Sample Inference Attacks. Di Xiao, Zhuyang Yu, Lvjun Chen. ICPADS 2023: 1508-1515 [doi]
- Defending against Poisoning Backdoor Attacks on Federated Meta-learning. Chien-Lun Chen, Sara Babakniya, Marco Paolieri, Leana Golubchik. ACM TIST, 13(5), 2022. [doi]
- FedMC: Federated Learning with Mode Connectivity Against Distributed Backdoor Attacks. Weiqi Wang, Chenhan Zhang, Shushu Liu, Mingjian Tang, An Liu, Shui Yu. ICC 2023: 4873-4878 [doi]