The following publications are possibly variants of this publication:
- LoDen: Making Every Client in Federated Learning a Defender Against the Poisoning Membership Inference Attacks. Mengyao Ma, Yanjun Zhang, Mahawaga Arachchige Pathum Chamikara, Leo Yu Zhang, Mohan Baruwal Chhetri, Guangdong Bai. AsiaCCS 2023: 122-135 [doi]
- Defending Against Data Poisoning Attacks: From Distributed Learning to Federated Learning. Yuchen Tian, Weizhe Zhang, Andrew Simpson, Yang Liu, Zoe Lin Jiang. cj, 66(3):711-726, March 2023. [doi]
- FLOW: A Robust Federated Learning Framework to Defend Against Model Poisoning Attacks in IoT. Shukan Liu, Zhenyu Li, Qiao Sun, Lin Chen, Xianfeng Zhang, Li Duan. iotj, 11(9):15075-15086, May 2024. [doi]
- Defending Poisoning Attacks in Federated Learning via Loss Value Normal Distribution. Fei Han, Yao Zhang, Meng Zhao. cscwd 2023: 1644-1649 [doi]
- Defending Poisoning Attacks in Federated Learning via Adversarial Training Method. Jiale Zhang, Di Wu, Chengyong Liu, Bing Chen 0002. fcs2 2020: 83-94 [doi]