The following publications are possibly variants of this publication:
- PPT: Backdoor Attacks on Pre-trained Models via Poisoned Prompt Tuning. Wei Du, Yichun Zhao, Boqun Li, Gongshen Liu, Shilin Wang. IJCAI 2022: 680-686 [doi]
- BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models. Kangjie Chen, Yuxian Meng, Xiaofei Sun, Shangwei Guo, Tianwei Zhang 0004, Jiwei Li, Chun Fan. ICLR 2022 [doi]
- Multi-target Backdoor Attacks for Code Pre-trained Models. Yanzhou Li, Shangqing Liu, Kangjie Chen, Xiaofei Xie, Tianwei Zhang 0004, Yang Liu 0003. ACL 2023: 7236-7254 [doi]
- DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models. Jiachen Zhou, Peizhuo Lv, Yibing Lan, Guozhu Meng, Kai Chen 0012, Hualong Ma. AAAI 2024: 21850-21858 [doi]
- Poisoning-Based Backdoor Attacks in Computer Vision. Yiming Li. AAAI 2023: 16121-16122 [doi]
- Chronic Poisoning: Backdoor Attack against Split Learning. Fangchao Yu, Bo Zeng, Kai Zhao, Zhi Pang, Lina Wang. AAAI 2024: 16531-16538 [doi]