Interpreting Universal Adversarial Example Attacks on Image Classification Models

Yi Ding, Fuyuan Tan, Ji Geng, Zhen Qin, Mingsheng Cao, Kim-Kwang Raymond Choo, Zhiguang Qin. Interpreting Universal Adversarial Example Attacks on Image Classification Models. IEEE Transactions on Dependable and Secure Computing, 20(4):3392-3407, July-August 2023.
