Interpreting Universal Adversarial Example Attacks on Image Classification Models

Yi Ding 0003, Fuyuan Tan, Ji Geng 0001, Zhen Qin 0002, Mingsheng Cao, Kim-Kwang Raymond Choo, Zhiguang Qin. Interpreting Universal Adversarial Example Attacks on Image Classification Models. IEEE Trans. Dependable Sec. Comput., 20(4):3392-3407, July-August 2023.

Authors

Yi Ding 0003

Fuyuan Tan

Ji Geng 0001

Zhen Qin 0002

Mingsheng Cao

Kim-Kwang Raymond Choo

Zhiguang Qin