Interpreting Universal Adversarial Example Attacks on Image Classification Models

Yi Ding, Fuyuan Tan, Ji Geng, Zhen Qin, Mingsheng Cao, Kim-Kwang Raymond Choo, Zhiguang Qin. Interpreting Universal Adversarial Example Attacks on Image Classification Models. IEEE Transactions on Dependable and Secure Computing, 20(4):3392-3407, July-August 2023. doi:10.1109/TDSC.2022.3202544

@article{DingTGQCCQ23,
  title = {Interpreting Universal Adversarial Example Attacks on Image Classification Models},
  author = {Yi Ding 0003 and Fuyuan Tan and Ji Geng 0001 and Zhen Qin 0002 and Mingsheng Cao and Kim-Kwang Raymond Choo and Zhiguang Qin},
  year = {2023},
  month = {July--August},
  doi = {10.1109/TDSC.2022.3202544},
  url = {https://doi.org/10.1109/TDSC.2022.3202544},
  journal = {IEEE Transactions on Dependable and Secure Computing},
  volume = {20},
  number = {4},
  pages = {3392--3407},
}