The following publications are possibly variants of this publication:
- Selective Audio Adversarial Example in Evasion Attack on Speech Recognition System. Hyun Kwon, Yongchul Kim, Hyunsoo Yoon, Daeseon Choi. IEEE Transactions on Information Forensics and Security, 15:526-538, 2020. [doi]
- Priority Evasion Attack: An Adversarial Example That Considers the Priority of Attack on Each Classifier. Hyun Kwon, Changhyun Cho, Jun Lee. IEICE Transactions on Information & Systems, 105-D(11):1880-1889, November 2022. [doi]
- Friend-safe evasion attack: An adversarial example that is correctly recognized by a friendly classifier. Hyun Kwon, Yongchul Kim, Ki-Woong Park, Hyunsoo Yoon, Daeseon Choi. Computers & Security, 78:380-397, 2018. [doi]
- Restricted Evasion Attack: Generation of Restricted-Area Adversarial Example. Hyun Kwon, Hyunsoo Yoon, Daeseon Choi. IEEE Access, 7:60908-60919, 2019. [doi]
- Multi-Targeted Adversarial Example in Evasion Attack on Deep Neural Network. Hyun Kwon, Yongchul Kim, Ki-Woong Park, Hyunsoo Yoon, Daeseon Choi. IEEE Access, 6:46084-46096, 2018. [doi]
- Priority Adversarial Example in Evasion Attack on Multiple Deep Neural Networks. Hyun Kwon, Hyunsoo Yoon, Daeseon Choi. ICAIIC 2019: 399-404. [doi]
- Friend-Safe Adversarial Examples in an Evasion Attack on a Deep Neural Network. Hyun Kwon, Hyunsoo Yoon, Daeseon Choi. ICISC 2018: 351-367. [doi]
- Random Untargeted Adversarial Example on Deep Neural Network. Hyun Kwon, Yongchul Kim, Hyunsoo Yoon, Daeseon Choi. Symmetry, 10(12):738, 2018. [doi]
- What Do Untargeted Adversarial Examples Reveal in Medical Image Segmentation? Gangin Park, Chunsan Hong, Bohyung Kim, Won Hwa Kim. MICCAI 2022: 47-56. [doi]
- Fooling a Neural Network in Military Environments: Random Untargeted Adversarial Example. Hyun Kwon, Yongchul Kim, Hyunsoo Yoon, Daeseon Choi. MILCOM 2018: 456-461. [doi]