Evading Deep Neural Network and Random Forest Classifiers by Generating Adversarial Samples

Erick Eduardo Bernal Martinez, Bella Oh, Feng Li, Xiao Luo. Evading Deep Neural Network and Random Forest Classifiers by Generating Adversarial Samples. In A. Nur Zincir-Heywood, Guillaume Bonfante, Mourad Debbabi, Joaquín García-Alfaro, editors, Foundations and Practice of Security - 11th International Symposium, FPS 2018, Montreal, QC, Canada, November 13-15, 2018, Revised Selected Papers. Volume 11358 of Lecture Notes in Computer Science, pages 143-155, Springer, 2018. doi: 10.1007/978-3-030-18419-3_10

@inproceedings{MartinezOLL18,
  title = {Evading Deep Neural Network and Random Forest Classifiers by Generating Adversarial Samples},
  author = {Erick Eduardo Bernal Martinez and Bella Oh and Feng Li and Xiao Luo},
  year = {2018},
  doi = {10.1007/978-3-030-18419-3_10},
  url = {https://doi.org/10.1007/978-3-030-18419-3_10},
  researchr = {https://researchr.org/publication/MartinezOLL18},
  pages = {143--155},
  booktitle = {Foundations and Practice of Security - 11th International Symposium, FPS 2018, Montreal, QC, Canada, November 13-15, 2018, Revised Selected Papers},
  editor = {A. Nur Zincir-Heywood and Guillaume Bonfante and Mourad Debbabi and Joaquín García-Alfaro},
  volume = {11358},
  series = {Lecture Notes in Computer Science},
  publisher = {Springer},
  isbn = {978-3-030-18419-3},
}