Defending Pre-trained Language Models from Adversarial Word Substitution Without Performance Sacrifice

Rongzhou Bao, Jiayi Wang, Hai Zhao. Defending Pre-trained Language Models from Adversarial Word Substitution Without Performance Sacrifice. In Chengqing Zong, Fei Xia, Wenjie Li 0002, Roberto Navigli, editors, Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021. pages 3248-3258, Association for Computational Linguistics, 2021.

@inproceedings{BaoWZ21,
  title = {Defending Pre-trained Language Models from Adversarial Word Substitution Without Performance Sacrifice},
  author = {Rongzhou Bao and Jiayi Wang and Hai Zhao},
  year = {2021},
  url = {https://aclanthology.org/2021.findings-acl.287},
  researchr = {https://researchr.org/publication/BaoWZ21},
  cites = {0},
  citedby = {0},
  pages = {3248-3258},
  booktitle = {Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021},
  editor = {Chengqing Zong and Fei Xia and Wenjie Li 0002 and Roberto Navigli},
  publisher = {Association for Computational Linguistics},
  isbn = {978-1-954085-54-1},
}