Counterfactually-Augmented SNLI Training Data Does Not Yield Better Generalization Than Unaugmented Data

William Huang, Haokun Liu, Samuel R. Bowman. Counterfactually-Augmented SNLI Training Data Does Not Yield Better Generalization Than Unaugmented Data. In Anna Rogers, João Sedoc, Anna Rumshisky, editors, Proceedings of the First Workshop on Insights from Negative Results in NLP, Insights 2020, Online, November 19, 2020. pages 82-87, Association for Computational Linguistics, 2020.

@inproceedings{HuangLB20,
  title = {Counterfactually-Augmented SNLI Training Data Does Not Yield Better Generalization Than Unaugmented Data},
  author = {William Huang and Haokun Liu and Samuel R. Bowman},
  year = {2020},
  url = {https://www.aclweb.org/anthology/2020.insights-1.13/},
  researchr = {https://researchr.org/publication/HuangLB20},
  pages = {82--87},
  booktitle = {Proceedings of the First Workshop on Insights from Negative Results in NLP, Insights 2020, Online, November 19, 2020},
  editor = {Anna Rogers and João Sedoc and Anna Rumshisky},
  publisher = {Association for Computational Linguistics},
  isbn = {978-1-952148-66-8},
}