Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference

Eric Mitchell, Joseph J. Noh, Siyan Li, William S. Armstrong, Ananth Agarwal, Patrick Liu, Chelsea Finn, Christopher D. Manning. Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference. In Yoav Goldberg, Zornitsa Kozareva, Yue Zhang, editors, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 1754-1768. Association for Computational Linguistics, 2022.

@inproceedings{MitchellNLAALFM22,
  title = {Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference},
  author = {Eric Mitchell and Joseph J. Noh and Siyan Li and William S. Armstrong and Ananth Agarwal and Patrick Liu and Chelsea Finn and Christopher D. Manning},
  year = {2022},
  url = {https://aclanthology.org/2022.emnlp-main.115},
  researchr = {https://researchr.org/publication/MitchellNLAALFM22},
  pages = {1754--1768},
  booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022},
  editor = {Yoav Goldberg and Zornitsa Kozareva and Yue Zhang},
  publisher = {Association for Computational Linguistics},
}