Is Fine-tuning Needed? Pre-trained Language Models Are Near Perfect for Out-of-Domain Detection

Rheeya Uppaal, Junjie Hu, Yixuan Li. Is Fine-tuning Needed? Pre-trained Language Models Are Near Perfect for Out-of-Domain Detection. In Anna Rogers, Jordan L. Boyd-Graber, Naoaki Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023. pages 12813-12832, Association for Computational Linguistics, 2023.

@inproceedings{UppaalHL23,
  title = {Is Fine-tuning Needed? Pre-trained Language Models Are Near Perfect for Out-of-Domain Detection},
  author = {Rheeya Uppaal and Junjie Hu and Yixuan Li},
  year = {2023},
  url = {https://aclanthology.org/2023.acl-long.717},
  researchr = {https://researchr.org/publication/UppaalHL23},
  cites = {0},
  citedby = {0},
  pages = {12813--12832},
  booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023},
  editor = {Anna Rogers and Jordan L. Boyd-Graber and Naoaki Okazaki},
  publisher = {Association for Computational Linguistics},
  isbn = {978-1-959429-72-2},
}