Are Pre-trained Transformers Robust in Intent Classification? A Missing Ingredient in Evaluation of Out-of-Scope Intent Detection

Jianguo Zhang, Kazuma Hashimoto, Yao Wan, Zhiwei Liu, Ye Liu, Caiming Xiong, Philip S. Yu. Are Pre-trained Transformers Robust in Intent Classification? A Missing Ingredient in Evaluation of Out-of-Scope Intent Detection. In Bing Liu, Alexandros Papangelis, Stefan Ultes, Abhinav Rastogi, Yun-Nung Chen, Georgios Spithourakis, Elnaz Nouri, and Weiyan Shi, editors, Proceedings of the 4th Workshop on NLP for Conversational AI, ConvAI@ACL 2022, Dublin, Ireland, May 27, 2022, pages 12-20. Association for Computational Linguistics, 2022.

@inproceedings{ZhangHWLLXY22,
  title = {Are Pre-trained Transformers Robust in Intent Classification? A Missing Ingredient in Evaluation of Out-of-Scope Intent Detection},
  author = {Jianguo Zhang and Kazuma Hashimoto and Yao Wan and Zhiwei Liu and Ye Liu and Caiming Xiong and Philip S. Yu},
  year = {2022},
  url = {https://aclanthology.org/2022.nlp4convai-1.2},
  researchr = {https://researchr.org/publication/ZhangHWLLXY22},
  pages = {12--20},
  booktitle = {Proceedings of the 4th Workshop on NLP for Conversational AI, ConvAI@ACL 2022, Dublin, Ireland, May 27, 2022},
  editor = {Bing Liu and Alexandros Papangelis and Stefan Ultes and Abhinav Rastogi and Yun-Nung Chen and Georgios Spithourakis and Elnaz Nouri and Weiyan Shi},
  publisher = {Association for Computational Linguistics},
  isbn = {978-1-955917-46-9},
}