Small is the New Big: Pre-finetuned compact models are better for Asynchronous Active Learning

Dantong Liu, Kaushik Pavani, Sunny Dasgupta. Small is the New Big: Pre-finetuned compact models are better for Asynchronous Active Learning. In Nafise Sadat Moosavi, Iryna Gurevych, Yufang Hou, Gyuwan Kim, Young-Jin Kim, Tal Schuster, Ameeta Agrawal, editors, Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing, SustaiNLP 2023, Toronto, Canada (Hybrid), July 13, 2023. pages 110-120, Association for Computational Linguistics, 2023.

@inproceedings{LiuPD23,
  title = {Small is the New Big: Pre-finetuned compact models are better for Asynchronous Active Learning},
  author = {Dantong Liu and Kaushik Pavani and Sunny Dasgupta},
  year = {2023},
  url = {https://aclanthology.org/2023.sustainlp-1.7},
  researchr = {https://researchr.org/publication/LiuPD23},
  pages = {110-120},
  booktitle = {Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing, SustaiNLP 2023, Toronto, Canada (Hybrid), July 13, 2023},
  editor = {Nafise Sadat Moosavi and Iryna Gurevych and Yufang Hou and Gyuwan Kim and Young-Jin Kim and Tal Schuster and Ameeta Agrawal},
  publisher = {Association for Computational Linguistics},
  isbn = {978-1-959429-79-1},
}