Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level

Ruiqi Zhong, Dhruba Ghosh, Dan Klein, Jacob Steinhardt. Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level. In Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli, editors, Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021. pages 3813-3827, Association for Computational Linguistics, 2021.

@inproceedings{ZhongGKS21,
  title = {Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level},
  author = {Ruiqi Zhong and Dhruba Ghosh and Dan Klein and Jacob Steinhardt},
  year = {2021},
  url = {https://aclanthology.org/2021.findings-acl.334},
  researchr = {https://researchr.org/publication/ZhongGKS21},
  pages = {3813-3827},
  booktitle = {Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021},
  editor = {Chengqing Zong and Fei Xia and Wenjie Li 0002 and Roberto Navigli},
  publisher = {Association for Computational Linguistics},
  isbn = {978-1-954085-54-1},
}