How Much Do Modifications to Transformer Language Models Affect Their Ability to Learn Linguistic Knowledge?

Simeng Sun, Brian Dillon, Mohit Iyyer. How Much Do Modifications to Transformer Language Models Affect Their Ability to Learn Linguistic Knowledge?. In Shabnam Tafreshi, João Sedoc, Anna Rogers, Aleksandr Drozd, Anna Rumshisky, Arjun R. Akula, editors, Proceedings of the Third Workshop on Insights from Negative Results in NLP, Insights@ACL 2022, Dublin, Ireland, May 26, 2022. pages 46-53, Association for Computational Linguistics, 2022.

@inproceedings{SunDI22,
  title = {How Much Do Modifications to Transformer Language Models Affect Their Ability to Learn Linguistic Knowledge?},
  author = {Simeng Sun and Brian Dillon and Mohit Iyyer},
  year = {2022},
  url = {https://aclanthology.org/2022.insights-1.6},
  researchr = {https://researchr.org/publication/SunDI22},
  cites = {0},
  citedby = {0},
  pages = {46--53},
  booktitle = {Proceedings of the Third Workshop on Insights from Negative Results in NLP, Insights@ACL 2022, Dublin, Ireland, May 26, 2022},
  editor = {Shabnam Tafreshi and João Sedoc and Anna Rogers and Aleksandr Drozd and Anna Rumshisky and Arjun R. Akula},
  publisher = {Association for Computational Linguistics},
  isbn = {978-1-955917-40-7},
}