The Distributional Hypothesis Does Not Fully Explain the Benefits of Masked Language Model Pretraining

Ting-Rui Chiang, Dani Yogatama. The Distributional Hypothesis Does Not Fully Explain the Benefits of Masked Language Model Pretraining. In Houda Bouamor, Juan Pino, Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 10305-10321. Association for Computational Linguistics, 2023.

@inproceedings{ChiangY23,
  title = {The Distributional Hypothesis Does Not Fully Explain the Benefits of Masked Language Model Pretraining},
  author = {Ting-Rui Chiang and Dani Yogatama},
  year = {2023},
  url = {https://aclanthology.org/2023.emnlp-main.637},
  researchr = {https://researchr.org/publication/ChiangY23},
  pages = {10305--10321},
  booktitle = {Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023},
  editor = {Houda Bouamor and Juan Pino and Kalika Bali},
  publisher = {Association for Computational Linguistics},
  isbn = {979-8-89176-060-8},
}