Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Processing

Abbas Ghaddar, Yimeng Wu, Sunyam Bagga, Ahmad Rashid, Khalil Bibi, Mehdi Rezagholizadeh, Chao Xing, Yasheng Wang, Xinyu Duan, Zhefeng Wang, Baoxing Huai, Xin Jiang, Qun Liu, Philippe Langlais. Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Processing. In Yoav Goldberg, Zornitsa Kozareva, Yue Zhang, editors, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022. pages 3135-3151, Association for Computational Linguistics, 2022.

@inproceedings{GhaddarWBRBRXWD22,
  title = {Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Processing},
  author = {Abbas Ghaddar and Yimeng Wu and Sunyam Bagga and Ahmad Rashid and Khalil Bibi and Mehdi Rezagholizadeh and Chao Xing and Yasheng Wang and Xinyu Duan and Zhefeng Wang and Baoxing Huai and Xin Jiang and Qun Liu and Philippe Langlais},
  year = {2022},
  url = {https://aclanthology.org/2022.emnlp-main.205},
  researchr = {https://researchr.org/publication/GhaddarWBRBRXWD22},
  pages = {3135--3151},
  booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022},
  editor = {Yoav Goldberg and Zornitsa Kozareva and Yue Zhang},
  publisher = {Association for Computational Linguistics},
}