Code-Mixed Probes Show How Pre-Trained Models Generalise on Code-Switched Text

Frances Adriana Laureano De Leon, Harish Tayyar Madabushi, Mark Lee. Code-Mixed Probes Show How Pre-Trained Models Generalise on Code-Switched Text. In Nicoletta Calzolari, Min-Yen Kan, Véronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue, editors, Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC/COLING 2024), 20-25 May 2024, Torino, Italy, pages 3457-3468. ELRA and ICCL, 2024.
