Are Pretrained Multilingual Models Equally Fair across Languages?

Laura Cabello Piqueras, Anders Søgaard. Are Pretrained Multilingual Models Equally Fair across Languages? In Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, YoungGyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, Seung-Hoon Na, editors, Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 12-17, 2022, pages 3597-3605. International Committee on Computational Linguistics, 2022.

Authors

Laura Cabello Piqueras

Anders Søgaard
