Multilingual Multimodal Pre-training for Zero-Shot Cross-Lingual Transfer of Vision-Language Models

Po-Yao Huang 0001, Mandela Patrick, Junjie Hu 0001, Graham Neubig, Florian Metze, Alex Hauptmann 0001. Multilingual Multimodal Pre-training for Zero-Shot Cross-Lingual Transfer of Vision-Language Models. In Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tür, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty 0002, Yichao Zhou, editors, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 2443-2459. Association for Computational Linguistics, 2021.

Authors

Po-Yao Huang 0001

Mandela Patrick

Junjie Hu 0001

Graham Neubig

Florian Metze

Alex Hauptmann 0001
