KD-VLP: Improving End-to-End Vision-and-Language Pretraining with Object Knowledge Distillation

Yongfei Liu, Chenfei Wu, Shao-Yen Tseng, Vasudev Lal, Xuming He, Nan Duan. KD-VLP: Improving End-to-End Vision-and-Language Pretraining with Object Knowledge Distillation. In Marine Carpuat, Marie-Catherine de Marneffe, Iván Vladimir Meza Ruíz, editors, Findings of the Association for Computational Linguistics: NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 1589-1600. Association for Computational Linguistics, 2022.