The following publications are possibly variants of this publication:
- Multi-modal co-attention relation networks for visual question answering. Zihan Guo, Dezhi Han. The Visual Computer, 39(11):5783-5795, November 2023. [doi]
- Dual Self-Guided Attention with Sparse Question Networks for Visual Question Answering. Xiang Shen, Dezhi Han, Chin-Chen Chang, Liang Zong. IEICE Transactions on Information and Systems, 105(4):785-796, 2022. [doi]
- Sparse co-attention visual question answering networks based on thresholds. Zihan Guo, Dezhi Han. Applied Intelligence, 53(1):586-600, 2023. [doi]
- Answer-checking in Context: A Multi-modal Fully Attention Network for Visual Question Answering. Hantao Huang, Tao Han, Wei Han, Deep Yap, Cheng-Ming Chiang. ICPR 2021: 1173-1180. [doi]
- Cross-modality co-attention networks for visual question answering. Dezhi Han, Shuli Zhou, Kuan-Ching Li, Rodrigo Fernandes de Mello. Soft Computing, 25(7):5411-5421, 2021. [doi]
- Cross-Modal Multistep Fusion Network With Co-Attention for Visual Question Answering. Mingrui Lao, Yanming Guo, Hui Wang, Xin Zhang. IEEE Access, 6:31516-31524, 2018. [doi]