The following publications are possibly variants of this publication:
- MMCN: Multi-Modal Co-attention Network for Medical Visual Question Answering. Ming Sun, Qilong Xu, Ercong Wang, Wenjun Wang 0006, Lei Tan, Xiu Yang Zhao. ccris 2022: 1-6 [doi]
- Multi-modal co-attention relation networks for visual question answering. Zihan Guo, Dezhi Han. vc, 39(11):5783-5795, November 2023. [doi]
- Dynamic Fusion With Intra- and Inter-Modality Attention Flow for Visual Question Answering. Peng Gao, Zhengkai Jiang, Haoxuan You, Pan Lu, Steven C. H. Hoi, Xiaogang Wang, Hongsheng Li. cvpr 2018: 6639-6648 [doi]
- Dual self-attention with co-attention networks for visual question answering. Yun Liu, Xiaoming Zhang 0001, Qianyun Zhang, Chaozhuo Li, Feiran Huang, Xianghong Tang, Zhoujun Li. PR, 117:107956, 2021. [doi]
- Multi-Head Attention Fusion Network for Visual Question Answering. Haiyang Zhang, Ruoyu Li, Liang Liu. icmcs 2022: 1-6 [doi]
- Asymmetric cross-modal attention network with multimodal augmented mixup for medical visual question answering. Yong Li, Qihao Yang, Fu Lee Wang, Lap-Kei Lee, Yingying Qu, Tianyong Hao. artmed, 144:102667, October 2023. [doi]
- Medical visual question answering with symmetric interaction attention and cross-modal gating. Zhi Chen, Beiji Zou, Yulan Dai, Chengzhang Zhu, Guilan Kong, Wensheng Zhang. bspc, 85:105049, August 2023. [doi]