The following publications are possibly variants of this publication:
- CLIP-based fusion-modal reconstructing hashing for large-scale unsupervised cross-modal retrieval. Mingyong Li, Yewen Li, Mingyuan Ge, Longfei Ma. ijmir, 12(1):2, June 2023. [doi]
- Adaptive Graph Attention Hashing for Unsupervised Cross-Modal Retrieval via Multimodal Transformers. Yewen Li, Mingyuan Ge, Yucheng Ji, Mingyong Li. apweb 2024: 1-15 [doi]
- Graph Attention Hashing via Contrastive Learning for Unsupervised Cross-Modal Retrieval. Chen Yang, Shuyan Ding, LunBo Li, Jianhui Guo. iconip 2024: 497-509 [doi]
- Aggregation-Based Graph Convolutional Hashing for Unsupervised Cross-Modal Retrieval. Peng-fei Zhang, Yang Li, Zi Huang, Xin-Shun Xu. tmm, 24:466-479, 2022. [doi]
- Self-Attentive CLIP Hashing for Unsupervised Cross-Modal Retrieval. Heng Yu, Shuyan Ding, LunBo Li, Jiexin Wu. mmasia 2022: [doi]
- Multi-attention based semantic deep hashing for cross-modal retrieval. Liping Zhu, Gangyi Tian, Bingyao Wang, Wenjie Wang, Di Zhang, Chengyang Li. apin, 51(8):5927-5939, 2021. [doi]
- Attention-Guided Semantic Hashing for Unsupervised Cross-Modal Retrieval. Xiao Shen, Haofeng Zhang, LunBo Li, Li Liu. icmcs 2021: 1-6 [doi]