The following publications are possible variants of this publication:
- Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning. Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang. ICLR 2024
- Utilize the Flow Before Stepping into the Same River Twice: Certainty Represented Knowledge Flow for Refusal-Aware Instruction Tuning. Runchuan Zhu, Zhipeng Ma, Jiang Wu, Junyuan Gao, Jiaqi Wang 0003, Dahua Lin, Conghui He. AAAI 2025: 26157-26165
- Mitigating Hallucinations in Multimodal Spatial Relations through Constraint-Aware Prompting. Jiarui Wu, Zhuo Liu, Hangfeng He 0001. NAACL 2025: 3450-3468
- Towards Mitigating API Hallucination in Code Generated by LLMs with Hierarchical Dependency Aware. Yujia Chen, Mingyu Chen, Cuiyun Gao, Zhihan Jiang, Zhongqi Li, Yuchi Ma. FSE 2025: 468-479
- RAG-HAT: A Hallucination-Aware Tuning Pipeline for LLM in Retrieval-Augmented Generation. Juntong Song, Xingguang Wang, Juno Zhu, Yuanhao Wu, Xuxin Cheng, Randy Zhong, Cheng Niu. EMNLP 2024: 1548-1558
- Mitigating Fine-Grained Hallucination by Fine-Tuning Large Vision-Language Models with Caption Rewrites. Lei Wang 0185, Jiabang He, Shenshen Li, Ning Liu, Ee-Peng Lim. MMM 2024: 32-45