Calibrated Trust in Dealing with LLM Hallucinations: A Qualitative Study

Adrian Ryser, Florian Allwein, Tim Schlippe. Calibrated Trust in Dealing with LLM Hallucinations: A Qualitative Study. In Proceedings of the 3rd International Conference on Foundation and Large Language Models (FLLM 2025), Vienna, Austria, November 25-28, 2025, pages 413-421. IEEE, 2025.

Authors

Adrian Ryser

Florian Allwein

Tim Schlippe
