Towards Objective Fine-tuning: How LLMs' Prior Knowledge Causes Potential Poor Calibration?

Ziming Wang, Zeyu Shi, Haoyi Zhou, Shiqi Gao, Qingyun Sun, Jianxin Li 0002. Towards Objective Fine-tuning: How LLMs' Prior Knowledge Causes Potential Poor Calibration?. In Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar, editors, Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2025, Vienna, Austria, July 27 - August 1, 2025. pages 14830-14853, Association for Computational Linguistics, 2025.

@inproceedings{WangSZGS025,
  title = {Towards Objective Fine-tuning: How LLMs' Prior Knowledge Causes Potential Poor Calibration?},
  author = {Ziming Wang and Zeyu Shi and Haoyi Zhou and Shiqi Gao and Qingyun Sun and Jianxin Li 0002},
  year = {2025},
  url = {https://aclanthology.org/2025.acl-long.722/},
  researchr = {https://researchr.org/publication/WangSZGS025},
  pages = {14830-14853},
  booktitle = {Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2025, Vienna, Austria, July 27 - August 1, 2025},
  editor = {Wanxiang Che and Joyce Nabende and Ekaterina Shutova and Mohammad Taher Pilehvar},
  publisher = {Association for Computational Linguistics},
  isbn = {979-8-89176-251-0},
}