Multimodal Language Models See Better When They Look Shallower

Haoran Chen, Junyan Lin, Xinghao Chen, Yue Fan, Jianfeng Dong, Xin Jin, Hui Su, JinLan Fu, Xiaoyu Shen. Multimodal Language Models See Better When They Look Shallower. In Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng, editors, Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, EMNLP 2025, Suzhou, China, November 4-9, 2025, pages 6677-6695. Association for Computational Linguistics, 2025. doi:10.18653/v1/2025.emnlp-main.339

@inproceedings{ChenLCFDJSFS25,
  title = {Multimodal Language Models See Better When They Look Shallower},
  author = {Haoran Chen and Junyan Lin and Xinghao Chen and Yue Fan and Jianfeng Dong and Xin Jin and Hui Su and JinLan Fu and Xiaoyu Shen},
  year = {2025},
  doi = {10.18653/v1/2025.emnlp-main.339},
  url = {https://doi.org/10.18653/v1/2025.emnlp-main.339},
  researchr = {https://researchr.org/publication/ChenLCFDJSFS25},
  pages = {6677-6695},
  booktitle = {Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, EMNLP 2025, Suzhou, China, November 4-9, 2025},
  editor = {Christos Christodoulopoulos and Tanmoy Chakraborty and Carolyn Rose and Violet Peng},
  publisher = {Association for Computational Linguistics},
  isbn = {979-8-89176-332-6},
}