TokenSkip: Controllable Chain-of-Thought Compression in LLMs

Heming Xia, Chak Tou Leong, Wenjie Wang, Yongqi Li, and Wenjie Li. TokenSkip: Controllable Chain-of-Thought Compression in LLMs. In Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rosé, and Violet Peng, editors, Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP 2025), Suzhou, China, November 4-9, 2025, pages 3351-3363. Association for Computational Linguistics, 2025.