EvolKV: Evolutionary KV Cache Compression for LLM Inference

Bohan Yu, Yekun Chai. EvolKV: Evolutionary KV Cache Compression for LLM Inference. In Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rosé, Violet Peng, editors, Findings of the Association for Computational Linguistics: EMNLP 2025, Suzhou, China, November 4-9, 2025, pages 1673-1689. Association for Computational Linguistics, 2025.
