Dongjie Yang, Xiaodong Han, Yan Gao, Yao Hu, Shilin Zhang, Hai Zhao. PyramidInfer: Pyramid KV Cache Compression for High-throughput LLM Inference. In Lun-Wei Ku, Andre Martins, Vivek Srikumar, editors, Findings of the Association for Computational Linguistics: ACL 2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024, pages 3258-3270. Association for Computational Linguistics, 2024.