LoRAP: Transformer Sub-Layers Deserve Differentiated Structured Compression for Large Language Models

Guangyan Li, Yongqiang Tang, Wensheng Zhang. LoRAP: Transformer Sub-Layers Deserve Differentiated Structured Compression for Large Language Models. In Forty-first International Conference on Machine Learning (ICML 2024), Vienna, Austria, July 21-27, 2024, pages 28657-28672. OpenReview.net, 2024.
