Pruning before Fine-tuning: A Retraining-free Compression Framework for Pre-trained Language Models

Pingjie Wang, Hongcheng Liu, Yanfeng Wang, Yu Wang. Pruning before Fine-tuning: A Retraining-free Compression Framework for Pre-trained Language Models. In Nicoletta Calzolari, Min-Yen Kan, Véronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue, editors, Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC/COLING 2024, 20-25 May 2024, Torino, Italy, pages 13279-13289, ELRA and ICCL, 2024.

Abstract

Abstract is missing.