Attentive Perturbation: Extending Prefix Tuning to Large Language Models Inner Representations

Louis Falissard, Séverine Affeldt, Mohamed Nadif. Attentive Perturbation: Extending Prefix Tuning to Large Language Models Inner Representations. In Giuseppe Nicosia, Varun Ojha, Emanuele La Malfa, Gabriele La Malfa, Panos M. Pardalos, Renato Umeton, editors, Machine Learning, Optimization, and Data Science - 9th International Conference, LOD 2023, Grasmere, UK, September 22-26, 2023, Revised Selected Papers, Part I. Volume 14505 of Lecture Notes in Computer Science, pages 488-496, Springer, 2023. [doi]
