AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models

Se Jung Kwon, Jeonghoon Kim, Jeongin Bae, Kang Min Yoo, Jin-Hwa Kim, Baeseong Park, Byeongwook Kim, Jung-Woo Ha, Nako Sung, Dongsoo Lee. AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models. In Yoav Goldberg, Zornitsa Kozareva, Yue Zhang, editors, Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 3288-3305. Association for Computational Linguistics, 2022.

Authors

Se Jung Kwon

Jeonghoon Kim

Jeongin Bae

Kang Min Yoo

Jin-Hwa Kim

Baeseong Park

Byeongwook Kim

Jung-Woo Ha

Nako Sung

Dongsoo Lee
