AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models

Se Jung Kwon, Jeonghoon Kim, Jeongin Bae, Kang Min Yoo, Jin-Hwa Kim, Baeseong Park, Byeongwook Kim, Jung-Woo Ha, Nako Sung, Dongsoo Lee. AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models. In Yoav Goldberg, Zornitsa Kozareva, Yue Zhang, editors, Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 3288-3305. Association for Computational Linguistics, 2022.

@inproceedings{KwonKBYKPK0SL22,
  title = {AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models},
  author = {Se Jung Kwon and Jeonghoon Kim and Jeongin Bae and Kang Min Yoo and Jin-Hwa Kim and Baeseong Park and Byeongwook Kim and Jung-Woo Ha and Nako Sung and Dongsoo Lee},
  year = {2022},
  url = {https://aclanthology.org/2022.findings-emnlp.240},
  researchr = {https://researchr.org/publication/KwonKBYKPK0SL22},
  pages = {3288-3305},
  booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022},
  editor = {Yoav Goldberg and Zornitsa Kozareva and Yue Zhang},
  publisher = {Association for Computational Linguistics},
}