An Investigation on Different Underlying Quantization Schemes for Pre-trained Language Models

Zihan Zhao, Yuncong Liu, Lu Chen, Qi Liu, Rao Ma, Kai Yu. An Investigation on Different Underlying Quantization Schemes for Pre-trained Language Models. In Xiaodan Zhu, Min Zhang, Yu Hong, Ruifang He, editors, Natural Language Processing and Chinese Computing - 9th CCF International Conference, NLPCC 2020, Zhengzhou, China, October 14-18, 2020, Proceedings, Part I. Volume 12430 of Lecture Notes in Computer Science, pages 359-371, Springer, 2020. doi: 10.1007/978-3-030-60450-9_29

@inproceedings{ZhaoLCLMY20,
  title = {An Investigation on Different Underlying Quantization Schemes for Pre-trained Language Models},
  author = {Zihan Zhao and Yuncong Liu and Lu Chen and Qi Liu and Rao Ma and Kai Yu},
  year = {2020},
  doi = {10.1007/978-3-030-60450-9_29},
  url = {https://doi.org/10.1007/978-3-030-60450-9_29},
  researchr = {https://researchr.org/publication/ZhaoLCLMY20},
  pages = {359-371},
  booktitle = {Natural Language Processing and Chinese Computing - 9th CCF International Conference, NLPCC 2020, Zhengzhou, China, October 14-18, 2020, Proceedings, Part I},
  editor = {Xiaodan Zhu and Min Zhang and Yu Hong and Ruifang He},
  volume = {12430},
  series = {Lecture Notes in Computer Science},
  publisher = {Springer},
  isbn = {978-3-030-60450-9},
}