An Investigation on Different Underlying Quantization Schemes for Pre-trained Language Models

Zihan Zhao, Yuncong Liu, Lu Chen, Qi Liu, Rao Ma, Kai Yu. An Investigation on Different Underlying Quantization Schemes for Pre-trained Language Models. In Xiaodan Zhu, Min Zhang, Yu Hong, Ruifang He, editors, Natural Language Processing and Chinese Computing - 9th CCF International Conference, NLPCC 2020, Zhengzhou, China, October 14-18, 2020, Proceedings, Part I. Volume 12430 of Lecture Notes in Computer Science, pages 359-371, Springer, 2020.
