The following publications are possibly variants of this publication:
- Once Quantization-Aware Training: High Performance Extremely Low-bit Architecture Search. Mingzhu Shen, Feng Liang, Ruihao Gong, Yuhang Li, Chuming Li, Chen Lin 0003, Fengwei Yu, Junjie Yan, Wanli Ouyang. ICCV 2021: 5320-5329. [doi]
- RAPQ: Rescuing Accuracy for Power-of-Two Low-bit Post-training Quantization. Hongyi Yao, Pu Li, Jian Cao, Xiangcheng Liu, Chenying Xie, Bingzhang Wang. IJCAI 2022: 1573-1579. [doi]
- Extremely Low-bit Convolution Optimization for Quantized Neural Network on Modern Computer Architectures. Qingchang Han, Yongmin Hu, Fengwei Yu, Hailong Yang, Bing Liu, Peng Hu, Ruihao Gong, Yanfei Wang, Rui Wang, Zhongzhi Luan, Depei Qian. ICPP 2020. [doi]
- LKBQ: Pushing the Limit of Post-Training Quantization to Extreme 1 bit. Tianxiang Li, Bin Chen, Qian-Wei Wang, Yujun Huang, Shu-Tao Xia. ICIP 2023: 1775-1779. [doi]
- Bit-shrinking: Limiting Instantaneous Sharpness for Improving Post-training Quantization. Chen Lin, Bo Peng, Zheyang Li, Wenming Tan, Ye Ren, Jun Xiao 0001, Shiliang Pu. CVPR 2023: 16196-16205. [doi]
- PTMQ: Post-training Multi-Bit Quantization of Neural Networks. Ke Xu, Zhongcheng Li, Shanshan Wang, Xingyi Zhang 0001. AAAI 2024: 16193-16201. [doi]