The following publications are possibly variants of this publication:
- PTMQ: Post-training Multi-Bit Quantization of Neural Networks. Ke Xu, Zhongcheng Li, Shanshan Wang, Xingyi Zhang. AAAI 2024: 16193-16201 [doi]
- Alternating Multi-bit Quantization for Recurrent Neural Networks. Chen Xu, Jianqiang Yao, Zhouchen Lin, Wenwu Ou, Yuanbin Cao, Zhirong Wang, Hongbin Zha. ICLR 2018 [doi]
- Only Train Once: A One-Shot Neural Network Training And Pruning Framework. Tianyi Chen, Bo Ji, Tianyu Ding, Biyi Fang, Guanyi Wang, Zhihui Zhu, Luming Liang, Yixin Shi, Sheng Yi, Xiao Tu. NeurIPS 2021: 19637-19651 [doi]
- Train Once and Explain Everywhere: Pre-training Interpretable Graph Neural Networks. Jun Yin, Chaozhuo Li, Hao Yan, Jianxun Lian, Senzhang Wang. NeurIPS 2023 [doi]
- Bit-Level Optimized Neural Network for Multi-Antenna Channel Quantization. Chao Lu, Wei Xu, Shi Jin, Kezhi Wang. IEEE Wireless Communications Letters, 9(1):87-90, 2020. [doi]