The following publications are possibly variants of this publication:
- RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers. Zhikai Li, Junrui Xiao, Lianwei Yang, Qingyi Gu. ICCV 2023: 17181-17190 [doi]
- TSPTQ-ViT: Two-Scaled Post-Training Quantization for Vision Transformer. Yu-Shan Tai, Ming-Guang Lin, An-Yeu Andy Wu. ICASSP 2023: 1-5 [doi]
- Q-ViT: Accurate and Fully Quantized Low-bit Vision Transformer. Yanjing Li, Sheng Xu, Baochang Zhang 0001, Xianbin Cao 0001, Peng Gao 0007, Guodong Guo. NIPS 2022: [doi]
- Post-Training Quantization for Vision Transformer in Transformed Domain. Kai Feng, Zhuo Chen, Fei Gao, Zhe Wang, Long Xu, Weisi Lin. ICMCS 2023: 1457-1462 [doi]
- Bi-ViT: Pushing the Limit of Vision Transformer Quantization. Yanjing Li, Sheng Xu, Mingbao Lin, Xianbin Cao 0001, Chuanjian Liu, Xiao Sun, Baochang Zhang 0001. AAAI 2024: 3243-3251 [doi]
- NoisyQuant: Noisy Bias-Enhanced Post-Training Activation Quantization for Vision Transformers. Yijiang Liu, Huanrui Yang, Zhen Dong, Kurt Keutzer, Li Du, Shanghang Zhang. CVPR 2023: 20321-20330 [doi]
- I-ViT: Integer-only Quantization for Efficient Vision Transformer Inference. Zhikai Li, Qingyi Gu. ICCV 2023: 17019-17029 [doi]