The following publications are possibly variants of this publication:
- GTP-ViT: Efficient Vision Transformers via Graph-based Token Propagation. Xuwei Xu, Sen Wang, Yudong Chen, Yanping Zheng, Zhewei Wei, Jiajun Liu. WACV 2024: 86-95 [doi]
- All Tokens Matter: Token Labeling for Training Better Vision Transformers. Zihang Jiang, Qibin Hou, Li Yuan 0007, Daquan Zhou, Yujun Shi, Xiaojie Jin, Anran Wang, Jiashi Feng. NeurIPS 2021: 18590-18602 [doi]
- Training Object Detectors from Scratch: An Empirical Study in the Era of Vision Transformer. Weixiang Hong, Jiangwei Lao, Wang Ren, Jian Wang, Jingdong Chen, Wei Chu. CVPR 2022: 4652-4661 [doi]
- Making Vision Transformers Efficient from A Token Sparsification View. Shuning Chang, Pichao Wang, Ming Lin, Fan Wang, David Junhao Zhang, Rong Jin 0001, Mike Zheng Shou. CVPR 2023: 6195-6205 [doi]
- FGPTQ-ViT: Fine-Grained Post-training Quantization for Vision Transformers. Caihua Liu, Hongyang Shi, Xinyu He. PRCV 2024: 79-90 [doi]