The following publications are possibly variants of this publication:
- Scale-space Tokenization for Improving the Robustness of Vision Transformers. Lei Xu, Rei Kawakami, Nakamasa Inoue. MM 2023: 2684-2693 [doi]
- No Token Left Behind: Efficient Vision Transformer via Dynamic Token Idling. Xuwei Xu, Changlin Li, Yudong Chen 0002, Xiaojun Chang, Jiajun Liu, Sen Wang 0001. AusAI 2024: 28-41 [doi]
- Dynamic Token Pruning in Plain Vision Transformers for Semantic Segmentation. Quan Tang 0001, Bowen Zhang, Jiajun Liu, Fagui Liu, Yifan Liu. ICCV 2023: 777-786 [doi]
- Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer. Yifan Xu, Zhijie Zhang, Mengdan Zhang, Kekai Sheng, Ke Li, Weiming Dong, Liqing Zhang, Changsheng Xu, Xing Sun. AAAI 2022: 2964-2972 [doi]