The following publications are possibly variants of this publication:
- All in Tokens: Unifying Output Space of Visual Tasks via Soft Token. Jia Ning, Chen Li, Zheng Zhang, Chunyu Wang, Zigang Geng, Qi Dai, Kun He 0001, Han Hu 0001. ICCV 2023: 19843-19853 [doi]
- All Tokens Matter: Token Labeling for Training Better Vision Transformers. Zihang Jiang, Qibin Hou, Li Yuan 0007, Daquan Zhou, Yujun Shi, Xiaojie Jin, Anran Wang, Jiashi Feng. NeurIPS 2021: 18590-18602 [doi]
- No Token Left Behind: Efficient Vision Transformer via Dynamic Token Idling. Xuwei Xu, Changlin Li, Yudong Chen 0002, Xiaojun Chang, Jiajun Liu, Sen Wang 0001. AJCAI 2024: 28-41 [doi]
- EViT: Expediting Vision Transformers via Token Reorganizations. Youwei Liang, Chongjian Ge, Zhan Tong, Yibing Song, Jue Wang 0001, Pengtao Xie. ICLR 2022 [doi]