The following publications are possibly variants of this publication:
- TripLe: Revisiting Pretrained Model Reuse and Progressive Learning for Efficient Vision Transformer Scaling and Searching. Cheng Fu, Hanxian Huang, Zixuan Jiang, Yun Ni, Lifeng Nai, Gang Wu, Liqun Cheng, Yanqi Zhou, Sheng Li 0007, Andrew Li, Jishen Zhao. ICCV 2023: 17107-17117
- On the Transformer Growth for Progressive BERT Training. Xiaotao Gu, Liyuan Liu, Hongkun Yu, Jing Li, Chen Chen 0005, Jiawei Han 0001. NAACL 2021: 5174-5180
- Efficient Self-supervised Vision Transformers for Representation Learning. Chunyuan Li, Jianwei Yang, Pengchuan Zhang, Mei Gao, Bin Xiao, Xiyang Dai, Lu Yuan, Jianfeng Gao. ICLR 2022