The following publications are possibly variants of this publication:
- DropPos: Pre-Training Vision Transformers by Reconstructing Dropped Positions. Haochen Wang, Junsong Fan, Yuxi Wang, Kaiyou Song, Tong Wang, Zhao-Xiang Zhang. NeurIPS 2023. [doi]
- Improving Seismic Fault Recognition with Self-Supervised Pre-Training: A Study of 3D Transformer-Based with Multi-Scale Decoding and Fusion. Zeren Zhang, Ran Chen, Jinwen Ma. Remote Sensing, 16(5):922, March 2024. [doi]
- Token Boosting for Robust Self-Supervised Visual Transformer Pre-training. Tianjiao Li, Lin Geng Foo, Ping Hu, Xindi Shang, Hossein Rahmani, Zehuan Yuan, Jun Liu 0036. CVPR 2023: 24027-24038. [doi]
- DiT: Self-supervised Pre-training for Document Image Transformer. Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui 0001, Cha Zhang, Furu Wei. MM 2022: 3530-3539. [doi]
- Motion-transformer: self-supervised pre-training for skeleton-based action recognition. Yi-Bin Cheng, Xipeng Chen, Dongyu Zhang, Liang Lin. MMAsia 2021. [doi]
- S3T: Self-Supervised Pre-Training with Swin Transformer For Music Classification. Hang Zhao, Chen Zhang, Bilei Zhu, Zejun Ma, Kejun Zhang. ICASSP 2022: 606-610. [doi]