The following publications are possibly variants of this publication:
- MaeFE: Masked Autoencoders Family of Electrocardiogram for Self-Supervised Pretraining and Transfer Learning. Huaicheng Zhang, Wenhan Liu, Jiguang Shi, Sheng Chang, Hao Wang 0046, Jin He 0002, Qijun Huang. IEEE TIM, 72:1-15, 2023. [doi]
- MaeFuse: Transferring Omni Features With Pretrained Masked Autoencoders for Infrared and Visible Image Fusion via Guided Training. Jiayang Li, Junjun Jiang, Pengwei Liang, Jiayi Ma 0001, Liqiang Nie. IEEE TIP, 34:1340-1353, 2025. [doi]
- Bootstrapped Masked Autoencoders for Vision BERT Pretraining. Xiaoyi Dong, Jianmin Bao, Ting Zhang, Dongdong Chen 0001, Weiming Zhang 0001, Lu Yuan, Dong Chen 0003, Fang Wen 0001, Nenghai Yu. ECCV 2022: 247-264. [doi]
- HSIMAE: A Unified Masked Autoencoder With Large-Scale Pretraining for Hyperspectral Image Classification. Yue Wang, Ming Wen 0003, Hailiang Zhang, Jinyu Sun, Qiong Yang, Zhimin Zhang, Hongmei Lu. IEEE JSTARS, 17:14064-14079, 2024. [doi]
- Rethinking Masked-Autoencoder-Based 3D Point Cloud Pretraining. Nuo Cheng, Chuanyu Luo, Xinzhe Li, Ruizhi Hu, Han Li, Sikun Ma, Zhong Ren, Haipeng Jiang, Xiaohan Li, Shengguang Lei, Pu Li 0001. IEEE IV 2024: 2763-2768. [doi]
- Self-Supervised Pretraining Vision Transformer With Masked Autoencoders for Building Subsurface Model. Yuanyuan Li, Tariq Alkhalifah, Jianping Huang, Zhenchun Li. IEEE TGRS, 61:1-10, 2023. [doi]