The following publications are possibly variants of this publication:
- Efficient Technique to Accelerate Neural Network Training by Freezing Hidden Layers. Xu-Hui Chen, Ejaz Ul Haq, Chengyu Zhou. ICIS 2019: 542-546 [doi]
- SmartFRZ: An Efficient Training Framework using Attention-Based Layer Freezing. Sheng Li, Geng Yuan, Yue Dai, Youtao Zhang, Yanzhi Wang, Xulong Tang. ICLR 2023 [doi]
- FreezePipe: An Efficient Dynamic Pipeline Parallel Approach Based on Freezing Mechanism for Distributed DNN Training. Caishan Weng, Zhiyang Shu, Zhengjia Xu, Jinghui Zhang, Junzhou Luo, Fang Dong 0001, Peng Wang, Zhengang Wang. CSCWD 2023: 303-308 [doi]
- Layer Freezing &amp; Data Sieving: Missing Pieces of a Generic Framework for Sparse Training. Geng Yuan, Yanyu Li, Sheng Li 0019, Zhenglun Kong, Sergey Tulyakov, Xulong Tang, Yanzhi Wang, Jian Ren. NeurIPS 2022 [doi]