The following publications are possibly variants of this publication:
- Knowledge Distillation with Feature Enhancement Mask. Yue Xiao, Longye Wang, Wentao Li, Xiaoli Zeng. ICANN 2023: 432-443 [doi]
- Feature Enhancement with Deep Feature Losses for Speaker Verification. Saurabh Kataria, Phani Sankar Nidadavolu, Jesús Villalba, Nanxin Chen, L. Paola García-Perera, Najim Dehak. ICASSP 2020: 7584-7588 [doi]
- Pushing the Limits of Self-Supervised Speaker Verification using Regularized Distillation Framework. Yafeng Chen, Siqi Zheng, Hui Wang, Luyao Cheng, Qian Chen. ICASSP 2023: 1-5 [doi]
- Network Specialization via Feature-level Knowledge Distillation. Gaowen Liu, Yuzhang Shang, Yuguang Yao, Ramana Kompella. CVPR 2023: 3368-3375 [doi]
- Knowledge Distillation via Multi-Teacher Feature Ensemble. Xin Ye, Rongxin Jiang 0001, Xiang Tian 0002, Rui Zhang, Yaowu Chen. SPL, 31:566-570, 2024. [doi]
- Enhancing Tiny Tissues Segmentation via Self-Distillation. Chuan Zhou 0004, Yuchu Chen, Minghao Fan, Yang Wen, Hang Chen, Leiting Chen. BIBM 2020: 934-940 [doi]