The following publications may be variants of this publication:
- Robust Training under Label Noise by Over-parameterization. Sheng Liu, Zhihui Zhu, Qing Qu 0001, Chong You. ICML 2022: 14153-14172 [doi]
- Beyond Lazy Training for Over-parameterized Tensor Decomposition. Xiang Wang, Chenwei Wu 0002, Jason D. Lee, Tengyu Ma, Rong Ge 0001. NeurIPS 2020 [doi]
- Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training. Shiwei Liu, Lu Yin, Decebal Constantin Mocanu, Mykola Pechenizkiy. ICML 2021: 6989-7000 [doi]
- Parameterized Approaches to Orthogonal Compaction. Walter Didimo, Siddharth Gupta 0002, Philipp Kindermann, Giuseppe Liotta, Alexander Wolff 0001, Meirav Zehavi. SOFSEM 2023: 111-125 [doi]
- Train Faster, Perform Better: Modular Adaptive Training in Over-Parameterized Models. Yubin Shi, Yixuan Chen 0003, Mingzhi Dong, Xiaochen Yang, Dongsheng Li, Yujiang Wang 0001, Robert P. Dick, Qin Lv, Yingying Zhao, Fan Yang 0001, Tun Lu, Ning Gu, Li Shang. NeurIPS 2023 [doi]
- Does Preprocessing Help Training Over-parameterized Neural Networks? Zhao Song 0002, Shuo Yang, Ruizhe Zhang 0001. NeurIPS 2021: 22890-22904 [doi]