The following publications are possibly variants of this publication:
- A Novel DNN Training Framework via Data Sampling and Multi-Task Optimization. Boyu Zhang, A. Kai Qin, Hong Pan, Timos Sellis. ijcnn 2020: 1-8 [doi]
- Universal Style Transfer via Feature Transforms. Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, Ming-Hsuan Yang. nips 2017: 385-395 [doi]
- NS-FTL: Alleviating the Uneven Bit-Level Wearing of NVRAM-based FTL via NAND-SPIN. Wei-Chun Cheng, Shuo-Han Chen, Yuan-Hao Chang, Kuan-Hsun Chen, Jian-Jia Chen, Tseng-Yi Chen, Ming-Chang Yang, Wei Kuan Shih. nvmsa 2020: 1-6 [doi]
- An FPGA-Based Reconfigurable Accelerator for Low-Bit DNN Training. Haikuo Shao, Jinming Lu, Jun Lin, Zhongfeng Wang. isvlsi 2021: 254-259 [doi]
- ESRU: Extremely Low-Bit and Hardware-Efficient Stochastic Rounding Unit Design for Low-Bit DNN Training. Sung-En Chang, Geng Yuan, Alec Lu, Mengshu Sun, Yanyu Li, Xiaolong Ma, Zhengang Li, Yanyue Xie, Minghai Qin, Xue Lin, Zhenman Fang, Yanzhi Wang. date 2023: 1-6 [doi]
- You Already Have It: A Generator-Free Low-Precision DNN Training Framework Using Stochastic Rounding. Geng Yuan, Sung-En Chang, Qing Jin, Alec Lu, Yanyu Li, Yushu Wu, Zhenglun Kong, Yanyue Xie, Peiyan Dong, Minghai Qin, Xiaolong Ma, Xulong Tang, Zhenman Fang, Yanzhi Wang. eccv 2022: 34-51 [doi]