The following publications are possibly variants of this publication:
- ACBN: Approximate Calculated Batch Normalization for Efficient DNN On-Device Training Processor. Baoting Li, Hang Wang, Fujie Luo, Xuchong Zhang, Hongbin Sun 0001, Nanning Zheng 0001. TVLSI, 31(6):738-748, June 2023. [doi]
- EUNNet: Efficient UN-Normalized Convolution Layer for Stable Training of Deep Residual Networks Without Batch Normalization Layer. Khanh-Binh Nguyen, Jaehyuk Choi 0001, Joon-Sung Yang. IEEE Access, 11:76977-76988, 2023. [doi]
- Training Binary Neural Network without Batch Normalization for Image Super-Resolution. Xinrui Jiang, Nannan Wang, Jingwei Xin, Keyu Li, Xi Yang 0011, Xinbo Gao 0001. AAAI 2021: 1700-1707 [doi]
- Training high-performance and large-scale deep neural networks with full 8-bit integers. Yukuan Yang, Lei Deng 0003, Shuang Wu, Tianyi Yan, Yuan Xie 0001, Guoqi Li. Neural Networks, 125:70-82, 2020. [doi]
- L1-Norm Batch Normalization for Efficient Training of Deep Neural Networks. Shuang Wu, Guoqi Li, Lei Deng, Liu Liu, Dong Wu, Yuan Xie 0001, Luping Shi. TNN, 30(7):2043-2051, 2019. [doi]
- Towards Fully 8-bit Integer Inference for the Transformer Model. Ye Lin, Yanyang Li, Tengbo Liu, Tong Xiao, Tongran Liu, Jingbo Zhu. IJCAI 2020: 3759-3765 [doi]
- Mercury: Efficient On-Device Distributed DNN Training via Stochastic Importance Sampling. Xiao Zeng, Ming Yan, Mi Zhang 0002. SenSys 2021: 29-41 [doi]