Logarithm-approximate floating-point multiplier is applicable to power-efficient neural network training

TaiYu Cheng, Yutaka Masuda, Jun Chen, Jaehoon Yu, Masanori Hashimoto. Logarithm-approximate floating-point multiplier is applicable to power-efficient neural network training. Integration, 74:19-31, 2020. [doi]
