LNPU: A 25.3TFLOPS/W Sparse Deep-Neural-Network Learning Processor with Fine-Grained Mixed Precision of FP8-FP16

Jinsu Lee, Juhyoung Lee, Donghyeon Han, Jinmook Lee, Gwangtae Park, Hoi-Jun Yoo. LNPU: A 25.3TFLOPS/W Sparse Deep-Neural-Network Learning Processor with Fine-Grained Mixed Precision of FP8-FP16. In IEEE International Solid-State Circuits Conference, ISSCC 2019, San Francisco, CA, USA, February 17-21, 2019. pages 142-144, IEEE, 2019.

Authors

Jinsu Lee

Juhyoung Lee

Donghyeon Han

Jinmook Lee

Gwangtae Park

Hoi-Jun Yoo
