Improving Inference Latency and Energy of Network-on-Chip based Convolutional Neural Networks through Weights Compression

Giuseppe Ascia, Vincenzo Catania, John Jose, Salvatore Monteleone, Maurizio Palesi, Davide Patti. Improving Inference Latency and Energy of Network-on-Chip based Convolutional Neural Networks through Weights Compression. In 2020 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2020, New Orleans, LA, USA, May 18-22, 2020, pages 54-63, IEEE, 2020.

Authors

Giuseppe Ascia

Vincenzo Catania

John Jose

Salvatore Monteleone

Maurizio Palesi

Davide Patti