The following publications are possibly variants of this publication:
- Compute-Efficient Neural-Network Acceleration. Ephrem Wu, Xiaoqian Zhang, David Berman, Inkeun Cho, John Thendean. fpga 2019: 191-200 [doi]
- FracBNN: Accurate and FPGA-Efficient Binary Neural Networks with Fractional Activations. Yichi Zhang, Junhao Pan, Xinheng Liu, Hongzheng Chen, Deming Chen, Zhiru Zhang. fpga 2021: 171-182 [doi]
- FILM-QNN: Efficient FPGA Acceleration of Deep Neural Networks with Intra-Layer, Mixed-Precision Quantization. Mengshu Sun, Zhengang Li, Alec Lu, Yanyu Li, Sung-En Chang, Xiaolong Ma, Xue Lin, Zhenman Fang. fpga 2022: 134-145 [doi]
- Frequency Domain Acceleration of Convolutional Neural Networks on CPU-FPGA Shared Memory System. Chi Zhang, Viktor K. Prasanna. fpga 2017: 35-44 [doi]
- A 7.663-TOPS 8.2-W Energy-efficient FPGA Accelerator for Binary Convolutional Neural Networks (Abstract Only). Yixing Li, Zichuan Liu, Kai Xu, Hao Yu, Fengbo Ren. fpga 2017: 290-291 [doi]
- Unleashing the Power of Soft Logic for Convolutional Neural Network Acceleration via Product Quantization. Jialiang Zhang, Jing Li. fpga 2019: 120 [doi]
- SparseBNN: Joint Algorithm/Hardware Optimization to Exploit Structured Sparsity in Binary Neural Network. Xin He, Liu Ke, Xuan Zhang. fpga 2019: 117-118 [doi]
- Chaotic and Hyperchaotic Attractors in Time-Delayed Neural Networks. Dong Zhang, Jian Xu. bertinoro 2009: 1193-1202 [doi]