- Massively Parallel Neural Processing Array (MPNA): A CNN Accelerator for Embedded Systems. Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique. 3-24 [doi]
- Photonic NoCs for Energy-Efficient Data-Centric Computing. Febin P. Sunny, Asif Mirza, Ishan G. Thakkar, Mahdi Nikdast, Sudeep Pasricha. 25-61 [doi]
- Low- and Mixed-Precision Inference Accelerators. Maarten J. Molendijk, Floran A. M. de Putter, Henk Corporaal. 63-88 [doi]
- Designing Resource-Efficient Hardware Arithmetic for FPGA-Based Accelerators Leveraging Approximations and Mixed Quantizations. Salim Ullah, Siva Satyendra Sahoo, Akash Kumar. 89-119 [doi]
- Efficient Hardware Acceleration of Emerging Neural Networks for Embedded Machine Learning: An Industry Perspective. Arnab Raha, Raymond Sung, Soumendu Ghosh, Praveen Kumar Gupta, Deepak A. Mathaikutty, Umer I. Cheema, Kevin Hyland, Cormac Brick, Vijay Raghunathan. 121-172 [doi]
- An Off-Chip Memory Access Optimization for Embedded Deep Learning Systems. Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique. 175-198 [doi]
- In-Memory Computing for AI Accelerators: Challenges and Solutions. Gokul Krishnan, Sumit K. Mandal, Chaitali Chakrabarti, Jae-sun Seo, Ümit Y. Ogras, Yu Cao. 199-224 [doi]
- Efficient Deep Learning Using Non-volatile Memory Technology in GPU Architectures. Ahmet Inci, Mehmet Meric Isgenc, Diana Marculescu. 225-252 [doi]
- SoC-GANs: Energy-Efficient Memory Management for System-on-Chip Generative Adversarial Networks. Rehan Ahmed, Muhammad Zuhaib Akbar, Muhammad Abdullah Hanif, Muhammad Shafique. 253-274 [doi]
- Using Approximate DRAM for Enabling Energy-Efficient, High-Performance Deep Neural Network Inference. Lois Orosa, Skanda Koppula, Konstantinos Kanellopoulos, A. Giray Yaglikçi, Onur Mutlu. 275-314 [doi]
- On-Chip DNN Training for Direct Feedback Alignment in FeFET. Fan Chen. 317-335 [doi]
- Platform-Based Design of Embedded Neuromorphic Systems. M. Lakshmi Varshika, Anup Das. 337-358 [doi]
- Light Speed Machine Learning Inference on the Edge. Febin P. Sunny, Asif Mirza, Mahdi Nikdast, Sudeep Pasricha. 359-392 [doi]
- Low-Latency, Energy-Efficient In-DRAM CNN Acceleration with Bit-Parallel Unary Computing. Ishan G. Thakkar, Supreeth Mysore Shivanandamurthy, Sayed Ahmad Salehi. 393-409 [doi]