- Communication Quantization for Data-Parallel Training of Deep Neural Networks. Nikoli Dryden, Tim Moon, Sam Ade Jacobs, Brian Van Essen. 1-8 [doi]
- Performance-Portable Autotuning of OpenCL Kernels for Convolutional Layers of Deep Neural Networks. Yaohung M. Tsai, Piotr Luszczek, Jakub Kurzak, Jack J. Dongarra. 9-18 [doi]
- Distributed Training of Deep Neural Networks: Theoretical and Practical Limits of Parallel Scalability. Janis Keuper, Franz-Josef Pfreundt. 19-26 [doi]
- A Scalable Parallel Q-Learning Algorithm for Resource Constrained Decentralized Computing Environments. Miguel Camelo, Jeroen Famaey, Steven Latré. 27-35 [doi]
- Parallel Evolutionary Optimization for Neuromorphic Network Training. Catherine D. Schuman, Adam Disney, Susheela P. Singh, Grant Bruer, J. Parker Mitchell, Aleksander Klibisz, James S. Plank. 36-46 [doi]
- A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers. Thomas E. Potok, Catherine D. Schuman, Steven R. Young, Robert M. Patton, Federico Spedalieri, Jeremy Liu, Ke-Thia Yao, Garrett Rose, Gangotree Chakma. 47-55 [doi]
- Practical Efficiency of Asynchronous Stochastic Gradient Descent. Onkar Bhardwaj, Guojing Cong. 56-62 [doi]