Gradient Descent for One-Hidden-Layer Neural Networks: Polynomial Convergence and SQ Lower Bounds

Santosh Vempala, John Wilmes. Gradient Descent for One-Hidden-Layer Neural Networks: Polynomial Convergence and SQ Lower Bounds. In Alina Beygelzimer, Daniel Hsu, editors, Conference on Learning Theory, COLT 2019, 25-28 June 2019, Phoenix, AZ, USA, volume 99 of Proceedings of Machine Learning Research, pages 3115-3117. PMLR, 2019.
