Gradient Descent for One-Hidden-Layer Neural Networks: Polynomial Convergence and SQ Lower Bounds

Santosh Vempala, John Wilmes. Gradient Descent for One-Hidden-Layer Neural Networks: Polynomial Convergence and SQ Lower Bounds. In Alina Beygelzimer, Daniel Hsu, editors, Conference on Learning Theory, COLT 2019, 25-28 June 2019, Phoenix, AZ, USA, volume 99 of Proceedings of Machine Learning Research, pages 3115-3117. PMLR, 2019.

@inproceedings{VempalaW19,
  title = {Gradient Descent for One-Hidden-Layer Neural Networks: Polynomial Convergence and SQ Lower Bounds},
  author = {Santosh Vempala and John Wilmes},
  year = {2019},
  url = {http://proceedings.mlr.press/v99/vempala19a.html},
  researchr = {https://researchr.org/publication/VempalaW19},
  pages = {3115-3117},
  booktitle = {Conference on Learning Theory, COLT 2019, 25-28 June 2019, Phoenix, AZ, USA},
  editor = {Alina Beygelzimer and Daniel Hsu},
  volume = {99},
  series = {Proceedings of Machine Learning Research},
  publisher = {PMLR},
}