Explicit loss asymptotics in the gradient descent training of neural networks

Maksim Velikanov, Dmitry Yarotsky. Explicit loss asymptotics in the gradient descent training of neural networks. In Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, Jennifer Wortman Vaughan, editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 2570-2582, 2021.

@inproceedings{VelikanovY21,
  title = {Explicit loss asymptotics in the gradient descent training of neural networks},
  author = {Maksim Velikanov and Dmitry Yarotsky},
  year = {2021},
  url = {https://proceedings.neurips.cc/paper/2021/hash/14faf969228fc18fcd4fcf59437b0c97-Abstract.html},
  researchr = {https://researchr.org/publication/VelikanovY21},
  pages = {2570--2582},
  booktitle = {Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual},
  editor = {Marc'Aurelio Ranzato and Alina Beygelzimer and Yann N. Dauphin and Percy Liang and Jennifer Wortman Vaughan},
}