Pessimistic Q-Learning for Offline Reinforcement Learning: Towards Optimal Sample Complexity

Laixi Shi, Gen Li, Yuting Wei, Yuxin Chen, Yuejie Chi. Pessimistic Q-Learning for Offline Reinforcement Learning: Towards Optimal Sample Complexity. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, Sivan Sabato, editors, International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA. Volume 162 of Proceedings of Machine Learning Research, pages 19967-20025, PMLR, 2022.

@inproceedings{Shi0W0C22,
  title = {Pessimistic Q-Learning for Offline Reinforcement Learning: Towards Optimal Sample Complexity},
  author = {Laixi Shi and Gen Li and Yuting Wei and Yuxin Chen and Yuejie Chi},
  year = {2022},
  url = {https://proceedings.mlr.press/v162/shi22c.html},
  researchr = {https://researchr.org/publication/Shi0W0C22},
  pages = {19967--20025},
  booktitle = {International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA},
  editor = {Kamalika Chaudhuri and Stefanie Jegelka and Le Song and Csaba Szepesvári and Gang Niu and Sivan Sabato},
  volume = {162},
  series = {Proceedings of Machine Learning Research},
  publisher = {PMLR},
}