Do Massively Pretrained Language Models Make Better Storytellers?

Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, Christopher D. Manning. Do Massively Pretrained Language Models Make Better Storytellers? In Mohit Bansal, Aline Villavicencio, editors, Proceedings of the 23rd Conference on Computational Natural Language Learning, CoNLL 2019, Hong Kong, China, November 3-4, 2019, pages 843-861. Association for Computational Linguistics, 2019. [doi]

@inproceedings{SeePSYM19,
  title = {Do Massively Pretrained Language Models Make Better Storytellers?},
  author = {Abigail See and Aneesh Pappu and Rohun Saxena and Akhila Yerukola and Christopher D. Manning},
  year = {2019},
  doi = {10.18653/v1/K19-1079},
  url = {https://doi.org/10.18653/v1/K19-1079},
  researchr = {https://researchr.org/publication/SeePSYM19},
  pages = {843--861},
  booktitle = {Proceedings of the 23rd Conference on Computational Natural Language Learning, CoNLL 2019, Hong Kong, China, November 3-4, 2019},
  editor = {Mohit Bansal and Aline Villavicencio},
  publisher = {Association for Computational Linguistics},
  isbn = {978-1-950737-72-7},
}