Using Pre-Trained Transformer for Better Lay Summarization

Seungwon Kim. Using Pre-Trained Transformer for Better Lay Summarization. In Muthu Kumar Chandrasekaran, Anita de Waard, Guy Feigenblat, Dayne Freitag, Tirthankar Ghosal, Eduard H. Hovy, Petr Knoth, David Konopnicki, Philipp Mayr, Robert M. Patton, and Michal Shmueli-Scheuer, editors, Proceedings of the First Workshop on Scholarly Document Processing, SDP@EMNLP 2020, Online, November 19, 2020, pages 328-335. Association for Computational Linguistics, 2020. doi: 10.18653/v1/2020.sdp-1.38.

@inproceedings{Kim20-56,
  title = {Using Pre-Trained Transformer for Better Lay Summarization},
  author = {Seungwon Kim},
  year = {2020},
  doi = {10.18653/v1/2020.sdp-1.38},
  url = {https://doi.org/10.18653/v1/2020.sdp-1.38},
  researchr = {https://researchr.org/publication/Kim20-56},
  pages = {328-335},
  booktitle = {Proceedings of the First Workshop on Scholarly Document Processing, SDP@EMNLP 2020, Online, November 19, 2020},
  editor = {Muthu Kumar Chandrasekaran and Anita de Waard and Guy Feigenblat and Dayne Freitag and Tirthankar Ghosal and Eduard H. Hovy and Petr Knoth and David Konopnicki and Philipp Mayr and Robert M. Patton and Michal Shmueli-Scheuer},
  publisher = {Association for Computational Linguistics},
  isbn = {978-1-952148-70-5},
}