Using Pre-Trained Transformer for Better Lay Summarization

Seungwon Kim. Using Pre-Trained Transformer for Better Lay Summarization. In Muthu Kumar Chandrasekaran, Anita de Waard, Guy Feigenblat, Dayne Freitag, Tirthankar Ghosal, Eduard H. Hovy, Petr Knoth, David Konopnicki, Philipp Mayr, Robert M. Patton, Michal Shmueli-Scheuer, editors, Proceedings of the First Workshop on Scholarly Document Processing, SDP@EMNLP 2020, Online, November 19, 2020, pages 328-335. Association for Computational Linguistics, 2020.
