- Findings of the Third Workshop on Neural Generation and Translation. Hiroaki Hayashi, Yusuke Oda, Alexandra Birch, Ioannis Konstas, Andrew M. Finch, Minh-Thang Luong, Graham Neubig, Katsuhito Sudoh. 1-14
- Hello, It's GPT-2 - How Can I Help You? Towards the Use of Pretrained Language Models for Task-Oriented Dialogue Systems. Pawel Budzianowski, Ivan Vulic. 15-22
- Recycling a Pre-trained BERT Encoder for Neural Machine Translation. Kenji Imamura, Eiichiro Sumita. 23-31
- Generating a Common Question from Multiple Documents using Multi-source Encoder-Decoder Models. Woon Sang Cho, Yizhe Zhang, Sudha Rao, Chris Brockett, Sungjin Lee. 32-43
- Generating Diverse Story Continuations with Controllable Semantics. Lifu Tu, Xiaoan Ding, Dong Yu, Kevin Gimpel. 44-58
- Domain Differential Adaptation for Neural Machine Translation. Zi-Yi Dou, Xinyi Wang, Junjie Hu, Graham Neubig. 59-69
- Transformer-based Model for Single Documents Neural Summarization. Elozino Egonmwan, Yllias Chali. 70-79
- Making Asynchronous Stochastic Gradient Descent Work for Transformers. Alham Fikri Aji, Kenneth Heafield. 80-89
- Controlled Text Generation for Data Augmentation in Intelligent Artificial Agents. Nikolaos Malandrakis, Minmin Shen, Anuj Kumar Goyal, Shuyang Gao, Abhishek Sethi, Angeliki Metallinou. 90-98
- Zero-Resource Neural Machine Translation with Monolingual Pivot Data. Anna Currey, Kenneth Heafield. 99-107
- On the use of BERT for Neural Machine Translation. Stéphane Clinchant, Kweon Woo Jung, Vassilina Nikoulina. 108-117
- On the Importance of the Kullback-Leibler Divergence Term in Variational Autoencoders for Text Generation. Victor Prokhorov, Ehsan Shareghi, Yingzhen Li, Mohammad Taher Pilehvar, Nigel Collier. 118-127
- Decomposing Textual Information For Style Transfer. Ivan P. Yamshchikov, Viacheslav Shibaev, Aleksander Nagaev, Jürgen Jost, Alexey Tikhonov. 128-137
- Unsupervised Evaluation Metrics and Learning Criteria for Non-Parallel Textual Transfer. Richard Yuanzhe Pang, Kevin Gimpel. 138-147
- Enhanced Transformer Model for Data-to-Text Generation. Li Gong, Josep Maria Crego, Jean Senellart. 148-156
- Generalization in Generation: A closer look at Exposure Bias. Florian Schmidt. 157-167
- Machine Translation of Restaurant Reviews: New Corpus for Domain Adaptation and Robustness. Alexandre Berard, Ioan Calapodescu, Marc Dymetman, Claude Roux, Jean-Luc Meunier, Vassilina Nikoulina. 168-176
- Adaptively Scheduled Multitask Learning: The Case of Low-Resource Neural Machine Translation. Poorya ZareMoodi, Gholamreza Haffari. 177-186
- On the Importance of Word Boundaries in Character-level Neural Machine Translation. Duygu Ataman, Orhan Firat, Mattia Antonino Di Gangi, Marcello Federico, Alexandra Birch. 187-193
- Big Bidirectional Insertion Representations for Documents. Lala Li, William Chan. 194-198
- A Margin-based Loss with Synthetic Negative Samples for Continuous-output Machine Translation. Gayatri Bhat, Sachin Kumar, Yulia Tsvetkov. 199-205
- Mixed Multi-Head Self-Attention for Neural Machine Translation. Hongyi Cui, Shohei Iida, Po-Hsuan Hung, Takehito Utsuro, Masaaki Nagata. 206-214
- Paraphrasing with Large Language Models. Sam Witteveen, Martin Andrews. 215-220
- Interrogating the Explanatory Power of Attention in Neural Machine Translation. Pooya Moradi, Nishant Kambhatla, Anoop Sarkar. 221-230
- Auto-Sizing the Transformer Network: Improving Speed, Efficiency, and Performance for Low-Resource Machine Translation. Kenton Murray, Jeffery Kinnison, Toan Q. Nguyen, Walter J. Scheirer, David Chiang. 231-240
- Learning to Generate Word- and Phrase-Embeddings for Efficient Phrase-Based Neural Machine Translation. Chan Young Park, Yulia Tsvetkov. 241-248
- Transformer and seq2seq model for Paraphrase Generation. Elozino Egonmwan, Yllias Chali. 249-255
- Monash University's Submissions to the WNGT 2019 Document Translation Task. Sameen Maruf, Gholamreza Haffari. 256-261
- SYSTRAN @ WNGT 2019: DGT Task. Li Gong, Josep Maria Crego, Jean Senellart. 262-267
- University of Edinburgh's submission to the Document-level Generation and Translation Shared Task. Ratish Puduppully, Jonathan Mallinson, Mirella Lapata. 268-272
- Naver Labs Europe's Systems for the Document-Level Generation and Translation Task at WNGT 2019. Fahimeh Saleh, Alexandre Berard, Ioan Calapodescu, Laurent Besacier. 273-279
- From Research to Production and Back: Ludicrously Fast Neural Machine Translation. Young-Jin Kim, Marcin Junczys-Dowmunt, Hany Hassan, Alham Fikri Aji, Kenneth Heafield, Roman Grundkiewicz, Nikolay Bogoychev. 280-288
- Selecting, Planning, and Rewriting: A Modular Approach for Data-to-Document Generation and Translation. Lesly Miculicich, Marc Marone, Hany Hassan. 289-296
- Efficiency through Auto-Sizing: Notre Dame NLP's Submission to the WNGT 2019 Efficiency Task. Kenton Murray, Brian DuSell, David Chiang. 297-301