- Findings of the Fourth Workshop on Neural Generation and Translation. Kenneth Heafield, Hiroaki Hayashi, Yusuke Oda, Ioannis Konstas, Andrew M. Finch, Graham Neubig, Xian Li, Alexandra Birch. 1-9 [doi]
- Learning to Generate Multiple Style Transfer Outputs for an Input Sentence. Kevin Lin, Ming-Yu Liu, Ming-Ting Sun, Jan Kautz. 10-23 [doi]
- Balancing Cost and Benefit with Tied-Multi Transformers. Raj Dabre, Raphael Rubino, Atsushi Fujita. 24-34 [doi]
- Compressing Neural Machine Translation Models with 4-bit Precision. Alham Fikri Aji, Kenneth Heafield. 35-42 [doi]
- Meta-Learning for Few-Shot NMT Adaptation. Amr Sharaf, Hany Hassan, Hal Daumé III. 43-53 [doi]
- Automatically Ranked Russian Paraphrase Corpus for Text Generation. Vadim Gudkov, Olga Mitrofanova, Elizaveta Filippskikh. 54-59 [doi]
- A Deep Reinforced Model for Zero-Shot Cross-Lingual Summarization with Bilingual Semantic Similarity Rewards. Zi-Yi Dou, Sachin Kumar, Yulia Tsvetkov. 60-68 [doi]
- A Question Type Driven and Copy Loss Enhanced Framework for Answer-Agnostic Neural Question Generation. Xiuyu Wu, Nan Jiang, Yunfang Wu. 69-78 [doi]
- A Generative Approach to Titling and Clustering Wikipedia Sections. Anjalie Field, Sascha Rothe, Simon Baumgartner, Cong Yu, Abe Ittycheriah. 79-87 [doi]
- The Unreasonable Volatility of Neural Machine Translation Models. Marzieh Fadaee, Christof Monz. 88-96 [doi]
- Leveraging Sentence Similarity in Natural Language Generation: Improving Beam Search using Range Voting. Sebastian Borgeaud, Guy Emerson. 97-109 [doi]
- Distill, Adapt, Distill: Training Small, In-Domain Models for Neural Machine Translation. Mitchell A. Gordon, Kevin Duh. 110-118 [doi]
- Training and Inference Methods for High-Coverage Neural Machine Translation. Michael Yang, Yixin Liu, Rahul Mayuranath. 119-128 [doi]
- Meeting the 2020 Duolingo Challenge on a Shoestring. Tadashi Nomoto. 129-133 [doi]
- English-to-Japanese Diverse Translation by Combining Forward and Backward Outputs. Masahiro Kaneko, Aizhan Imankulova, Tosho Hirasawa, Mamoru Komachi. 134-138 [doi]
- POSTECH Submission on Duolingo Shared Task. Junsu Park, Hong-Seok Kwon, Jong-Hyeok Lee. 139-143 [doi]
- The ADAPT System Description for the STAPLE 2020 English-to-Portuguese Translation Task. Rejwanul Haque, Yasmin Moslem, Andy Way. 144-152 [doi]
- Expand and Filter: CUNI and LMU Systems for the WNGT 2020 Duolingo Shared Task. Jindrich Libovický, Zdenek Kasner, Jindrich Helcl, Ondrej Dusek. 153-160 [doi]
- Exploring Model Consensus to Generate Translation Paraphrases. Zhenhao Li, Marina Fomicheva, Lucia Specia. 161-168 [doi]
- Growing Together: Modeling Human Language Learning With n-Best Multi-Checkpoint Machine Translation. El Moatez Billah Nagoudi, Muhammad Abdul-Mageed, Hasan Cavusoglu. 169-177 [doi]
- Generating Diverse Translations via Weighted Fine-tuning and Hypotheses Filtering for the Duolingo STAPLE Task. Sweta Agrawal, Marine Carpuat. 178-187 [doi]
- The JHU Submission to the 2020 Duolingo Shared Task on Simultaneous Translation and Paraphrase for Language Education. Huda Khayrallah, Jacob Bremerman, Arya D. McCarthy, Kenton Murray, Winston Wu, Matt Post. 188-197 [doi]
- Simultaneous paraphrasing and translation by fine-tuning Transformer models. Rakesh Chada. 198-203 [doi]
- The NiuTrans System for WNGT 2020 Efficiency Task. Chi Hu, Bei Li, Yinqiao Li, Ye Lin, Yanyang Li, Chenglong Wang, Tong Xiao, Jingbo Zhu. 204-210 [doi]
- Efficient and High-Quality Neural Machine Translation with OpenNMT. Guillaume Klein, Dakun Zhang, Clément Chouteau, Josep Maria Crego, Jean Senellart. 211-217 [doi]
- Edinburgh's Submissions to the 2020 Machine Translation Efficiency Task. Nikolay Bogoychev, Roman Grundkiewicz, Alham Fikri Aji, Maximiliana Behnke, Kenneth Heafield, Sidharth Kashyap, Emmanouil-Ioannis Farsarakis, Mateusz Chudyk. 218-224 [doi]
- Improving Document-Level Neural Machine Translation with Domain Adaptation. Sami ul Haq, Sadaf Abdul-Rauf, Arslan Shoukat, Noor-e-Hira. 225-231 [doi]
- Simultaneous Translation and Paraphrase for Language Education. Stephen Mayhew, Klinton Bicknell, Chris Brust, Bill McDowell, Will Monroe, Burr Settles. 232-243 [doi]