- AutoTemplate: A Simple Recipe for Lexically Constrained Text Generation. Hayate Iso. 1-12 [doi]
- Noisy Pairing and Partial Supervision for Stylized Opinion Summarization. Hayate Iso, Xiaolan Wang, Yoshi Suhara. 13-23 [doi]
- LLM Neologism: Emergence of Mutated Characters due to Byte Encoding. Ran Iwamoto, Hiroshi Kanayama. 24-29 [doi]
- Communicating Uncertainty in Explanations of the Outcomes of Machine Learning Models. Ingrid Zukerman, Sameen Maruf. 30-46 [doi]
- Entity-aware Multi-task Training Helps Rare Word Machine Translation. Matiss Rikters, Makoto Miwa. 47-54 [doi]
- CEval: A Benchmark for Evaluating Counterfactual Text Generation. Van Bach Nguyen, Christin Seifert, Jörg Schlötterer. 55-69 [doi]
- Generating from AMRs into High and Low-Resource Languages using Phylogenetic Knowledge and Hierarchical QLoRA Training (HQL). William Soto Martinez, Yannick Parmentier, Claire Gardent. 70-81 [doi]
- AMERICANO: Argument Generation with Discourse-driven Decomposition and Agent Interaction. Zhe Hu, Hou Pong Chan, Yu Yin. 82-102 [doi]
- Generating Simple, Conservative and Unifying Explanations for Logistic Regression Models. Sameen Maruf, Ingrid Zukerman, Xuelin Situ, Cécile Paris, Gholamreza Haffari. 103-120 [doi]
- Extractive Summarization via Fine-grained Semantic Tuple Extraction. Yubin Ge, Sullam Jeoung, Jana Diesner. 121-133 [doi]
- Evaluating RDF-to-text Generation Models for English and Russian on Out Of Domain Data. Anna Nikiforovskaya, Claire Gardent. 134-144 [doi]
- Forecasting Implicit Emotions Elicited in Conversations. Yurie Koga, Shunsuke Kando, Yusuke Miyao. 145-152 [doi]
- German Voter Personas Can Radicalize LLM Chatbots via the Echo Chamber Effect. Maximilian Bleick, Nils Feldhus, Aljoscha Burchardt, Sebastian Möller. 153-164 [doi]
- Quantifying Memorization and Detecting Training Data of Pre-trained Language Models using Japanese Newspaper. Shotaro Ishihara, Hiromu Takahashi. 165-179 [doi]
- Should We Fine-Tune or RAG? Evaluating Different Techniques to Adapt LLMs for Dialogue. Simone Alghisi, Massimo Rizzoli, Gabriel Roccabruna, Seyed Mahed Mousavi, Giuseppe Riccardi. 180-197 [doi]
- Automating True-False Multiple-Choice Question Generation and Evaluation with Retrieval-based Accuracy Differential. Chen-Jui Yu, Wen Hung Lee, Lin Tse Ke, Shih-Wei Guo, Yao-Chung Fan. 198-212 [doi]
- Transfer-Learning based on Extract, Paraphrase and Compress Models for Neural Abstractive Multi-Document Summarization. Yllias Chali, Elozino Egonmwan. 213-221 [doi]
- Enhancing Presentation Slide Generation by LLMs with a Multi-Staged End-to-End Approach. Sambaran Bandyopadhyay, Himanshu Maheshwari, Anandhavelu Natarajan, Apoorv Saxena. 222-229 [doi]
- Is Machine Psychology here? On Requirements for Using Human Psychological Tests on Large Language Models. Lea Löhn, Niklas Kiehne, Alexander Ljapunov, Wolf-Tilo Balke. 230-242 [doi]
- Exploring the impact of data representation on neural data-to-text generation. David M. Howcroft, Lewis N. Watson, Olesia Nedopas, Dimitra Gkatzia. 243-253 [doi]
- Automatically Generating IsiZulu Words From Indo-Arabic Numerals. Zola Mahlaza, Tadiwa Magwenzi, C. Maria Keet, Langa Khumalo. 254-271 [doi]
- (Mostly) Automatic Experiment Execution for Human Evaluations of NLP Systems. Craig Thomson, Anya Belz. 272-279 [doi]
- Generating Hotel Highlights from Unstructured Text using LLMs. Srinivas Ramesh Kamath, Fahime Same, Saad Mahamood. 280-288 [doi]
- Text2Traj2Text: Learning-by-Synthesis Framework for Contextual Captioning of Human Movement Trajectories. Hikaru Asano, Ryo Yonetani, Taiki Sekii, Hiroki Ouchi. 289-302 [doi]
- n-gram F-score for Evaluating Grammatical Error Correction. Shota Koyama, Ryo Nagata, Hiroya Takamura, Naoaki Okazaki. 303-313 [doi]
- Personalized Cloze Test Generation with Large Language Models: Streamlining MCQ Development and Enhancing Adaptive Learning. Chih-Hsuan Shen, Yi-Li Kuo, Yao-Chung Fan. 314-319 [doi]
- Pipeline Neural Data-to-text with Large Language Models. Chinonso Cynthia Osuji, Brian Timoney, Thiago Castro Ferreira, Brian Davis. 320-329 [doi]
- Reduction-Synthesis: Plug-and-Play for Sentiment Style Transfer. Sheng Xu, Fumiyo Fukumoto, Yoshimi Suzuki. 330-343 [doi]
- Resilience through Scene Context in Visual Referring Expression Generation. Simeon Junker, Sina Zarrieß. 344-357 [doi]
- The Unreasonable Ineffectiveness of Nucleus Sampling on Mitigating Text Memorization. Luka Borec, Philipp Sadler, David Schlangen. 358-370 [doi]
- CADGE: Context-Aware Dialogue Generation Enhanced with Graph-Structured Knowledge Aggregation. Chen Tang, Hongbo Zhang, Tyler Loakman, Bohao Yang, Stefan Goetze, Chenghua Lin. 371-383 [doi]
- Context-aware Visual Storytelling with Visual Prefix Tuning and Contrastive Learning. Yingjin Song, Denis Paperno, Albert Gatt. 384-401 [doi]
- Enhancing Editorial Tasks: A Case Study on Rewriting Customer Help Page Contents Using Large Language Models. Aleksandra Gabryszak, Daniel Röder, Arne Binder, Luca Sion, Leonhard Hennig. 402-411 [doi]
- Customizing Large Language Model Generation Style using Parameter-Efficient Finetuning. Xinyue Liu, Harshita Diddee, Daphne Ippolito. 412-426 [doi]
- Towards Fine-Grained Citation Evaluation in Generated Text: A Comparative Analysis of Faithfulness Metrics. Weijia Zhang, Mohammad Aliannejadi, Yifei Yuan, Jiahuan Pei, Jia-Hong Huang, Evangelos Kanoulas. 427-439 [doi]
- Audio-visual training for improved grounding in video-text LLMs. Shivprasad Sagare, Hemachandran S, Kinshuk Sarabhai, Prashant Ullegaddi, Rajeshkumar SA. 440-445 [doi]
- aiXplain SDK: A High-Level and Standardized Toolkit for AI Assets. Shreyas Sharma, Lucas Pavanelli, Thiago Castro Ferreira, Mohamed Al-Badrashiny, Hassan Sawaf. 446-452 [doi]
- Referring Expression Generation in Visually Grounded Dialogue with Discourse-aware Comprehension Guiding. Bram Willemsen, Gabriel Skantze. 453-469 [doi]
- The Gricean Maxims in NLP - A Survey. Lea Krause, Piek T. J. M. Vossen. 470-485 [doi]
- Leveraging Plug-and-Play Models for Rhetorical Structure Control in Text Generation. Yuka Yokogawa, Tatsuya Ishigaki, Hiroya Takamura, Yusuke Miyao, Ichiro Kobayashi. 486-493 [doi]
- Multilingual Text Style Transfer: Datasets & Models for Indian Languages. Sourabrata Mukherjee, Atul Kr. Ojha, Akanksha Bansal, Deepak Alok, John P. McCrae, Ondrej Dusek. 494-522 [doi]
- Are Large Language Models Actually Good at Text Style Transfer? Sourabrata Mukherjee, Atul Kr. Ojha, Ondrej Dusek. 523-539 [doi]
- Towards Effective Long Conversation Generation with Dynamic Topic Tracking and Recommendation. Trevor Ashby, Adithya Kulkarni, Jingyuan Qi, Minqian Liu, Eunah Cho, Vaibhav Kumar, Lifu Huang. 540-556 [doi]
- Automatic Metrics in Natural Language Generation: A survey of Current Evaluation Practices. Patrícia Schmidtová, Saad Mahamood, Simone Balloccu, Ondrej Dusek, Albert Gatt, Dimitra Gkatzia, David M. Howcroft, Ondrej Plátek, Adarsa Sivaprasad. 557-583 [doi]
- A Comprehensive Analysis of Memorization in Large Language Models. Hirokazu Kiyomaru, Issa Sugiura, Daisuke Kawahara, Sadao Kurohashi. 584-596 [doi]
- Generating Attractive Ad Text by Facilitating the Reuse of Landing Page Expressions. Hidetaka Kamigaito, Soichiro Murakami, Peinan Zhang, Hiroya Takamura, Manabu Okumura. 597-608 [doi]
- Differences in Semantic Errors Made by Different Types of Data-to-text Systems. Rudali Huidrom, Anya Belz, Michela Lorandi. 609-621 [doi]
- Leveraging Large Language Models for Building Interpretable Rule-Based Data-to-Text Systems. Jedrzej Warczynski, Mateusz Lango, Ondrej Dusek. 622-630 [doi]
- Explainability Meets Text Summarization: A Survey. Mahdi Dhaini, Ege Erdogan, Smarth Bakshi, Gjergji Kasneci. 631-645 [doi]
- Generating Faithful and Salient Text from Multimodal Data. Tahsina Hashem, Weiqing Wang, Derry Tanti Wijaya, Mohammed Eunus Ali, Yuan-Fang Li. 646-662 [doi]
- Investigating Paraphrase Generation as a Data Augmentation Strategy for Low-Resource AMR-to-Text Generation. Marco Antonio Sobrevilla Cabezudo, Marcio Lima Inácio, Thiago Alexandre Salgueiro Pardo. 663-675 [doi]
- Zooming in on Zero-Shot Intent-Guided and Grounded Document Generation using LLMs. Pritika Ramu, Pranshu Gaur, Rishita Emandi, Himanshu Maheshwari, Danish Javed, Aparna Garimella. 676-694 [doi]
- Zero-shot cross-lingual transfer in instruction tuning of large language models. Nadezhda Chirkova, Vassilina Nikoulina. 695-708 [doi]