- Frontmatter [doi]
- Can Language Models Be Tricked by Language Illusions? Easier with Syntax, Harder with Semantics. Yuhan Zhang, Edward Gibson, Forrest Davis. 1-14 [doi]
- ToMChallenges: A Principle-Guided Dataset and Diverse Evaluation Tasks for Exploring Theory of Mind. Xiaomeng Ma, Lingyu Gao, Qihui Xu. 15-26 [doi]
- The Zipfian Challenge: Learning the statistical fingerprint of natural languages. Christian Bentz. 27-37 [doi]
- On the Effects of Structural Modeling for Neural Semantic Parsing. Xiang Zhang, Shizhu He, Kang Liu, Jun Zhao. 38-57 [doi]
- Humans and language models diverge when predicting repeating text. Aditya R. Vaidya, Javier Turek, Alexander Huth. 58-69 [doi]
- Investigating the Nature of Disagreements on Mid-Scale Ratings: A Case Study on the Abstractness-Concreteness Continuum. Urban Knuples, Diego Frassinelli, Sabine Schulte im Walde. 70-86 [doi]
- ArchBERT: Bi-Modal Understanding of Neural Architectures and Natural Languages. Mohammad Akbari, Saeed Ranjbar Alvar, Behnam Kamranian, Amin Banitalebi-Dehkordi, Yong Zhang. 87-107 [doi]
- A Comparative Study on Textual Saliency of Styles from Eye Tracking, Annotations, and Language Models. Karin de Langis, Dongyeop Kang. 108-121 [doi]
- PROPRES: Investigating the Projectivity of Presupposition with Various Triggers and Environments. Daiki Asami, Saku Sugawara. 122-137 [doi]
- A Minimal Approach for Natural Language Action Space in Text-based Games. Dongwon Ryu, Meng Fang, Gholamreza Haffari, Shirui Pan, Ehsan Shareghi. 138-154 [doi]
- Structural Ambiguity and its Disambiguation in Language Model Based Parsers: the Case of Dutch Clause Relativization. Gijs Wijnholds, Michael Moortgat. 155-164 [doi]
- On the utility of enhancing BERT syntactic bias with Token Reordering Pretraining. Yassir El Mesbahi, Atif Mahmud, Abbas Ghaddar, Mehdi Rezagholizadeh, Philippe Langlais, Prasanna Parthasarathi. 165-182 [doi]
- Quirk or Palmer: A Comparative Study of Modal Verb Frameworks with Annotated Datasets. Risako Owan, Maria L. Gini, Dongyeop Kang. 183-199 [doi]
- Quantifying Information of Tokens for Simple and Flexible Simultaneous Machine Translation. Donghyun Lee, Minkyung Park, Byung Jun Lee. 200-210 [doi]
- Enhancing Code-mixed Text Generation Using Synthetic Data Filtering in Neural Machine Translation. Dama Sravani, Radhika Mamidi. 211-220 [doi]
- Towards Better Evaluation of Instruction-Following: A Case-Study in Summarization. Ondrej Skopek, Rahul Aralikatte, Sian Gooding, Victor Carbune. 221-237 [doi]
- Syntactic Inductive Bias in Transformer Language Models: Especially Helpful for Low-Resource Languages? Luke Gessler, Nathan Schneider. 238-253 [doi]
- Attribution and Alignment: Effects of Local Context Repetition on Utterance Production and Comprehension in Dialogue. Aron Molnar, Jaap Jumelet, Mario Giulianelli, Arabella Sinclair. 254-273 [doi]
- The Validity of Evaluation Results: Assessing Concurrence Across Compositionality Benchmarks. Kaiser Sun, Adina Williams, Dieuwke Hupkes. 274-293 [doi]
- Mind the instructions: a holistic evaluation of consistency and interactions in prompt-based learning. Lucas Weber, Elia Bruni, Dieuwke Hupkes. 294-313 [doi]
- Med-HALT: Medical Domain Hallucination Test for Large Language Models. Ankit Pal, Logesh Kumar Umapathi, Malaikannan Sankarasubbu. 314-334 [doi]
- Revising with a Backward Glance: Regressions and Skips during Reading as Cognitive Signals for Revision Policies in Incremental Processing. Brielen Madureira, Pelin Çelikkol, David Schlangen. 335-351 [doi]
- ChiSCor: A Corpus of Freely-Told Fantasy Stories by Dutch Children for Computational Linguistics and Cognitive Science. Bram van Dijk, Max J. van Duijn, Suzan Verberne, Marco Spruit. 352-363 [doi]
- HNC: Leveraging Hard Negative Captions towards Models with Fine-Grained Visual-Linguistic Comprehension Capabilities. Esra Dönmez, Pascal Tilli, Hsiu-Yu Yang, Ngoc Thang Vu, Carina Silberer. 364-388 [doi]
- Theory of Mind in Large Language Models: Examining Performance of 11 State-of-the-Art models vs. Children Aged 7-10 on Advanced Tests. Max J. van Duijn, Bram van Dijk, Tom Kouwenhoven, Werner de Valk, Marco Spruit, Peter van der Putten. 389-402 [doi]
- A Block Metropolis-Hastings Sampler for Controllable Energy-based Text Generation. Jarad Forristal, Fatemehsadat Mireshghallah, Greg Durrett, Taylor Berg-Kirkpatrick. 403-413 [doi]
- How Fragile is Relation Extraction under Entity Replacements? Yiwei Wang, Bryan Hooi, Fei Wang, Yujun Cai, Yuxuan Liang, Wenxuan Zhou, Jing Tang, Manjuan Duan, Muhao Chen. 414-423 [doi]
- JaSPICE: Automatic Evaluation Metric Using Predicate-Argument Structures for Image Captioning Models. Yuiga Wada, Kanta Kaneda, Komei Sugiura. 424-435 [doi]
- MuLER: Detailed and Scalable Reference-based Evaluation. Taelin Karidi, Leshem Choshen, Gal Patel, Omri Abend. 436-455 [doi]
- The Impact of Familiarity on Naming Variation: A Study on Object Naming in Mandarin Chinese. Yunke He, Xixian Liao, Jialing Liang, Gemma Boleda. 456-475 [doi]
- PSST! Prosodic Speech Segmentation with Transformers. Nathan Roll, Calbert Graham, Simon Todd. 476-487 [doi]
- Alignment via Mutual Information. Shinjini Ghosh, Yoon Kim, Ramón Fernandez Astudillo, Tahira Naseem, Jacob Andreas. 488-497 [doi]
- Challenging the "One Single Vector per Token" Assumption. Mathieu Dehouck. 498-507 [doi]
- Strategies to Improve Low-Resource Agglutinative Languages Morphological Inflection. Gulinigeer Abudouwaili, Wayit Abliz, Kahaerjiang Abiderexiti, Aishan Wumaier, Nian Yi. 508-520 [doi]
- Exploring Transformers as Compact, Data-efficient Language Models. Clayton Fields, Casey Kennington. 521-531 [doi]
- Tree-shape Uncertainty for Analyzing the Inherent Branching Bias of Unsupervised Parsing Models. Taiga Ishii, Yusuke Miyao. 532-547 [doi]
- Future Lens: Anticipating Subsequent Tokens from a Single Hidden State. Koyena Pal, Jiuding Sun, Andrew Yuan, Byron C. Wallace, David Bau. 548-560 [doi]
- Cross-Document Event Coreference Resolution: Instruct Humans or Instruct GPT? Jin Zhao, Nianwen Xue, Bonan Min. 561-574 [doi]
- Implications of Annotation Artifacts in Edge Probing Test Datasets. Sagnik Ray Choudhury, Jushaan Kalra. 575-586 [doi]
- REFER: An End-to-end Rationale Extraction Framework for Explanation Regularization. Mohammad Reza Ghasemi Madani, Pasquale Minervini. 587-602 [doi]