- On Isotropy Calibration of Transformer Models. Yue Ding, Karolis Martinkus, Damian Pascual, Simon Clematide, Roger Wattenhofer. 1-9 [doi]
- Do Dependency Relations Help in the Task of Stance Detection? Alessandra Teresa Cignarella, Cristina Bosco, Paolo Rosso. 10-17 [doi]
- Evaluating the Practical Utility of Confidence-score based Techniques for Unsupervised Open-world Classification. Sopan Khosla, Rashmi Gangadharaiah. 18-23 [doi]
- Extending the Scope of Out-of-Domain: Examining QA models in multiple subdomains. Chenyang Lyu, Jennifer Foster, Yvette Graham. 24-37 [doi]
- What Do You Get When You Cross Beam Search with Nucleus Sampling? Uri Shaham, Omer Levy. 38-45 [doi]
- How Much Do Modifications to Transformer Language Models Affect Their Ability to Learn Linguistic Knowledge? Simeng Sun, Brian Dillon, Mohit Iyyer. 46-53 [doi]
- Cross-lingual Inflection as a Data Augmentation Method for Parsing. Alberto Muñoz-Ortiz, Carlos Gómez-Rodríguez, David Vilares. 54-61 [doi]
- Is BERT Robust to Label Noise? A Study on Learning with Noisy Labels in Text Classification. Dawei Zhu, Michael A. Hedderich, Fangzhou Zhai, David Ifeoluwa Adelani, Dietrich Klakow. 62-67 [doi]
- Ancestor-to-Creole Transfer is Not a Walk in the Park. Heather C. Lent, Emanuele Bugliarello, Anders Søgaard. 68-74 [doi]
- What GPT Knows About Who is Who. Xiaohan Yang, Eduardo Peynetti, Vasco Meerman, Chris Tanner. 75-81 [doi]
- Evaluating Biomedical Word Embeddings for Vocabulary Alignment at Scale in the UMLS Metathesaurus Using Siamese Networks. Goonmeet Bajaj, Vinh Nguyen, Thilini Wijesiriwardene, Hong Yung Yip, Vishesh Javangula, Amit P. Sheth, Srinivasan Parthasarathy, Olivier Bodenreider. 82-87 [doi]
- On the Impact of Data Augmentation on Downstream Performance in Natural Language Processing. Itsuki Okimura, Machel Reid, Makoto Kawano, Yutaka Matsuo. 88-93 [doi]
- Can Question Rewriting Help Conversational Question Answering? Etsuko Ishii, Yan Xu, Samuel Cahyawijaya, Bryan Wilie. 94-99 [doi]
- Clustering Examples in Multi-Dataset Benchmarks with Item Response Theory. Pedro Rodríguez, Phu Mon Htut, John Lalor, João Sedoc. 100-112 [doi]
- On the Limits of Evaluating Embodied Agent Model Generalization Using Validation Sets. Hyounghun Kim, Aishwarya Padmakumar, Di Jin, Mohit Bansal, Dilek Hakkani-Tur. 113-118 [doi]
- Do Data-based Curricula Work? Maxim K. Surkov, Vladislav D. Mosin, Ivan P. Yamshchikov. 119-128 [doi]
- The Document Vectors Using Cosine Similarity Revisited. Bingyu Zhang, Nikolay Arefyev. 129-133 [doi]
- Challenges in including extra-linguistic context in pre-trained language models. Ionut Sorodoc, Laura Aina, Gemma Boleda. 134-138 [doi]
- Label Errors in BANKING77. Cecilia Ying, Stephen Thomas. 139-143 [doi]
- Pathologies of Pre-trained Language Models in Few-shot Fine-tuning. Hanjie Chen, Guoqing Zheng, Ahmed Hassan Awadallah, Yangfeng Ji. 144-153 [doi]
- An Empirical study to understand the Compositional Prowess of Neural Dialog Models. Vinayshekhar Bannihatti Kumar, Vaibhav Kumar, Mukul Bhutani, Alexander Rudnicky. 154-158 [doi]
- Combining Extraction and Generation for Constructing Belief-Consequence Causal Links. Maria Alexeeva, Allegra A. Beal, Mihai Surdeanu. 159-164 [doi]
- Replicability under Near-Perfect Conditions - A Case-Study from Automatic Summarization. Margot Mieskes. 165-171 [doi]
- BPE beyond Word Boundary: How NOT to use Multi Word Expressions in Neural Machine Translation. Dipesh Kumar, Avijit Thawani. 172-179 [doi]
- Pre-trained language models evaluating themselves - A comparative study. Philipp Koch, Matthias Aßenmacher, Christian Heumann. 180-187 [doi]