- A Minimal Model for Compositional Generalization on gSCAN. Alice Hein, Klaus Diepold. 1-15 [doi]
- Sparse Interventions in Language Models with Differentiable Masking. Nicola De Cao, Leon Schmid, Dieuwke Hupkes, Ivan Titov. 16-27 [doi]
- Where's the Learning in Representation Learning for Compositional Semantics and the Case of Thematic Fit. Mughilan Muthupari, Samrat Halder, Asad B. Sayeed, Yuval Marton. 28-39 [doi]
- Sentence Ambiguity, Grammaticality and Complexity Probes. Sunit Bhattacharya, Vilém Zouhar, Ondrej Bojar. 40-50 [doi]
- Post-Hoc Interpretation of Transformer Hyperparameters with Explainable Boosting Machines. Kiron Deb, Xuan Zhang, Kevin Duh. 51-61 [doi]
- Revisit Systematic Generalization via Meaningful Learning. Ning Shi, Boxin Wang, Wei Wang, Xiangyu Liu, Zhouhan Lin. 62-79 [doi]
- Is It Smaller Than a Tennis Ball? Language Models Play the Game of Twenty Questions. Maxime De Bruyn, Ehsan Lotfi, Jeska Buhmann, Walter Daelemans. 80-90 [doi]
- Post-hoc analysis of Arabic transformer models. Ahmed Abdelali, Nadir Durrani, Fahim Dalvi, Hassan Sajjad. 91-103 [doi]
- Universal Evasion Attacks on Summarization Scoring. Wenchuan Mu, Kwan Hui Lim 0001. 104-118 [doi]
- How (Un)Faithful is Attention? Hessam Amini, Leila Kosseim. 119-130 [doi]
- Are Multilingual Sentiment Models Equally Right for the Right Reasons? Rasmus Kær Jørgensen, Fiammetta Caccavale, Christian Igel, Anders Søgaard. 131-141 [doi]
- Probing for Understanding of English Verb Classes and Alternations in Large Pre-trained Language Models. David K. Yi, James V. Bruno, Jiayu Han, Peter Zukerman, Shane Steinert-Threlkeld. 142-152 [doi]
- Analyzing Gender Translation Errors to Identify Information Flows between the Encoder and Decoder of a NMT System. Guillaume Wisniewski, Lichao Zhu, Nicolas Ballier, François Yvon. 153-163 [doi]
- Human Ratings Do Not Reflect Downstream Utility: A Study of Free-Text Explanations for Model Predictions. Jenny Kunz, Martin Jirenius, Oskar Holmström, Marco Kuhlmann. 164-177 [doi]
- Analyzing the Representational Geometry of Acoustic Word Embeddings. Badr Abdullah, Dietrich Klakow. 178-191 [doi]
- Understanding Domain Learning in Language Models Through Subpopulation Analysis. Zheng Zhao, Yftah Ziser, Shay B. Cohen. 192-209 [doi]
- Intermediate Entity-based Sparse Interpretable Representation Learning. Diego García-Olano, Yasumasa Onoe, Joydeep Ghosh, Byron C. Wallace. 210-224 [doi]
- Towards Procedural Fairness: Uncovering Biases in How a Toxic Language Classifier Uses Sentiment Information. Isar Nejadgholi, Esma Balkir, Kathleen C. Fraser, Svetlana Kiritchenko. 225-237 [doi]
- Investigating the Characteristics of a Transformer in a Few-Shot Setup: Does Freezing Layers in RoBERTa Help? Digvijay Ingle, Rishabh Kumar Tripathi, Ayush Kumar, Kevin Patel, Jithendra Vepa. 238-248 [doi]
- It Is Not Easy To Detect Paraphrases: Analysing Semantic Similarity With Antonyms and Negation Using the New SemAntoNeg Benchmark. Teemu Vahtola, Mathias Creutz, Jörg Tiedemann. 249-262 [doi]
- Controlling for Stereotypes in Multimodal Language Model Evaluation. Manuj Malik, Richard Johansson. 263-271 [doi]
- On the Compositional Generalization Gap of In-Context Learning. Arian Hosseini, Ankit Vani, Dzmitry Bahdanau, Alessandro Sordoni, Aaron C. Courville. 272-280 [doi]
- Explaining Translationese: why are Neural Classifiers Better and what do they Learn? Kwabena Amponsah-Kaakyire, Daria Pylypenko, Josef van Genabith, Cristina España-Bonet. 281-296 [doi]
- Probing GPT-3's Linguistic Knowledge on Semantic Tasks. Lining Zhang, Mengchen Wang, Liben Chen, Wenxin Zhang. 297-304 [doi]
- Garden Path Traversal in GPT-2. William Jurayj, William Rudman, Carsten Eickhoff. 305-313 [doi]
- Testing Pre-trained Language Models' Understanding of Distributivity via Causal Mediation Analysis. Pangbo Ban, Yifan Jiang, Tianran Liu, Shane Steinert-Threlkeld. 314-324 [doi]
- Using Roark-Hollingshead Distance to Probe BERT's Syntactic Competence. Jingcheng Niu, Wenjie Lu, Eric Corlett, Gerald Penn. 325-334 [doi]
- DALLE-2 is Seeing Double: Flaws in Word-to-Concept Mapping in Text2Image Models. Royi Rassin, Shauli Ravfogel, Yoav Goldberg. 335-345 [doi]
- Practical Benefits of Feature Feedback Under Distribution Shift. Anurag Katakkar, Clay H. Yoo, Weiqin Wang, Zachary C. Lipton, Divyansh Kaushik. 346-355 [doi]
- Identifying the Source of Vulnerability in Explanation Discrepancy: A Case Study in Neural Text Classification. Ruixuan Tang, Hanjie Chen, Yangfeng Ji. 356-370 [doi]
- Probing Pretrained Models of Source Codes. Sergey Troshin, Nadezhda Chirkova. 371-383 [doi]
- Probing the representations of named entities in Transformer-based Language Models. Stefan Schouten, Peter Bloem, Piek Vossen. 384-393 [doi]
- Decomposing Natural Logic Inferences for Neural NLI. Julia Rozanova, Deborah Ferreira, Mokanarangan Thayaparan, Marco Valentino, André Freitas. 394-403 [doi]
- Probing with Noise: Unpicking the Warp and Weft of Embeddings. Filip Klubicka, John D. Kelleher. 404-417 [doi]
- Look to the Right: Mitigating Relative Position Bias in Extractive Question Answering. Kazutoshi Shinoda, Saku Sugawara, Akiko Aizawa. 418-425 [doi]
- A Continuum of Generation Tasks for Investigating Length Bias and Degenerate Repetition. Darcey Riley, David Chiang 0001. 426-440 [doi]
- Universal and Independent: Multilingual Probing Framework for Exhaustive Model Interpretation and Evaluation. Oleg Serikov, Vitaly Protasov, Ekaterina Voloshina, Viktoria Knyazkova, Tatiana Shavrina. 441-456 [doi]