- To what extent do human explanations of model behavior align with actual model behavior? Grusha Prasad, Yixin Nie, Mohit Bansal, Robin Jia, Douwe Kiela, Adina Williams. 1-14 [doi]
- Test Harder than You Train: Probing with Extrapolation Splits. Jenny Kunz, Marco Kuhlmann. 15-25 [doi]
- Does External Knowledge Help Explainable Natural Language Inference? Automatic Evaluation vs. Human Ratings. Hendrik Schuff, Hsiu-Yu Yang, Heike Adel, Ngoc Thang Vu. 26-41 [doi]
- The Language Model Understood the Prompt was Ambiguous: Probing Syntactic Uncertainty Through Generation. Laura Aina, Tal Linzen. 42-57 [doi]
- On the Limits of Minimal Pairs in Contrastive Evaluation. Jannis Vamvas, Rico Sennrich. 58-68 [doi]
- What Models Know About Their Attackers: Deriving Attacker Information From Latent Representations. Zhouhang Xie, Jonathan Brophy, Adam Noack, Wencong You, Kalyani Asthana, Carter Perkins, Sabrina Reis, Zayd Hammoudeh, Daniel Lowd, Sameer Singh. 69-78 [doi]
- ALL Dolphins Are Intelligent and SOME Are Friendly: Probing BERT for Nouns' Semantic Properties and their Prototypicality. Marianna Apidianaki, Aina Garí Soler. 79-94 [doi]
- ProSPer: Probing Human and Neural Network Language Model Understanding of Spatial Perspective. Tessa Masis, Carolyn Anderson. 95-135 [doi]
- Can Transformers Jump Around Right in Natural Language? Assessing Performance Transfer from SCAN. Rahma Chaabouni, Roberto Dessì, Eugene Kharitonov. 136-148 [doi]
- Transferring Knowledge from Vision to Language: How to Achieve it and how to Measure it? Tobias Norlund, Lovisa Hagström, Richard Johansson. 149-162 [doi]
- Discrete representations in neural models of spoken language. Bertrand Higy, Lieke Gelderloos, Afra Alishahi, Grzegorz Chrupala. 163-176 [doi]
- Word Equations: Inherently Interpretable Sparse Word Embeddings through Sparse Coding. Adly Templeton. 177-191 [doi]
- A howling success or a working sea? Testing what BERT knows about metaphors. Paolo Pedinotti, Eliana Di Palma, Ludovica Cerini, Alessandro Lenci. 192-204 [doi]
- How Length Prediction Influence the Performance of Non-Autoregressive Translation? Minghan Wang, Jiaxin Guo, Yuxia Wang, Yimeng Chen, Chang Su, Hengchao Shang, Min Zhang, Shimin Tao, Hao Yang. 205-213 [doi]
- On the Language-specificity of Multilingual BERT and the Impact of Fine-tuning. Marc Tanti, Lonneke van der Plas, Claudia Borg, Albert Gatt. 214-227 [doi]
- Relating Neural Text Degeneration to Exposure Bias. Ting-Rui Chiang, Yun-Nung Chen. 228-239 [doi]
- Efficient Explanations from Empirical Explainers. Robert Schwarzenberg, Nils Feldhus, Sebastian Möller. 240-249 [doi]
- Variation and generality in encoding of syntactic anomaly information in sentence embeddings. Qinxuan Wu, Allyson Ettinger. 250-264 [doi]
- Enhancing Interpretable Clauses Semantically using Pretrained Word Representation. Rohan Kumar Yadav, Lei Jiao, Ole-Christoffer Granmo, Morten Goodwin. 265-274 [doi]
- Analyzing BERT's Knowledge of Hypernymy via Prompting. Michael Hanna, David Marecek. 275-282 [doi]
- An in-depth look at Euclidean disk embeddings for structure preserving parsing. Federico Fancellu, Lan Xiao, Allan D. Jepson, Afsaneh Fazly. 283-295 [doi]
- Training Dynamic based data filtering may not work for NLP datasets. Arka Talukdar, Monika Dagar, Prachi Gupta, Varun Menon. 296-302 [doi]
- Multi-Layer Random Perturbation Training for improving Model Generalization Efficiently. Lis Kanashiro Pereira, Yuki Taya, Ichiro Kobayashi. 303-310 [doi]
- Screening Gender Transfer in Neural Machine Translation. Guillaume Wisniewski, Lichao Zhu, Nicolas Ballier, François Yvon. 311-321 [doi]
- What BERT Based Language Model Learns in Spoken Transcripts: An Empirical Study. Ayush Kumar, Mukuntha Narayanan Sundararaman, Jithendra Vepa. 322-336 [doi]
- Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference. Hitomi Yanaka, Koji Mineshima. 337-349 [doi]
- Investigating Negation in Pre-trained Vision-and-language Models. Radina Dobreva, Frank Keller. 350-362 [doi]
- Not all parameters are born equal: Attention is mostly what you need. Nikolay Bogoychev. 363-374 [doi]
- Not All Models Localize Linguistic Knowledge in the Same Place: A Layer-wise Probing on BERToids' Representations. Mohsen Fayyaz, Ehsan Aghazadeh, Ali Modarressi, Hosein Mohebbi, Mohammad Taher Pilehvar. 375-388 [doi]
- Learning Mathematical Properties of Integers. Maria Ryskina, Kevin Knight. 389-395 [doi]
- Probing Language Models for Understanding of Temporal Expressions. Shivin Thukral, Kunal Kukreja, Christian Kavouras. 396-406 [doi]
- How Familiar Does That Sound? Cross-Lingual Representational Similarity Analysis of Acoustic Word Embeddings. Badr Abdullah, Iuliia Zaitova, Tania Avgustinova, Bernd Möbius, Dietrich Klakow. 407-419 [doi]
- Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing. Sanchit Sinha, Hanjie Chen, Arshdeep Sekhon, Yangfeng Ji, Yanjun Qi. 420-434 [doi]
- An Investigation of Language Model Interpretability via Sentence Editing. Samuel Stevens, Yu Su. 435-446 [doi]
- Interacting Knowledge Sources, Inspection and Analysis: Case-studies on Biomedical text processing. Parsa Bagherzadeh, Sabine Bergler. 447-456 [doi]
- Attacks against Ranking Algorithms with Text Embeddings: A Case Study on Recruitment Algorithms. Anahita Samadi, Debapriya Banerjee, Shirin Nilizadeh. 457-467 [doi]
- Controlled tasks for model analysis: Retrieving discrete information from sequences. Ionut-Teodor Sorodoc, Gemma Boleda, Marco Baroni. 468-478 [doi]
- The Acceptability Delta Criterion: Testing Knowledge of Language using the Gradience of Sentence Acceptability. Héctor Vázquez Martínez. 479-495 [doi]
- How Does BERT Rerank Passages? An Attribution Analysis with Information Bottlenecks. Zhiying Jiang, Raphael Tang, Ji Xin, Jimmy Lin. 496-509 [doi]
- Do Language Models Know the Way to Rome? Bastien Liétard, Mostafa Abdou, Anders Søgaard. 510-517 [doi]
- Exploratory Model Analysis Using Data-Driven Neuron Representations. Daisuke Oba, Naoki Yoshinaga, Masashi Toyoda. 518-528 [doi]
- Fine-Tuned Transformers Show Clusters of Similar Representations Across Layers. Jason Phang, Haokun Liu, Samuel R. Bowman. 529-538 [doi]
- BERT Has Uncommon Sense: Similarity Ranking for Word Sense BERTology. Luke Gessler, Nathan Schneider. 539-547 [doi]