- Transcoding Compositionally: Using Attention to Find More Generalizable Solutions. Kris Korrel, Dieuwke Hupkes, Verna Dankers, Elia Bruni. 1-11 [doi]
- Sentiment Analysis Is Not Solved! Assessing and Probing Sentiment Classification. Jeremy Barnes, Lilja Øvrelid, Erik Velldal. 12-23 [doi]
- Second-order Co-occurrence Sensitivity of Skip-Gram with Negative Sampling. Dominik Schlechtweg, Cennet Oguz, Sabine Schulte im Walde. 24-30 [doi]
- Can Neural Networks Understand Monotonicity Reasoning? Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, Johan Bos. 31-40 [doi]
- Multi-Granular Text Encoding for Self-Explaining Categorization. Zhiguo Wang, Yue Zhang, Mo Yu, Wei Zhang, Lin Pan, Linfeng Song, Kun Xu, Yousef El-Kurdi. 41-45 [doi]
- The Meaning of "Most" for Visual Question Answering Models. Alexander Kuhnle, Ann A. Copestake. 46-55 [doi]
- Do Human Rationales Improve Machine Explanations? Julia Strout, Ye Zhang, Raymond J. Mooney. 56-62 [doi]
- Analyzing the Structure of Attention in a Transformer Language Model. Jesse Vig, Yonatan Belinkov. 63-76 [doi]
- Detecting Political Bias in News Articles Using Headline Attention. Rama Rohit Reddy Gangula, Suma Reddy Duggenpudi, Radhika Mamidi. 77-84 [doi]
- Testing the Generalization Power of Neural Network Models across NLI Benchmarks. Aarne Talman, Stergios Chatzikyriakidis. 85-94 [doi]
- Character Eyes: Seeing Language through Character-Level Taggers. Yuval Pinter, Marc Marone, Jacob Eisenstein. 95-102 [doi]
- Faithful Multimodal Explanation for Visual Question Answering. Jialin Wu, Raymond J. Mooney. 103-112 [doi]
- Evaluating Recurrent Neural Network Explanations. Leila Arras, Ahmed Osman, Klaus-Robert Müller, Wojciech Samek. 113-126 [doi]
- On the Realization of Compositionality in Neural Networks. Joris Baan, Jana Leible, Mitja Nikolaus, David Rau, Dennis Ulmer, Tim Baumgärtner, Dieuwke Hupkes, Elia Bruni. 127-137 [doi]
- Learning the Dyck Language with Attention-based Seq2Seq Models. Xiang Yu, Ngoc Thang Vu, Jonas Kuhn. 138-146 [doi]
- Modeling Paths for Explainable Knowledge Base Completion. Josua Stadelmaier, Sebastian Padó. 147-157 [doi]
- Probing Word and Sentence Embeddings for Long-distance Dependencies Effects in French and English. Paola Merlo. 158-172 [doi]
- Derivational Morphological Relations in Word Embeddings. Tomáš Musil, Jonáš Vidra, David Mareček. 173-180 [doi]
- Hierarchical Representation in Neural Language Models: Suppression and Recovery of Expectations. Ethan Wilcox, Roger Levy, Richard Futrell. 181-190 [doi]
- Blackbox Meets Blackbox: Representational Similarity & Stability Analysis of Neural Language Models and Brains. Samira Abnar, Lisa Beinborn, Rochelle Choenni, Willem H. Zuidema. 191-203 [doi]
- An LSTM Adaptation Study of (Un)grammaticality. Shammur Absar Chowdhury, Roberto Zamparelli. 204-212 [doi]
- An Analysis of Source-Side Grammatical Errors in NMT. Antonios Anastasopoulos. 213-223 [doi]
- Finding Hierarchical Structure in Neural Stacks Using Unsupervised Parsing. William Merrill, Lenny Khazan, Noah Amsel, Yiding Hao, Simon Mendelsohn, Robert Frank. 224-232 [doi]
- Adversarial Attack on Sentiment Classification. Yi-Ting Tsai, Min-Chu Yang, Han-Yu Chen. 233-240 [doi]
- Open Sesame: Getting inside BERT's Linguistic Knowledge. Yongjie Lin, Yi Chern Tan, Robert Frank. 241-253 [doi]
- GEval: Tool for Debugging NLP Datasets and Models. Filip Graliński, Anna Wróblewska, Tomasz Stanisławek, Kamil Grabowski, Tomasz Górecki. 254-262 [doi]
- From Balustrades to Pierre Vinken: Looking for Syntax in Transformer Self-Attentions. David Mareček, Rudolf Rosa. 263-275 [doi]
- What Does BERT Look at? An Analysis of BERT's Attention. Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning. 276-286 [doi]