- BERTering RAMS: What and How Much does BERT Already Know About Event Arguments? - A Study on the RAMS Dataset. Varun Gangal, Eduard H. Hovy. 1-10
- Emergent Language Generalization and Acquisition Speed are not tied to Compositionality. Eugene Kharitonov, Marco Baroni. 11-15
- Examining the rhetorical capacities of neural language models. Zining Zhu, Chuer Pan, Mohamed Abdalla, Frank Rudzicz. 16-32
- What Happens To BERT Embeddings During Fine-tuning? Amil Merchant, Elahe Rahimtoroghi, Ellie Pavlick, Ian Tenney. 33-44
- It's not Greek to mBERT: Inducing Word-Level Translations from Multilingual BERT. Hila Gonen, Shauli Ravfogel, Yanai Elazar, Yoav Goldberg. 45-56
- Leveraging Extracted Model Adversaries for Improved Black Box Attacks. Naveen Jafer Nizar, Ari Kobren. 57-67
- On the Interplay Between Fine-tuning and Sentence-Level Probing for Linguistic Knowledge in Pre-Trained Transformers. Marius Mosbach, Anna Khokhlova, Michael A. Hedderich, Dietrich Klakow. 68-82
- Unsupervised Evaluation for Question Answering with Transformers. Lukas Muttenthaler, Isabelle Augenstein, Johannes Bjerva. 83-90
- Unsupervised Distillation of Syntactic Information from Contextualized Word Representations. Shauli Ravfogel, Yanai Elazar, Jacob Goldberger, Yoav Goldberg. 91-106
- The Explanation Game: Towards Prediction Explainability through Sparse Communication. Marcos V. Treviso, André F. T. Martins. 107-118
- Latent Tree Learning with Ordered Neurons: What Parses Does It Produce? Yian Zhang. 119-125
- Linguistically-Informed Transformations (LIT): A Method for Automatically Generating Contrast Sets. Chuanrong Li, Lin Shengshuo, Zeyu Liu, Xinyi Wu, Xuhui Zhou, Shane Steinert-Threlkeld. 126-135
- Controlling the Imprint of Passivization and Negation in Contextualized Representations. Hande Çelikkanat, Sami Virpioja, Jörg Tiedemann, Marianna Apidianaki. 136-148
- The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? Jasmijn Bastings, Katja Filippova. 149-155
- How does BERT capture semantics? A closer look at polysemous words. David Yenicelik, Florian Schmidt, Yannic Kilcher. 156-162
- Neural Natural Language Inference Models Partially Embed Theories of Lexical Entailment and Negation. Atticus Geiger, Kyle Richardson, Christopher Potts. 163-173
- BERTnesia: Investigating the capture and forgetting of knowledge in BERT. Jaspreet Singh, Jonas Wallat, Avishek Anand. 174-183
- Probing for Multilingual Numerical Understanding in Transformer-Based Language Models. Devin Johnson, Denise Mak, Andrew Barker, Lexi Loessberg-Zahl. 184-192
- Dissecting Lottery Ticket Transformers: Structural and Behavioral Study of Sparse Neural Machine Translation. Rajiv Movva, Jason Y. Zhao. 193-203
- Exploring Neural Entity Representations for Semantic Information. Andrew Runge, Eduard H. Hovy. 204-216
- BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance. R. Thomas McCoy, Junghyun Min, Tal Linzen. 217-227
- Second-Order NLP Adversarial Examples. John Morris. 228-237
- Discovering the Compositional Structure of Vector Representations with Role Learning Networks. Paul Soulos, R. Thomas McCoy, Tal Linzen, Paul Smolensky. 238-254
- Structured Self-Attention Weights Encode Semantics in Sentiment Analysis. Zhengxuan Wu, Thanh-Son Nguyen, Desmond C. Ong. 255-264
- Investigating Novel Verb Learning in BERT: Selectional Preference Classes and Alternation-Based Syntactic Generalization. Tristan Thrush, Ethan Wilcox, Roger Levy. 265-275
- The EOS Decision and Length Extrapolation. Benjamin Newman, John Hewitt, Percy Liang, Christopher D. Manning. 276-291
- Do Language Embeddings capture Scales? Xikun Zhang, Deepak Ramachandran, Ian Tenney, Yanai Elazar, Dan Roth. 292-299
- Evaluating Attribution Methods using White-Box LSTMs. Yiding Hao. 300-313
- Defining Explanation in an AI Context. Tejaswani Verma, Christoph Lingenfelder, Dietrich Klakow. 314-322
- Searching for a Search Method: Benchmarking Search Algorithms for Generating NLP Adversarial Examples. Jin-Yong Yoo, John X. Morris, Eli Lifland, Yanjun Qi. 323-332
- This is a BERT. Now there are several of them. Can they generalize to novel words? Coleman Haley. 333-341
- diagNNose: A Library for Neural Activation Analysis. Jaap Jumelet. 342-350