- When does deep multi-task learning work for loosely related document classification tasks? Emma Kerinec, Chloé Braud, Anders Søgaard. 1-8 [doi]
- Analyzing Learned Representations of a Deep ASR Performance Prediction Model. Zied Elloumi, Laurent Besacier, Olivier Galibert, Benjamin Lecouteux. 9-15 [doi]
- Explaining non-linear Classifier Decisions within Kernel-based Deep Architectures. Danilo Croce, Daniele Rossini, Roberto Basili. 16-24 [doi]
- Nightmare at test time: How punctuation prevents parsers from generalizing. Anders Søgaard, Miryam de Lhoneux, Isabelle Augenstein. 25-29 [doi]
- Evaluating Textual Representations through Image Generation. Graham Spinks, Marie-Francine Moens. 30-39 [doi]
- On the Role of Text Preprocessing in Neural Network Architectures: An Evaluation Study on Text Categorization and Sentiment Analysis. José Camacho-Collados, Mohammad Taher Pilehvar. 40-46 [doi]
- Jump to better conclusions: SCAN both left and right. Joost Bastings, Marco Baroni, Jason Weston, KyungHyun Cho, Douwe Kiela. 47-55 [doi]
- Understanding Convolutional Neural Networks for Text Classification. Alon Jacovi, Oren Sar Shalom, Yoav Goldberg. 56-65 [doi]
- Linguistic representations in multi-task neural networks for ellipsis resolution. Ola Rønning, Daniel Hardt, Anders Søgaard. 66-73 [doi]
- Unsupervised Token-wise Alignment to Improve Interpretation of Encoder-Decoder Models. Shun Kiyono, Sho Takase, Jun Suzuki, Naoaki Okazaki, Kentaro Inui, Masaaki Nagata. 74-81 [doi]
- Rule induction for global explanation of trained models. Madhumita Sushil, Simon Šuster, Walter Daelemans. 82-97 [doi]
- Can LSTM Learn to Capture Agreement? The Case of Basque. Shauli Ravfogel, Yoav Goldberg, Francis Tyers. 98-107 [doi]
- Rearranging the Familiar: Testing Compositional Generalization in Recurrent Networks. João Loula, Marco Baroni, Brenden M. Lake. 108-114 [doi]
- Evaluating the Ability of LSTMs to Learn Context-Free Grammars. Luzi Sennhauser, Robert C. Berwick. 115-124 [doi]
- Interpretable Neural Architectures for Attributing an Ad's Performance to its Writing Style. Reid Pryzant, Sugato Basu, Kazoo Sone. 125-135 [doi]
- Interpreting Neural Networks with Nearest Neighbors. Eric Wallace, Shi Feng, Jordan L. Boyd-Graber. 136-144 [doi]
- 'Indicatements' that character language models learn English morpho-syntactic units and regularities. Yova Kementchedjhieva, Adam Lopez. 145-153 [doi]
- LISA: Explaining Recurrent Neural Network Judgments via Layer-wIse Semantic Accumulation and Example to Pattern Transformation. Pankaj Gupta, Hinrich Schütze. 154-164 [doi]
- Analysing the potential of seq-to-seq models for incremental interpretation in task-oriented dialogue. Dieuwke Hupkes, Sanne Bouwmeester, Raquel Fernández. 165-174 [doi]
- An Operation Sequence Model for Explainable Neural Machine Translation. Felix Stahlberg, Danielle Saunders, Bill Byrne. 175-186 [doi]
- Introspection for convolutional automatic speech recognition. Andreas Krug, Sebastian Stober. 187-199 [doi]
- Learning and Evaluating Sparse Interpretable Sentence Embeddings. Valentin Trifonov, Octavian-Eugen Ganea, Anna Potapenko, Thomas Hofmann. 200-210 [doi]
- What do RNN Language Models Learn about Filler-Gap Dependencies? Ethan Wilcox, Roger Levy, Takashi Morita, Richard Futrell. 211-221 [doi]
- Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items. Jaap Jumelet, Dieuwke Hupkes. 222-231 [doi]
- Closing Brackets with Recurrent Neural Networks. Natalia Skachkova, Thomas Trost, Dietrich Klakow. 232-239 [doi]
- Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information. Mario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, Willem H. Zuidema. 240-248 [doi]
- Iterative Recursive Attention Model for Interpretable Sequence Classification. Martin Tutek, Jan Šnajder. 249-257 [doi]
- Interpreting Word-Level Hidden State Behaviour of Character-Level LSTM Language Models. Avery Hiebert, Cole Peterson, Alona Fyshe, Nishant Mehta. 258-266 [doi]
- Importance of Self-Attention for Sentiment Analysis. Gaël Letarte, Frédérik Paradis, Philippe Giguère, François Laviolette. 267-275 [doi]
- Firearms and Tigers are Dangerous, Kitchen Knives and Zebras are Not: Testing whether Word Embeddings Can Tell. Pia Sommerauer, Antske Fokkens. 276-286 [doi]
- An Analysis of Encoder Representations in Transformer-Based Machine Translation. Alessandro Raganato, Jörg Tiedemann. 287-297 [doi]
- Evaluating Grammaticality in Seq2seq Models with a Broad Coverage HPSG Grammar: A Case Study on Machine Translation. Johnny Wei, Khiem Pham, Brendan O'Connor, Brian Dillon. 298-305 [doi]
- Context-Free Transductions with Neural Stacks. Yiding Hao, William Merrill, Dana Angluin, Robert Frank, Noah Amsel, Andrew Benz, Simon Mendelsohn. 306-315 [doi]
- Learning Explanations from Language Data. David Harbecke, Robert Schwarzenberg, Christoph Alt. 316-318 [doi]
- How much should you ask? On the question structure in QA systems. Barbara Rychalska, Dominika Basaj, Anna Wróblewska, Przemyslaw Biecek. 319-321 [doi]
- Does it care what you asked? Understanding Importance of Verbs in Deep Learning QA System. Barbara Rychalska, Dominika Basaj, Anna Wróblewska, Przemyslaw Biecek. 322-324 [doi]
- Interpretable Textual Neuron Representations for NLP. Nina Pörner, Benjamin Roth, Hinrich Schütze. 325-327 [doi]
- Language Models Learn POS First. Naomi Saphra, Adam Lopez. 328-330 [doi]
- Predicting and interpreting embeddings for out of vocabulary words in downstream tasks. Nicolas Garneau, Jean-Samuel Leboeuf, Luc Lamontagne. 331-333 [doi]
- Probing sentence embeddings for structure-dependent tense. Geoff Bacon, Terry Regier. 334-336 [doi]
- Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation. Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, Benjamin Van Durme. 337-340 [doi]
- Interpretable Word Embedding Contextualization. Kyoungrok Jang, Sung-Hyon Myaeng, Sang-Bum Kim. 341-343 [doi]
- State Gradients for RNN Memory Analysis. Lyan Verwimp, Hugo Van Hamme, Vincent Renkens, Patrick Wambacq. 344-346 [doi]
- Extracting Syntactic Trees from Transformer Encoder Self-Attentions. David Mareček, Rudolf Rosa. 347-349 [doi]
- Portable, layer-wise task performance monitoring for NLP models. Tom Lippincott. 350-352 [doi]
- GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman. 353-355 [doi]
- Explicitly modeling case improves neural dependency parsing. Clara Vania, Adam Lopez. 356-358 [doi]
- Language Modeling Teaches You More than Translation Does: Lessons Learned Through Auxiliary Syntactic Task Analysis. Kelly W. Zhang, Samuel R. Bowman. 359-361 [doi]
- Representation of Word Meaning in the Intermediate Projection Layer of a Neural Language Model. Steven Derby, Paul Miller, Brian Murphy, Barry Devereux. 362-364 [doi]
- Interpretable Structure Induction via Sparse Attention. Ben Peters, Vlad Niculae, André F. T. Martins. 365-367 [doi]
- Debugging Sequence-to-Sequence Models with Seq2Seq-Vis. Hendrik Strobelt, Sebastian Gehrmann, Michael Behrisch, Adam Perer, Hanspeter Pfister, Alexander M. Rush. 368-370 [doi]
- Grammar Induction with Neural Language Models: An Unusual Replication. Phu Mon Htut, KyungHyun Cho, Samuel R. Bowman. 371-373 [doi]
- Does Syntactic Knowledge in Multilingual Language Models Transfer Across Languages? Prajit Dhar, Arianna Bisazza. 374-377 [doi]
- Exploiting Attention to Reveal Shortcomings in Memory Models. Kaylee Burns, Aida Nematzadeh, Erin Grant, Alison Gopnik, Thomas L. Griffiths. 378-380 [doi]
- End-to-end Image Captioning Exploits Distributional Similarity in Multimodal Space. Pranava Swaroop Madhyastha, Josiah Wang, Lucia Specia. 381-383 [doi]
- Limitations in learning an interpreted language with recurrent models. Denis Paperno. 384-386 [doi]