- Zero-Resource Cross-Domain Named Entity Recognition. Zihan Liu, Genta Indra Winata, Pascale Fung. 1-6 [doi]
- Encodings of Source Syntax: Similarities in NMT Representations Across Target Languages. Tyler A. Chang, Anna N. Rafferty. 7-16 [doi]
- Learning Probabilistic Sentence Representations from Paraphrases. Mingda Chen, Kevin Gimpel. 17-23 [doi]
- Word Embeddings as Tuples of Feature Probabilities. Siddharth Bhat, Alok Debnath, Souvik Banerjee, Manish Shrivastava. 24-33 [doi]
- Compositionality and Capacity in Emergent Languages. Abhinav Gupta, Cinjon Resnick, Jakob N. Foerster, Andrew M. Dai, KyungHyun Cho. 34-38 [doi]
- Learning Geometric Word Meta-Embeddings. Pratik Jawanpuria, N. T. V. Satya Dev, Anoop Kunchukuttan, Bamdev Mishra. 39-44 [doi]
- Improving Bilingual Lexicon Induction with Unsupervised Post-Processing of Monolingual Word Vector Spaces. Ivan Vulic, Anna Korhonen, Goran Glavas. 45-54 [doi]
- Adversarial Training for Commonsense Inference. Lis Pereira, Xiaodong Liu, Fei Cheng, Masayuki Asahara, Ichiro Kobayashi. 55-60 [doi]
- Evaluating Natural Alpha Embeddings on Intrinsic and Extrinsic Tasks. Riccardo Volpi, Luigi Malagò. 61-71 [doi]
- Exploring the Limits of Simple Learners in Knowledge Distillation for Document Classification with DocBERT. Ashutosh Adhikari, Achyudh Ram, Raphael Tang, William L. Hamilton, Jimmy Lin. 72-77 [doi]
- Joint Training with Semantic Role Labeling for Better Generalization in Natural Language Inference. Cemil Cengiz, Deniz Yuret. 78-88 [doi]
- A Metric Learning Approach to Misogyny Categorization. Juan Manuel Coria, Sahar Ghannay, Sophie Rosset, Hervé Bredin. 89-94 [doi]
- On the Choice of Auxiliary Languages for Improved Sequence Tagging. Lukas Lange, Heike Adel, Jannik Strötgen. 95-102 [doi]
- Adversarial Alignment of Multilingual Models for Extracting Temporal Expressions from Text. Lukas Lange, Anastasiia Iurshina, Heike Adel, Jannik Strötgen. 103-109 [doi]
- Contextual and Non-Contextual Word Embeddings: an in-depth Linguistic Investigation. Alessio Miaschi, Felice dell'Orletta. 110-119 [doi]
- Are All Languages Created Equal in Multilingual BERT? Shijie Wu, Mark Dredze. 120-130 [doi]
- Staying True to Your Word: (How) Can Attention Become Explanation? Martin Tutek, Jan Snajder. 131-142 [doi]
- Compressing BERT: Studying the Effects of Weight Pruning on Transfer Learning. Mitchell A. Gordon, Kevin Duh, Nicholas Andrews. 143-155 [doi]
- On Dimensional Linguistic Properties of the Word Embedding Space. Vikas Raunak, Vaibhav Kumar, Vivek Gupta, Florian Metze. 156-165 [doi]
- A Cross-Task Analysis of Text Span Representations. Shubham Toshniwal, Haoyue Shi, Bowen Shi, Lingyu Gao, Karen Livescu, Kevin Gimpel. 166-176 [doi]
- Enhancing Transformer with Sememe Knowledge. Yuhui Zhang, Chenghao Yang, Zhengping Zhou, Zhiyuan Liu. 177-184 [doi]
- Evaluating Compositionality of Sentence Representation Models. Hanoz Bhathena, Angelica Willis, Nathan Dass. 185-193 [doi]
- Supertagging with CCG primitives. Aditya Bhargava, Gerald Penn. 194-204 [doi]
- What's in a Name? Are BERT Named Entity Representations just as Good for any other Name? Sriram Balasubramanian, Naman Jain, Gaurav Jindal, Abhijeet Awasthi, Sunita Sarawagi. 205-214 [doi]