- Explaining Predictions of Non-Linear Classifiers in NLP. Leila Arras, Franziska Horn, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek. 1-7 [doi]
- Joint Learning of Sentence Embeddings for Relevance and Entailment. Petr Baudis, Silvestr Stanko, Jan Sedivý. 8-17 [doi]
- A Joint Model for Word Embedding and Word Morphology. Kris Cao, Marek Rei. 18-26 [doi]
- On the Compositionality and Semantic Interpretation of English Noun Compounds. Corina Dima. 27-39 [doi]
- Functional Distributional Semantics. Guy Emerson, Ann A. Copestake. 40-52 [doi]
- Assisting Discussion Forum Users using Deep Recurrent Neural Networks. Jacob Hagstedt P. Suorra, Olof Mogren. 53-61 [doi]
- Adjusting Word Embeddings with Semantic Intensity Orders. Joo-Kyung Kim, Marie-Catherine de Marneffe, Eric Fosler-Lussier. 62-69 [doi]
- Towards Abstraction from Extraction: Multiple Timescale Gated Recurrent Unit for Summarization. Minsoo Kim, Dennis Singh Moirangthem, Minho Lee. 70-77 [doi]
- An Empirical Evaluation of doc2vec with Practical Insights into Document Embedding Generation. Jey Han Lau, Timothy Baldwin. 78-86 [doi]
- Quantifying the Vanishing Gradient and Long Distance Dependency Problem in Recursive Neural Networks and Recursive LSTMs. Phong Le, Willem H. Zuidema. 87-93 [doi]
- LSTM-Based Mixture-of-Experts for Knowledge-Aware Dialogues. Phong Le, Marc Dymetman, Jean-Michel Renders. 94-99 [doi]
- Mapping Unseen Words to Task-Trained Embedding Spaces. Pranava Swaroop Madhyastha, Mohit Bansal, Kevin Gimpel, Karen Livescu. 100-110 [doi]
- Multilingual Modal Sense Classification using a Convolutional Neural Network. Ana Marasovic, Anette Frank. 111-120 [doi]
- Towards cross-lingual distributed representations without parallel text trained with adversarial autoencoders. Antonio Valerio Miceli Barone. 121-126 [doi]
- Decomposing Bilexical Dependencies into Semantic and Syntactic Vectors. Jeff Mitchell. 127-136 [doi]
- Learning Semantic Relatedness in Community Question Answering Using Neural Models. Henry Nassif, Mitra Mohtarami, James Glass. 137-147 [doi]
- Learning Text Similarity with Siamese Recurrent Networks. Paul Neculoiu, Maarten Versteegh, Mihai Rotaru. 148-157 [doi]
- A Two-stage Approach for Extending Event Detection to New Types via Neural Networks. Thien Huu Nguyen, Lisheng Fu, KyungHyun Cho, Ralph Grishman. 158-165 [doi]
- Parameterized context windows in Random Indexing. Tobias Norlund, David Nilsson, Magnus Sahlgren. 166-173 [doi]
- Making Sense of Word Embeddings. Maria Pelevina, Nikolay Arefiev, Chris Biemann, Alexander Panchenko. 174-183 [doi]
- Pair Distance Distribution: A Model of Semantic Representation. Yonatan Ramni, Oded Maimon, Eugene Khmelnitsky. 184-192 [doi]
- Measuring Semantic Similarity of Words Using Concept Networks. Gábor Recski, Eszter Iklódi, Katalin Pajkossy, András Kornai. 193-200 [doi]
- Using Embedding Masks for Word Categorization. Stefan Ruseti, Traian Rebedea, Stefan Trausan-Matu. 201-205 [doi]
- Sparsifying Word Representations for Deep Unordered Sentence Modeling. Prasanna Sattigeri, Jayaraman J. Thiagarajan. 206-214 [doi]
- Why "Blow Out"? A Structural Analysis of the Movie Dialog Dataset. Richard Searle, Megan Bingham-Walker. 215-221 [doi]
- Learning Word Importance with the Neural Bag-of-Words Model. Imran A. Sheikh, Irina Illina, Dominique Fohr, Georges Linarès. 222-229 [doi]
- A Vector Model for Type-Theoretical Semantics. Konstantin Sokolov. 230-238 [doi]
- Towards Generalizable Sentence Embeddings. Eleni Triantafillou, Ryan Kiros, Raquel Urtasun, Richard S. Zemel. 239-248 [doi]
- Domain Adaptation for Neural Networks by Parameter Augmentation. Yusuke Watanabe, Kazuma Hashimoto, Yoshimasa Tsuruoka. 249-257 [doi]
- Neural Associative Memory for Dual-Sequence Modeling. Dirk Weissenborn. 258-266 [doi]