- MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension. Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, Danqi Chen. 1-13 [doi]
- Inspecting Unification of Encoding and Matching with Transformer: A Case Study of Machine Reading Comprehension. Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Lei Cui, Songhao Piao, Ming Zhou. 14-18 [doi]
- CALOR-QUEST: generating a training corpus for Machine Reading Comprehension models from shallow semantic annotations. Frédéric Béchet, Cindy Aloui, Delphine Charlet, Géraldine Damnati, Johannes Heinecke, Alexis Nasr, Frédéric Herledan. 19-26 [doi]
- Improving Question Answering with External Knowledge. Xiaoman Pan, Kai Sun, Dian Yu, Jianshu Chen, Heng Ji, Claire Cardie, Dong Yu. 27-37 [doi]
- Answer-Supervised Question Reformulation for Enhancing Conversational Machine Comprehension. Qian Li, Hui Su, Cheng Niu, Daling Wang, Zekang Li, Shi Feng, Yifei Zhang. 38-47 [doi]
- Simple yet Effective Bridge Reasoning for Open-Domain Multi-Hop Question Answering. Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Hong Wang, Shiyu Chang, Murray Campbell, William Yang Wang. 48-52 [doi]
- Improving the Robustness of Deep Reading Comprehension Models by Leveraging Syntax Prior. Bowen Wu, Haoyang Huang, Zongsheng Wang, Qihang Feng, Jingsong Yu, Baoxun Wang. 53-57 [doi]
- Reasoning Over Paragraph Effects in Situations. Kevin Lin, Oyvind Tafjord, Peter Clark, Matt Gardner. 58-62 [doi]
- Towards Answer-unaware Conversational Question Generation. Mao Nakanishi, Tetsunori Kobayashi, Yoshihiko Hayashi. 63-71 [doi]
- Cross-Task Knowledge Transfer for Query-Based Text Summarization. Elozino Egonmwan, Vittorio Castelli, Md. Arafat Sultan. 72-77 [doi]
- Book QA: Stories of Challenges and Opportunities. Stefanos Angelidis, Lea Frermann, Diego Marcheggiani, Roi Blanco, Lluís Màrquez. 78-85 [doi]
- FlowDelta: Modeling Flow Information Gain in Reasoning for Conversational Machine Comprehension. Yi-Ting Yeh, Yun-Nung Chen. 86-90 [doi]
- Do Multi-hop Readers Dream of Reasoning Chains? Haoyu Wang, Mo Yu, Xiaoxiao Guo, Rajarshi Das, Wenhan Xiong, Tian Gao. 91-97 [doi]
- Machine Comprehension Improves Domain-Specific Japanese Predicate-Argument Structure Analysis. Norio Takahashi, Tomohide Shibata, Daisuke Kawahara, Sadao Kurohashi. 98-104 [doi]
- On Making Reading Comprehension More Comprehensive. Matt Gardner, Jonathan Berant, Hannaneh Hajishirzi, Alon Talmor, Sewon Min. 105-112 [doi]
- Multi-step Entity-centric Information Retrieval for Multi-Hop Question Answering. Rajarshi Das, Ameya Godbole, Dilip Kavarthapu, Zhiyu Gong, Abhishek Singhal, Mo Yu, Xiaoxiao Guo, Tian Gao, Hamed Zamani, Manzil Zaheer, Andrew McCallum. 113-118 [doi]
- Evaluating Question Answering Evaluation. Anthony Chen, Gabriel Stanovsky, Sameer Singh, Matt Gardner. 119-124 [doi]
- Bend but Don't Break? Multi-Challenge Stress Test for QA Models. Hemant Pugaliya, James Route, Kaixin Ma, Yixuan Geng, Eric Nyberg. 125-136 [doi]
- ReQA: An Evaluation for End-to-End Answer Retrieval Models. Amin Ahmad, Noah Constant, Yinfei Yang, Daniel Cer. 137-146 [doi]
- Comprehensive Multi-Dataset Evaluation of Reading Comprehension. Dheeru Dua, Ananth Gottumukkala, Alon Talmor, Sameer Singh, Matt Gardner. 147-153 [doi]
- A Recurrent BERT-based Model for Question Generation. Ying-Hong Chan, Yao-Chung Fan. 154-162 [doi]
- Let Me Know What to Ask: Interrogative-Word-Aware Question Generation. Junmo Kang, Haritz Puerto San Roman, Sung-Hyon Myaeng. 163-171 [doi]
- Extractive NarrativeQA with Heuristic Pre-Training. Lea Frermann. 172-182 [doi]
- CLER: Cross-task Learning with Expert Representation to Generalize Reading and Understanding. Takumi Takahashi, Motoki Taniguchi, Tomoki Taniguchi, Tomoko Ohkuma. 183-190 [doi]
- Question Answering Using Hierarchical Attention on Top of BERT Features. Reham A. Osama, Nagwa M. El-Makky, Marwan Torki. 191-195 [doi]
- Domain-agnostic Question-Answering with Adversarial Training. Seanie Lee, Donggyu Kim, Jangwon Park. 196-202 [doi]
- Generalizing Question Answering System with Pre-trained Language Model Fine-tuning. Dan Su, Yan Xu, Genta Indra Winata, Peng Xu, Hyeondey Kim, Zihan Liu, Pascale Fung. 203-211 [doi]
- D-NET: A Pre-Training and Fine-Tuning Framework for Improving the Generalization of Machine Reading Comprehension. HongYu Li, Xiyuan Zhang, Yibing Liu, Yiming Zhang, Quan Wang, Xiangyang Zhou, Jing Liu, Hua Wu, Haifeng Wang. 212-219 [doi]
- An Exploration of Data Augmentation and Sampling Techniques for Domain-Agnostic Question Answering. Shayne Longpre, Yi Lu, Zhucheng Tu, Chris DuBois. 220-227 [doi]