- Improving BERT Model Using Contrastive Learning for Biomedical Relation Extraction. Peng Su, Yifan Peng, K. Vijay-Shanker. 1-10 [doi]
- Triplet-Trained Vector Space and Sieve-Based Search Improve Biomedical Concept Normalization. Dongfang Xu, Steven Bethard. 11-22 [doi]
- Scalable Few-Shot Learning of Robust Biomedical Name Representations. Pieter Fivez, Simon Suster, Walter Daelemans. 23-29 [doi]
- SAFFRON: tranSfer leArning For Food-disease RelatiOn extractioN. Gjorgjina Cenikj, Tome Eftimov, Barbara Korousic-Seljak. 30-40 [doi]
- Are we there yet? Exploring clinical domain knowledge of BERT models. Madhumita Sushil, Simon Suster, Walter Daelemans. 41-53 [doi]
- Towards BERT-based Automatic ICD Coding: Limitations and Opportunities. Damian Pascual, Sandro Luck, Roger Wattenhofer. 54-63 [doi]
- emrKBQA: A Clinical Knowledge-Base Question Answering Dataset. Preethi Raghavan, Jennifer J. Liang, Diwakar Mahajan, Rachita Chandra, Peter Szolovits. 64-73 [doi]
- Overview of the MEDIQA 2021 Shared Task on Summarization in the Medical Domain. Asma Ben Abacha, Yassine Mrabet, Yuhao Zhang, Chaitanya Shivade, Curtis Langlotz, Dina Demner-Fushman. 74-85 [doi]
- WBI at MEDIQA 2021: Summarizing Consumer Health Questions with Generative Transformers. Mario Sänger, Leon Weber, Ulf Leser. 86-95 [doi]
- paht_nlp @ MEDIQA 2021: Multi-grained Query Focused Multi-Answer Summarization. Wei Zhu, Yilong He, Ling Chai, Yunxiao Fan, Yuan Ni, Guotong Xie, Xiaoling Wang. 96-102 [doi]
- BDKG at MEDIQA 2021: System Report for the Radiology Report Summarization Task. Songtai Dai, Quan Wang, Yajuan Lyu, Yong Zhu. 103-111 [doi]
- damo_nlp at MEDIQA 2021: Knowledge-based Preprocessing and Coverage-oriented Reranking for Medical Question Summarization. Yifan He, Mosha Chen, Songfang Huang. 112-118 [doi]
- Stress Test Evaluation of Biomedical Word Embeddings. Vladimir Araujo, Andrés Carvallo, Carlos Aspillaga, Camilo Thorne, Denis Parra. 119-125 [doi]
- BLAR: Biomedical Local Acronym Resolver. William Hogan, Yoshiki Vazquez-Baeza, Yannis Katsis, Tyler Baldwin, Ho-Cheol Kim, Chun-Nan Hsu. 126-130 [doi]
- Claim Detection in Biomedical Twitter Posts. Amelie Wührl, Roman Klinger. 131-142 [doi]
- BioELECTRA: Pretrained Biomedical text Encoder using Discriminators. Kamal Raj Kanakarajan, Bhuvana Kundumani, Malaikannan Sankarasubbu. 143-154 [doi]
- Word centrality constrained representation for keyphrase extraction. Zelalem Gero, Joyce C. Ho. 155-161 [doi]
- End-to-end Biomedical Entity Linking with Span-based Dictionary Matching. Shogo Ujiie, Hayate Iso, Shuntaro Yada, Shoko Wakamiya, Eiji Aramaki. 162-167 [doi]
- Word-Level Alignment of Paper Documents with their Electronic Full-Text Counterparts. Mark-Christoph Müller, Sucheta Ghosh, Ulrike Wittig, Maja Rey. 168-179 [doi]
- Improving Biomedical Pretrained Language Models with Knowledge. Zheng Yuan, Yijia Liu, Chuanqi Tan, Songfang Huang, Fei Huang. 180-190 [doi]
- EntityBERT: Entity-centric Masking Strategy for Model Pretraining for the Clinical Domain. Chen Lin, Timothy A. Miller, Dmitriy Dligach, Steven Bethard, Guergana Savova. 191-201 [doi]
- Contextual explanation rules for neural clinical classifiers. Madhumita Sushil, Simon Suster, Walter Daelemans. 202-212 [doi]
- Exploring Word Segmentation and Medical Concept Recognition for Chinese Medical Texts. Yang Liu, Yuanhe Tian, Tsung-Hui Chang, Song Wu, Xiang Wan, Yan Song. 213-220 [doi]
- BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA. Sultan Alrowili, Vijay-Shanker. 221-227 [doi]
- Semi-Supervised Language Models for Identification of Personal Health Experiential from Twitter Data: A Case for Medication Effects. Minghao Zhu, Keyuan Jiang. 228-237 [doi]
- Context-aware query design combines knowledge and data for efficient reading and reasoning. Emilee Holtzapple, Brent Cochran, Natasa Miskov-Zivanov. 238-246 [doi]
- Measuring the relative importance of full text sections for information retrieval from scientific literature. Lana Yeganova, Won Kim, Donald C. Comeau, W. John Wilbur, Zhiyong Lu. 247-256 [doi]
- UCSD-Adobe at MEDIQA 2021: Transfer Learning and Answer Sentence Selection for Medical Summarization. Khalil Mrini, Franck Dernoncourt, Seunghyun Yoon, Trung Bui, Walter Chang, Emilia Farcas, Ndapa Nakashole. 257-262 [doi]
- ChicHealth @ MEDIQA 2021: Exploring the limits of pre-trained seq2seq models for medical summarization. Liwen Xu, Yan Zhang, Lei Hong, Yi Cai, Szui Sung. 263-267 [doi]
- NCUEE-NLP at MEDIQA 2021: Health Question Summarization Using PEGASUS Transformers. Lung-Hao Lee, Po-Han Chen, Yu-Xiang Zeng, Po-Lei Lee, Kuo-Kai Shyu. 268-272 [doi]
- SB_NITK at MEDIQA 2021: Leveraging Transfer Learning for Question Summarization in Medical Domain. Spandana Balumuri, Sony Bachina, Sowmya Kamath S. 273-279 [doi]
- Optum at MEDIQA 2021: Abstractive Summarization of Radiology Reports using simple BART Finetuning. Ravi Kondadadi, Sahil Manchanda, Jason Ngo, Ronan McCormack. 280-284 [doi]
- QIAI at MEDIQA 2021: Multimodal Radiology Report Summarization. Jean-Benoit Delbrouck, Cassie Zhang, Daniel Rubin. 285-290 [doi]
- NLM at MEDIQA 2021: Transfer Learning-based Approaches for Consumer Question and Multi-Answer Summarization. Shweta Yadav, Mourad Sarrouti, Deepak Gupta. 291-301 [doi]
- IBMResearch at MEDIQA 2021: Toward Improving Factual Correctness of Radiology Report Abstractive Summarization. Diwakar Mahajan, Ching-Huei Tsou, Jennifer J. Liang. 302-310 [doi]
- UETrice at MEDIQA 2021: A Prosper-thy-neighbour Extractive Multi-document Summarization Model. Duy-Cat Can, Vo Nguyen Quoc Bao, Quoc-Hung Duong, Minh-Quang Nguyen, Huy-Son Nguyen, Linh Nguyen Tran Ngoc, Quang-Thuy Ha, Mai-Vu Tran. 311-319 [doi]
- MNLP at MEDIQA 2021: Fine-Tuning PEGASUS for Consumer Health Question Summarization. Jooyeon Lee, Huong Dang, Özlem Uzuner, Sam Henry. 320-327 [doi]
- UETfishes at MEDIQA 2021: Standing-on-the-Shoulders-of-Giants Model for Abstractive Multi-answer Summarization. Hoang-Quynh Le, Quoc-An Nguyen, Quoc-Hung Duong, Minh-Quang Nguyen, Huy-Son Nguyen, Tam Doan Thanh, Hai-Yen Thi Vuong, Trang M. Nguyen. 328-335 [doi]