- Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing. Nafise Sadat Moosavi, Angela Fan, Vered Shwartz, Goran Glavas, Shafiq Joty, Alex Wang, Thomas Wolf (eds.)
- Knowing Right from Wrong: Should We Use More Complex Models for Automatic Short-Answer Scoring in Bahasa Indonesia? Ali Akbar Septiandri, Yosef Ardhito Winatmoko, Ilham Firdausi Putra. pp. 1-7
- Rank and run-time aware compression of NLP Applications. Urmish Thakker, Jesse G. Beu, Dibakar Gope, Ganesh Dasika, Matthew Mattina. pp. 8-18
- Learning Informative Representations of Biomedical Relations with Latent Variable Models. Harshil Shah, Julien Fauqueur. pp. 19-28
- End to End Binarized Neural Networks for Text Classification. Kumar Shridhar, Harshil Jain, Akshat Agarwal, Denis Kleyko. pp. 29-34
- Exploring the Boundaries of Low-Resource BERT Distillation. Moshe Wasserblat, Oren Pereg, Peter Izsak. pp. 35-40
- Efficient Estimation of Influence of a Training Instance. Sosuke Kobayashi, Sho Yokoi, Jun Suzuki, Kentaro Inui. pp. 41-47
- Efficient Inference For Neural Machine Translation. Yi-Te Hsu, Sarthak Garg, Yi-Hsiu Liao, Ilya Chatsviorkin. pp. 48-53
- Sparse Optimization for Unsupervised Extractive Summarization of Long Documents with the Frank-Wolfe Algorithm. Alicia Y. Tsai, Laurent El Ghaoui. pp. 54-62
- Don't Read Too Much Into It: Adaptive Computation for Open-Domain Question Answering. Yuxiang Wu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel. pp. 63-72
- A Two-stage Model for Slot Filling in Low-resource Settings: Domain-agnostic Non-slot Reduction and Pretrained Contextual Embeddings. Cennet Oguz, Ngoc Thang Vu. pp. 73-82
- Early Exiting BERT for Efficient Document Ranking. Ji Xin, Rodrigo Nogueira, Yaoliang Yu, Jimmy Lin. pp. 83-88
- Keyphrase Generation with GANs in Low-Resources Scenarios. Giuseppe Lancioni, Saida S. Mohamed, Beatrice Portelli, Giuseppe Serra, Carlo Tasso. pp. 89-96
- Quasi-Multitask Learning: an Efficient Surrogate for Obtaining Model Ensembles. Norbert Kis-Szabó, Gábor Berend. pp. 97-106
- A Little Bit Is Worse Than None: Ranking with Limited Training Data. Xinyu Zhang, Andrew Yates, Jimmy Lin. pp. 107-112
- Predictive Model Selection for Transfer Learning in Sequence Labeling Tasks. Parul Awasthy, Bishwaranjan Bhattacharjee, John R. Kender, Radu Florian. pp. 113-118
- Load What You Need: Smaller Versions of Multilingual BERT. Amine Abdaoui, Camille Pradel, Grégoire Sigel. pp. 119-123
- SqueezeBERT: What can computer vision teach NLP about efficient neural networks? Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, Kurt Keutzer. pp. 124-135
- Analysis of Resource-efficient Predictive Models for Natural Language Processing. Raj Ratn Pranesh, Ambesh Shekhar. pp. 136-140
- Towards Accurate and Reliable Energy Measurement of NLP Models. Qingqing Cao, Aruna Balasubramanian, Niranjan Balasubramanian. pp. 141-148
- FastFormers: Highly Efficient Transformer Models for Natural Language Understanding. Young-Jin Kim, Hany Hassan. pp. 149-158
- A comparison between CNNs and WFAs for Sequence Classification. Ariadna Quattoni, Xavier Carreras. pp. 159-163
- Label-Efficient Training for Next Response Selection. Seungtaek Choi, Myeongho Jeong, Jinyoung Yeo, Seung-won Hwang. pp. 164-168
- Do We Need to Create Big Datasets to Learn a Task? Swaroop Mishra, Bhavdeep Singh Sachdeva. pp. 169-173
- Overview of the SustaiNLP 2020 Shared Task. Alex Wang, Thomas Wolf. pp. 174-178