JUST-BLUE at SemEval-2021 Task 1: Predicting Lexical Complexity using BERT and RoBERTa Pre-trained Language Models

Tuqa Bani Yaseen, Qusai Ismail, Sarah Al-Omari, Eslam Al-Sobh, Malak Abdullah. JUST-BLUE at SemEval-2021 Task 1: Predicting Lexical Complexity using BERT and RoBERTa Pre-trained Language Models. In Alexis Palmer, Nathan Schneider, Natalie Schluter, Guy Emerson, Aurélie Herbelot, Xiaodan Zhu, editors, Proceedings of the 15th International Workshop on Semantic Evaluation, SemEval@ACL/IJCNLP 2021, Virtual Event / Bangkok, Thailand, August 5-6, 2021. pages 661-666, Association for Computational Linguistics, 2021.

@inproceedings{YaseenIAAA21,
  title = {JUST-BLUE at SemEval-2021 Task 1: Predicting Lexical Complexity using BERT and RoBERTa Pre-trained Language Models},
  author = {Tuqa Bani Yaseen and Qusai Ismail and Sarah Al-Omari and Eslam Al-Sobh and Malak Abdullah},
  year = {2021},
  url = {https://aclanthology.org/2021.semeval-1.85},
  researchr = {https://researchr.org/publication/YaseenIAAA21},
  cites = {0},
  citedby = {0},
  pages = {661--666},
  booktitle = {Proceedings of the 15th International Workshop on Semantic Evaluation, SemEval@ACL/IJCNLP 2021, Virtual Event / Bangkok, Thailand, August 5-6, 2021},
  editor = {Alexis Palmer and Nathan Schneider and Natalie Schluter and Guy Emerson and Aurélie Herbelot and Xiaodan Zhu},
  publisher = {Association for Computational Linguistics},
  isbn = {978-1-954085-70-1},
}