Stereotype and Skew: Quantifying Gender Bias in Pre-trained and Fine-tuned Language Models

Daniel de Vassimon Manela, David Errington, Thomas Fisher, Boris van Breugel, Pasquale Minervini. Stereotype and Skew: Quantifying Gender Bias in Pre-trained and Fine-tuned Language Models. In Paola Merlo, Jörg Tiedemann, Reut Tsarfaty, editors, Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021. pages 2232-2242, Association for Computational Linguistics, 2021.

@inproceedings{ManelaEFBM21,
  title = {Stereotype and Skew: Quantifying Gender Bias in Pre-trained and Fine-tuned Language Models},
  author = {Daniel de Vassimon Manela and David Errington and Thomas Fisher and Boris van Breugel and Pasquale Minervini},
  year = {2021},
  url = {https://www.aclweb.org/anthology/2021.eacl-main.190/},
  researchr = {https://researchr.org/publication/ManelaEFBM21},
  pages = {2232--2242},
  booktitle = {Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021},
  editor = {Paola Merlo and Jörg Tiedemann and Reut Tsarfaty},
  publisher = {Association for Computational Linguistics},
  isbn = {978-1-954085-02-2},
}