Towards Procedural Fairness: Uncovering Biases in How a Toxic Language Classifier Uses Sentiment Information

Isar Nejadgholi, Esma Balkir, Kathleen C. Fraser, Svetlana Kiritchenko. Towards Procedural Fairness: Uncovering Biases in How a Toxic Language Classifier Uses Sentiment Information. In Jasmijn Bastings, Yonatan Belinkov, Yanai Elazar, Dieuwke Hupkes, Naomi Saphra, Sarah Wiegreffe, editors, Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@EMNLP 2022, Abu Dhabi, United Arab Emirates (Hybrid), December 8, 2022. pages 225-237, Association for Computational Linguistics, 2022.

@inproceedings{NejadgholiBFK22,
  title = {Towards Procedural Fairness: Uncovering Biases in How a Toxic Language Classifier Uses Sentiment Information},
  author = {Isar Nejadgholi and Esma Balkir and Kathleen C. Fraser and Svetlana Kiritchenko},
  year = {2022},
  url = {https://aclanthology.org/2022.blackboxnlp-1.18},
  researchr = {https://researchr.org/publication/NejadgholiBFK22},
  pages = {225-237},
  booktitle = {Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@EMNLP 2022, Abu Dhabi, United Arab Emirates (Hybrid), December 8, 2022},
  editor = {Jasmijn Bastings and Yonatan Belinkov and Yanai Elazar and Dieuwke Hupkes and Naomi Saphra and Sarah Wiegreffe},
  publisher = {Association for Computational Linguistics},
  isbn = {978-1-959429-05-0},
}