People Make Better Edits: Measuring the Efficacy of LLM-Generated Counterfactually Augmented Data for Harmful Language Detection

Indira Sen, Dennis Assenmacher, Mattia Samory, Isabelle Augenstein, Wil van der Aalst, Claudia Wagner. People Make Better Edits: Measuring the Efficacy of LLM-Generated Counterfactually Augmented Data for Harmful Language Detection. In Houda Bouamor, Juan Pino, Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), Singapore, December 6-10, 2023, pages 10480-10504. Association for Computational Linguistics, 2023.

Authors

Indira Sen

Dennis Assenmacher

Mattia Samory

Isabelle Augenstein

Wil van der Aalst

Claudia Wagner