On Measures of Biases and Harms in NLP

Sunipa Dev, Emily Sheng, Jieyu Zhao, Aubrie Amstutz, Jiao Sun, Yu Hou, Mattie Sanseverino, Jiin Kim, Akihiro Nishi, Nanyun Peng, Kai-Wei Chang. On Measures of Biases and Harms in NLP. In Yulan He, Heng Ji, Yang Liu, Sujian Li, Chia-Hui Chang, Soujanya Poria, Chenghua Lin, Wray L. Buntine, Maria Liakata, Hanqi Yan, Zonghan Yan, Sebastian Ruder, Xiaojun Wan, Miguel Arana-Catania, Zhongyu Wei, Hen-Hsen Huang, Jheng-Long Wu, Min-Yuh Day, Pengfei Liu, Ruifeng Xu, editors, Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022, online only, November 20-23, 2022, pages 246-267. Association for Computational Linguistics, 2022.
