On Measures of Biases and Harms in NLP

Sunipa Dev, Emily Sheng, Jieyu Zhao, Aubrie Amstutz, Jiao Sun, Yu Hou, Mattie Sanseverino, Jiin Kim, Akihiro Nishi, Nanyun Peng, Kai-Wei Chang. On Measures of Biases and Harms in NLP. In Yulan He, Heng Ji, Yang Liu, Sujian Li, Chia-Hui Chang, Soujanya Poria, Chenghua Lin, Wray L. Buntine, Maria Liakata, Hanqi Yan, Zonghan Yan, Sebastian Ruder, Xiaojun Wan, Miguel Arana-Catania, Zhongyu Wei, Hen-Hsen Huang, Jheng-Long Wu, Min-Yuh Day, Pengfei Liu, Ruifeng Xu, editors, Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022, online only, November 20-23, 2022, pages 246-267. Association for Computational Linguistics, 2022.

Authors

Sunipa Dev

Emily Sheng

Jieyu Zhao

Aubrie Amstutz

Jiao Sun

Yu Hou

Mattie Sanseverino

Jiin Kim

Akihiro Nishi

Nanyun Peng

Kai-Wei Chang