5th Workshop on Gender Bias in Natural Language Processing

At ACL in Bangkok, Thailand, 16th August 2024

Gender bias, among other demographic biases (e.g., race, nationality, religion), in machine-learned models is of increasing interest to the scientific community and industry. Natural language models are strongly affected by such biases, which are present in widely used products and can lead to poor user experiences. There is a growing body of research into improving the representation of gender in NLP models. Key approaches include building and using balanced training and evaluation datasets (e.g., Webster et al., 2018; Bentivogli et al., 2020; Renduchintala et al., 2021) and changing the learning algorithms themselves (e.g., Bolukbasi et al., 2016). While these approaches show promising results, much remains to be done to address both identified and future bias issues. To make progress as a field, we need to create widespread awareness of bias and build consensus on how to work against it, for instance by developing standard tasks and metrics. Our workshop provides a forum to achieve this goal.

Organizers

Christine Basta, Alexandria University
Marta R. Costa-jussà, FAIR, Meta
Agnieszka Faleńska, University of Stuttgart
Seraphina Goldfarb-Tarrant, Cohere
Debora Nozza, Bocconi University