3rd Workshop on Gender Bias in Natural Language Processing
At ACL-IJCNLP 2021 in Bangkok, Thailand, August 5-6, 2021.
Gender bias, among other demographic biases (e.g. race, nationality, religion), in machine-learned models is of increasing interest to the scientific community and industry. Models of natural language are highly affected by such biases, which are present in widely used products and can lead to poor user experiences. There is a growing body of research into improved representations of gender in NLP models. Key approaches include building and using balanced training and evaluation datasets (e.g. Reddy & Knight, 2016; Webster et al., 2018; Madaan et al., 2018) and changing the learning algorithms themselves (e.g. Bolukbasi et al., 2016; Chiappa et al., 2018). While these approaches show promising results, much remains to be done to address identified and future bias issues. In order to make progress as a field, we need to create widespread awareness of bias and reach a consensus on how to work against it, for instance by developing standard tasks and metrics. Our workshop provides a forum to achieve this goal.

The workshop follows up on two successful previous editions, collocated with ACL 2019 and COLING 2020, respectively. In the current edition, special emphasis will be placed on discussing the bias statements of the works presented at the workshop (Blodgett et al., 2020). Such statements make clear (a) what system behaviors are considered bias in the work, and (b) why those behaviors are harmful, in what ways, and to whom. We encourage authors to engage with definitions of bias and other relevant concepts, such as prejudice, harm, and discrimination, from outside NLP, especially from the social sciences and normative ethics, both in this statement and in their work in general. We will also continue to push for the integration of other communities, such as the social sciences, as well as a wider representation of approaches dealing with bias.
Topics of interest
We invite submissions of technical work exploring the detection, measurement, and mitigation of gender bias in NLP models and applications. Other important topics include the creation of datasets exploring demographics, metrics to identify and assess relevant biases, and approaches focusing on fairness in NLP systems. Finally, the workshop is also open to non-technical work addressing sociological perspectives, and we strongly encourage critical reflections on the sources and implications of bias throughout all types of work.
Organizers
Marta R. Costa-jussà, Universitat Politècnica de Catalunya, Barcelona
Hila Gonen, Amazon
Christian Hardmeier, Uppsala University
Kellie Webster, Google AI Language, New York