Second Workshop on Gender Bias in Natural Language Processing
At COLING 2020 in Barcelona on 13th or 14th September 2020
Gender and other demographic biases in machine-learned models are of increasing interest to the scientific community and industry. Models of natural language are highly affected by such perceived biases, and when these models are embedded in widely used products, the biases can lead to poor user experiences.
There is a growing body of research into fair representations of gender in NLP models. Key example approaches are to build and use fairer training and evaluation datasets (e.g. Reddy & Knight, 2016; Webster et al., 2018; Madaan et al., 2018), and to change the learning algorithms themselves (e.g. Bolukbasi et al., 2016; Chiappa et al., 2018). While these approaches show promising results, there is more to do to solve identified and future bias issues. In order to make progress as a field, we need standard tasks that quantify bias.
This workshop follows up on the First Workshop on Gender Bias in NLP (GeBNLP 2019), held at ACL 2019 in Florence. We invite submissions of technical work exploring the detection, measurement, and mediation of gender bias in NLP models and applications. Other important topics include the creation of datasets exploring demographics, metrics to identify and assess relevant biases, and approaches to fairness in NLP systems. Finally, the workshop is also open to non-technical work offering sociological perspectives.
Organizers
Marta R. Costa-jussà, Universitat Politècnica de Catalunya, Barcelona
Christian Hardmeier, Uppsala University
Kellie Webster, Google AI Language, New York
Will Radford, Canva, Sydney