1st ACL Workshop on Gender Bias in Natural Language Processing

At ACL 2019 in Florence on 2 August 2019

Gender and other demographic biases in machine-learned models are of increasing interest to the scientific community and industry. Models of natural language are highly affected by such biases, and when these models are used in widely deployed products, they can lead to poor user experiences.

There is a growing body of research into fair representations of gender in NLP models. Key example approaches are to build and use fairer training and evaluation datasets (e.g. Reddy & Knight, 2016; Webster et al., 2018; Madaan et al., 2018), and to change the learning algorithms themselves (e.g. Bolukbasi et al., 2016; Chiappa et al., 2018). While these approaches show promising results, more work is needed to address both known and future bias issues. In order to make progress as a field, we need standard tasks which quantify bias.
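As an illustration only (not part of the workshop materials), one common way to quantify gender association in word embeddings, in the spirit of Bolukbasi et al. (2016), is to project word vectors onto a "he" minus "she" direction. The sketch below uses randomly generated stand-in vectors; in practice the embeddings dictionary would be loaded from a pretrained model such as word2vec or GloVe.

# Illustrative sketch: measuring gender association by projection onto a
# gender direction in embedding space. The vectors here are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
embeddings = {w: rng.normal(size=dim) for w in
              ["he", "she", "doctor", "nurse", "engineer", "teacher"]}

def gender_direction(emb):
    """Unit vector pointing from 'she' towards 'he' in embedding space."""
    d = emb["he"] - emb["she"]
    return d / np.linalg.norm(d)

def gender_projection(word, emb):
    """Scalar projection of a (normalised) word vector onto the gender direction.

    Values near zero suggest little gender association; large positive or
    negative values suggest a male- or female-leaning association.
    """
    v = emb[word]
    return float(np.dot(v / np.linalg.norm(v), gender_direction(emb)))

for word in ["doctor", "nurse", "engineer", "teacher"]:
    print(f"{word}: {gender_projection(word, embeddings):+.3f}")

With real embeddings, scores far from zero for occupation words are one simple, if coarse, indicator of the kind of bias such standard tasks aim to measure.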

This workshop is the first dedicated to the issue of gender bias in NLP techniques, and it includes a shared task on coreference resolution. To help the field make progress, the workshop will focus in particular on discussing and proposing standard tasks that quantify bias.
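For coreference, one hedged sketch of how bias can be summarised, following the gendered evaluation reported for the GAP dataset (Webster et al., 2018), is the ratio of F1 scores on feminine versus masculine examples. The gold labels, predictions, and grouping below are made up for illustration and do not describe the shared task's official setup.

# Sketch: F1 ratio across pronoun genders as a simple coreference bias metric.
from sklearn.metrics import f1_score

# Each example: (gold label, predicted label, pronoun gender); labels are
# True if the pronoun corefers with the candidate name. Toy data only.
examples = [
    (True, True, "feminine"), (False, True, "feminine"), (True, True, "feminine"),
    (True, True, "masculine"), (False, False, "masculine"), (True, False, "masculine"),
]

def f1_by_gender(data, gender):
    gold = [g for g, _, x in data if x == gender]
    pred = [p for _, p, x in data if x == gender]
    return f1_score(gold, pred)

f1_f = f1_by_gender(examples, "feminine")
f1_m = f1_by_gender(examples, "masculine")
print(f"F1 (feminine):    {f1_f:.2f}")
print(f"F1 (masculine):   {f1_m:.2f}")
print(f"Bias (F/M ratio): {f1_f / f1_m:.2f}")  # 1.0 indicates parity across genders

A ratio close to 1.0 indicates that the system performs comparably on feminine and masculine pronouns; deviations in either direction flag a performance gap worth investigating.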

Organizers

Marta R. Costa-jussà, Universitat Politècnica de Catalunya, Barcelona

Christian Hardmeier, Uppsala University

Kellie Webster, Google AI Language, New York

Will Radford, Canva, Sydney

Latest news