TBA: Workshop Paper Due Date

TBA: Notification of Acceptance

TBA: Camera-ready papers due

TBA: Workshop Dates

Isabelle Augenstein, University of Copenhagen

Hal Daumé III, University of Maryland and Microsoft Research NYC

Bashar Alhafni (NYU)
Jasmijn Bastings (Google)
Hannah Devinney (Umeå University, Sweden)
Marco Gaido (FBK, Italy)
Dorna Behdadi (University of Gothenburg, Sweden)
Matthias Gallé (Naver Labs Europe, France)
Mercedes García-Martínez (Pangeanic, Spain)
Nizar Habash (NYU Abu Dhabi, UAE)
Ben Hachey (Harrison.AI, Australia)
Lucy Havens (University of Edinburgh)
Wael Khreich (American University of Beirut)
Svetlana Kiritchenko (National Research Council, Canada)
Gabriella Lapesa (GESIS, Germany)
Antonis Maronikolakis (LMU Munich, Germany)
Maite Melero (Barcelona Supercomputing Center, Spain)
Carla Perez Almendros (Cardiff University, UK)
Michael Roth (University of Stuttgart)
Rafal Rzepka (Hokkaido University, Japan)
Beatrice Savoldi (FBK, Italy)
Masashi Takeshita (Hokkaido University)
Soroush Vosoughi (Dartmouth College)

5th Workshop on Gender Bias in Natural Language Processing

At ACL 2024 in Bangkok, Thailand, August 2024

Gender bias, among other demographic biases (e.g., race, nationality, religion), in machine-learned models is of increasing interest to the scientific community and industry. Models of natural language are highly affected by such biases, which are present in widely used products and can lead to poor user experiences. There is a growing body of research into improved representations of gender in NLP models. Prominent approaches include building and using balanced training and evaluation datasets (e.g., Webster et al., 2018; Bentivogli et al., 2020; Renduchintala et al., 2021) and changing the learning algorithms themselves (e.g., Bolukbasi et al., 2016). While these approaches show promising results, much work remains to address both known and yet-unidentified bias issues. To make progress as a field, we need to create widespread awareness of bias and build consensus on how to work against it, for instance by developing standard tasks and metrics. Our workshop provides a forum for achieving this goal.
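As an illustration of the balanced-dataset line of work mentioned above, the sketch below shows one simple, commonly discussed technique: counterfactual (gender-swap) data augmentation, where each training sentence is paired with a copy whose gendered words are swapped. This is a minimal, hypothetical example, not code from any cited paper; the word list is deliberately tiny, and dictionary-based swapping is known to be crude (e.g., it cannot disambiguate possessive "her" vs. objective "her").

```python
# Minimal sketch of counterfactual data augmentation for gender balance.
# The pair list is illustrative only; real systems use curated lexicons
# and handle morphology, names, and coreference far more carefully.
GENDER_PAIRS = {
    "he": "she", "she": "he",
    "him": "her", "her": "his",   # crude: "her" is ambiguous in English
    "his": "her",
    "man": "woman", "woman": "man",
    "actor": "actress", "actress": "actor",
}

def gender_swap(sentence: str) -> str:
    """Swap gendered words in a sentence, preserving initial capitalization."""
    out = []
    for token in sentence.split():
        # Strip trailing punctuation so "him." still matches "him".
        core = token.rstrip(".,!?;:")
        punct = token[len(core):]
        swapped = GENDER_PAIRS.get(core.lower())
        if swapped is None:
            out.append(token)  # not a gendered word: keep as-is
        else:
            if core[0].isupper():
                swapped = swapped.capitalize()
            out.append(swapped + punct)
    return " ".join(out)

def augment(corpus):
    """Return the corpus plus one gender-swapped copy of each sentence."""
    return list(corpus) + [gender_swap(s) for s in corpus]
```

For example, `augment(["He read his book."])` yields the original sentence plus `"She read her book."`, giving a training set with balanced pronoun coverage for this sentence pattern.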


Christine Basta, Alexandria University
Marta R. Costa-jussà, FAIR, Meta
Agnieszka Faleńska, University of Stuttgart
Seraphina Goldfarb-Tarrant, Cohere
Debora Nozza, Bocconi University