Isabelle Augenstein, University of Copenhagen
Title Quantifying societal biases towards entities
Abstract Language is known to be influenced by the gender of the speaker and the referent, a phenomenon that has received much attention in sociolinguistics. This can lead to harmful societal biases, such as gender bias, the tendency to make assumptions based on gender rather than objective factors. Moreover, these biases are then picked up by language models and perpetuated in models for downstream NLP tasks. Most research on quantifying these biases in text and in language models has used artificial probing templates that impose fixed sentence constructions, has been conducted for English, and has ignored biases beyond gender, including intersectional ones. In our work, by contrast, we focus on detecting biases towards specific entities and adopt a cross-lingual, intersectional approach. This allows for studying more complex interdependencies, such as the relationship between a politician’s origin and the language of the analysed text, or the relationship between gender and racial bias.
Hal Daumé III, University of Maryland and Microsoft Research NYC
Title Gender, Stereotypes, and Harms
Abstract Gender is expressed and performed in a plethora of ways in the world, and reflected in complex, interconnected ways in language. I’ll discuss recent and ongoing work measuring how modern NLP models encode (some of) these expressions of gender, how those encodings reflect cultural stereotypes (and whose cultural stereotypes), and how that impacts people using these models. This will reflect joint work with a number of collaborators, including students Haozhe An, Connor Baumler, Yang Trista Cao, Eve Fleisig, Amanda Liu, and Anna Sotnikova.