Navid Rekabsaz, Markus Schedl,
"Do Neural Ranking Models Intensify Gender Bias?",
Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2065–2068, July 2020
Concerns regarding the footprint of societal biases in information retrieval (IR) systems have been raised in several previous studies. In this work, we examine various recent IR models from the perspective of the degree of gender bias in their retrieval results. To this end, we first provide a bias measurement framework which includes two metrics to quantify the degree of the unbalanced presence of gender-related concepts in a given IR model's ranking list. To examine IR models by means of the framework, we create a dataset of non-gendered queries, selected by human annotators. Applying these queries to the MS MARCO Passage retrieval collection, we then measure the gender bias of a BM25 model and several recent neural ranking models. The results show that while all models are strongly biased toward male, the neural models, and in particular the ones based on contextualized embedding models, significantly intensify gender bias. Our experiments also show an overall increase in the gender bias of neural models when they exploit transfer learning, namely when they use (already biased) pre-trained embeddings.
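To illustrate the kind of measurement the abstract describes — quantifying the unbalanced presence of gender-related concepts in a ranking list — the following is a minimal sketch. The word lists, the DCG-style rank discount, and the function names are assumptions made for illustration; they are not the paper's exact metric definitions.

```python
# Sketch of a rank-based gender-bias measure over a ranked list of documents.
# Word lists and discounting scheme are illustrative assumptions.
import math

MALE_TERMS = {"he", "him", "his", "man", "men"}        # assumed example word list
FEMALE_TERMS = {"she", "her", "hers", "woman", "women"}  # assumed example word list

def doc_gender_magnitude(doc: str, terms: set) -> float:
    """Fraction of a document's tokens that belong to a gendered word list."""
    tokens = doc.lower().split()
    if not tokens:
        return 0.0
    return sum(t in terms for t in tokens) / len(tokens)

def ranking_bias(ranked_docs: list, cutoff: int = 10) -> float:
    """Rank-discounted difference between male and female term presence.

    Positive values indicate a ranking skewed toward male concepts,
    negative values a skew toward female concepts.
    """
    score = 0.0
    for rank, doc in enumerate(ranked_docs[:cutoff], start=1):
        discount = 1.0 / math.log2(rank + 1)  # DCG-style discount (an assumption)
        score += discount * (doc_gender_magnitude(doc, MALE_TERMS)
                             - doc_gender_magnitude(doc, FEMALE_TERMS))
    return score
```

Under this sketch, a ranking whose top documents mention mostly male terms yields a positive score, and comparing such scores across a BM25 run and a neural run for the same non-gendered query would indicate which model's results are more skewed.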