TY - GEN
T1 - Gender Artifacts in Visual Datasets
AU - Meister, Nicole
AU - Zhao, Dora
AU - Wang, Angelina
AU - Ramaswamy, Vikram V.
AU - Fong, Ruth
AU - Russakovsky, Olga
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
AB - Gender biases are known to exist within large-scale visual datasets and can be reflected or even amplified in downstream models. Many prior works have proposed methods for mitigating gender biases, often by attempting to remove gender expression information from images. To understand the feasibility and practicality of these approaches, we investigate what "gender artifacts" exist in large-scale visual datasets. We define a "gender artifact" as a visual cue correlated with gender, focusing specifically on cues that are learnable by a modern image classifier and have an interpretable human corollary. Through our analyses, we find that gender artifacts are ubiquitous in the COCO and OpenImages datasets, occurring everywhere from low-level information (e.g., the mean value of the color channels) to higher-level image composition (e.g., pose and location of people). Further, bias mitigation methods that attempt to remove gender actually remove more information from the scene than the person. Given the prevalence of gender artifacts, we claim that attempts to remove these artifacts from such datasets are largely infeasible as certain removed artifacts may be necessary for the downstream task of object recognition. Instead, the responsibility lies with researchers and practitioners to be aware that the distribution of images within datasets is highly gendered and hence develop fairness-aware methods which are robust to these distributional shifts across groups.
UR - http://www.scopus.com/inward/record.url?scp=85180395461&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85180395461&partnerID=8YFLogxK
DO - 10.1109/ICCV51070.2023.00446
M3 - Conference contribution
AN - SCOPUS:85180395461
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 4814
EP - 4825
BT - Proceedings - 2023 IEEE/CVF International Conference on Computer Vision, ICCV 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 IEEE/CVF International Conference on Computer Vision, ICCV 2023
Y2 - 2 October 2023 through 6 October 2023
ER -