TY - JOUR
T1 - Towards fairness in visual recognition: Effective strategies for bias mitigation
T2 - 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020
AU - Wang, Zeyu
AU - Qinami, Klint
AU - Karakozis, Ioannis Christos
AU - Genova, Kyle
AU - Nair, Prem
AU - Hata, Kenji
AU - Russakovsky, Olga
N1 - Funding Information:
This work is partially supported by the National Science Foundation under Grant No. 1763642, by Google Cloud, and by the Princeton SEAS Yang Family Innovation award. Thank you to Arvind Narayanan and to members of Princeton’s Fairness in AI reading group for great discussions.
Publisher Copyright:
©2020 IEEE.
PY - 2020
Y1 - 2020
N2 - Computer vision models learn to perform a task by capturing relevant statistics from training data. It has been shown that models learn spurious age, gender, and race correlations when trained for seemingly unrelated tasks like activity recognition or image captioning. Various mitigation techniques have been presented to prevent models from utilizing or learning such biases. However, there has been little systematic comparison between these techniques. We design a simple but surprisingly effective visual recognition benchmark for studying bias mitigation. Using this benchmark, we provide a thorough analysis of a wide range of techniques. We highlight the shortcomings of popular adversarial training approaches for bias mitigation, propose a simple but similarly effective alternative to the inference-time Reducing Bias Amplification method of Zhao et al., and design a domain-independent training technique that outperforms all other methods. Finally, we validate our findings on the attribute classification task in the CelebA dataset, where attribute presence is known to be correlated with the gender of people in the image, and demonstrate that the proposed technique is effective at mitigating real-world gender bias.
AB - Computer vision models learn to perform a task by capturing relevant statistics from training data. It has been shown that models learn spurious age, gender, and race correlations when trained for seemingly unrelated tasks like activity recognition or image captioning. Various mitigation techniques have been presented to prevent models from utilizing or learning such biases. However, there has been little systematic comparison between these techniques. We design a simple but surprisingly effective visual recognition benchmark for studying bias mitigation. Using this benchmark, we provide a thorough analysis of a wide range of techniques. We highlight the shortcomings of popular adversarial training approaches for bias mitigation, propose a simple but similarly effective alternative to the inference-time Reducing Bias Amplification method of Zhao et al., and design a domain-independent training technique that outperforms all other methods. Finally, we validate our findings on the attribute classification task in the CelebA dataset, where attribute presence is known to be correlated with the gender of people in the image, and demonstrate that the proposed technique is effective at mitigating real-world gender bias.
UR - http://www.scopus.com/inward/record.url?scp=85094624899&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85094624899&partnerID=8YFLogxK
U2 - 10.1109/CVPR42600.2020.00894
DO - 10.1109/CVPR42600.2020.00894
M3 - Conference article
AN - SCOPUS:85094624899
SN - 1063-6919
SP - 8916
EP - 8925
JO - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
JF - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
M1 - 9156668
Y2 - 14 June 2020 through 19 June 2020
ER -