Delayed impact of fair machine learning

Lydia T. Liu, Sarah Dean, Esther Rolf, Max Simchowitz, Moritz Hardt

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

23 Scopus citations


Static classification has been the predominant focus of the study of fairness in machine learning. While most models do not consider how decisions change populations over time, it is conventional wisdom that fairness criteria promote the long-term well-being of the groups they aim to protect. This work studies the interaction of static fairness criteria with temporal indicators of well-being. We present a simple one-step feedback model in which common criteria do not generally promote improvement over time, and may in fact cause harm. Our results highlight the importance of temporal modeling in the evaluation of fairness criteria, suggesting a range of new challenges and trade-offs.
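The one-step feedback effect described in the abstract can be illustrated with a minimal simulation. The sketch below is not the paper's model; it is a hypothetical lending scenario with made-up numbers: a lender accepts applicants above a score threshold, accepted applicants' scores rise on repayment and fall on default, and a lower threshold (as might be forced by a parity-style constraint) can reduce a group's mean score even though more of its members are accepted.

```python
# Hypothetical one-step feedback sketch (all numbers illustrative,
# not taken from the paper). Repayment probability equals the score.
import numpy as np

rng = np.random.default_rng(0)

# Scores in [0, 1] for a lower-scoring group (illustrative distribution).
scores = rng.beta(2, 5, size=10_000)

def mean_score_change(scores, threshold, gain=0.1, loss=0.2):
    """Expected one-step change in the group's mean score when everyone
    at or above `threshold` is accepted; repayment probability = score."""
    accepted = scores >= threshold
    # Expected change per accepted individual: p * gain - (1 - p) * loss.
    delta = np.where(accepted, scores * gain - (1 - scores) * loss, 0.0)
    return delta.mean()

# A conservative threshold accepts mostly likely repayers, so the
# group's mean score can only move up...
print(mean_score_change(scores, threshold=0.7))
# ...while a much lower threshold accepts many likely defaulters,
# and the group's mean score falls: acceptance without improvement.
print(mean_score_change(scores, threshold=0.2))
```

In this toy setting the harm arises exactly as the abstract suggests: a criterion that only equalizes selection rates ignores how the selection feeds back into the scores themselves.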

Original language: English (US)
Title of host publication: Proceedings of the 28th International Joint Conference on Artificial Intelligence, IJCAI 2019
Editors: Sarit Kraus
Publisher: International Joint Conferences on Artificial Intelligence
Number of pages: 5
ISBN (Electronic): 9780999241141
State: Published - 2019
Externally published: Yes
Event: 28th International Joint Conference on Artificial Intelligence, IJCAI 2019 - Macao, China
Duration: Aug 10 2019 – Aug 16 2019

Publication series

Name: IJCAI International Joint Conference on Artificial Intelligence
ISSN (Print): 1045-0823


Conference: 28th International Joint Conference on Artificial Intelligence, IJCAI 2019

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence

