TY - GEN
T1 - Fairness and abstraction in sociotechnical systems
AU - Selbst, Andrew D.
AU - boyd, danah
AU - Friedler, Sorelle A.
AU - Venkatasubramanian, Suresh
AU - Vertesi, Janet
N1 - Publisher Copyright:
© 2019 Association for Computing Machinery.
PY - 2019/1/29
Y1 - 2019/1/29
AB - A key goal of the fair-ML community is to develop machine learning-based systems that, once introduced into a social context, can achieve social and legal outcomes such as fairness, justice, and due process. Bedrock concepts in computer science, such as abstraction and modular design, are used to define notions of fairness and discrimination, to produce fairness-aware learning algorithms, and to intervene at different stages of a decision-making pipeline to produce "fair" outcomes. In this paper, however, we contend that these concepts render technical interventions ineffective, inaccurate, and sometimes dangerously misguided when they enter the societal context that surrounds decision-making systems. We outline this mismatch with five "traps" that fair-ML work can fall into even as it attempts to be more context-aware in comparison to traditional data science. We draw on studies of sociotechnical systems in Science and Technology Studies to explain why such traps occur and how to avoid them. Finally, we suggest ways in which technical designers can mitigate the traps through a refocusing of design in terms of process rather than solutions, and by drawing abstraction boundaries to include social actors rather than purely technical ones.
KW - Fairness-aware Machine Learning
KW - Interdisciplinary
KW - Sociotechnical Systems
UR - http://www.scopus.com/inward/record.url?scp=85061791517&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85061791517&partnerID=8YFLogxK
U2 - 10.1145/3287560.3287598
DO - 10.1145/3287560.3287598
M3 - Conference contribution
AN - SCOPUS:85061791517
T3 - FAT* 2019 - Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency
SP - 59
EP - 68
BT - FAT* 2019 - Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency
PB - Association for Computing Machinery, Inc
T2 - 2019 ACM Conference on Fairness, Accountability, and Transparency, FAT* 2019
Y2 - 29 January 2019 through 31 January 2019
ER -