TY - CONF
T1 - Humans, AI, and Context: Understanding End-Users' Trust in a Real-World Computer Vision Application
T2 - 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023
AU - Kim, Sunnie S.Y.
AU - Watkins, Elizabeth Anne
AU - Russakovsky, Olga
AU - Fong, Ruth
AU - Monroy-Hernández, Andrés
N1 - Publisher Copyright:
© 2023 Owner/Author.
PY - 2023/6/12
Y1 - 2023/6/12
AB - Trust is an important factor in people's interactions with AI systems. However, there is a lack of empirical studies examining how real end-users trust or distrust the AI system they interact with. Most research investigates one aspect of trust in lab settings with hypothetical end-users. In this paper, we provide a holistic and nuanced understanding of trust in AI through a qualitative case study of a real-world computer vision application. We report findings from interviews with 20 end-users of a popular, AI-based bird identification app where we inquired about their trust in the app from many angles. We find participants perceived the app as trustworthy and trusted it, but selectively accepted app outputs after engaging in verification behaviors, and decided against app adoption in certain high-stakes scenarios. We also find domain knowledge and context are important factors for trust-related assessment and decision-making. We discuss the implications of our findings and provide recommendations for future research on trust in AI.
KW - Case Study
KW - Computer Vision
KW - Human-AI Interaction
KW - Trust in AI
UR - http://www.scopus.com/inward/record.url?scp=85163607098&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85163607098&partnerID=8YFLogxK
DO - 10.1145/3593013.3593978
M3 - Conference contribution
AN - SCOPUS:85163607098
T3 - ACM International Conference Proceeding Series
SP - 77
EP - 88
BT - Proceedings of the 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023
PB - Association for Computing Machinery
Y2 - 12 June 2023 through 15 June 2023
ER -