TY - GEN
T1 - Humans, AI, and Context
T2 - 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023
AU - Kim, Sunnie S.Y.
AU - Watkins, Elizabeth Anne
AU - Russakovsky, Olga
AU - Fong, Ruth
AU - Monroy-Hernández, Andrés
N1 - Funding Information:
We foremost thank our participants for generously sharing their time and experiences. We also thank Tristen Godfrey, Dyanne Ahn, and Klea Tryfoni for their help in the interview transcription. Finally, we thank the anonymous reviewers and members of the Princeton HCI Lab and the Princeton Visual AI Lab (especially Angelina Wang, Vikram V. Ramaswamy, Amna Liaqat, and Fannie Liu) for their helpful and thoughtful feedback. This material is based upon work partially supported by the National Science Foundation (NSF) under Grants No. 1763642 and 2145198 awarded to OR. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. We also acknowledge support from the Princeton SEAS Howard B. Wentz, Jr. Junior Faculty Award (OR), Princeton SEAS Project X Fund (RF, OR), Princeton Center for Information Technology Policy (EW), Open Philanthropy (RF, OR), and NSF Graduate Research Fellowship (SK).
Publisher Copyright:
© 2023 Owner/Author.
PY - 2023/6/12
Y1 - 2023/6/12
N2 - Trust is an important factor in people's interactions with AI systems. However, there is a lack of empirical studies examining how real end-users trust or distrust the AI system they interact with. Most research investigates one aspect of trust in lab settings with hypothetical end-users. In this paper, we provide a holistic and nuanced understanding of trust in AI through a qualitative case study of a real-world computer vision application. We report findings from interviews with 20 end-users of a popular, AI-based bird identification app where we inquired about their trust in the app from many angles. We find participants perceived the app as trustworthy and trusted it, but selectively accepted app outputs after engaging in verification behaviors, and decided against app adoption in certain high-stakes scenarios. We also find domain knowledge and context are important factors for trust-related assessment and decision-making. We discuss the implications of our findings and provide recommendations for future research on trust in AI.
AB - Trust is an important factor in people's interactions with AI systems. However, there is a lack of empirical studies examining how real end-users trust or distrust the AI system they interact with. Most research investigates one aspect of trust in lab settings with hypothetical end-users. In this paper, we provide a holistic and nuanced understanding of trust in AI through a qualitative case study of a real-world computer vision application. We report findings from interviews with 20 end-users of a popular, AI-based bird identification app where we inquired about their trust in the app from many angles. We find participants perceived the app as trustworthy and trusted it, but selectively accepted app outputs after engaging in verification behaviors, and decided against app adoption in certain high-stakes scenarios. We also find domain knowledge and context are important factors for trust-related assessment and decision-making. We discuss the implications of our findings and provide recommendations for future research on trust in AI.
KW - Case Study
KW - Computer Vision
KW - Human-AI Interaction
KW - Trust in AI
UR - http://www.scopus.com/inward/record.url?scp=85163607098&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85163607098&partnerID=8YFLogxK
U2 - 10.1145/3593013.3593978
DO - 10.1145/3593013.3593978
M3 - Conference contribution
AN - SCOPUS:85163607098
T3 - ACM International Conference Proceeding Series
SP - 77
EP - 88
BT - Proceedings of the 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023
PB - Association for Computing Machinery
Y2 - 12 June 2023 through 15 June 2023
ER -