Fostering Appropriate Reliance on Large Language Models: The Role of Explanations, Sources, and Inconsistencies

Sunnie S.Y. Kim, Jennifer Wortman Vaughan, Q. Vera Liao, Tania Lombrozo, Olga Russakovsky

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Large language models (LLMs) can produce erroneous responses that sound fluent and convincing, raising the risk that users will rely on these responses as if they were correct. Mitigating such overreliance is a key challenge. Through a think-aloud study in which participants use an LLM-infused application to answer objective questions, we identify several features of LLM responses that shape users' reliance: explanations (supporting details for answers), inconsistencies in explanations, and sources. Through a large-scale, pre-registered, controlled experiment (N=308), we isolate and study the effects of these features on users' reliance, accuracy, and other measures. We find that the presence of explanations increases reliance on both correct and incorrect responses. However, we observe less reliance on incorrect responses when sources are provided or when explanations exhibit inconsistencies. We discuss the implications of these findings for fostering appropriate reliance on LLMs.

Original language: English (US)
Title of host publication: CHI 2025 - Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
Publisher: Association for Computing Machinery
ISBN (Electronic): 9798400713941
DOIs
State: Published - Apr 26 2025
Event: 2025 CHI Conference on Human Factors in Computing Systems, CHI 2025 - Yokohama, Japan
Duration: Apr 26 2025 - May 1 2025

Publication series

Name: Conference on Human Factors in Computing Systems - Proceedings

Conference

Conference: 2025 CHI Conference on Human Factors in Computing Systems, CHI 2025
Country/Territory: Japan
City: Yokohama
Period: 4/26/25 - 5/1/25

All Science Journal Classification (ASJC) codes

  • Human-Computer Interaction
  • Computer Graphics and Computer-Aided Design
  • Software

Keywords

  • Explanations
  • Human-AI interaction
  • Inconsistencies
  • Large language models
  • Overreliance
  • Question answering
  • Sources
