TY - GEN
T1 - Portraying Large Language Models as Machines, Tools, or Companions Affects What Mental Capacities Humans Attribute to Them
AU - Chen, Allison
AU - Kim, Sunnie S.Y.
AU - Dharmasiri, Amaya
AU - Russakovsky, Olga
AU - Fan, Judith E.
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/4/26
Y1 - 2025/4/26
N2 - As large language models (LLMs) become increasingly popular and prevalent in media and daily conversations, individuals encounter different portrayals of LLMs from various sources. It is important to understand how these portrayals can shape their beliefs about LLMs, as this can have downstream impacts on adoption and usage behaviors. In this work, we investigate what mental capacities individuals attribute to LLMs after being exposed to short videos adopting one of three portrayals: mechanistic (LLMs as machines), functional (LLMs as tools), and intentional (LLMs as companions). We find that the intentional portrayal increases the attribution of mental capacities to LLMs, and that individuals tend to attribute mind-related capacities the most, followed by heart-, then body-related capacities. We discuss the implications of these findings, provide recommendations on how to portray LLMs, and outline directions for future research.
AB - As large language models (LLMs) become increasingly popular and prevalent in media and daily conversations, individuals encounter different portrayals of LLMs from various sources. It is important to understand how these portrayals can shape their beliefs about LLMs, as this can have downstream impacts on adoption and usage behaviors. In this work, we investigate what mental capacities individuals attribute to LLMs after being exposed to short videos adopting one of three portrayals: mechanistic (LLMs as machines), functional (LLMs as tools), and intentional (LLMs as companions). We find that the intentional portrayal increases the attribution of mental capacities to LLMs, and that individuals tend to attribute mind-related capacities the most, followed by heart-, then body-related capacities. We discuss the implications of these findings, provide recommendations on how to portray LLMs, and outline directions for future research.
KW - Dennett’s hierarchy
KW - Human-AI interaction
KW - Large language models
KW - Mental capacity attribution
UR - https://www.scopus.com/pages/publications/105005754125
UR - https://www.scopus.com/inward/citedby.url?scp=105005754125&partnerID=8YFLogxK
U2 - 10.1145/3706599.3719710
DO - 10.1145/3706599.3719710
M3 - Conference contribution
AN - SCOPUS:105005754125
T3 - Conference on Human Factors in Computing Systems - Proceedings
BT - CHI EA 2025 - Extended Abstracts of the 2025 CHI Conference on Human Factors in Computing Systems
PB - Association for Computing Machinery
T2 - 2025 CHI Conference on Human Factors in Computing Systems, CHI EA 2025
Y2 - 26 April 2025 through 1 May 2025
ER -