TY - GEN
T1 - Anthropomorphization of AI: Opportunities and Risks
T2 - 5th Natural Legal Language Processing Workshop, NLLP 2023
AU - Deshpande, Ameet
AU - Rajpurohit, Tanmay
AU - Narasimhan, Karthik
AU - Kalyan, Ashwin
N1 - Publisher Copyright:
© 2023 Association for Computational Linguistics.
PY - 2023
Y1 - 2023
AB - Anthropomorphization, the tendency to attribute human-like traits to non-human entities, is prevalent in many social contexts: children anthropomorphize toys and adults do so with brands. It is also a versatile tool in science, with behavioral psychology and evolutionary biology meticulously documenting its consequences. With the widespread adoption of AI systems, and the push to make them human-like through alignment techniques, human voice, and avatars, users' tendency to anthropomorphize them increases significantly. We take a dyadic approach to understanding this phenomenon with large language models (LLMs) by studying (1) the objective legal implications, analyzed through the lens of the recent Blueprint for an AI Bill of Rights, and (2) the subtle psychological aspects of customization and anthropomorphization. We find that anthropomorphized LLMs customized for different user bases violate multiple provisions in the legislative blueprint and create confusion around corporate personhood. In addition, we point out that anthropomorphization of LLMs affects the influence they can have on their users, thus establishing the potential for manipulation and negative influence. With LLMs being hyper-personalized for vulnerable groups such as children and patients, we propose a conservative strategy for the cautious use of anthropomorphization to improve the trustworthiness of AI systems.
UR - http://www.scopus.com/inward/record.url?scp=85185007697&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85185007697&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85185007697
T3 - NLLP 2023 - Natural Legal Language Processing Workshop 2023, Proceedings of the Workshop
SP - 1
EP - 7
BT - NLLP 2023 - Natural Legal Language Processing Workshop 2023, Proceedings of the Workshop
A2 - Preotiuc-Pietro, Daniel
A2 - Goanta, Catalina
A2 - Chalkidis, Ilias
A2 - Barrett, Leslie
A2 - Spanakis, Gerasimos
A2 - Aletras, Nikolaos
PB - Association for Computational Linguistics (ACL)
Y2 - 7 December 2023
ER -