On the conversational persuasiveness of GPT-4

Francesco Salvi, Manoel Horta Ribeiro, Riccardo Gallotti, Robert West

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

Early work has found that large language models (LLMs) can generate persuasive content. However, evidence on whether they can also personalize arguments to individual attributes remains limited, despite this capability being crucial for assessing the risk of misuse. This preregistered study examines AI-driven persuasion in a controlled setting, where participants engaged in short multiround debates. Participants were randomly assigned to 1 of 12 conditions in a 2 × 2 × 3 design: (1) human or GPT-4 debate opponent; (2) opponent with or without access to sociodemographic participant data; (3) debate topic of low, medium, or high opinion strength. In debate pairs where AI and humans were not equally persuasive, GPT-4 with personalization was more persuasive 64.4% of the time (81.2% relative increase in odds of higher post-debate agreement; 95% confidence interval [+26.0%, +160.7%], P < 0.01; N = 900). Our findings highlight the power of LLM-based persuasion and have implications for the governance and design of online platforms.
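
For readers less familiar with odds-based effect sizes, the short sketch below illustrates how the figures reported in the abstract can be read: an X% relative increase in odds corresponds to an odds ratio of 1 + X/100. This is illustrative arithmetic only, not the authors' analysis code, and it assumes the reported percentages are relative increases in the odds of higher post-debate agreement, as stated in the abstract.

    # Illustrative only: convert a relative increase in odds to an odds ratio.
    # The percentages below are copied from the abstract; the underlying model
    # and data are not reproduced here.
    def odds_ratio_from_relative_increase(pct_increase: float) -> float:
        """An X% relative increase in odds corresponds to an odds ratio of 1 + X/100."""
        return 1.0 + pct_increase / 100.0

    point = odds_ratio_from_relative_increase(81.2)   # ~1.81 (point estimate)
    lower = odds_ratio_from_relative_increase(26.0)   # ~1.26 (95% CI lower bound)
    upper = odds_ratio_from_relative_increase(160.7)  # ~2.61 (95% CI upper bound)
    print(point, lower, upper)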

Original language: English (US)
Article number: 68
Journal: Nature Human Behaviour
DOIs
State: Accepted/In press - 2025

All Science Journal Classification (ASJC) codes

  • Social Psychology
  • Experimental and Cognitive Psychology
  • Behavioral Neuroscience

