Large Language Models Can Argue in Convincing Ways About Politics, But Humans Dislike AI Authors: Implications for Governance

Alexis Palmer, Arthur Spirling

Research output: Contribution to journal › Comment/debate › peer-review

4 Scopus citations

Abstract

All politics relies on rhetorical appeals, and the ability to make arguments is considered perhaps uniquely human. But as recent times have seen successful large language model (LLM) applications to similar endeavours, we explore whether these approaches can out-compete humans in making appeals for/against various positions in US politics. We curate responses from crowdsourced workers and an LLM and place them in competition with one another. Human (crowd) judges make decisions about the relative strength of their (human v machine) efforts. We have several empirical ‘possibility’ results. First, LLMs can produce novel arguments that convince independent judges at least on a par with human efforts. Yet when informed about an orator’s true identity, judges show a preference for human over LLM arguments. This may suggest voters view such models as potentially dangerous; we think politicians should be aware of related ‘liar’s dividend’ concerns.

Original language: English (US)
Pages (from-to): 281-291
Number of pages: 11
Journal: Political Science
Volume: 75
Issue number: 3
DOIs
State: Published - 2023

All Science Journal Classification (ASJC) codes

  • Sociology and Political Science

Keywords

  • artificial intelligence
  • large language models
  • political debate
  • political methodology
  • rhetoric
