Leveraging Large Language Models to Estimate Clinically Relevant Psychological Constructs in Psychotherapy Transcripts

Mostafa Abdou, Razia S. Sahi, Thomas D. Hull, Erik C. Nook, Nathaniel D. Daw

Research output: Contribution to journal › Article › peer-review

Abstract

Developing precise, innocuous markers of psychopathology and the processes that foster effective treatment would greatly advance the field’s ability to detect and intervene on psychopathology. However, a central challenge in this area is that both assessment and treatment are conducted primarily in natural language, a medium that makes quantitative measurement difficult. Although recent advances have been made, much existing research in this area has been limited by reliance on previous-generation psycholinguistic tools. Here we build on previous work that identified a linguistic measure of “psychological distancing” (that is, viewing a negative situation as separated from oneself) in client language, which was associated with improved emotion regulation in laboratory settings and treatment progress in real-world therapeutic transcripts (Nook et al., 2017, 2022). However, this formulation was based on context-insensitive, word-count-based measures of distancing (pronoun person and verb tense), which limits the ability to detect more abstract expressions of psychological distance, such as counterfactual or conditional statements. This approach also leaves open many questions about how therapists’ language, which is likely subtler, can effectively guide clients toward increased psychological distance. We address these gaps by introducing the use of appropriately prompted large language models (LLMs) to measure linguistic distance, and we compare these results to those obtained using traditional word-counting techniques. Our results show that LLMs offer a more nuanced and context-sensitive approach to assessing language, significantly enhancing our ability to model the relations between linguistic distance and symptoms. Moreover, this approach enables us to expand the scope of analysis beyond client language to provide insight into how therapists’ language relates to client outcomes. Specifically, the LLM was able to detect ways in which a therapist’s language encouraged a client to adopt distanced perspectives, rather than simply detecting distancing in the therapist’s own language. This measure also reliably tracked the severity of patient symptoms, highlighting the potential of LLM-powered linguistic analysis to deepen our understanding of therapeutic processes.
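To make the contrast concrete, the sketch below illustrates the two measurement strategies the abstract describes: a traditional word-count proxy for linguistic distancing (based on first-person pronouns and present-tense markers) versus a rating prompt that could be sent to an LLM. The word lists, prompt wording, scale, and function names are illustrative assumptions, not the authors' actual materials or code.

```python
# Minimal sketch (not the authors' code): contrasting a word-count proxy for
# linguistic distancing with an LLM prompt that rates the same utterance.
# Category word lists, prompt text, and the 1-5 scale are assumptions.

import re

# Word-count proxy: immersion (the opposite of distancing) is indexed by
# first-person-singular pronouns and present-tense markers (LIWC-style categories).
FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}
PRESENT_TENSE_MARKERS = {"am", "is", "are", "do", "does", "feel", "feels"}

def word_count_immersion(utterance: str) -> float:
    """Return an immersion score (higher = less distanced) per 100 words."""
    tokens = re.findall(r"[a-z']+", utterance.lower())
    if not tokens:
        return 0.0
    hits = sum(t in FIRST_PERSON_SINGULAR or t in PRESENT_TENSE_MARKERS for t in tokens)
    return 100.0 * hits / len(tokens)

def build_distancing_prompt(utterance: str) -> str:
    """Assemble a rating prompt for an LLM; the reply would be parsed as an integer.

    The actual model call would go through whichever chat-completion API is
    available and is omitted here.
    """
    return (
        "You will read one client utterance from a therapy session.\n"
        "Rate how psychologically distanced the speaker is from the negative\n"
        "situation they describe, from 1 (fully immersed: first-person,\n"
        "present-tense, concrete) to 5 (highly distanced: third-person, past or\n"
        "hypothetical framing, abstract or conditional language).\n"
        "Respond with a single integer.\n\n"
        f'Utterance: "{utterance}"'
    )

if __name__ == "__main__":
    example = "I feel like everything is falling apart right now."
    print("word-count immersion score:", word_count_immersion(example))
    print(build_distancing_prompt(example))
```

Note that the word-count proxy only sees surface tokens, whereas the prompted rating can, in principle, credit counterfactual or conditional phrasings ("if I had said no...") that contain no third-person pronouns or past-tense verbs at all, which is the gap the paper targets.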

Original language: English (US)
Pages (from-to): 187-209
Number of pages: 23
Journal: Computational Psychiatry
Volume: 9
Issue number: 1
DOIs
State: Published - 2025

All Science Journal Classification (ASJC) codes

  • Psychology (miscellaneous)

Keywords

  • anxiety
  • computational modelling
  • depression
  • language models
  • linguistic distancing

