TY - JOUR
T1 - Temporal structure of natural language processing in the human brain corresponds to layered hierarchy of large language models
AU - Goldstein, Ariel
AU - Ham, Eric
AU - Schain, Mariano
AU - Nastase, Samuel A.
AU - Aubrey, Bobbi
AU - Zada, Zaid
AU - Grinstein-Dabush, Avigail
AU - Gazula, Harshvardhan
AU - Feder, Amir
AU - Doyle, Werner
AU - Devore, Sasha
AU - Dugan, Patricia
AU - Friedman, Daniel
AU - Brenner, Michael
AU - Hassidim, Avinatan
AU - Matias, Yossi
AU - Devinsky, Orrin
AU - Siegelman, Noam
AU - Flinker, Adeen
AU - Levy, Omer
AU - Reichart, Roi
AU - Hasson, Uri
N1 - Publisher Copyright:
© The Author(s) 2025.
PY - 2025/12
Y1 - 2025/12
N2 - Large Language Models (LLMs) offer a framework for understanding language processing in the human brain. Unlike traditional models, LLMs represent words and context through layered numerical embeddings. Here, we demonstrate that LLMs’ layer hierarchy aligns with the temporal dynamics of language comprehension in the brain. Using electrocorticography (ECoG) data from participants listening to a 30-minute narrative, we show that deeper LLM layers correspond to later brain activity, particularly in Broca’s area and other language-related regions. We extract contextual embeddings from GPT-2 XL and Llama-2 and use linear models to predict neural responses across time. Our results reveal a strong correlation between model depth and the brain’s temporal receptive window during comprehension. We also compare LLM-based predictions with symbolic approaches, highlighting the advantages of deep learning models in capturing brain dynamics. We release our aligned neural and linguistic dataset as a public benchmark to test competing theories of language processing.
AB - Large Language Models (LLMs) offer a framework for understanding language processing in the human brain. Unlike traditional models, LLMs represent words and context through layered numerical embeddings. Here, we demonstrate that LLMs’ layer hierarchy aligns with the temporal dynamics of language comprehension in the brain. Using electrocorticography (ECoG) data from participants listening to a 30-minute narrative, we show that deeper LLM layers correspond to later brain activity, particularly in Broca’s area and other language-related regions. We extract contextual embeddings from GPT-2 XL and Llama-2 and use linear models to predict neural responses across time. Our results reveal a strong correlation between model depth and the brain’s temporal receptive window during comprehension. We also compare LLM-based predictions with symbolic approaches, highlighting the advantages of deep learning models in capturing brain dynamics. We release our aligned neural and linguistic dataset as a public benchmark to test competing theories of language processing.
UR - https://www.scopus.com/pages/publications/105023055044
UR - https://www.scopus.com/pages/publications/105023055044#tab=citedBy
U2 - 10.1038/s41467-025-65518-0
DO - 10.1038/s41467-025-65518-0
M3 - Article
C2 - 41298357
AN - SCOPUS:105023055044
SN - 2041-1723
VL - 16
JO - Nature Communications
JF - Nature Communications
IS - 1
M1 - 10529
ER -