Probing out-of-distribution generalization in machine learning for materials

Kangming Li, Andre Niyongabo Rubungo, Xiangyun Lei, Daniel Persaud, Kamal Choudhary, Brian DeCost, Adji Bousso Dieng, Jason Hattrick-Simpers

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

Scientific machine learning (ML) aims to develop generalizable models, yet assessments of generalizability often rely on heuristics. Here, we demonstrate in the materials science setting that heuristic evaluations lead to biased conclusions about ML generalizability and the benefits of neural scaling, through evaluations of out-of-distribution (OOD) tasks involving unseen chemistry or structural symmetries. Surprisingly, many tasks show good performance across models, including boosted trees. However, analysis of the materials representation space shows that most test data reside within regions well covered by training data, while poorly performing tasks involve data outside the training domain. For these challenging tasks, increasing training set size or training time yields limited or even adverse effects, contrary to traditional neural scaling trends. Our findings highlight that most OOD tests reflect interpolation, not true extrapolation, leading to overestimates of generalizability and scaling benefits. This underscores the need for rigorously challenging OOD benchmarks.
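The abstract's central check can be illustrated with a small sketch (not the authors' code; the data, feature dimensions, and held-out element are hypothetical): build a "leave-one-chemistry-out" OOD split, then measure each held-out point's distance to its nearest training point in the representation space. Test points that sit close to training data are being interpolated despite the "OOD" label; only distant points probe true extrapolation.

```python
import numpy as np

# Hypothetical illustration: a leave-one-chemistry-out split plus a
# nearest-neighbor coverage check in a toy 8-dimensional feature space.
rng = np.random.default_rng(0)

# Toy dataset: (constituent elements, feature vector) per material.
# The Li-containing compounds are placed in a separate cluster so the
# held-out chemistry genuinely lies outside the training region here.
materials = [
    ({"Fe", "O"},  rng.normal(0.0, 1.0, 8)),
    ({"Fe", "Ni"}, rng.normal(0.0, 1.0, 8)),
    ({"Cu", "O"},  rng.normal(0.0, 1.0, 8)),
    ({"Li", "Co"}, rng.normal(5.0, 1.0, 8)),
    ({"Li", "O"},  rng.normal(5.0, 1.0, 8)),
]

held_out_element = "Li"  # the "unseen chemistry" OOD task
X_train = np.stack([x for els, x in materials if held_out_element not in els])
X_test = np.stack([x for els, x in materials if held_out_element in els])

# Distance from each test point to its nearest training point: values on
# the order of the train-train spacing indicate interpolation; much
# larger values indicate extrapolation beyond the training domain.
d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=-1)
nearest = d.min(axis=1)
print(nearest.round(2))
```

In practice the features would come from a learned or hand-crafted materials representation rather than random draws, but the coverage diagnostic is the same.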

Original language: English (US)
Article number: 9
Journal: Communications Materials
Volume: 6
Issue number: 1
DOIs
State: Published - Dec 2025

All Science Journal Classification (ASJC) codes

  • General Materials Science
  • Mechanics of Materials

