Adultification Bias in LLMs and Text-To-Image Models

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The rapid adoption of generative AI models in domains such as education, policing, and social media raises significant concerns about potential bias and safety issues, particularly along protected attributes, such as race and gender, and when interacting with minors. Given the urgency of facilitating safe interactions with AI systems, we study bias along axes of race and gender in young girls. More specifically, we focus on "adultification bias," a phenomenon in which Black girls are presumed to be more defiant, sexually intimate, and culpable than their White peers. Advances in alignment techniques show promise towards mitigating biases but vary in their coverage and effectiveness across models and bias types. Therefore, we measure explicit and implicit adultification bias in widely used LLMs and text-to-image (T2I) models, such as OpenAI, Meta, and Stability AI models. We find that LLMs exhibit explicit and implicit adultification bias against Black girls, assigning them harsher, more sexualized consequences in comparison to their White peers. Additionally, we find that T2I models depict Black girls as older and wearing more revealing clothing than their White counterparts, illustrating how adultification bias persists across modalities. We make three key contributions: (1) we measure a new form of bias in generative AI models, (2) we systematically study adultification bias across modalities, and (3) our findings emphasize that current alignment methods are insufficient for comprehensively addressing bias. Therefore, new alignment methods that address biases such as adultification are needed to ensure safe and equitable AI deployment.

Original language: English (US)
Title of host publication: FAccT 2025 - Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency
Publisher: Association for Computing Machinery, Inc
Pages: 2751-2767
Number of pages: 17
ISBN (Electronic): 9798400714825
DOIs
State: Published - Jun 23 2025
Event: 8th Annual ACM Conference on Fairness, Accountability, and Transparency, FAccT 2025 - Athens, Greece
Duration: Jun 23 2025 – Jun 26 2025

Publication series

Name: FAccT 2025 - Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency

Conference

Conference: 8th Annual ACM Conference on Fairness, Accountability, and Transparency, FAccT 2025
Country/Territory: Greece
City: Athens
Period: 6/23/25 – 6/26/25

All Science Journal Classification (ASJC) codes

  • General Business, Management and Accounting
