Enhancing Interpretability using Human Similarity Judgements to Prune Word Embeddings

Natalia Flechas Manrique, Wanqian Bao, Aurelie Herbelot, Uri Hasson

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Interpretability methods in NLP aim to provide insights into the semantics underlying specific system architectures. Focusing on word embeddings, we present a supervised-learning method that, for a given domain (e.g., sports, professions), identifies a subset of model features (columns of the embedding space) that strongly improve prediction of human similarity judgments. We show this method keeps only 20-40% of the original embeddings, for 8 independent semantic domains, and that it retains different feature sets across domains. We then present two approaches for interpreting the semantics of the retained features. The first obtains the scores of the domain words (co-hyponyms) on the first principal component of the retained embeddings, and extracts terms whose co-occurrence with the co-hyponyms tracks these scores’ profile. This analysis reveals that humans differentiate e.g. sports based on how gender-inclusive and international they are. The second approach uses the retained sets as variables in a probing task that predicts values along 65 semantically annotated dimensions for a dataset of 535 words. The features retained for professions are best at predicting cognitive, emotional and social dimensions, whereas features retained for fruits or vegetables best predict the gustation (taste) dimension. We discuss implications for alignment between AI systems and human knowledge.
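The abstract names the ingredients of the pruning method (a supervised selector over embedding columns, scored against human similarity judgments) but not the algorithm itself. As a rough illustration of the general idea only, and not the authors' procedure, the sketch below prunes columns by greedy backward elimination, keeping the subset whose pairwise cosine similarities correlate best (Spearman) with human ratings. The elimination strategy, the stopping rule, and every name in the code (prune_columns, human_sims, and so on) are assumptions made for illustration.

```python
# Illustrative sketch only: greedy backward elimination over embedding
# columns, scored by Spearman correlation with human similarity judgments.
# This is NOT the paper's published code; all names are hypothetical.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics.pairwise import cosine_similarity

def pair_sims(emb, cols, pairs):
    """Cosine similarity for each word pair, using only the given columns."""
    sims = cosine_similarity(emb[:, cols])
    return np.array([sims[i, j] for i, j in pairs])

def prune_columns(emb, pairs, human_sims):
    """Drop columns one at a time while each drop improves the Spearman
    correlation between model similarities and human judgments."""
    kept = list(range(emb.shape[1]))
    rho, _ = spearmanr(pair_sims(emb, kept, pairs), human_sims)
    improved = True
    while improved and len(kept) > 1:
        improved, best_drop = False, None
        for c in kept:
            trial = [k for k in kept if k != c]
            trial_rho, _ = spearmanr(pair_sims(emb, trial, pairs), human_sims)
            if trial_rho > rho:  # track the best single-column drop this pass
                rho, best_drop, improved = trial_rho, c, True
        if improved:
            kept.remove(best_drop)
    return kept, rho

# Toy usage: 50 domain words, 100-d embeddings, 150 rated word pairs.
rng = np.random.default_rng(0)
emb = rng.normal(size=(50, 100))
pairs = [tuple(rng.choice(50, size=2, replace=False)) for _ in range(150)]
human_sims = rng.uniform(0.0, 1.0, size=150)
kept, rho = prune_columns(emb, pairs, human_sims)
print(f"kept {len(kept)}/{emb.shape[1]} columns, Spearman rho = {rho:.3f}")
```

Starting from the full embedding and deleting columns mirrors the "pruning" framing of the title; a forward-selection variant, or whatever supervised learner the paper actually uses, could slot into the same scoring loop.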

Original language: English (US)
Title of host publication: BlackboxNLP 2023 - Analyzing and Interpreting Neural Networks for NLP, Proceedings of the 6th Workshop
Editors: Yonatan Belinkov, Sophie Hao, Jaap Jumelet, Najoung Kim, Arya McCarthy, Hosein Mohebbi
Publisher: Association for Computational Linguistics (ACL)
Pages: 169-179
Number of pages: 11
ISBN (Electronic): 9798891760523
State: Published - 2023
Externally published: Yes
Event: 6th Workshop on Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP 2023 - Singapore, Singapore
Duration: Dec 7 2023 → …

Publication series

Name: BlackboxNLP 2023 - Analyzing and Interpreting Neural Networks for NLP, Proceedings of the 6th Workshop

Conference

Conference: 6th Workshop on Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP 2023
Country/Territory: Singapore
City: Singapore
Period: 12/7/23 → …

All Science Journal Classification (ASJC) codes

  • Computational Theory and Mathematics
  • Computer Science Applications
  • Information Systems
