Abstract
Deep neural networks are increasingly being used in cognitive modeling as a means of deriving representations for complex stimuli such as images. While the predictive power of these networks is high, it is often not clear whether they also offer useful explanations of the task at hand. Convolutional neural network representations have been shown to be predictive of human similarity judgments for images after appropriate adaptation. However, these high-dimensional representations are difficult to interpret. Here we present a method for reducing these representations to a low-dimensional space which is still predictive of similarity judgments. We show that these low-dimensional representations also provide insightful explanations of factors underlying human similarity judgments.
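To make the general idea concrete, the sketch below is a minimal, hypothetical NumPy illustration (not the method reported in the paper): it builds a k-dimensional embedding whose inner products approximate a matrix of human similarity judgments, then fits a linear map from high-dimensional network features into that space. The feature matrix `F`, the similarity matrix `S`, and all sizes are synthetic placeholders.

```python
# Illustrative sketch only (NOT the paper's actual method): reduce
# high-dimensional network features to a k-dimensional space whose inner
# products approximate a human similarity matrix. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_feats, k = 60, 512, 6                  # hypothetical sizes
F = rng.normal(size=(n_items, n_feats))           # stand-in for CNN features per image
S = np.corrcoef(rng.normal(size=(n_items, 30)))   # stand-in similarity judgments

# 1. Classical-MDS-style target: the top-k eigenvectors of the similarity
#    matrix give a k-dimensional embedding whose dot products best match S.
vals, vecs = np.linalg.eigh((S + S.T) / 2)
top = np.argsort(vals)[::-1][:k]
target = vecs[:, top] * np.sqrt(np.clip(vals[top], 0, None))

# 2. Linear readout from network features to that low-dimensional space.
#    (In practice this would be regularized and cross-validated.)
W, *_ = np.linalg.lstsq(F, target, rcond=None)

# 3. Predicted similarities are inner products in the reduced space.
Z = F @ W
S_hat = Z @ Z.T
print("relative reconstruction error:", np.linalg.norm(S - S_hat) / np.linalg.norm(S))
```

Each column of the learned low-dimensional embedding `Z` can then be inspected directly, which is the kind of interpretability the abstract refers to; the eigendecomposition-plus-linear-readout recipe here is only one simple way to obtain such a space.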
Original language | English (US)
---|---
Pages | 2180-2186
Number of pages | 7
State | Published - 2020
Event | 42nd Annual Meeting of the Cognitive Science Society: Developing a Mind: Learning in Humans, Animals, and Machines, CogSci 2020 - Virtual, Online
Duration | Jul 29, 2020 → Aug 1, 2020
Conference

Conference | 42nd Annual Meeting of the Cognitive Science Society: Developing a Mind: Learning in Humans, Animals, and Machines, CogSci 2020
---|---
City | Virtual, Online
Period | 7/29/20 → 8/1/20
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Computer Science Applications
- Human-Computer Interaction
- Cognitive Neuroscience
Keywords
- deep learning
- dimensionality reduction
- interpretability
- neural networks
- similarity judgments