TY - JOUR
T1 - High-contrast “gaudy” images improve the training of deep neural network models of visual cortex
AU - Cowley, Benjamin R.
AU - Pillow, Jonathan W.
N1 - Funding Information:
We deeply thank Patricia L. Stan at the University of Pittsburgh and Matthew A. Smith at Carnegie Mellon University for collecting the real neural dataset (Fig. 5). Some elements of figures were created with BioRender.com. B.R.C. was supported by a CV Starr Foundation Fellowship. J.W.P. was supported by grants from the Simons Collaboration on the Global Brain (SCGB AWD543027) and the NIH BRAIN initiative (NS104899 and R01EB026946).
Publisher Copyright:
© 2020 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2020
Y1 - 2020
AB - A key challenge in understanding the sensory transformations of the visual system is to obtain a highly predictive model that maps natural images to neural responses. Deep neural networks (DNNs) provide a promising candidate for such a model. However, DNNs require orders of magnitude more training data than neuroscientists can collect because experimental recording time is severely limited. This motivates us to find images to train highly-predictive DNNs with as little training data as possible. We propose high-contrast, binarized versions of natural images—termed gaudy images—to efficiently train DNNs to predict higher-order visual cortical responses. In simulation experiments and analyses of real neural data, we find that training DNNs with gaudy images substantially reduces the number of training images needed to accurately predict responses to natural images. We also find that gaudy images, chosen before training, outperform images chosen during training by active learning algorithms. Thus, gaudy images overemphasize features of natural images that are the most important for efficiently training DNNs. We believe gaudy images will aid in the modeling of visual cortical neurons, potentially opening new scientific questions about visual processing.
UR - http://www.scopus.com/inward/record.url?scp=85108448904&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85108448904&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85108448904
SN - 1049-5258
VL - 2020-December
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
T2 - 34th Conference on Neural Information Processing Systems, NeurIPS 2020
Y2 - 6 December 2020 through 12 December 2020
ER -