Universal linguistic inductive biases via meta-learning

R. Thomas McCoy, Erin Grant, Paul Smolensky, Thomas L. Griffiths, Tal Linzen

Research output: Contribution to conference › Paper › peer-review

Abstract

How do learners acquire languages from the limited data available to them? This process must involve some inductive biases (factors that affect how a learner generalizes), but it is unclear which inductive biases can explain observed patterns in language acquisition. To facilitate computational modeling aimed at addressing this question, we introduce a framework for giving particular linguistic inductive biases to a neural network model; such a model can then be used to empirically explore the effects of those inductive biases. This framework disentangles universal inductive biases, which are encoded in the initial values of a neural network's parameters, from non-universal factors, which the neural network must learn from data in a given language. The initial state that encodes the inductive biases is found with meta-learning, a technique through which a model discovers how to acquire new languages more easily via exposure to many possible languages. By controlling the properties of the languages that are used during meta-learning, we can control the inductive biases that meta-learning imparts. We demonstrate this framework with a case study based on syllable structure. First, we specify the inductive biases that we intend to give our model, and then we translate those inductive biases into a space of languages from which a model can meta-learn. Finally, using existing analysis techniques, we verify that our approach has imparted the linguistic inductive biases that it was intended to impart.
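The abstract's core idea, meta-learning an initial parameter state that encodes inductive biases, is typically implemented with model-agnostic meta-learning (MAML; Finn et al., 2017). The following is a minimal, hypothetical PyTorch sketch of that scheme, not the authors' implementation: sample_language() is an illustrative stand-in for a typologically constrained distribution over languages, and the binary well-formedness task, network architecture, and hyperparameters are all assumptions made for the example. Requires PyTorch >= 2.0 for torch.func.

    import torch
    import torch.nn as nn
    from torch.func import functional_call

    def sample_language(seq_len=6, vocab=4):
        # Hypothetical stand-in: each "language" is a random linear
        # well-formedness rule over one-hot-encoded strings. A real study
        # would instead sample from a constrained space of languages
        # (e.g., attested syllable-structure patterns) so that the
        # meta-learned initialization encodes the intended biases.
        rule = torch.randn(seq_len * vocab)
        def draw(n):
            strings = torch.randint(vocab, (n, seq_len))
            x = nn.functional.one_hot(strings, vocab).float().view(n, -1)
            y = (x @ rule > 0).float().unsqueeze(1)  # 1 = "well-formed"
            return x, y
        return draw

    model = nn.Sequential(nn.Linear(24, 64), nn.ReLU(), nn.Linear(64, 1))
    meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    inner_lr = 0.1  # assumed rate for within-language adaptation

    for step in range(2000):          # outer loop: shape the initialization
        meta_opt.zero_grad()
        meta_loss = 0.0
        for _ in range(4):            # meta-batch of sampled languages
            draw = sample_language()
            x_tr, y_tr = draw(32)     # "exposure" data for this language
            x_te, y_te = draw(32)     # held-out data, same language
            # Inner loop: one gradient step of adaptation from the shared
            # initialization to this particular language.
            params = dict(model.named_parameters())
            inner_loss = loss_fn(functional_call(model, params, (x_tr,)),
                                 y_tr)
            grads = torch.autograd.grad(inner_loss, list(params.values()),
                                        create_graph=True)
            adapted = {k: p - inner_lr * g
                       for (k, p), g in zip(params.items(), grads)}
            # Outer objective: generalization of the *adapted* model, so
            # the initialization is optimized for acquiring new languages.
            meta_loss = meta_loss + loss_fn(
                functional_call(model, adapted, (x_te,)), y_te)
        meta_loss.backward()  # second-order MAML via create_graph=True
        meta_opt.step()

After meta-training, the model's parameters serve as the initial state that encodes the meta-learned inductive biases; acquiring a new language then amounts to ordinary gradient descent from that initialization on data from that language alone.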

Original language: English (US)
Pages: 737-743
Number of pages: 7
State: Published - 2020
Event: 42nd Annual Meeting of the Cognitive Science Society: Developing a Mind: Learning in Humans, Animals, and Machines, CogSci 2020 - Virtual, Online
Duration: Jul 29, 2020 - Aug 1, 2020

Conference

Conference: 42nd Annual Meeting of the Cognitive Science Society: Developing a Mind: Learning in Humans, Animals, and Machines, CogSci 2020
City: Virtual, Online
Period: 7/29/20 - 8/1/20

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Computer Science Applications
  • Human-Computer Interaction
  • Cognitive Neuroscience

Keywords

  • inductive bias
  • language universals
  • meta-learning
  • neural networks
  • syllable structure typology
