Abstract
Speech production involves movements of the mouth and other regions of the face, which produce visual motion cues. These visual cues enhance the intelligibility and detectability of auditory speech; face-to-face speech is therefore fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, this should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share vocal production biomechanics with humans and communicate face-to-face with vocalizations. It is unknown, however, whether they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations, and their behavior parallels that of humans performing an identical task. We then explored what common computational mechanism(s) could explain the pattern of results observed across species. Standard models, such as the principle of inverse effectiveness and a "race" model, failed to account for the behavioral patterns. In contrast, a "superposition" model, which posits the linear summation of activity patterns in response to the visual and auditory components of vocalizations, provided a simple but powerful explanation of the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates.
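The contrast between the "race" and "superposition" accounts can be made concrete. A race model treats the auditory and visual channels as independent detectors, with the response triggered by whichever finishes first; such models are commonly tested against the bound P(RT ≤ t | AV) ≤ P(RT ≤ t | A) + P(RT ≤ t | V). A superposition model instead sums the two evidence streams before a single detection threshold. The Python sketch below illustrates that distinction only; it is not the authors' analysis. The drift rates, noise level, threshold, and trial counts are arbitrary assumptions, and `first_passage` is a hypothetical helper.

```python
import numpy as np

rng = np.random.default_rng(0)

# All values below are arbitrary assumptions for illustration only.
n_trials = 2_000             # simulated trials per condition
n_steps = 2_000              # time steps per trial
dt = 0.05                    # step size (arbitrary time units)
threshold = 10.0             # evidence criterion for "detection"
noise = 1.0                  # within-trial diffusion noise
drift_a, drift_v = 0.5, 0.3  # hypothetical auditory / visual drift rates

def first_passage(drift):
    """Hypothetical helper: first times a drift-diffusion process
    crosses the detection threshold (Euler-Maruyama, vectorized)."""
    steps = drift * dt + noise * np.sqrt(dt) * rng.standard_normal((n_trials, n_steps))
    paths = np.cumsum(steps, axis=1)
    crossed = paths >= threshold
    # argmax finds the first crossing; trials that never cross within
    # n_steps are censored at the final time step.
    idx = np.where(crossed.any(axis=1), crossed.argmax(axis=1), n_steps - 1)
    return (idx + 1) * dt

rt_a = first_passage(drift_a)  # auditory component alone
rt_v = first_passage(drift_v)  # visual component alone

# Race model: two independent detectors; the faster one triggers the response.
rt_race = np.minimum(first_passage(drift_a), first_passage(drift_v))

# Superposition model: the two evidence streams sum linearly, so the
# bimodal process accumulates at drift_a + drift_v toward one threshold.
rt_super = first_passage(drift_a + drift_v)

for label, rt in [("A alone", rt_a), ("V alone", rt_v),
                  ("race (min)", rt_race), ("superposition", rt_super)]:
    print(f"{label:>13}: mean detection time = {rt.mean():.1f}")
```

With these assumed parameters, the summed-drift process typically reaches threshold faster than the minimum of the two unisensory processes, which is the qualitative signature that distinguishes linear superposition from a race between independent detectors.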
Original language | English (US) |
---|---|
Article number | e1002165 |
Journal | PLoS Computational Biology |
Volume | 7 |
Issue number | 9 |
DOIs | |
State | Published - Sep 2011 |
All Science Journal Classification (ASJC) codes
- Genetics
- Ecology, Evolution, Behavior and Systematics
- Cellular and Molecular Neuroscience
- Molecular Biology
- Ecology
- Computational Theory and Mathematics
- Modeling and Simulation