Familiar Voices Are Easier to Understand, Even If We Don’t Recognize Them
Familiar voices are easier to understand, and this advantage holds even when we aren’t able to identify who those familiar voices belong to, according to research published in Psychological Science, a journal of the Association for Psychological Science. The research, from Western University’s BrainsCAN initiative, showed that even though participants could not recognize a friend’s voice once its resonance had been manipulated, they still found it easier to understand than the same words spoken by a stranger.
“Our findings demonstrate that we pick out different information from a voice, depending on whether we’re simply trying to recognize whether it’s our friend or family member on the phone or whether we’re trying to understand the words they’re saying,” says researcher Emma Holmes of UCL (University College London), first author on the study. “This shows we focus on different parts of speech sounds for different purposes.”
Holmes and Western University coauthors Ingrid S. Johnsrude and Ysabel Domingo are interested in understanding the factors that influence how we perceive others’ voices across a variety of contexts. Anyone who has tried to hold a conversation in a bustling office or a crowded restaurant knows how difficult it is to understand what someone is saying when it competes with background noise. In previous work, the researchers found that familiarity offers an advantage in these noisy situations, making the voices of friends and family easier to understand than the voices of strangers.
“This suggests that, over time, we must learn something about the voices of the people we frequently talk to, which helps us to better understand the words they’re saying,” Holmes explains. “For this study we asked: Why does being familiar with someone’s voice help us understand what they’re saying?”
Holmes and colleagues decided to focus on two acoustic properties of the voice that vary reliably across people: pitch and resonance. Their aim was to determine how these properties influence our ability to understand what someone is saying and recognize who is speaking.
The researchers recruited 11 pairs of participants who were either friends or couples, had known each other for at least 6 months, and spoke to one another regularly. The participants read a predetermined set of sentences aloud, each of which followed a standard pattern of name, verb, number, adjective, noun (e.g., “Bob bought five green bags”). The researchers digitally manipulated the recordings from each speaker to systematically vary the pitch and the resonance.
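The article does not describe the signal-processing pipeline the researchers used, but manipulations of this general kind can be sketched with standard speech-analysis tools. The snippet below is a minimal, purely illustrative Python sketch using the praat-parselmouth library (an assumption, not the authors’ actual code): it rescales a recording’s formant frequencies, the acoustic correlate of resonance, and its median pitch by chosen ratios. File names and ratio values are hypothetical.

```python
# Illustrative sketch only -- not the authors' pipeline. Assumes the
# praat-parselmouth package (pip install praat-parselmouth) and a mono
# WAV file named "partner_sentence.wav" (hypothetical file name).
import parselmouth
from parselmouth.praat import call

def shift_pitch_and_resonance(in_path, out_path,
                              formant_ratio=1.0, pitch_ratio=1.0):
    """Scale the formant frequencies (resonance) and median pitch of a recording."""
    sound = parselmouth.Sound(in_path)

    # Estimate the speaker's median pitch so it can be rescaled.
    pitch = sound.to_pitch()
    median_f0 = call(pitch, "Get quantile", 0, 0, 0.5, "Hertz")

    # Praat's "Change gender" command resynthesizes the sound with the
    # requested formant shift ratio and new pitch median.
    shifted = call(sound, "Change gender",
                   75, 600,                  # pitch analysis floor/ceiling (Hz)
                   formant_ratio,            # >1 raises formants, <1 lowers them
                   median_f0 * pitch_ratio,  # new pitch median (Hz)
                   1.0,                      # keep the original pitch range
                   1.0)                      # keep the original duration
    shifted.save(out_path, "WAV")

# Example: lower the resonance by about 15% while leaving pitch unchanged.
shift_pitch_and_resonance("partner_sentence.wav",
                          "partner_resonance_shifted.wav",
                          formant_ratio=0.85, pitch_ratio=1.0)
```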
For the experiment, participants listened to sentences spoken by their partner and by unfamiliar speakers of the same gender as their partner. In one task, they heard two sentences spoken at the same time by different speakers and had to identify words in one of the sentences. In another task, participants heard a series of sentences and indicated whether each sentence was spoken by their partner or not.
As expected, participants were better at recognizing their partner’s voice when they heard an unaltered recording than when they heard any of the manipulated versions. They were also better at recognizing their partner’s voice when the pitch had been altered than when the resonance had been altered.
Intriguingly, even though manipulating the resonance of the partner’s voice made it unrecognizable to participants, they still found it easier to understand against a competing speaker than a stranger’s voice.
The results suggest that resonance is a critical acoustic feature for recognizing who a particular voice belongs to. Pitch and resonance may also shape how well we understand a familiar speaker, but the findings show that we can still understand familiar speech very well even when both features have been altered.
The findings shed light on how we perceive speech in our everyday lives and could have particular implications for individuals with hearing loss, who have even greater difficulty understanding speech in noisy settings: These individuals might benefit even more from familiar voices. The research could even have implications for the development of artificial agents, including robots and digital assistants like Siri, that are both intelligible and recognizable.
This study was supported by a Natural Sciences and Engineering Research Council of Canada Discovery Grant, a Canadian Institutes of Health Research Operating Grant, and by BrainsCAN, Western University’s $66 million Canada First Research Excellence Fund program in cognitive neuroscience.
Comments
This is honestly true, especially for those with hearing loss. My mother has 80% hearing loss in one ear and 30% in the other, and tone of voice and lip reading are how she understands what is being said.