Seeing a smile can make a person unconsciously smile in return, and now scientists find that digitally mimicking the voice of a smiling person can also make people reflexively smile.
Charles Darwin and his contemporaries were among the first scientists to investigate smiles. Darwin suggested that smiles and several other facial expressions are universal to all humans, rather than unique products of a person’s culture.
“There is evidence that smiles are a profoundly deep gesture in the human repertoire,” agrees study lead author Pablo Arias, an audio engineer and cognitive scientist at the Institute for Research and Coordination in Acoustics/Music in Paris. “Smiles are recognized across cultures, and babies a few weeks old already produce smiles long before they know how to talk.”
The Smile Sense
Previous research noted that smiles not only trigger visible changes to a person’s face, but also audible changes to the human voice. “[It’s] what I call an auditory smile,” Arias says. Almost no one has studied the acoustic consequences of smiles, “and we wanted to see if people perceived smiles the same way acoustically as they did visually,” he says. “We want to study how emotions are communicated through sound.”
To do this, the scientists first analyzed how actors sounded when they did and did not smile. They next developed patented software that simulates the acoustic effects of a smile’s stretched lips on speech. The software works regardless of the speaker’s gender or the pitch, speed or content of what they are saying.
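The patented software itself is not described in detail here, but the core acoustic signature of a smile is well known: stretching the lips shortens the vocal tract, which pushes the resonant (formant) frequencies of speech upward. A minimal toy sketch of that idea, warping the frequency axis of one spectral frame, might look like the following (the function and parameter names are illustrative, not the authors’ actual method, and a real pitch-preserving system would warp only the spectral envelope, e.g. via LPC):

```python
import numpy as np

def warp_spectrum(frame, alpha=1.05, sr=16000):
    """Toy illustration: stretch the frequency axis of one windowed
    frame's magnitude spectrum by `alpha`, mimicking the upward
    formant shift produced by a smile's shortened vocal tract.
    (Names and parameters are hypothetical, not the patented software.)"""
    spec = np.fft.rfft(frame * np.hanning(len(frame)))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    # Sample the original magnitudes at compressed frequencies f/alpha,
    # which moves each spectral peak up to roughly alpha * f.
    warped_mag = np.interp(freqs / alpha, freqs, np.abs(spec))
    return warped_mag

# Quick check on a synthetic "formant": a 500 Hz tone.
sr, n = 16000, 1024
t = np.arange(n) / sr
frame = np.sin(2 * np.pi * 500 * t)
orig = np.abs(np.fft.rfft(frame * np.hanning(n)))
warped = warp_spectrum(frame, alpha=1.10, sr=sr)
freqs = np.fft.rfftfreq(n, d=1.0 / sr)
print(freqs[np.argmax(orig)], freqs[np.argmax(warped)])
```

Running this shows the spectral peak moving from 500 Hz to a higher frequency after warping, the same direction of shift a listener would hear as a “smiled” voice.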
In an experiment in which 35 volunteers wore electrodes on their faces, the researchers found that when the volunteers heard sentences to which the software had artificially added smiles, they unconsciously engaged their zygomaticus major muscles, the ones that stretch the corners of the mouth during smiling. Prior work found such mimicry is also generally detected when people see smiles.
“These results suggest there are similar neural mechanisms for processing both the visual and audible components of facial expression of emotions,” Arias says.
This research could point to new ways of studying the disruption of emotional processing that occurs with autism spectrum disorders. “We can investigate how people with autism respond to artificially generated emotional cues in speech,” Arias says. “We’re also studying the perception of smiles by congenitally blind people, to see if reactions to auditory smiles depend on visual experiences of that very same gesture.”
Future research can also investigate synthesizing other emotions in speech. “For instance, we’re working with the sound of anger, where the vocal cues mainly come from the vocal cords,” Arias says.
Preliminary tests suggest the software the researchers developed can also work in different languages, such as Japanese. In the future, voice synthesis engines such as those used by Google and Amazon could adopt this software to communicate better, he says. People with disabilities who rely on speech synthesizers might also use this software to help color their speech with emotions, much as people now sprinkle emoticons and emoji into their text messages, he adds.
The scientists detailed their findings online July 23 in the journal Current Biology.
Try it for yourself:
You will hear two pairs of sentences. The first sentence of each pair was altered to damp its smiling tone; the second, to heighten it.
Part 1: https://soundcloud.com/cnrs_officiel/exemple-anglais-1
Part 2: https://soundcloud.com/cnrs_officiel/exemple-anglais-2?in=cnrs_officiel/sets/le-son-qui-fait-sourire
The differences between the sounds are subtle. We suggest you wear headphones to hear the differences more accurately.
© Pablo Arias and Jean-Julien Aucouturier, Science and Technology of Music and Sound research laboratory (CNRS / IRCAM / Sorbonne University / French Ministry of Culture).