Vocalizations such as laughter, cries, moans, or screams are a potent source of information about the affective states of others. It is commonly assumed that the more intense the expressed emotion, the more accurately affective information is classified. However, attempts to map the relation between affective intensity and inferred meaning remain controversial. Using a newly developed stimulus database of carefully validated non-speech expressions spanning the entire intensity range from low to peak, we show that this intuition is false. Across three experiments (N = 90), we demonstrate that intensity in fact plays a paradoxical role. Participants rated and classified the authenticity, intensity, and emotion category, as well as the valence and arousal, of a wide range of vocalizations. Listeners could clearly infer expressed intensity and arousal; in contrast, and surprisingly, emotion category and valence showed a perceptual sweet spot: moderate and strong emotions were clearly categorized, whereas peak emotions were maximally ambiguous. This finding, which converges with related observations from visual experiments, raises interesting theoretical challenges for the literature on emotion communication.