
Stimulus arousal drives amygdalar responses to emotional expressions across sensory modalities.

Published on Feb 5, 2020 in Scientific Reports (impact factor 4.011)
DOI: 10.1038/S41598-020-58839-1
Huiyan Lin (WWU: University of Münster), H-index: 5
Miriam Müller-Bardorff (WWU: University of Münster), H-index: 2
+ 5 authors
Thomas Straube (WWU: University of Münster), H-index: 32
Abstract
The factors that drive amygdalar responses to emotionally significant stimuli are still a matter of debate; in particular, the amygdala's proneness to respond to negatively valenced stimuli remains controversial. Furthermore, it is uncertain whether the amygdala responds in a modality-general fashion or whether modality-specific idiosyncrasies exist. Therefore, the present functional magnetic resonance imaging (fMRI) study systematically investigated amygdalar responses to the valence and arousal of emotional expressions across the visual and auditory modalities. During scanning, participants performed a gender judgment task while prosodic and facial emotional expressions were presented. The stimuli varied in valence and arousal, comprising neutral, happy, and angry expressions of high and low emotional intensity. Results demonstrate amygdalar activation as a function of stimulus arousal and the associated emotional intensity, regardless of stimulus valence. Furthermore, arousal-driven amygdalar responses did not depend on whether the emotional expressions were visual or auditory. Thus, the current results are consistent with the notion that the amygdala codes general stimulus relevance across visual and auditory modalities, irrespective of valence. In addition, whole-brain analyses revealed that effects in visual and auditory areas were driven mainly by highly intense facial and vocal emotional stimuli, respectively, suggesting modality-specific representations of emotional expressions in visual and auditory cortices.
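
A point worth making explicit: because the design crosses valence (happy vs. angry) with intensity (high vs. low), arousal and valence effects correspond to orthogonal contrasts in a standard GLM. The sketch below illustrates this on simulated data; it is not the authors' pipeline, and the block structure, effect sizes, and noise level are invented for illustration.

```python
# Minimal sketch (assumed design, not the authors' pipeline): separating
# arousal from valence effects in a 2 (valence) x 2 (intensity) design.
import numpy as np

rng = np.random.default_rng(0)
n_scans = 300

# Toy block design: repeating 10-scan blocks of neutral (implicit baseline),
# happy-low, happy-high, angry-low, angry-high.
slot = np.arange(n_scans) % 50 // 10          # block slot 0..4
X = np.ones((n_scans, 5))                     # 4 condition regressors + intercept
X[:, :4] = 0.0
for i in range(1, 5):                         # slot 0 = neutral baseline
    X[slot == i, i - 1] = 1.0

# Simulated "amygdala" signal: driven by high-intensity (arousing) stimuli
# of BOTH valences, i.e., an arousal effect and no valence effect.
beta_true = np.array([0.2, 1.0, 0.2, 1.0, 0.0])   # hl, hh, al, ah, intercept
y = X @ beta_true + rng.normal(0.0, 1.0, n_scans)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
sigma2 = resid @ resid / (n_scans - X.shape[1])
XtX_inv = np.linalg.inv(X.T @ X)

def t_stat(c):
    """t-value of contrast vector c under the fitted OLS model."""
    return (c @ beta_hat) / np.sqrt(sigma2 * c @ XtX_inv @ c)

arousal = np.array([-1.0, 1.0, -1.0, 1.0, 0.0])   # high vs. low intensity
valence = np.array([1.0, 1.0, -1.0, -1.0, 0.0])   # happy vs. angry
print(f"arousal contrast t = {t_stat(arousal):.2f}")   # clearly non-zero
print(f"valence contrast t = {t_stat(valence):.2f}")   # ~0
```

With the simulated signal responding only to intensity, the arousal contrast comes out large while the valence contrast hovers near zero, mirroring the pattern the abstract reports for the amygdala.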
References (86)
#1 Alison M. Mattek, H-index: 6
Last: M. Justin Kim, H-index: 9
(5 authors)
Citations: 2
This meta-analysis compares the brain structures and mechanisms involved in facial and vocal emotion recognition. Neuroimaging studies contrasting emotional with neutral (face: N = 76, voice: N = 34) and explicit with implicit emotion processing (face: N = 27, voice: N = 20) were collected to shed light on stimulus- and goal-driven mechanisms, respectively. Activation likelihood estimations were conducted on the full data sets for the separate modalities and on reduced, modality-matched data se...
Citations: 8
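
For readers unfamiliar with the method this entry relies on, activation likelihood estimation (ALE) treats each reported peak as a Gaussian probability blob per study and combines studies as a probabilistic union. The 1-D sketch below conveys only the core computation; the grid, kernel width, and peak coordinates are made up, and the permutation-based significance testing of real ALE is omitted.

```python
# Hedged 1-D sketch of the ALE idea; all coordinates and sizes are invented.
import numpy as np

grid = np.arange(0, 100.0)                      # 1-D stand-in for voxel space
fwhm = 10.0
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def ma_map(peaks):
    """Modeled-activation map of one study: max Gaussian over its peaks."""
    g = np.exp(-0.5 * ((grid[:, None] - np.asarray(peaks)[None, :]) / sigma) ** 2)
    return g.max(axis=1)                        # per-voxel value in [0, 1]

studies = [[30.0, 55.0], [32.0], [70.0]]        # hypothetical peak coordinates
mas = np.stack([ma_map(p) for p in studies])

# ALE value: probability that at least one study "activates" the voxel.
ale = 1.0 - np.prod(1.0 - mas, axis=0)
print("voxel with max ALE:", grid[np.argmax(ale)])  # near the convergent peaks
```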
#1 M. Justin Kim (Dartmouth College), H-index: 9
#2 Alison M. Mattek (Dartmouth College), H-index: 6
Last: Paul J. Whalen (Dartmouth College), H-index: 47
(6 authors)
Human amygdala function has been traditionally associated with processing the affective valence (positive versus negative) of an emotionally charged event, especially those that signal fear or threat. However, this account of human amygdala function can be explained by alternative views, which posit that the amygdala might be tuned to either (a) more general emotional arousal (more relevant versus less relevant) or (b) more specific emotion categories (fear versus happy). Delineating the pure ef...
Citations: 6
#1 Patricia E. G. Bestelmeyer (Bangor University), H-index: 16
#2 Sonja A. Kotz (UM: Maastricht University), H-index: 57
Last: Pascal Belin (Glas.: University of Glasgow), H-index: 54
(3 authors)
Several theories conceptualise emotions along two main dimensions: valence (a continuum from negative to positive) and arousal (a continuum that varies from low to high). These dimensions are typically treated as independent in many neuroimaging experiments, yet recent behavioural findings suggest that they are actually interdependent. This result has impact on neuroimaging design, analysis and theoretical development. We were interested in determining the extent of this interdependence both beh...
Citations: 7
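
The interdependence this entry describes can be made concrete with a toy simulation: in many normative rating sets, arousal grows with the magnitude of valence (a V-shape), so the two dimensions are dependent even when their linear correlation is near zero. The data and coefficients below are simulated assumptions, not this paper's results.

```python
# Toy illustration of valence-arousal interdependence; simulated data only.
import numpy as np

rng = np.random.default_rng(1)
valence = rng.uniform(-1.0, 1.0, 500)                # -1 negative .. +1 positive
arousal = 0.3 + 0.6 * np.abs(valence) + rng.normal(0.0, 0.1, 500)

# The linear correlation looks negligible despite strong dependence...
print(f"corr(valence, arousal)   = {np.corrcoef(valence, arousal)[0, 1]:+.2f}")
# ...while the V-shaped relation with |valence| is obvious.
print(f"corr(|valence|, arousal) = {np.corrcoef(np.abs(valence), arousal)[0, 1]:+.2f}")
```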
#1 Stefania Benetti (University of Trento), H-index: 11
#2 Markus J. van Ackeren (University of Trento), H-index: 4
Last: Olivier Collignon (University of Trento), H-index: 30
(10 authors)
Brain systems supporting face and voice processing both contribute to the extraction of important information for social interaction (e.g., person identity). How does the brain reorganize when one of these channels is absent? Here, we explore this question by combining behavioral and multimodal neuroimaging measures (magneto-encephalography and functional imaging) in a group of early deaf humans. We show enhanced selective neural response for faces and for individual face coding in a specific re...
Citations: 16
#1 Shuo Wang (California Institute of Technology), H-index: 10
#2 Rongjun Yu (NUS: National University of Singapore), H-index: 28
Last: Ueli Rutishauser (California Institute of Technology), H-index: 26
(13 authors)
The human amygdala is a key structure for processing emotional facial expressions, but it remains unclear what aspects of emotion are processed. We investigated this question with three different approaches: behavioural analysis of 3 amygdala lesion patients, neuroimaging of 19 healthy adults, and single-neuron recordings in 9 neurosurgical patients. The lesion patients showed a shift in behavioural sensitivity to fear, and amygdala BOLD responses were modulated by both fear and emotion ambiguit...
Citations: 20
#1 Annett Schirmer (CUHK: The Chinese University of Hong Kong), H-index: 27
#2 Ralph Adolphs (California Institute of Technology), H-index: 96
Historically, research on emotion perception has focused on facial expressions, and findings from this modality have come to dominate our thinking about other modalities. Here we examine emotion perception through a wider lens by comparing facial with vocal and tactile processing. We review stimulus characteristics and ensuing behavioral and brain responses and show that audition and touch do not simply duplicate visual mechanisms. Each modality provides a distinct input channel and engages part...
Citations: 52
#1 Huiyan Lin (WWU: University of Münster), H-index: 5
#2 Miriam Mueller-Bardorff (WWU: University of Münster), H-index: 1
Last: Thomas Straube (WWU: University of Münster), H-index: 32
(7 authors)
Citations: 5
#1 Anders Eklund (Linköping University), H-index: 15
#2 Thomas E. Nichols (Warw.: University of Warwick), H-index: 76
Last: Hans Knutsson (Linköping University), H-index: 34
(3 authors)
The most widely used task functional magnetic resonance imaging (fMRI) analyses use parametric statistical methods that depend on a variety of assumptions. In this work, we use real resting-state data and a total of 3 million random task group analyses to compute empirical familywise error rates for the fMRI software packages SPM, FSL, and AFNI, as well as a nonparametric permutation method. For a nominal familywise error rate of 5%, the parametric statistical methods are shown to be conservativ...
Citations: 1,410
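
As background for this entry, the nonparametric benchmark it uses can be sketched in a few lines: a one-sample test with random sign flips and a maximum-statistic null distribution, which controls familywise error across voxels without parametric assumptions. Subject, voxel, and permutation counts below are arbitrary choices for illustration.

```python
# Minimal sketch of a max-statistic sign-flip permutation test (FWE control).
import numpy as np

rng = np.random.default_rng(2)
n_subj, n_vox, n_perm = 20, 1000, 2000
data = rng.normal(0.0, 1.0, (n_subj, n_vox))     # null data: no true effect

def t_map(d):
    """One-sample t-statistic at every voxel."""
    return d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(len(d)))

observed = t_map(data)

# Null distribution of the MAXIMUM t over voxels under random sign flips.
max_null = np.empty(n_perm)
for p in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=(n_subj, 1))
    max_null[p] = t_map(signs * data).max()

threshold = np.quantile(max_null, 0.95)          # FWE-corrected 5% threshold
print(f"FWE threshold: {threshold:.2f}")
print(f"voxels surviving on null data: {(observed > threshold).sum()}")
```

Because the threshold is calibrated on the maximum statistic, any voxel exceeding it is significant with familywise error controlled at 5%, which is the property the paper evaluates empirically for parametric methods.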
In real world situations, we typically listen to voice prosody against a background crowded with auditory stimuli. Voices and background can both contain behaviorally relevant features and both can be selectively in the focus of attention. Adequate responses to threat-related voices under such conditions require that the brain unmixes reciprocally masked features depending on variable cognitive resources. It is unknown which brain systems instantiate the extraction of behaviorally relevant proso...
Citations: 2
Cited By (0)