Abstract
Prioritized processing of fearful compared to neutral faces is reflected in increased amplitudes of components of the event-related potential (ERP). It is unknown whether specific face parts drive these modulations. Here, we investigated the contributions of face parts to ERPs elicited by task-irrelevant fearful and neutral faces using an ERP-dependent facial decoding technique and a large sample of participants (N = 83). Classical ERP analyses showed typical and robust increases of N170 and EPN amplitudes by fearful relative to neutral faces. Facial decoding further showed that the absolute amplitude of these components, as well as the P1, was driven by the low-frequency contrast of specific face parts. However, the difference between fearful and neutral faces was not driven by any specific face part, as supported by Bayesian statistics. Furthermore, there were no correlations between trait anxiety and main effects or interactions. These results suggest that increased N170 and EPN amplitudes to task-irrelevant fearful compared to neutral faces are not driven by specific facial regions but represent a holistic face processing effect.
Introduction
It is of paramount importance to detect threat signals and prioritize the processing of such emotional over neutral information1,2. In line with this notion, electrophysiological (for a review, see3) or brain imaging studies (for a review, see4) showed enhanced brain responses to threat-associated vs. neutral stimuli. For example, prioritized processing of fearful compared to neutral faces is reflected in increased amplitudes of different components of the event-related potential (ERP), such as the P1, N170, and EPN (for reviews, see5,6,7). One important question is what information exactly leads to the potentiation of specific ERP components in response to specific faces or facial expressions. Low-level visual features such as spatial frequencies8,9,10,11 or differences in specific face parts such as the eyes12,13,14,15 have been proposed to drive specific ERP components. On the other hand, several studies show that threat-associated faces potentiate early ERPs even when visual differences between negative and neutral faces are controlled or absent16,17,18.
A second question is whether interindividual differences modulate the effects of facial expressions. One important trait related to altered threat detection and perception is trait anxiety, which has been proposed to lead to hypervigilant processing of threat-related stimuli19,20,21,22. However, while early studies reported increased early ERPs for fearful faces in high trait-anxious participants23,24,25, other studies found no effect26,27 or even attenuated differential ERPs28,29. Thus, although theoretical arguments (e.g.,21,23) strongly suggest a relation between trait anxiety and early neural responses to threat, findings are inconclusive. Further research is needed to investigate the role of trait anxiety and facial features in the increased processing of fearful faces during different processing stages indexed by specific components of the ERP.
The earliest of the relevant components in this regard is the occipitally scored P1, which is thought to reflect early stages of stimulus detection and discrimination30,31,32, being enlarged for faces compared to objects33,34. Findings on P1 modulations by fearful faces are mixed (for a review, see7), with studies reporting larger amplitudes for fearful compared to neutral faces35,36,37,38 while others do not show such differences (e.g.,24,25,26,27,39). Recent studies show that ERP modulations during the P1 interval strongly depend on low-level visual information8,40,41.
The subsequent N170 is viewed as a structural encoding component and is reliably found to be enlarged for faces compared to objects42. Fearful expressions have been shown to further increase this ERP component6. Notably, increased N170 amplitudes by emotional expressions are not explained by low-level information8,40,43 and are highly resistant to various attentional manipulations27,37,40,44,45,46,47. Both holistic accounts47,48,50 and accounts assuming a role of specific features, such as the mouth51,52,53,54 or the eyes53,54,55,56, have been proposed to explain N170 modulations by faces in general and fearful facial expressions in particular.
The EPN peaks between 200 and 300 ms and is observed as a differential negativity when contrasting emotional and neutral stimuli, including faces. It has been repeatedly found to be increased for fearful compared to neutral expressions29,57,58,59 and has been linked to early attentional selection60. A previous study showed that the EPN increase for fearful faces is affected by specific spatial frequencies8.
Several aspects of face processing have been shown to be sensitive to manipulations of the spatial frequency spectrum, typically by presenting frequency-filtered faces9,10,11,61,62,63. The P1 has been shown to be enhanced for low spatial frequency (LSF) faces compared to high spatial frequency (HSF) faces10,11. Results are less clear regarding the N170, where some studies reported smaller N170 amplitudes for LSF compared to HSF faces10,11, whereas another study did not find amplitude differences, but an increase in N170 peak latency for HSF compared to LSF faces. Furthermore, the inversion effect on the N170 (i.e., larger amplitudes for inverted compared to upright faces) was no longer observed when low spatial frequencies were removed from the faces9. We conclude that spatial frequencies affect early evoked potentials to faces, although their effect regarding the N170 is disputed.
Regarding the interplay of spatial frequency and emotional expressions, it is essential to note that fearful compared to neutral faces contain more spectral power in low spatial frequencies, most prominently in the eye region64. We recently showed that controlling for this natural confound does not alter N170 differences between fearful and neutral faces but has subtle effects on the P1 and the EPN8.
Using facial decoding methods, which allow mapping the effect of specific face parts on ERP modulations65, some studies suggest that specific facial regions contribute to N170 ERP modulations53,54,66,67,68. The eye region, in particular, played an important role for fearful facial expressions. More contrast in this region led to better emotion recognition performance54,69 and to enhanced N170 amplitudes66,68. While these studies yielded precise individual estimates of facial decoding maps with small samples (N = 3 in refs.51,63; N = 2 in refs.64,65), no statistical inferences about a population or relationships to interindividual differences, such as trait anxiety, were possible. Furthermore, these studies required participants to identify the emotional expression on each trial, creating specific attentional demands for participants. Task relevance has been shown to affect ERPs, especially the EPN and later components, to facial expressions and the differences between them15,27,37,70. Although task relevance is a valid setting in which to study effects of facial expressions on ERPs, we suggest that task-irrelevant faces yield a more default-like and thus ecologically more valid insight into emotional face processing. The studies mentioned above leave room for speculation as to whether differences in specific ERP components between fearful and neutral faces are mainly driven by specific face parts or represent a response to the whole face when it is task-irrelevant.
Taken together, it remains to be investigated to what extent the potentiation of early and mid-latency components of the ERP is driven by specific face parts or an interaction of face parts and spatial frequency. Furthermore, whether effects are related to interindividual differences, such as trait anxiety, is unknown.
The present pre-registered study (osf.io/n72w3) investigated the contributions of face parts to ERPs elicited by task-irrelevant fearful and neutral faces, using a facial decoding technique and a large sample of participants (N = 83) varying in trait anxiety scores. We assumed that at the level of the P1, modulations might be mainly driven by image intensity, irrespective of the facial region. This modulation should be most pronounced in mid-range spatial frequencies (between 3.75 and 7.5 cycles per degree of visual angle; cpd), reflecting the properties of the human contrast sensitivity function (CSF)63. In contrast, we hypothesized the eye region to modulate N170 amplitudes. Finally, for the EPN, we predicted that diagnostically critical face features (eyes, mouth) should drive fearful vs. neutral differences. Based on our observation that emotion effects on the EPN interact with spatial frequency8, we assumed these effects also depend on spatial frequency. In addition to these analyses, we exploratively investigated whether effects are modulated by interindividual differences in trait anxiety.
Methods
Participants
According to the registered data sampling plan, 87 participants were examined (19 male, 68 female). Similar decoding approaches to this study have been used in the context of group statistical analyses71,72, however, not related to emotional expressions. Therefore, power calculations were not possible. We chose the highest sample size suitable for our human and financial resources to maximize sensitivity. Four participants were excluded due to bad EEG data, resulting in 83 participants (19 male, 64 female). Participants gave written informed consent and received 10 Euros per hour for participation. Inclusion criteria required participants to have normal or corrected-to-normal vision and no reported history of neurological or psychiatric disorders. On average, participants were 23.62 (SD = 3.68; min = 19; max = 36; median = 23; see Fig. 2) years old. The study was approved by and conducted following the guidelines of the Ethics Committee of the University of Münster (Germany; vote ID 2018–705-f-S).
Apparatus and stimuli
The facial stimuli were taken from the Radboud Faces Database73. For these stimuli, the position of the eyes and head orientation are well standardized. We converted the faces into greyscale and cut out the oval center of each face, removing facial hair. Thirty-six identities (18 male, 18 female) were used, displaying either fearful or neutral expressions in frontal viewing angle. Stimuli were created by adapting the so-called bubble technique65.
In contrast to classical ERP studies of facial expressions, this technique aims at relating single-trial amplitudes to pixel-wise variations in image contrast across a comparably higher number of stimuli (500–2000) instead of averaging ERPs across several repetitions (e.g., 50–100) of stimuli from the same category. The stimuli consist of fractions of facial information obtained by randomly placing Gaussian blobs (“bubbles”) on facial images, with the bubbles determining the contrast at a given image location. More specifically, the face images are first decomposed into separate spatial frequency scales (see Fig. 1, top row). Bubbles are then placed per scale with a constant blob-size-to-scale ratio (see Fig. 1, middle row). The final image on a given trial consists of the same amount of information per scale but is randomly scattered across the image (see Fig. 1, bottom right image).
Instead of ERPs per stimulus category, this technique yields classification images, i.e., “face maps”, showing which face regions contributed to amplitude modulations of a given ERP component (details see below). Classification images can be obtained across emotional categories, showing face regions that amplify or diminish the ERP component separately for each spatial scale. They can also be obtained per emotional category and then compared, showing face regions that contribute to amplitude differences separately for each spatial scale.
Spatial scales were separated by convolving each image with Gabor gratings with 12 different orientations ranging from 0° to 165° in steps of 15°. The frequency bands were identical to those by Gosselin and Schyns65, i.e., 3.75 to 7.5, 7.5 to 15, 15 to 30, 30 to 60, and 60 to 120 cycles per image (cpi). However, we excluded the lowest frequency band (1.875 to 3.75 cpi) as it contained no detectable information, most likely due to a different face image choice than Gosselin and Schyns65. With faces presented at a bizygomatic diameter of 6.2 deg of visual angle, the average spatial frequencies of each band are 0.67, 1.3, 2.7, 5.4, and 10.7 cpd.
Each frequency band was sampled with four equidistant frequencies. The resulting images were averaged across the 12 orientations and four frequencies per band (see Fig. 1 top row). Bubbles were created by randomly positioning Gaussian blobs with standard deviations corresponding to the width of 0.5 cycles of the respective average bandwidth (e.g., with 90 cycles per image for the average of the finest scale, the standard deviation of the Gaussian blob corresponded to the image height divided by 180). Thus, from −3 to +3 standard deviations of the Gaussian window, 3 complete cycles were included per bubble at each scale, corresponding to Gosselin and Schyns65. The total energy of all Gaussian blobs on each scale was equal to 80% of a complete Gaussian blob at the coarsest scale (see Fig. 1 middle row). After applying the Gaussian windows to each scale, the resulting image intensities were summed. We calculated the minimal and maximal luminance across all images and scaled all resulting images with a single constant factor so that all final images contained the same average gray background while maximizing the range of gray levels from black to white without clipping. To ensure attention to the screen, target stimuli were generated by applying the same filtering and bubble procedure to white noise images, to which the same oval cut-out was applied. In total, 600 different images per emotional expression and 100 different target stimuli were created. This procedure was repeated four times to obtain four differently randomized experiment versions, counterbalanced across subjects.
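The per-scale sampling described above can be illustrated with a toy sketch. The code below is our simplified illustration, not the authors' stimulus-generation code: blob counts, widths, and the energy calibration are placeholder assumptions rather than the exact 80%-of-a-coarsest-blob rule. It shows only the core idea that, per spatial scale, randomly placed Gaussian blobs determine local contrast, and the masked scales are summed into the final image.

```python
import numpy as np

def make_bubble_mask(shape, n_scales=5, energy=0.8, rng=None):
    """Toy per-trial 'bubbles' sampling mask, one layer per spatial scale.

    Blob width doubles from the finest to the coarsest scale; the blob
    count per scale is a rough placeholder for a constant-energy rule.
    """
    rng = np.random.default_rng(rng)
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    masks = []
    for scale in range(n_scales):
        sigma = 2.0 * 2 ** scale  # blob width grows with coarseness
        # more, smaller blobs on fine scales (illustrative heuristic)
        n_blobs = max(1, int(round(energy * (h * w) / (2 * np.pi * sigma ** 2) * 0.01)))
        mask = np.zeros(shape)
        for _ in range(n_blobs):
            cy, cx = rng.integers(0, h), rng.integers(0, w)
            mask += np.exp(-(((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2)))
        masks.append(np.clip(mask, 0, 1))
    return np.stack(masks)  # (n_scales, h, w), values in 0..1

def apply_bubbles(scale_images, masks):
    """Combine per-scale face images with the per-scale bubble masks."""
    return (np.asarray(scale_images) * masks).sum(axis=0)
```

A trial stimulus would then be `apply_bubbles(face_scales, make_bubble_mask(face_scales.shape[1:]))`, followed by the global rescaling to a common gray background described above.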
Procedure
The experiment was programmed and run with Matlab (Version R2019b; Mathworks Inc., Natick, MA; http://www.mathworks.com), the Psychophysics Toolbox (Version 3.0.16)74,75, and the Eyelink Toolbox76. In each trial, a fixation mark was presented for a randomized duration between 300 and 700 ms, followed by a bubble stimulus for 50 ms, followed by a blank screen presented for 500 ms before the next fixation mark was presented. All stimuli were presented at the center of the screen. Participants were instructed to avoid eye movements and blinks during the stimulus presentation. To ensure that participants paid attention to the presented faces, eye tracking was used (EyeLink 1000, SR Research Ltd., Mississauga, Canada), pausing the presentation of faces when their gaze was not directed to the center of the screen, defined by a circular region with a radius of 0.7° around the fixation mark. If a gaze deviation was detected for more than five seconds despite a participant's attempt to fixate the center, the eye-tracker calibration procedure was automatically initiated.
Additionally, participants were instructed to respond to a target trial by pressing the space bar. Emotion conditions and target trials appeared in randomized order. Including breaks, the experiment lasted approximately 45 min. After testing, participants were asked about the effort and difficulty of the experiment, and tiredness during and after the experiment. Further, they responded to a demographic questionnaire, the BDI-II77 and STAI Trait questionnaire78, and a short version of the NEO-FFI79. For the current study, only the STAI was analyzed.
EEG recording and preprocessing
EEG signals were recorded from 64 BioSemi active electrodes using BioSemi's ActiView software (www.biosemi.com). Four additional electrodes measured horizontal and vertical eye movements. The recording sampling rate was 512 Hz. Offline, data were re-referenced to the average reference, high-pass filtered at 0.01 Hz (6 dB/oct; forward), and low-pass filtered at 40 Hz (24 dB/oct; zero-phase). Eye-movement artifacts were corrected using the automatic eye-artifact correction method implemented in BESA80. Filtered data were segmented from 100 ms before stimulus onset until 1000 ms after stimulus onset. Baseline correction used the 100 ms before stimulus onset.
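As an illustration of the filtering steps, a minimal sketch follows. This is not the BESA pipeline: we approximate the reported roll-offs with a first-order Butterworth high-pass applied forward only (6 dB/oct) and a second-order Butterworth low-pass applied forward and backward (effectively 24 dB/oct, zero-phase); the actual implementation may differ.

```python
import numpy as np
from scipy.signal import butter, lfilter, filtfilt

FS = 512  # Hz, recording sampling rate

def preprocess(eeg):
    """Sketch of re-referencing and filtering; eeg is (n_channels, n_samples)."""
    # average reference: subtract the mean across channels at each sample
    eeg = eeg - eeg.mean(axis=0, keepdims=True)
    # 0.01 Hz high-pass, forward only (causal, 6 dB/oct)
    b, a = butter(1, 0.01, btype="highpass", fs=FS)
    eeg = lfilter(b, a, eeg, axis=-1)
    # 40 Hz low-pass, zero-phase (order 2 run twice ~ 24 dB/oct)
    b, a = butter(2, 40.0, btype="lowpass", fs=FS)
    eeg = filtfilt(b, a, eeg, axis=-1)
    return eeg
```

Segmentation and baseline correction would then slice each trial from −100 to 1000 ms and subtract the pre-stimulus mean per channel.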
EEG data analyses
Components of interest were scored individually by inspecting each subject’s evoked potential averaged across all 1200 face stimuli. The search criteria for the components of interest were characterized as follows: P1: a bilateral positive peak at an occipital/occipito-parietal region of sensors (search space: Iz, Oz, O1, O2, PO3, PO4, PO7, PO8, P3, P4, P5, P6, P7, P8, P9, P10) in the time range between 80 to 130 ms after stimulus onset. The interval of interest was defined as ± 10 ms around the peak. N170: a bilateral negative peak at a more temporal region of sensors (search space: O1, O2, PO7, PO8, P7, P8, P9, P10, TP7, TP8, TP9, TP10) in the time range between 120 to 180 ms after stimulus onset. The interval of interest was defined as ± 10 ms around the peak. EPN: a bilateral negative peak following the P2 peak at temporal sensors (same search space as N170) in the time range between 200 to 350 ms after stimulus onset. The interval of interest was defined as ± 50 ms around the peak.
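The individualized scoring described above amounts to a constrained peak search per subject. The following is a hypothetical helper (our naming and signature, not the authors' code) sketching that search for one component:

```python
import numpy as np

def score_component(erp, times, channels, search_chans, tmin, tmax,
                    polarity=-1, half_win=0.010):
    """Find an individual component peak and its interval of interest.

    erp: (n_channels, n_times) subject-average ERP (volts or microvolts);
    times: (n_times,) array in seconds; polarity = -1 for negative
    components (N170, EPN), +1 for the P1; half_win = half the interval
    of interest (10 ms for P1/N170, 50 ms for the EPN).
    Returns (peak_channel, peak_time, (interval_start, interval_end)).
    """
    chan_idx = [channels.index(c) for c in search_chans]
    time_idx = np.where((times >= tmin) & (times <= tmax))[0]
    # flip sign so the peak is always a maximum within the search space
    sub = polarity * erp[np.ix_(chan_idx, time_idx)]
    ci, ti = np.unravel_index(np.argmax(sub), sub.shape)
    peak_time = times[time_idx[ti]]
    return search_chans[ci], peak_time, (peak_time - half_win, peak_time + half_win)
```

For the N170, for example, one would call it with the temporal search space and `tmin=0.120, tmax=0.180, polarity=-1`.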
After defining the individual sensors and intervals of interest, the average amplitude at these sensors and intervals was calculated per trial, using the per-trial activation from −100 to 0 ms relative to stimulus onset as the baseline. Following the procedure of Smith and colleagues69, we derived classification images for each subject as follows: Images were re-scaled from 800 × 592 px to 200 × 148 px to reduce computation time. For each participant and each component of interest, trials were sorted by amplitude. We then labeled trials above the individual 60th percentile and below the 40th percentile as high- and low-amplitude trials, respectively. We calculated the average image intensity from the maps of randomly positioned Gaussian blobs (see Fig. 1 middle row) per trial, separately for each pixel and spatial scale and separately for high- and low-amplitude trials. The procedure was performed once across all stimuli irrespective of their emotional category to obtain total classification images comparable to previous studies53. Additionally, classification images were calculated per emotional category by applying the above-described procedure separately to trials with neutral and fearful faces. This calculation resulted in one low- and one high-amplitude map per subject, component of interest, spatial scale, and category (total, neutral, fearful).
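Stripped to its core, the classification-image computation for one component and one spatial scale is a percentile split plus a difference of mean bubble masks. A minimal sketch (our simplification, not the authors' code):

```python
import numpy as np

def classification_image(amplitudes, bubble_masks):
    """Classification image for one component and spatial scale.

    amplitudes: (n_trials,) single-trial mean amplitudes at the
    individual sensors/interval of interest;
    bubble_masks: (n_trials, h, w) per-trial bubble intensity maps.
    Trials above the 60th percentile form the high-amplitude set, trials
    below the 40th the low set; the classification image is the
    difference of their average masks.
    """
    hi = amplitudes > np.percentile(amplitudes, 60)
    lo = amplitudes < np.percentile(amplitudes, 40)
    return bubble_masks[hi].mean(axis=0) - bubble_masks[lo].mean(axis=0)
```

Positive pixels mark face locations whose visibility accompanied larger amplitudes; the same routine applied to fearful-only and neutral-only trials yields the per-category maps.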
Please note that the classification image approach deviates from the pre-registered protocol, which was based on correlations between single-trial amplitudes and image intensities. We opted for replicating the methodology of previous studies for the sake of direct comparability.
Classification images, i.e., the differences between low- and high-amplitude maps, were statistically analyzed using cluster-based permutation (CBP) analyses. CBPs based on cluster mass were performed by running paired t-tests per pixel of each map. Pixels were labeled as significant if p < .01 (i.e., pixel-α = .01). Clusters were defined as significant orthogonally neighboring pixels, and the cluster mass was calculated by summing all t-values within a cluster, separately for positive and negative t-values. Cluster masses were then compared to the distribution of maximum cluster masses obtained from 5000 permutations based on sign-flipping, i.e., multiplying each high/low difference map by −1 or 1 at random. Clusters with a mass exceeding the 99th percentile of the permutation distribution were defined as significant (i.e., cluster-α = .01). The procedure was performed for classification maps based on all trials, for classification maps based on neutral and fearful faces separately, and once for the difference between classification maps of fearful and neutral faces.
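A minimal implementation of this cluster-mass permutation test might look as follows. This sketch handles only positive clusters and uses one-sample t-tests on the per-subject difference maps (equivalent to the paired test on high vs. low maps); the study additionally tested negative clusters and used 5000 permutations.

```python
import numpy as np
from scipy import ndimage, stats

def cluster_mass_test(diff_maps, n_perm=1000, pixel_alpha=0.01, rng=0):
    """Cluster-mass sign-flipping permutation test (positive clusters only).

    diff_maps: (n_subjects, h, w) per-subject high-minus-low maps.
    Returns observed cluster masses and their permutation p-values.
    """
    rng = np.random.default_rng(rng)
    n = diff_maps.shape[0]
    t_crit = stats.t.ppf(1 - pixel_alpha / 2, df=n - 1)

    def masses(maps):
        t = stats.ttest_1samp(maps, 0, axis=0).statistic
        labels, nlab = ndimage.label(t > t_crit)  # orthogonal neighbors
        return [t[labels == k].sum() for k in range(1, nlab + 1)]

    observed = masses(diff_maps)
    null = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1, 1], size=n)[:, None, None]  # flip per subject
        m = masses(diff_maps * signs)
        null[i] = max(m) if m else 0.0
    pvals = [(null >= m).mean() for m in observed]
    return observed, pvals
```

Comparing each observed mass to the distribution of per-permutation maxima controls the family-wise error across pixels, as in the analyses reported here.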
In case of absent significant differences between classification maps of fearful and neutral faces, we performed Bayesian analyses to quantify the likelihood of the absence of differences. For this purpose, we used the clusters observed in the classification images across all trials as effect masks for fearful and neutral classification images, averaging their intensity within this region per emotional category. For these analyses, the null hypothesis was specified as a point-null prior (i.e., standardized effect size δ = 0) and defined the alternative hypothesis as a Jeffreys-Zellner-Siow (JZS) prior, i.e., a folded Cauchy distribution centered around δ = 0 with the scaling factor r = 0.707. This scaling factor assumes a roughly normal distribution. To assign verbal labels to the strength of evidence, we followed the taxonomy suggested by Jeffreys81, labeling Bayes factors with a BF01 of 1 as no evidence, BF01 between 1 and 3 as anecdotal evidence, 3–10 as moderate evidence, 10–30 as strong evidence, 30–100 as very strong evidence, and larger BFs as extreme evidence in favor of the null hypothesis.
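The JZS Bayes factor can be obtained by numerical integration over the effect-size prior. The sketch below is our implementation of the standard one-sample JZS formulation (using the inverse-gamma representation of the Cauchy prior on δ), not the software actually used in the study:

```python
import numpy as np
from scipy import integrate

def jzs_bf01(t, n, r=0.707):
    """One-sample JZS Bayes factor BF01 for a t-statistic.

    Null: delta = 0; alternative: Cauchy(0, r) prior on delta.
    Returns evidence in favor of the null (BF01 > 1 supports H0).
    """
    nu = n - 1
    null_lik = (1 + t ** 2 / nu) ** (-(nu + 1) / 2)

    def integrand(g):
        # marginal likelihood under H1, integrating over g with the
        # inverse-gamma(1/2, r^2/2) mixing density of the Cauchy prior
        return ((1 + n * g) ** -0.5
                * (1 + t ** 2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
                * r / np.sqrt(2 * np.pi) * g ** -1.5
                * np.exp(-r ** 2 / (2 * g)))

    alt_lik, _ = integrate.quad(integrand, 0, np.inf)
    return null_lik / alt_lik
```

For the cluster-averaged intensities here, `t` would be the paired fearful-vs-neutral t-statistic and `n` the number of participants; with t near zero and n = 83, BF01 lands in the moderate-evidence range reported below.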
Possible correlations between individual trait anxiety scores and classification maps were explored by performing a CBP as described above, except that instead of calculating t-tests per pixel, we calculated Pearson correlations between the high-vs-low-amplitude maps and individual STAI scores. Random permutations were generated by randomizing the assignment of difference maps to the individual STAI values. Correlation analyses were restricted to the difference in classification maps between fearful and neutral faces. Please note that three participants chose not to complete the questionnaires. Therefore, the analyses of STAI correlations are based on the data from 80 participants.
In addition to the single-trial-based analysis with individualized component identification, we performed a standard ERP analysis to compare the average amplitudes per component of interest between all images in the fearful and neutral categories. For the purpose of illustration, we calculated the 95% bootstrap confidence interval82 for the differential ERPs (fearful–neutral) based on 1000 samples.
Results
Behavior
On average, subjects detected 76.51% (SD = 15.48%) of the targets and made 1.5% false alarms (SD = 1%). The average reaction time was 561 ms (SD = 65 ms). Trait anxiety scores (STAI-T) ranged between 21 and 58 (mean = 36.763, SD = 8.809). Individual behavioral data, demographic data, and trait anxiety scores are illustrated in Fig. 2.
ERPs
P1
The ANOVA of mean ERPs across all spatial scales and bubble stimuli revealed a significant interaction of emotion and hemisphere, F(1,82) = 4.247, p = .042, partial η2 = .049. As Fig. 3a–c indicates, amplitude differences between fearful and neutral faces were more pronounced in the left than in the right hemisphere. Figure 3d–i additionally illustrates individual amplitudes and amplitude differences.
Classification maps for the left-hemispheric P1 of all trials combined show positive clusters (increased P1 amplitudes) in the region of the nose and the inner canthus of the right eye, restricted to spatial frequencies between 0.67 and 2.7 cpd (cluster masses (C) from lowest to highest frequency: C1 = 12,334.8, p < .001; C2 = 15,374.2, p < .001; C3 = 8473.4, p < .001; see Fig. 4). For neutral faces, significant positive clusters were observed in the region of the nose, the mouth and the inner canthus of the right eye, restricted to spatial frequencies between 0.67 cpd and 2.7 cpd (C1 = 13,494.3, p < .001; C2 = 8737.5, p < .001; C3 = 5003.9, p < .001). For fearful faces, the maps show positive clusters in the region of the nose and the right eye for spatial frequencies between 1.3 and 2.7 cpd (C1 = 9532.5, p < .001; C2 = 3012.4, p < .001). There were no significant differences between classification maps of fearful and neutral faces. We calculated Bayesian t-tests as described in the Method section. For all three clusters found across all trials for the left P1, we observed moderate evidence for the null hypothesis (C1: BF01 = 6.932, C2: BF01 = 7.459, and C3: BF01 = 7.611).
For the right-hemispheric P1, classification maps of all trials show positive clusters around the right eye at a spatial frequency of 0.67 cpd. At 1.3 cpd and 2.7 cpd, positive clusters appeared in the left eye region (C1 = 16,698.9, p < .001; C2 = 17,803.7, p < .001; C4 = 3062.6, p < .001). The maps show negative clusters (i.e., reduced P1 amplitudes) in the area of the mouth and chin for spatial frequencies between 1.3 and 2.7 cpd (C3 = 14,473.1, p < .001; C5 = 2048.8, p = .007). For neutral faces, positive clusters appear around the right eye at a spatial frequency of 0.67 cpd. At spatial frequencies of 1.3 cpd and 2.7 cpd, positive clusters appear in the region of the inner canthus of the left eye (C1 = 6093.3, p = .010; C2 = 10,014.9, p < .001; C4 = 2134.4, p = .006). The maps show negative clusters for the right corner of the mouth at a spatial frequency of 1.3 cpd (C3 = 7176.6, p = .001). For fearful faces, significant positive clusters were observed in the region of both eyes at spatial frequencies of 0.67 to 1.3 cpd (C1 = 9622.0, p = .003; C2 = 13,752.8, p < .001). A negative cluster can be viewed in the region of the mouth at a spatial frequency of 1.3 cpd (C3 = 7708.8, p < .001).
Again, no significant clusters were observed for the difference between fearful and neutral classification maps. For the five clusters found for the right P1, Bayesian t-tests revealed moderate evidence for the null hypothesis (C1: BF01 = 6.592, C2: BF01 = 7.465, C3: BF01 = 6.673, C4: BF01 = 7.25, and C5: BF01 = 5.867).
Average classification maps are provided in Supplementary Fig. 1. To further illustrate the relationship between local image intensity and ERPs, we extracted the image intensity for each trial and each spatial scale at the pixel with the highest absolute t-value per cluster. These intensity values were then assigned to three bins of equal contrast range, referred to as low, medium, or high-intensity trials. The resulting ERPs per intensity level illustrate how the relative visibility of the maximally effective pixel influences the ERP time course (see Supplementary Fig. 4).
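The equal-range intensity binning used for this illustration can be sketched with a small hypothetical helper (our naming; the original analysis binned intensities at the maximally effective pixel per cluster):

```python
import numpy as np

def bin_by_intensity(intensities, n_bins=3):
    """Assign trials to equal-width contrast bins (0 = low ... 2 = high).

    Bins divide the observed intensity range into n_bins equal intervals.
    """
    edges = np.linspace(intensities.min(), intensities.max(), n_bins + 1)
    # values equal to the top edge fall into the highest bin
    return np.clip(np.digitize(intensities, edges[1:-1]), 0, n_bins - 1)
```

Averaging ERPs within each resulting bin then yields the low/medium/high-intensity traces shown in the supplementary figures.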
N170
The ANOVA of mean ERPs showed a significant main effect of emotion, F(1,82) = 113.614, p < .001, partial η2 = .581, as well as an effect of hemisphere, F(1,82) = 11.022, p = .001, partial η2 = .118. N170 amplitudes were larger (i.e., more negative) for fearful than neutral stimuli and larger in the right than the left hemisphere (see Fig. 5a–c). No interaction was observed. Figure 5d–i additionally illustrates individual amplitudes and amplitude differences.
Classification maps for the left-hemispheric N170 of all trials show positive clusters (i.e., reduced N170 amplitudes) in the region of the forehead and the left eyebrow, restricted to spatial frequencies between 0.67 and 1.3 cpd (C1 = 22,588.8, p < .001; C3 = 4243.3, p = .008; see Fig. 6). For all trials, negative clusters (i.e., increased N170 amplitudes) can be observed in the region of the mouth, the nose, and the left cheek at spatial frequencies between 0.67 and 1.3 cpd (C2 = 23,128.5, p < .001; C4 = 4251.9, p = .009). For neutral faces, the maps show positive clusters in the area of the forehead and hairline at a spatial frequency of 0.67 cpd (C1 = 13,971.2, p < .001). For fearful faces, positive clusters can be observed in the region of the forehead and the left eyelid at a spatial frequency of 0.67 cpd (C1 = 17,506.3, p < .001). The maps show negative clusters in the region of the mouth, the bridge and tip of the nose, and the right eyelid, restricted to spatial frequencies between 0.67 cpd and 5.4 cpd (C2 = 23,595.8, p < .001; C3 = 5778.6, p = .001; C4 = 1027.2, p < .001). No significant clusters were observed for the difference between fearful and neutral classification maps for the left N170. For the four clusters found across all trials, we observed moderate evidence for the null hypothesis (C1: BF01 = 4.029, C2: BF01 = 7.852, C3: BF01 = 6.27, and C4: BF01 = 8.229).
For the right N170 of all trials, classification maps show positive clusters in the region of the right hairline at a spatial frequency of 1.3 cpd (C2 = 4108.9, p = .008). The maps further show negative clusters in the region of the right cheek, the left eye, the right corner of the mouth and the right hairline, at spatial frequencies between 0.67 cpd and 5.4 cpd (C1 = 14,785.2, p < .001; C3 = 23,455.7, p < .001; C4 = 2037.5, p = .006; C5 = 667.9, p = .006). For neutral faces, negative clusters can be observed in the region of the left eye, the nose and the right chin at spatial frequencies between 1.3 cpd and 2.7 cpd (C1 = 10,170.3, p < .001; C2 = 2339.7, p = .003). For fearful faces, the maps show negative clusters in the region of the right eye and the nose at a spatial frequency of 1.3 cpd.
No significant clusters were observed for the difference between fearful and neutral classification maps for the right N170. For four out of the five clusters found for the right N170, we observed moderate evidence for the null hypothesis (C1: BF01 = 8.114, C2: BF01 = 4.108, C3: BF01 = 6.072, and C5: BF01 = 7.532) and anecdotal evidence for one cluster (C4: BF01 = 2.695). Average classification images and ERPs separated by intensity at the cluster maximum are provided in Supplementary Figs. 2 and 5, respectively.
EPN
The ANOVA revealed a significant main effect of emotion, F(1,82) = 32.395, p < .001, partial η2 = .283, and of hemisphere, F(1,82) = 12.421, p < .001, partial η2 = .132. EPN amplitudes were larger (i.e., more negative) for fearful than neutral stimuli and larger in the left than the right hemisphere (see Fig. 7a–c). No interaction was observed. Figure 7d–i additionally illustrates individual amplitudes and amplitude differences.
Classification maps for the left-hemispheric EPN of all trials show a positive cluster (i.e., reduced EPN amplitudes) in the region of the right eye and forehead at 0.67 cpd (C1 = 25,840.6, p < .001; see Fig. 8). Negative clusters (i.e., increased EPN amplitudes) appear in the left chin region and the nose at 0.67 and 1.3 cpd, respectively (C2 = 11,904.5, p = .001; C3 = 5514.1, p = .003). For neutral trials, a positive cluster was observed in the region of the right eye and forehead at 0.67 cpd (C1 = 12,109.3, p < .001) and a negative cluster at the left temple (C2 = 2188.7, p = .004) at 2.7 cpd. Fearful trials showed a similar positive cluster in the region of the right eye and forehead (C1 = 14,835.7, p < .001) and a negative cluster in the left chin region (C2 = 10,478.1, p = .002) at 0.67 cpd. Another negative cluster was found in the nose region at 2.7 cpd (C3 = 1868.2, p = .008). Finally, for the difference between fearful and neutral trials, a positive cluster (i.e., reduced EPN amplitudes for fearful compared to neutral faces) was observed at the left temple region (C1 = 3784.5, p < .001). For two of three clusters found across all trials for the left EPN, we observed moderate evidence for the null hypothesis (C1: BF01 = 7.038, and C3: BF01 = 8.158) and anecdotal evidence for one cluster (C2: BF01 = 2.11). No significant clusters were observed for the right-hemispheric EPN. Average classification images and ERPs separated by intensity at the cluster maximum are provided in Supplementary Figs. 3 and 6, respectively.
Correlations with trait anxiety
Correlation coefficients were computed between the differential classification maps (fearful – neutral) per component and scale. Cluster-based permutation tests of these correlations did not yield any significant clusters. However, some clusters approached the critical cluster-α = .01. The coefficients ranged between r = − .451 and r = .507 at the level of individual pixels. Given our exploratory approach, we refrain from interpreting these clusters. For the sake of completeness and future studies, we report all clusters below cluster-α = .05. Please note that this involves apparently implausible face regions (e.g., the upper edge of the forehead; see Fig. 9).
For the left-hemispheric N170, we observe a negative cluster beneath the contralateral eye region at 0.67 cpd (C = 478.9, p = .025). The higher the trait anxiety score, the more the right eye region contributes to an increase (i.e., enhanced negativity) of the contralateral N170. For the same component, we further observe two positive clusters, one at 2.7 cpd located at the upper edge of the forehead (C = 116, p = .043) and one at 5.4 cpd located at the center of the chin (C = 64.4, p = .014). The higher the trait anxiety score, the more these regions contribute to a decrease (i.e., reduced negativity) of the left-hemispheric N170.
For the left-hemispheric EPN, we observe a positive cluster located at the ipsilateral eye at 1.3 cpd (C = 210.38, p = .043). The higher the trait anxiety score, the more the left eye contributes to a reduction (i.e., reduced negativity) of the ipsilateral EPN.
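The correlation maps underlying these results pair each participant's differential classification map with their trait anxiety score, pixel by pixel. A self-contained sketch of such a pixel-wise Pearson correlation map follows; the simulated data, the planted association, and all names are illustrative assumptions, not the study's data or code.

```python
import numpy as np

def correlation_map(diff_maps, anxiety):
    """Pearson correlation between per-participant differential maps and a
    trait score, computed independently at every pixel.
    diff_maps: (n_participants, height, width); anxiety: (n_participants,)."""
    x = anxiety - anxiety.mean()
    m = diff_maps - diff_maps.mean(axis=0)
    num = (x[:, None, None] * m).sum(axis=0)
    den = np.sqrt((x ** 2).sum() * (m ** 2).sum(axis=0))
    return num / den  # (height, width) map of r values

# Simulated example: 83 participants (matching the sample size), 16x16 maps
rng = np.random.default_rng(2)
maps = rng.normal(size=(83, 16, 16))
anx = rng.normal(size=83)
maps[:, 4:8, 4:8] += 0.5 * anx[:, None, None]  # planted association, illustrative
r = correlation_map(maps, anx)
```

A map like `r` would then be submitted to the same cluster-based permutation procedure as the classification maps, here by permuting the anxiety scores across participants to build the null distribution.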
Discussion
In the present study, we investigated the role of specific face parts in the potentiation of ERP components to fearful compared to neutral faces. We found that different facial regions, particularly the eye and mouth regions, contributed significantly to the amplitudes of all ERP components across facial expressions. When comparing fearful and neutral faces, however, we found evidence for the absence of facial decoding effects. We also did not detect statistically significant associations between the ERP components, the classification analysis, and trait anxiety scores. These results suggest that the increased amplitudes of early ERPs to fearful compared to neutral faces are not determined by specific face regions or by trait anxiety-dependent interindividual differences in face processing.
Results for the analysis of P1 amplitudes showed a specific interaction of emotion and hemisphere. In previous studies, results on P1 modulations by fearful compared to neutral faces were mixed7, with studies reporting differences35,38,83 while other studies did not observe such effects44,58,70. Notably, P1 differences between fearful and neutral faces are strongly driven by low-level differences between stimuli41, including spatial frequency8. The classification maps revealed that P1 amplitudes—as a whole—were enhanced by low spatial frequency information stemming from the nose and the contralateral eye region. Further, they were attenuated by low spatial frequency information from the mouth region. However, the modulation of P1 amplitudes by emotional category could not be attributed to specific face regions. Thus, this pattern adds to the mixture of fragile emotion effects on the P1 but does not single out specific facial regions responsible for these effects.
For the N170, we found that amplitudes were more negative for fearful than neutral facial stimuli. This is consistent with a recent meta-analysis showing that the N170 is reliably modulated by fearful expressions6 and with our own recent studies8,43,59,70,83. In addition, across all faces, our results replicate a specific role of the eye region for the N170. Regardless of facial expression, the eye region led to larger N170 amplitudes for all faces. This finding is in line with studies showing that N170 amplitudes are increased when only core face features are presented59 and, more specifically, when the eyes are fixated14,15,55,56. This effect has been attributed to the specific role of the eyes in social communication84.
However, our results suggest that no specific facial region contributes to the N170 difference between fearful and neutral faces. This finding contrasts with studies suggesting that the eye region influences N170 differences between fearful and neutral expressions depending on the fixated face parts14,15, that the white of the eye affects brain responses85,86, or that the eye region supports the detection of fearful faces64. However, our results show that, across a large sample, differences in the eye or other regions are not the discriminative feature for early ERP differences in a design in which faces are task-irrelevant.
Besides the N170, the EPN was also larger for fearful compared to neutral faces, in accordance with various previous studies27,44,59,87,88, confirming that the EPN differentiates between fearful and neutral faces in a paradigm that neither directs attention away from the stimuli27 nor requires attentional focus on the emotional expression70. Classification maps reveal that especially the left-hemispheric EPN is sensitive to facial features and that this sensitivity is restricted to low spatial frequencies. Interestingly, the right-hemispheric EPN did not reveal any dependence on specific face parts, although fearful–neutral differences were descriptively more pronounced compared to the left-hemispheric EPN (see Fig. 7b). Facial regions also had no effect on this differential ERP modulation. The observation that the right-hemispheric EPN is at least as expression-selective as the left-hemispheric EPN but is not driven by specific face parts suggests that the right EPN reflects a more holistic form of face processing than the left EPN. It also suggests that holistic face processing does not come at the cost of expression selectivity.
Our findings of potentiated brain responses to threat-related faces, independent of specific face parts, are in line with studies in which neutral faces were associated with threat-related or neutral information, leading to P189,90, N17016,17,89,91, or EPN16,17 potentiations to threat-associated faces despite identical facial features. These studies suggest that ERP modulations can be influenced by the emotional relevance of faces, independent of sensory differences between stimuli.
Our study supports the thesis that the modulation of early components, mainly the N170, by negative facial expressions is based on a form of holistic processing. This does not rule out a prominent role of facial features in the modulation of the N170, such as the mouth50,51,52,54 or eye region53,54,55,56. However, our study suggests that even for fearful faces, the eyes per se do not produce the emotional N170 effect, at least in designs in which faces are task-irrelevant. Specific face features might support behavioral responses and be associated with N170 amplitudes in other designs54,66,68,69.
Finally, there were no significant correlations between trait anxiety and the differential classification images. This implies that there are no face regions that play more important roles in high-anxious individuals compared to low-anxious individuals, at least with the statistical thresholds in the present study. Future hypothesis-driven studies might use the presented data to investigate whether some numerically high correlation coefficients are replicable (for example, between the eye-region-driven N170 and trait anxiety). Trait anxiety has been suggested to relate to different possible attentional biases21,92,93,94. ERP findings, however, remain mixed. Some studies reported a relationship between trait anxiety and increased early ERPs to fearful faces23,24,25, while other studies showed attenuated differences during mid-latency processing stages28,29, both effects24, or no relationship of the P1, N170, or EPN amplitudes and trait anxiety26,27. It might be that neural responses can be more reliably observed when comparing extreme groups of high versus low subclinical anxiety95 or in clinical samples96. Furthermore, fearful faces might elicit less threat-related responses, and therefore, fear-conditioning might be more potent in revealing relationships between ERP differences and trait anxiety (see93, but see94). Finally, other measures of individual differences in threat responses could reveal more systematic relationships between fear processing and anxiety, such as trait fearfulness (see Panitz et al., 2018) or other validated measures of anxiety (e.g., the STICSA, see96, or the GAD-7, see97).
We would like to note some limitations of our study and suggestions for future studies, some of which were already discussed above. We investigated responses during a specific face-unrelated task; other tasks might yield different findings. It would be interesting to compare emotion detection tasks with other tasks to better understand the role of specific features in detection performance. We used only one specific trait anxiety questionnaire and an unselected sample in an exploratory approach. Future studies might extend the investigation of interindividual differences.
Conclusion
In this study, we investigated the contributions of facial areas to differential amplitudes of the P1, N170, and EPN to fearful vs. neutral faces. We found that different facial regions, especially the eyes and mouth regions, led to increased ERP amplitudes to all faces, while we did not detect contributions of specific face regions to ERP differences between fearful and neutral faces. There was also no modulation of findings by trait anxiety. Thus, it can be concluded that the increased amplitudes of the N170 and EPN to fearful compared to neutral faces are not due to specific facial regions and are not driven by differences in trait anxiety scores.
Data availability
All data and stimuli are available at osf.io/n72w3. All stimuli were generated from images taken from the Radboud Faces Database73.
References
Ledoux, J. The Emotional Brain: The Mysterious Underpinnings of Emotional Life (Simon and Schuster, 1998).
Maratos, F. A. & Pessoa, L. What drives prioritized visual processing? A motivational relevance account. In Progress in Brain Research (Elsevier, 2019). https://doi.org/10.1016/bs.pbr.2019.03.028
Hajcak, G., MacNamara, A. & Olvet, D. M. Event-related potentials, emotion, and emotion regulation: An integrative review. Dev. Neuropsychol. 35, 129–155 (2010).
Fox, A. S., Oler, J. A., Tromp, D. P. M., Fudge, J. L. & Kalin, N. H. Extending the amygdala in theories of threat processing. Trends Neurosci. 38, 319–329 (2015).
Eimer, M. & Holmes, A. Event-related brain potential correlates of emotional face processing. Neuropsychologia 45, 15–31 (2007).
Hinojosa, J. A., Mercado, F. & Carretié, L. N170 sensitivity to facial expression: A meta-analysis. Neurosci. Biobehav. Rev. 55, 498–509 (2015).
Schindler, S. & Bublatzky, F. Attention and emotion: An integrative review of emotional face processing as a function of attention. Cortex 130, 362–386 (2020).
Bruchmann, M., Schindler, S. & Straube, T. The spatial frequency spectrum of fearful faces modulates early and mid-latency ERPs but not the N170. Psychophysiology 57, e13597 (2020).
Flevaris, A. V., Robertson, L. C. & Bentin, S. Using spatial frequency scales for processing face features and face configuration: An ERP analysis. Brain Res. 1194, 100–109 (2008).
Nakashima, T. et al. Early ERP components differentially extract facial features: Evidence for spatial frequency-and-contrast detectors. Neurosci. Res. 62, 225–235 (2008).
Tian, J. et al. The influence of spatial frequency content on facial expression processing: An ERP study using rapid serial visual presentation. Sci. Rep. 8, 2383 (2018).
Cauquil, A. S., Edmonds, G. E. & Taylor, M. J. Is the face-sensitive N170 the only ERP not affected by selective attention?. NeuroReport 11, 2167–2171 (2000).
Cecchini, M., Aceto, P., Altavilla, D., Palumbo, L. & Lai, C. The role of the eyes in processing an intact face and its scrambled image: A dense array ERP and low-resolution electromagnetic tomography (sLORETA) study. Soc. Neurosci. 8, 314–325 (2013).
Neath, K. N. & Itier, R. J. Fixation to features and neural processing of facial expressions in a gender discrimination task. Brain Cogn. 99, 97–111 (2015).
Neath-Tavares, K. N. & Itier, R. J. Neural processing of fearful and happy facial expressions during emotion-relevant and emotion-irrelevant tasks: A fixation-to-feature approach. Biol. Psychol. 119, 122–140 (2016).
Bruchmann, M., Schindler, S., Heinemann, J., Moeck, R. & Straube, T. Increased early and late neuronal responses to aversively conditioned faces across different attentional conditions. Cortex 142, 332–341 (2021).
Schindler, S., Bruchmann, M., Krasowski, C., Moeck, R. & Straube, T. Charged with a crime: The neuronal signature of processing negatively evaluated faces under different attentional conditions. Psychol. Sci. 32, 1311–1324 (2021).
Schindler, S., Heinemann, J., Bruchmann, M., Moeck, R. & Straube, T. No trait anxiety influences on early and late differential neuronal responses to aversively conditioned faces across three different tasks. Cogn. Affect. Behav. Neurosci. https://doi.org/10.3758/s13415-022-00998-x (2022).
Bishop, S. J. Trait anxiety and impoverished prefrontal control of attention. Nat. Neurosci. 12, 92–98 (2009).
Bishop, S. J., Jenkins, R. & Lawrence, A. D. Neural processing of fearful faces: Effects of anxiety are gated by perceptual capacity limitations. Cereb. Cortex 17, 1595–1603 (2007).
Bishop, S. J. & Forster, S. Trait anxiety, neuroticism, and the brain basis of vulnerability to affective disorder. In The Cambridge Handbook of Human Affective Neuroscience 553–574 (Cambridge University Press, 2013). https://doi.org/10.1017/CBO9780511843716.031
Quigley, L., Nelson, A. L., Carriere, J., Smilek, D. & Purdon, C. The effects of trait and state anxiety on attention to emotional images: An eye-tracking study. Cogn. Emot. 26, 1390–1411 (2012).
Bar-Haim, Y., Lamy, D. & Glickman, S. Attentional bias in anxiety: A behavioral and ERP study. Brain Cogn. 59, 11–22 (2005).
Holmes, A., Nielsen, M. K. & Green, S. Effects of anxiety on the processing of fearful and happy faces: An event-related potential study. Biol. Psychol. 77, 159–173 (2008).
Williams, L. M. et al. Neural biases to covert and overt signals of fear: Dissociation by trait anxiety and depression. J. Cogn. Neurosci. 19, 1595–1608 (2007).
Morel, S., George, N., Foucher, A., Chammat, M. & Dubal, S. ERP evidence for an early emotional bias towards happy faces in trait anxiety. Biol. Psychol. 99, 183–192 (2014).
Schindler, S., Richter, T. S., Bruchmann, M., Busch, N. A. & Straube, T. Effects of task load, spatial attention, and trait anxiety on neuronal responses to fearful and neutral faces. Psychophysiology 59, e14114 (2022).
Steinweg, A., Schindler, S., Bruchmann, M., Moeck, R. & Straube, T. Reduced early fearful face processing during perceptual distraction in high trait anxious participants. Psychophysiology 58, e13819 (2021).
Walentowska, W. & Wronka, E. Trait anxiety and involuntary processing of facial emotions. Int. J. Psychophysiol. 85, 27–36 (2012).
Hopfinger, J. B. & Mangun, G. R. Reflexive attention modulates processing of visual stimuli in human extrastriate cortex. Psychol. Sci. 9, 441–447 (1998).
Luck, S. J. & Hillyard, S. A. Electrophysiological correlates of feature analysis during visual search. Psychophysiology 31, 291–308 (1994).
Vogel, E. K. & Luck, S. J. The visual N1 component as an index of a discrimination process. Psychophysiology 37, 190–203 (2000).
Allison, T., Puce, A., Spencer, D. D. & McCarthy, G. Electrophysiological studies of human face perception. I: Potentials generated in occipitotemporal cortex by face and non-face stimuli. Cereb. Cortex 9, 415–430 (1999).
Bentin, S., Allison, T., Puce, A., Perez, E. & McCarthy, G. Electrophysiological studies of face perception in humans. J. Cogn. Neurosci. 8, 551–565 (1996).
Li, S., Li, P., Wang, W., Zhu, X. & Luo, W. The effect of emotionally valenced eye region images on visuocortical processing of surprised faces. Psychophysiology 55, e13039 (2018).
Santos, I. M., Iglesias, J., Olivares, E. I. & Young, A. W. Differential effects of object-based attention on evoked potentials to fearful and disgusted faces. Neuropsychologia 46, 1468–1479 (2008).
Schindler, S., Tirloni, C., Bruchmann, M. & Straube, T. Face and emotional expression processing under continuous perceptual load tasks: An ERP study. Biol. Psychol. https://doi.org/10.1016/j.biopsycho.2021.108056 (2021).
Smith, E., Weinberg, A., Moran, T. & Hajcak, G. Electrocortical responses to NIMSTIM facial expressions of emotion. Int. J. Psychophysiol. 88, 17–25 (2013).
Rossignol, M., Philippot, P., Bissot, C., Rigoulot, S. & Campanella, S. Electrophysiological correlates of enhanced perceptual processes and attentional capture by emotional faces in social anxiety. Brain Res. 1460, 50–62 (2012).
Schindler, S., Bruchmann, M., Gathmann, B., Moeck, R. & Straube, T. Effects of low-level visual information and perceptual load on P1 and N170 responses to emotional expressions. Cortex 136, 14–27 (2021).
Schindler, S., Wolf, M.-I., Bruchmann, M. & Straube, T. Fearful face scrambles increase early visual sensory processing in the absence of face information. Eur. J. Neurosci. 53, 2703–2712 (2021).
Eimer, M. The Face-sensitive N170 component of the event-related brain potential. In The Oxford Handbook of Face Perception 329–344 (Oxford University Press, 2011).
Bruchmann, M., Schindler, S., Dinyarian, M. & Straube, T. The role of phase and orientation for ERP modulations of spectrum-manipulated fearful and neutral faces. Psychophysiology 59, e13974 (2022).
Itier, R. J. & Neath-Tavares, K. N. Effects of task demands on the early neural processing of fearful and happy facial expressions. Brain Res. 1663, 38–50 (2017).
Herbert, C., Sfaerlea, A. & Blumenthal, T. Your emotion or mine: Labeling feelings alters emotional face perception—An ERP study on automatic and intentional affect labeling. Front. Hum. Neurosci. 7, 1–11 (2013).
Pegna, A. J., Landis, T. & Khateb, A. Electrophysiological evidence for early non-conscious processing of fearful facial expressions. Int. J. Psychophysiol. 70, 127–136 (2008).
Smith, M. L. Rapid processing of emotional expressions without conscious awareness. Cereb. Cortex 22, 1748–1760 (2012).
Calvo, M. G. & Beltrán, D. Brain lateralization of holistic versus analytic processing of emotional facial expressions. Neuroimage 92, 237–247 (2014).
Piepers, D. & Robbins, R. A review and clarification of the terms “holistic”, “configural”, and “relational” in the face perception literature. Front. Psychol. 3, 1–11 (2012).
Rossion, B. The composite face illusion: A whole window into our understanding of holistic face perception. Vis. Cogn. 21, 139–253 (2013).
da Silva, E. B. et al. Something to sink your teeth into: The presence of teeth augments ERPs to mouth expressions. Neuroimage 127, 227–241 (2016).
Harris, A. & Nakayama, K. Rapid adaptation of the M170 response: Importance of face parts. Cereb. Cortex 18, 467–476 (2008).
Schyns, P. G., Petro, L. S. & Smith, M. L. Transmission of facial expressions of emotion co-evolved with their efficient decoding in the brain: Behavioral and brain evidence. PLoS ONE 4, e5625 (2009).
Schyns, P. G., Petro, L. S. & Smith, M. L. Dynamics of visual information integration in the brain for categorizing facial expressions. Curr. Biol. 17, 1580–1585 (2007).
Itier, R. J., Van Roon, P. & Alain, C. Species sensitivity of early face and eye processing. Neuroimage 54, 705–713 (2011).
Parkington, K. B. & Itier, R. J. One versus two eyes makes a difference! Early face perception is modulated by featural fixation and feature context. Cortex 109, 35–49 (2018).
Wieser, M. J., Gerdes, A. B. M., Greiner, R., Reicherts, P. & Pauli, P. Tonic pain grabs attention, but leaves the processing of facial expressions intact—Evidence from event-related brain potentials. Biol. Psychol. 90, 242–248 (2012).
Durston, A. J. & Itier, R. J. The early processing of fearful and happy facial expressions is independent of task demands—Support from mass univariate analyses. Brain Res. 1765, 147505 (2021).
Schindler, S., Bruchmann, M., Bublatzky, F. & Straube, T. Modulation of face- and emotion-selective ERPs by the three most common types of face image manipulations. Soc. Cogn. Affect. Neurosci. 14, 493–503 (2019).
Schupp, H. T., Junghöfer, M., Weike, A. I. & Hamm, A. O. The selective processing of briefly presented affective pictures: An ERP analysis. Psychophysiology 41, 441–449 (2004).
McFadyen, J., Mermillod, M., Mattingley, J. B., Halász, V. & Garrido, M. I. A rapid subcortical amygdala route for faces irrespective of spatial frequency and emotion. J. Neurosci. 37, 3864–3874 (2017).
Watier, N. & DeGagne, B. Spatial frequency thresholds for detecting latent facial signals of threat. Perception https://doi.org/10.1177/0301006619828254 (2019).
Vuilleumier, P., Armony, J. L., Driver, J. & Dolan, R. J. Distinct spatial frequency sensitivities for processing faces and emotional expressions. Nat. Neurosci. 6, 624–631 (2003).
Hedger, N., Adams, W. J. & Garner, M. Fearful faces have a sensory advantage in the competition for awareness. J. Exp. Psychol. Hum. Percept. Perform. 41, 1748–1757 (2015).
Gosselin, F. & Schyns, P. G. Bubbles: A technique to reveal the use of information in recognition tasks. Vis. Res. 41, 2261–2271 (2001).
van Rijsbergen, N. J. & Schyns, P. G. Dynamics of trimming the content of face representations for categorization in the brain. PLOS Comput. Biol. 5, e1000561 (2009).
Schyns, P. G., Jentzsch, I., Johnson, M., Schweinberger, S. R. & Gosselin, F. A principled method for determining the functionality of brain responses. NeuroReport 14, 1665–1669 (2003).
Smith, M. L., Gosselin, F. & Schyns, P. G. Receptive fields for flexible face categorizations. Psychol. Sci. 15, 753–761 (2004).
Smith, M. L., Cottrell, G. W., Gosselin, F. & Schyns, P. G. Transmitting and decoding facial expressions. Psychol. Sci. 16, 184–189 (2005).
Schindler, S., Bruchmann, M., Steinweg, A.-L., Moeck, R. & Straube, T. Attentional conditions differentially affect early, intermediate and late neural responses to fearful and neutral faces. Soc. Cogn. Affect. Neurosci. 15, 765–774 (2020).
Rousselet, G. A., Ince, R. A. A., van Rijsbergen, N. J. & Schyns, P. G. Eye coding mechanisms in early human face event-related potentials. J. Vis. 14, 7 (2014).
Jaworska, K. et al. Healthy aging delays the neural processing of face features relevant for behavior by 40 ms. Hum. Brain Mapp. 41, 1212–1225 (2020).
Langner, O. et al. Presentation and validation of the Radboud Faces Database. Cogn. Emot. 24, 1377–1388 (2010).
Brainard, D. H. The psychophysics toolbox. Spat. Vis. 10, 433–436 (1997).
Kleiner, M., Brainard, D. H. & Pelli, D. G. What’s new in Psychtoolbox-3?. Perception 36, 14 (2007).
Cornelissen, F. W., Peters, E. M. & Palmer, J. The Eyelink Toolbox: Eye tracking with MATLAB and the Psychophysics Toolbox. Behav. Res. Methods Instrum. Comput. 34, 613–617 (2002).
Hautzinger, M., Keller, F., Kühner, C. & Beck, A. T. Beck Depressions-Inventar: BDI II; Manual (Pearson Assessment, 2009).
Spielberger, C. D. & Gorsuch, R. L. State-Trait Anxiety Inventory (Form Y) (Consulting Psychologists Press, 1983).
Körner, A. et al. Persönlichkeitsdiagnostik mit dem NEO-Fünf-Faktoren-Inventar: Die 30-Item-Kurzversion (NEO-FFI-30). PPmP Psychother. Psychosom. Med. Psychol. 58, 238–245 (2008).
Ille, N., Berg, P. & Scherg, M. Artifact correction of the ongoing EEG using spatial filters based on artifact and brain signal topographies. J. Clin. Neurophysiol. 19, 113–124 (2002).
Jeffreys, H. Theory of Probability (UK Oxford University Press, 1961).
Efron, B. & Tibshirani, R. J. An Introduction to the Bootstrap (CRC Press, 1994).
Schindler, S., Caldarone, F., Bruchmann, M., Moeck, R. & Straube, T. Time-dependent effects of perceptual load on processing fearful and neutral faces. Neuropsychologia 146, 107529 (2020).
Itier, R. J. & Batty, M. Neural bases of eye and gaze processing: The core of social cognition. Neurosci. Biobehav. Rev. 33, 843–863 (2009).
Straube, T., Dietrich, C., Mothes-Lasch, M., Mentzel, H.-J. & Miltner, W. H. R. The volatility of the amygdala response to masked fearful eyes. Hum. Brain Mapp. 31, 1601–1608 (2010).
Whalen, P. J. et al. Human amygdala responsivity to masked fearful eye whites. Science 306, 2061 (2004).
Frühholz, S., Jellinghaus, A. & Herrmann, M. Time course of implicit processing and explicit processing of emotional faces and emotional words. Biol. Psychol. 87, 265–274 (2011).
Maffei, A. et al. Spatiotemporal dynamics of covert versus overt processing of happy, fearful and sad facial expressions. Brain Sci. 11, 942 (2021).
Aguado, L. et al. Modulation of early perceptual processing by emotional expression and acquired valence of faces: An ERP study. J. Psychophysiol. 26, 29–41 (2012).
Beckes, L., Coan, J. A. & Morris, J. P. Implicit conditioning of faces via the social regulation of emotion: ERP evidence of early attentional biases for security conditioned faces: Implicit conditioning and ERP. Psychophysiology 50, 734–742 (2013).
Camfield, D. A., Mills, J., Kornfeld, E. J. & Croft, R. J. Modulation of the N170 with classical conditioning: The use of emotional imagery and acoustic startle in healthy and depressed participants. Front. Hum. Neurosci. 10, 337 (2016).
Bar-Haim, Y., Lamy, D., Pergamin, L., Bakermans-Kranenburg, M. J. & van Ijzendoorn, M. H. Threat-related attentional bias in anxious and nonanxious individuals: A meta-analytic study. Psychol. Bull. 133, 1–24 (2007).
Cisler, J. M. & Koster, E. H. W. Mechanisms of attentional biases towards threat in anxiety disorders: An integrative review. Clin. Psychol. Rev. 30, 203–216 (2010).
Wieser, M. J. & Keil, A. Attentional threat biases and their role in anxiety: A neurophysiological perspective. Int. J. Psychophysiol. 153, 148–158 (2020).
Stegmann, Y., Ahrens, L., Pauli, P., Keil, A. & Wieser, M. J. Social aversive generalization learning sharpens the tuning of visuocortical neurons to facial identity cues. Elife 9, e55204 (2020).
Torrents-Rodas, D. et al. No effect of trait anxiety on differential fear conditioning or fear generalization. Biol. Psychol. 92, 185–190 (2013).
Kappenman, E. S., Geddert, R., Farrens, J. L., McDonald, J. J. & Hajcak, G. Recoiling from threat: Anxiety is related to heightened suppression of threat, not increased attention to threat. Clin. Psychol. Sci. 9, 434–448 (2021).
Acknowledgements
We acknowledge support from the Open Access Publication Fund of the University of Münster.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Author information
Contributions
M.B.: Conceptualization, methodology, software, formal analysis, visualization, writing—original draft, writing—review and editing. L.M..: Conceptualization, methodology, investigation, visualization, writing—original draft, writing—review and editing. S.S.: Writing—review and editing, supervision. T.S.: Conceptualization, supervision, writing—review and editing.
Ethics declarations
Competing interests
The authors declare no competing interests.
Cite this article
Bruchmann, M., Mertens, L., Schindler, S. et al. Potentiated early neural responses to fearful faces are not driven by specific face parts. Sci Rep 13, 4613 (2023). https://doi.org/10.1038/s41598-023-31752-z