Article

Is There a Difference in Facial Emotion Recognition after Stroke with vs. without Central Facial Paresis?

by Anna-Maria Kuttenreich 1,2,3,4,5,*, Harry von Piekartz 6 and Stefan Heim 1,2,7

1 Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
2 Department of Neurology, Medical Faculty, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
3 Department of Otorhinolaryngology, Jena University Hospital, Am Klinikum 1, 07747 Jena, Germany
4 Facial-Nerve-Center Jena, Jena University Hospital, Am Klinikum 1, 07747 Jena, Germany
5 Center of Rare Diseases Jena, Jena University Hospital, Am Klinikum 1, 07747 Jena, Germany
6 Department of Physical Therapy and Rehabilitation Science, Osnabrück University of Applied Sciences, Albrechtstr. 30, 49076 Osnabrück, Germany
7 Institute of Neuroscience and Medicine (INM-1), Forschungszentrum Jülich, Leo-Brand-Str. 5, 52428 Jülich, Germany
* Author to whom correspondence should be addressed.
Diagnostics 2022, 12(7), 1721; https://doi.org/10.3390/diagnostics12071721
Submission received: 12 February 2022 / Revised: 6 July 2022 / Accepted: 10 July 2022 / Published: 15 July 2022
(This article belongs to the Special Issue Evidence-Based Diagnosis and Management of Facial Nerve Disorders)

Abstract: The Facial Feedback Hypothesis (FFH) states that facial emotion recognition is based on the imitation of facial emotional expressions and the processing of physiological feedback. In the light of limited and contradictory evidence, this hypothesis is still being debated. Therefore, in the present study, emotion recognition was tested in patients with central facial paresis after stroke. Performance in facial vs. auditory emotion recognition was assessed in patients with vs. without facial paresis. The accuracy of objective facial emotion recognition was significantly lower in patients with vs. without facial paresis and also in comparison to healthy controls. Moreover, for patients with facial paresis, the accuracy measure for facial emotion recognition was significantly worse than that for auditory emotion recognition. Finally, in patients with facial paresis, the subjective judgements of their own facial emotion recognition abilities differed strongly from their objective performances. This pattern of results demonstrates a specific deficit in facial emotion recognition in central facial paresis and thus provides support for the FFH and points out certain effects of stroke.

1. Introduction

Emotion recognition is omnipresent in social interactions [1] and represents an important social competence [2]. Faces provide relevant clues for the recognition of emotions [2,3]. One explanation of how emotions are recognised from faces is provided by the Facial Feedback Hypothesis (FFH) [4]. The present study therefore compares stroke patients with vs. without unilateral central facial paresis, i.e., the partial inability to perform facial movements [5], in order to test the FFH prediction of a specific deficit of visual facial emotion recognition in individuals with central facial paresis.

Emotion Processing and the Role of Facial Feedback

Facial emotion expressions are part of nonverbal communication [3] and are regarded as some of the most important nonverbal features in the identification of emotions [6]. Facial expression can be highly variable due to the precise control of the different facial muscles [1] and their voluntary or affective control [7], although the basic emotions framework considers a set of emotions to be highly elementary, unique and independent of culture, time and place [8]. These basic emotions are: anger, disgust, fear, joy, sadness and surprise [9,10]. Each of the basic emotions is characterized by specific patterns of facial muscle activity [8,11]. These congenital, ubiquitous basic emotions [12] are typically used to assess (facial) emotion recognition [13].
The accuracy of emotion recognition varies, depending on the particular emotion presented. Joy is detected significantly more accurately and quickly than all other basic emotions, whereas fear is detected significantly less accurately and more slowly than the other emotions [14]. The basic emotions of surprise and anger, as well as disgust and sadness, are similarly well-identified in terms of accuracy (performance listed in descending order) [14]. Besides differences per emotion, emotion recognition depends on sex and age. Women are faster at facial emotion recognition than men [15]. With increasing age, emotion recognition performance decreases [16]. It has not yet been conclusively clarified whether the processing of emotions is innate [4,17] or whether a concept of emotions must first be learned [18]. A combination is also conceivable, if basic emotions are considered as biologically anchored [12] and innate [17], while all of the other, more complex emotions [8] have to be learned first [12]. The localisation of emotion processing is also a matter of controversy, with evidence for right, left, or left and right hemispheric activation [19]. Dominance of the right hemisphere has been described historically [20], whereas recent evidence has highlighted a combination of different neuronal networks with different lateralization [19].
In emotion processing, the importance of afferent information from the body is emphasised, e.g., facial expression [18]. In this sense, the FFH provides a theoretical account of the process of facial emotion recognition. It postulates that other persons’ emotions are recognised by one’s own facial information [4]. The decoding requires the imitation of the facial expression of the other person and the corresponding proprioceptive facial feedback [21,22] (‘facial reflex’ is a synonym for ‘facial feedback’ [11]). Neal and Chartrand [22] summarised the working steps of the FFH: (1) imitation of the facial expression of the communication partner (discrete, unconscious, fast, automated and specific to the emotion); (2) transmission of afferent information from the face to the brain; and (3) experience and recognition of the emotion [22].
Whereas spontaneous, quick and unobtrusive imitation with one’s own face is basically unproblematic [23], pathological conditions affecting facial integrity may impair the ability to initiate or imitate the facial expressions corresponding to the basic emotions. Such conditions include, for example, facial paresis, a unilateral or bilateral palsy of the facial musculature following a peripheral or central defect [24]. The central form of facial paresis considered in this study typically presents unilaterally, contralateral to the central lesion [25], after stroke [26].
Whether and precisely what role facial feedback plays in emotion recognition has not yet been conclusively clarified. Research has produced evidence both for and against the FFH in cases of limited facial feedback (whether due to illness or artificially induced).
Facial emotion recognition in patients with peripheral facial paresis/paralysis was examined by Konnerth et al. [27] and Storbeck et al. [28]. Konnerth et al. [27] reported that patients achieved lower accuracy values than healthy controls, although the difference was not significant. Storbeck et al. [28] likewise found that accuracy in facial emotion recognition did not differ significantly between patients with facial paresis and healthy controls. However, in both studies, visual emotion recognition was significantly slower than in the control subjects [27,28]. More specifically, Korb et al. [29] reported differences depending on the paralysed side of the face, with facial emotion recognition being more affected in patients with left-sided rather than right-sided facial palsy. Such findings might be taken as supportive evidence for the FFH, as persons with intact feedback show faster facial emotion recognition times [22,30,31,32,33]. This slowed emotion recognition in patients with peripheral facial palsy could be explained by Niedenthal et al. [33], according to whom self-experienced emotions are recognized earlier than those that are not self-perceived [33]. In contrast, Keillor et al. [34] did not report differences in the accuracy of emotion naming, discrimination or matching tasks in their single case study of a patient with bilateral facial paralysis in Guillain–Barré syndrome, nor did Bogart and Matsumoto [35] report facial emotion recognition deficits in patients with congenital bilateral facial paresis in Moebius syndrome. However, Calder et al. [36] did observe differences in the accuracy of emotion recognition with respect to at least one basic emotion in patients with Moebius syndrome.
A different way of investigating facial feedback in healthy participants is to inject botulinum toxin into the facial muscles to paralyse them temporarily. Different studies using this method showed changed emotion recognition in terms of accuracy and time [22,32]. These results may point to a direct link between facial feedback and emotion processing [32].
Besides limited facial movements due to experimental induction and peripheral facial palsy, other disorders could also affect (1) facial movements and (2) facial emotion recognition—for instance, central facial palsy after stroke and Parkinson’s disease. Stroke occurs suddenly due to disturbed blood flow and oxygen deficiency (ischemic stroke) or bleeding (hemorrhagic stroke) in the brain and leads to individual disabilities [37], whereas Parkinson’s disease is a neurodegenerative disorder involving loss of dopamine in the substantia nigra, resulting in the typical symptoms of rigidity, tremor and bradykinesia [38]. Both central facial palsy after stroke [26,39] and Parkinson’s disease [40,41,42] could result in similar effects, i.e., reduced facial expression and therefore reduced facial feedback. Following the FFH, facial feedback based on facial integrity is needed for facial emotion recognition [23]. Both in stroke [43] and in Parkinson’s disease [41], facial emotion recognition could be impaired. However, there is not necessarily a direct correlation between limitations in facial expression and facial emotion recognition, at least in Parkinson’s disease [41].
In summary, there is evidence that patients with limited facial feedback and facial mimicry abilities (e.g., in peripheral facial paresis) are potentially affected by limited facial emotion recognition. To date, to the best of our knowledge, patients with peripheral facial palsy have been studied, whereas patients with central facial palsy have been overlooked.
The care of patients with central facial palsy is insufficient and rehabilitation guidelines are required [44]. To improve treatment and establish guidelines, deficits or remaining abilities must be identified first. To this end, we designed a study to assess facial emotion recognition abilities in patients with central facial palsy.
Consequently, the aim of the study was to test facial emotion recognition in patients with central facial paresis after stroke in terms of accuracy and time with visually presented, i.e., facial, stimuli produced by healthy subjects. Testing different modalities (facial and auditory) in two patient groups (with or without facial paresis after stroke) allows assessment of whether there is a general deficit in emotion recognition—which is a possibility after stroke [43]—or whether only one particular modality is (more) affected. If there are no deficits in emotion recognition at all, i.e., if the performance is comparable to that of healthy control subjects, it can be assumed that emotion recognition is intact. Accordingly, the primary research question was: Can patients with central facial paresis after stroke recognise facial emotions?

2. Materials and Methods

2.1. Participants

Three groups of participants were considered for this study: (1) patients with unilateral central facial paresis after stroke, (2) patients without facial paresis after stroke and (3) healthy subjects. The data for the patient groups (1) and (2) were collected within the study (data are available from the authors on request), whereas the reference values for the healthy subject group (3) were already available [45,46,47] and served for an additional comparison.
The inclusion and exclusion criteria are summarised in Table 1. The patients were referred by various cooperation partners, hospitals and local practices for speech–language therapy. Recruitment and data collection took place in the period from 22 February until 14 May 2019 in Germany.
A total of 67 patients were recruited. Four of these were drop-out cases (one case: disorientation; one case: suspected bucco–facial apraxia with no possibility of assessing facial paresis; two cases: antidepressant medication with suspected altered emotional regulation). The remaining 63 patients were assigned to the study group (patients with central facial paresis, n = 34) or the control group (patients without facial paresis, n = 29) according to their diagnosis of facial paresis. Sociodemographic data and information on lesions, facial paresis, general mental capacities and aphasia for the study and control groups are given in Table A1, Table A2, Table A3, Table A5 and Table A6 (Appendix A).
The study was approved by the local ethics committee (key: EK 271/18) of the Medical Faculty at RWTH Aachen University, and all regulations of the ethics committee were implemented. All experiments were performed in accordance with the relevant guidelines and regulations. All participants signed an informed consent form after receiving detailed information.

2.2. Materials

For both facial emotion recognition and auditory emotion recognition, the same conditions were set, i.e., an item was presented (visually or auditorily) and the patients had ten seconds to respond. Several answer options were available. The respective software systems recorded accuracy and time. For both modalities, a pre-test with ten items (initially randomized, later presented in the same order) was performed. The pre-test ensured that the task was understood [48] (see, also, Appendix B).

2.2.1. Visual Facial Emotion Recognition

In our study, we opted to use the Myfacetraining (MFT) Program (CRAFTA Cranio Facial Therapy Academy, Hamburg, Germany) [47,49], a standard test of the accuracy of and time taken for facial emotion recognition [47,49]. Forty-two portraits, each showing a person expressing a basic emotion with their face, were presented on a screen. The person was first shown in a neutral position before changing to an emotional facial expression (basic emotion). Six answer options, corresponding to the six basic emotions, were displayed on the screen [47] (see, also, Appendix B).

2.2.2. Auditory Emotion Recognition

In addition to faces, voices are the most important modality in emotional communication [1]. A subset of the Montreal Affective Voices (MAVs) [45] was used for the assessment. These are emotional, non-linguistic, vocal expressions of /a/ (to be compared with a as in apple, British English). Sixty items for the six basic emotions [45] were used. The Montreal Affective Voices were presented in an experiment specially programmed in the software PsychoPy, version 3.0.0b9 [50] (see, also, Appendix B).

2.2.3. Subjective Facial Emotion Recognition: Self-Assessment Questionnaires, Emotion Recognition

Coulson et al. [51] asked relatives of patients with facial paresis for their assessments of emotional recognition. Based on this, two standardized questionnaires were designed for the present study which enabled the systematic collection of subjective facial emotion recognition data. The Self-Assessment Questionnaires Emotion Recognition Accuracy and Time were used to document self-assessment of facial emotion recognition of the six basic emotions (anger, disgust, fear, joy, sadness and surprise) [51]. To allow a differentiated evaluation, one questionnaire was developed to assess accuracy and another to assess the time taken for facial emotion recognition. The questionnaires assess possible changes between pre-morbid and current abilities per basic emotion. The question featured in the questionnaires was of the following form: How well do you recognize the following feelings in other people’s faces? One of three answer options could be selected for each questionnaire. For Accuracy, the patient evaluated whether the basic emotion in question was recognised with more difficulty, just as well as or more easily than before the stroke. For Time, the patient indicated whether the basic emotion was detected more slowly, as fast as or faster than before the stroke. For deteriorations (indicated by the response options more difficult or slower), a score of −1 was assigned. If the patient did not notice any changes (response options just as well as or just as fast as), zero points (0) were recorded. For improvements (answer options more easily or faster), the patient achieved a score of +1, resulting in a total score between −6 and +6 per questionnaire.
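To make the scoring concrete, a minimal sketch in Python is given below; the function and the rating labels are our own illustration, not part of the published questionnaires:

```python
# Illustrative scoring of one Self-Assessment Questionnaire:
# each of the six basic emotions is rated on a three-point scale,
# with deteriorations scored -1, no change 0 and improvements +1.

RATING_SCORES = {"worse": -1, "unchanged": 0, "better": +1}
BASIC_EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

def questionnaire_score(ratings: dict) -> int:
    """Sum the per-emotion scores; the total ranges from -6 to +6."""
    return sum(RATING_SCORES[ratings[emotion]] for emotion in BASIC_EMOTIONS)

# Example: a patient reports joy as unchanged and all other emotions as worse.
ratings = {emotion: "worse" for emotion in BASIC_EMOTIONS}
ratings["joy"] = "unchanged"
print(questionnaire_score(ratings))  # -5
```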

2.2.4. Sunnybrook Facial Grading System for Diagnosing Facial Paresis

In order to answer the main research question, all patients were examined in a standardised way to identify possible facial paresis. Only this allowed the patients to be divided into the study group (participants with central facial paresis) and the control group (participants without central facial paresis). The Sunnybrook Facial Grading System [52,53] was used for the standardised assessment and diagnosis of facial paresis or paralysis. This measurement method is explicitly recommended [54]. It is also considered the current standard in the evaluation of facial paresis [55] and has been used in various studies (e.g., [54,56,57,58,59,60,61,62]). Ross et al. [52] published the original version of the Sunnybrook Facial Grading System in 1996, which was implemented in the present study (German version [53]). For this purpose, a video was made of each patient with an Apple iPod touch (camera at right angles, at the individual height of the chewing plane, 150 cm from the patient’s chin), in which the patients were asked in a standardised manner to show their face at rest and to perform voluntary movements with their face (raise eyebrows, close eyes gently, smile with open mouth, show teeth, pucker lips). The videos were evaluated by a speech–language therapist (see, also, Appendix B).
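For orientation, the composite score of the Sunnybrook Facial Grading System can be sketched as follows. This is a simplified illustration following the weighting of the published scale [52]; the item lists are abbreviated, and the original scoring sheet remains authoritative:

```python
# Simplified sketch of the Sunnybrook Facial Grading System composite score
# (0-100; higher = better facial function), following the published weighting:
# composite = voluntary movement score - resting symmetry score - synkinesis score.

def sunnybrook_composite(resting, voluntary, synkinesis):
    """resting: 3 items (eye, cheek, mouth), scored 0-1/0-2/0-1;
    voluntary: 5 standard movements, each scored 1-5;
    synkinesis: the same 5 movements, each scored 0-3."""
    resting_score = 5 * sum(resting)      # 0-20
    voluntary_score = 4 * sum(voluntary)  # 20-100
    synkinesis_score = sum(synkinesis)    # 0-15
    return voluntary_score - resting_score - synkinesis_score

# A fully symmetrical face with complete voluntary movement scores 100,
# consistent with the maximum observed in the control group (Table A3).
print(sunnybrook_composite([0, 0, 0], [5, 5, 5, 5, 5], [0, 0, 0, 0, 0]))  # 100
```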

2.3. Statistical Analysis

Two-factorial ANOVAs with post-hoc t-tests were performed with group (with vs. without facial paresis) as the between-subject factor and modality (facial vs. auditory emotion recognition) as the within-subject factor. Accuracy and time taken for emotion recognition were considered as dependent variables. In order to compare the empirical data obtained in the present study with already available normative data for healthy controls (without stroke and without facial paresis), a series of t-tests was subsequently performed separately for accuracy and time. To compare facial and auditory emotion recognition in terms of accuracy and time between patients and healthy subjects, one-sample t-tests were performed. For the comparison between patients with and without facial paresis, two-factorial ANOVAs and (post-hoc) t-tests for independent samples were run. t-tests for dependent samples were performed to compare facial and auditory emotion recognition within patients with and without facial paresis. To analyse subjective emotion recognition in terms of accuracy and time, one-sample t-tests were conducted. To compare accuracy and time, t-tests for dependent samples were performed.
Benjamini–Hochberg correction was applied if more than one t-test was conducted.
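The analysis software is not stated in the text; as an illustration of this pipeline, the following sketch uses the pingouin and statsmodels packages on synthetic data, and all column names and values are hypothetical stand-ins:

```python
import numpy as np
import pandas as pd
import pingouin as pg
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Synthetic long-format data standing in for the real dataset: one row per
# patient and modality, with group sizes and rough means as in the study.
rng = np.random.default_rng(0)
rows = []
for pid in range(63):
    group = "paresis" if pid < 34 else "no_paresis"
    for modality, mean in (("facial", 28 if group == "paresis" else 41),
                           ("auditory", 47)):
        rows.append({"id": pid, "group": group, "modality": modality,
                     "accuracy": rng.normal(mean, 12)})
df = pd.DataFrame(rows)

# Two-factorial mixed ANOVA: group (between-subject) x modality (within-subject).
aov = pg.mixed_anova(data=df, dv="accuracy", within="modality",
                     subject="id", between="group")

# One post-hoc contrast: groups compared within the facial modality.
facial = df[df["modality"] == "facial"]
t, p = stats.ttest_ind(facial.loc[facial.group == "paresis", "accuracy"],
                       facial.loc[facial.group == "no_paresis", "accuracy"])

# Benjamini-Hochberg correction across all post-hoc p-values.
pvals = [p]  # append the p-values of the remaining post-hoc tests here
reject, p_corrected, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
```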

3. Results

The results for objective (accuracy and time) and subjectively perceived success in emotion recognition are summarised in Figure 1, Figure 2, Figure 3, Figure 4 and Table A4 (Appendix A).

3.1. Accuracy of Facial Emotion Recognition

The ANOVA for accuracy revealed a main effect of group (F(1;61) = 6.620; p = 0.013), a main effect of modality (F(1;61) = 96.535; p < 0.001) and a group × modality interaction (F(1;61) = 18.330; p < 0.001): participants with central facial paresis recognised visually presented basic emotions significantly worse (reduced accuracy) than participants without facial paresis (t(49.425) = −3.767; p < 0.001; after correction p = 0.002) and than healthy controls (t(33) = −22.888; p < 0.001; after correction p = 0.002). Participants without facial paresis recognised visually presented basic emotions significantly worse (reduced accuracy) than healthy controls (t(28) = −10.476; p < 0.001; after correction p = 0.002) (Figure 1).

3.2. Accuracy of Auditory Emotion Recognition

Participants with central facial paresis recognised auditorily presented basic emotions significantly worse (reduced accuracy) compared to healthy controls (t(33) = −13.258; p < 0.001; after correction p = 0.002). Participants without facial paresis recognised auditorily presented basic emotions significantly worse (reduced accuracy) compared to healthy controls (t(28) = −11.259; p < 0.001; after correction p = 0.002). Participants with vs. without central facial paresis did not differ significantly in auditory emotion recognition (accuracy) (t(61) = 0.616; p = 0.540; after correction p = 0.540) (Figure 2).

3.3. Comparison of Accuracy of Facial and Auditory Emotion Recognition

Participants with central facial paresis recognised visually presented basic emotions significantly worse (reduced accuracy) than auditorily presented basic emotions (t(33) = −11.252; p < 0.001; after correction p = 0.002). Participants without facial paresis recognised visually presented basic emotions significantly worse (reduced accuracy) than auditorily presented basic emotions (t(28) = −3.485; p = 0.002; after correction p = 0.002).

3.4. Time Taken for Facial Emotion Recognition

The ANOVA for time revealed no significant main effect of group (F(1;61) = 2.797; p = 0.100), no significant main effect of modality (F(1;61) = 3.311; p = 0.074) and no significant group × modality interaction (F(1;61) = 3.148; p = 0.081): participants with central facial paresis did not recognise visually presented basic emotions significantly more slowly than participants without facial paresis (t(61) = 0.414; p = 0.680; after correction p = 0.680). Participants with central facial paresis recognised visually presented basic emotions faster (shorter response times) than healthy controls (t(33) = −2.442; p = 0.020; not significant after correction, p = 0.060). Participants without facial paresis recognised visually presented basic emotions significantly faster (shorter response times) than healthy controls (t(28) = −2.390; p = 0.024; after correction p = 0.036) (Figure 3).

3.5. Time Taken for Auditory Emotion Recognition

Participants with vs. without central facial paresis did not differ significantly with respect to the average time taken for auditory emotion recognition (t(61) = −1.851; p = 0.069) (Figure 4).

3.6. Comparison of Time Taken for Facial and Auditory Emotion Recognition

Participants with central facial paresis recognised visually presented basic emotions faster (shorter response times) than auditorily presented basic emotions (t(33) = −2.269; p = 0.030; not significant after correction, p = 0.060). In participants without facial paresis, the time taken did not differ significantly between visually and auditorily presented basic emotions (t(28) = −0.041; p = 0.968; after correction p = 0.968).

3.7. Subjective Judgement of Emotion Recognition from the Perspective of Participants with Central Facial Paresis

In participants with central facial paresis, both the accuracy of facial emotion recognition (mean = −0.71 ± 1.90) and the time taken for facial emotion recognition (mean = −1.91 ± 2.90) were subjectively perceived as significantly limited (accuracy: t(33) = −2.167; p = 0.038; after correction p = 0.038; time: t(33) = −3.849; p = 0.001; after correction p = 0.003). Participants with central facial paresis judged themselves to be significantly more restricted in terms of the time taken for facial emotion recognition than in terms of accuracy (t(33) = 2.689; p = 0.011; after correction p = 0.017) (Figure 5).

3.8. Further Analysis

In order to verify the robustness of the identified pattern, the following additional control analyses were performed.
A correlation calculation (Pearson’s product moment correlation) between objective accuracy and objective time taken for facial emotion recognition in patients with and without central facial paresis was performed. The accuracy of and the time taken for facial emotion recognition in patients with central facial paresis were positively correlated with each other (r = 0.729; p < 0.001). The average accuracy and the average time taken for facial emotion recognition in patients without facial paresis were not significantly correlated with each other (r = 0.291; p = 0.126).
Furthermore, a correlation calculation (Pearson’s product moment correlation) between objective facial emotion recognition, accuracy and severity of facial paresis using the Sunnybrook Facial Grading System across all patients (with and without facial paresis) was performed. The average accuracy of facial emotion recognition and the severity of facial paresis were significantly positively correlated with each other (r = 0.31; p = 0.014).
Moreover, a one-tailed t-test for independent samples on facial emotion recognition accuracy showed no significant difference between patients with left-sided facial paresis (mean = 26.44 ± 11.49) and right-sided facial paresis (mean = 29.25 ± 10.69) (t(32) = −0.734; p = 0.234). Another one-tailed t-test for independent samples on facial emotion recognition time showed no significant difference between patients with left-sided facial paresis (mean = 3.12 ± 0.48) and right-sided facial paresis (mean = 3.17 ± 0.47) (t(32) = −0.322; p = 0.375).
Furthermore, a chi-squared test was performed to compare the number of patients with limitations in general mental capacity in both groups (Table A5, Appendix A). Both groups were comparable, with χ²(1, n = 63) = 0.204; p = 0.651. Another chi-squared test was carried out to compare the number of patients with aphasia in both groups (Table A6, Appendix A). Both groups were comparable, with χ²(1, n = 63) = 1.546; p = 0.214.
Additionally, univariate and multivariate regressions were conducted, with emotion recognition (facial and auditory, accuracy and time taken) as the dependent variable and diagnosis of facial paresis, sex, age, subjective judgement, general mental capacity and time post-onset as independent variables (Table A7 and Table A8, Appendix A). Patients with facial paresis recognised visually presented basic emotions significantly worse (reduced accuracy) than patients without facial paresis, as shown both by univariate regression (beta = −0.444; p < 0.001) and by multivariate regression (beta = −0.353; p = 0.003).
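For transparency, the control analyses of this subsection can be reproduced with standard SciPy/statsmodels calls; the sketch below uses synthetic stand-in data (the group sizes and the contingency table from Table A5 are taken from the paper, everything else is hypothetical):

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical accuracy and response-time values for the 34 paresis patients.
accuracy = rng.normal(28, 11, 34)
time_taken = 2.0 + 0.04 * accuracy + rng.normal(0, 0.2, 34)

# Pearson product-moment correlation between accuracy and time taken.
r, p_corr = stats.pearsonr(accuracy, time_taken)

# One-tailed t-test for independent samples: left-sided (n = 18) vs.
# right-sided (n = 16) facial paresis, testing mean(left) < mean(right).
left, right = accuracy[:18], accuracy[18:]
t, p_onetailed = stats.ttest_ind(left, right, alternative="less")

# Chi-squared test on the 2x2 table from Table A5 (limitations in general
# mental capacity yes/no per group); reproduces chi2(1, n = 63) = 0.204.
table = np.array([[16, 18], [12, 17]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table, correction=False)

# Univariate regression: accuracy regressed on a binary predictor
# (an arbitrary split standing in for the diagnosis of facial paresis).
diagnosis = np.array([1] * 18 + [0] * 16)
ols = sm.OLS(accuracy, sm.add_constant(diagnosis)).fit()
```

Note that scipy.stats.chi2_contingency applies Yates' continuity correction to 2 × 2 tables by default; correction=False matches the reported χ² value here.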

4. Discussion

This study investigated visual facial emotion recognition (VFER) in patients with and without central facial paresis vs. healthy individuals. The results of our study showed that the participants with central facial paresis had significantly lower average accurate emotion recognition abilities with respect to the facial modality compared to the auditory modality. The less accurate VFER in cases of facial paresis but not in auditory emotion recognition may be due to changes in the facial feedback mechanism. Clinically, this means that VFER in persons with limited facial mimicry abilities, as in central facial paresis patients, does appear to be affected, in contrast to auditory recognition [36]. Taking into account that we did not test facial mimicry itself (i.e., facial muscle activity was not measured during the emotion-recognition task), but facial emotion recognition, facial paresis can be inferred to be one factor influencing the accuracy of objective facial emotion recognition, which may be affected by changes in the facial feedback mechanism. This may be an indication that the accuracy of objective facial emotion recognition is especially limited when facial feedback is altered by facial paresis. Auditory performance does not appear to be affected by facial paresis (for a similar finding, cf. [36]). Besides facial paresis, stroke, also, could be one factor influencing the accuracy of objective facial emotion recognition in our sample. All participants (with and without facial paresis) had had at least one stroke. Since stroke may also cause deficits in emotion recognition [43], our examined patient groups may be affected as well. These two potential factors (altered facial feedback and altered central processing due to stroke) indicate the relevance of and need to study patients without stroke but with limited facial feedback—for example, patients with peripheral facial palsy.
Our results reveal significant deficits in terms of the accuracy of facial emotion recognition, in contrast with other studies that did not report any differences, e.g., [27,28,34]. This may be due to the large sample size (participants with facial paresis: n = 34; participants without facial paresis: n = 29) and the inclusion of different phases post-onset, with a wide range of times since stroke (day 5 up to day 6361 post-onset). However, previous studies reported significant limitations in terms of the average time taken for facial emotion recognition, e.g., [27,28], while the participants in the present study showed faster reaction times. This, in turn, could indicate that the participants after stroke replied “quick and dirty” [63], while they suffered from other impairments, such as deficits in attention, concentration and memory [64], in addition to the facial paresis after stroke. Regarding a possible systematic connection between fast and inaccurate responses, the significant positive correlation between the objective accuracy and the objective time taken for facial emotion recognition in patients with facial paresis provides further insight: the faster a patient with facial paresis responded, the less accurate the response was, whereas no such correlation was found in patients without facial paresis. This could indicate that the patients with facial paresis were themselves aware of their deficit in the time taken for facial emotion recognition (as reported in the Self-Assessment Questionnaires Emotion Recognition) but wanted to show their best performance in the test situation and therefore answered as quickly as possible.
The participants with facial paresis subjectively felt limited in VFER both in terms of accuracy and time. They stated that they were more impaired with respect to time than accuracy. The participants felt that facial emotion recognition had slowed down considerably since the stroke and had become somewhat less accurate. These results provide a new insight into subjective emotion recognition, as this was not considered in previous studies. However, the clinical measurement showed the opposite pattern: the patients were clearly less accurate but faster. Thus, the measured performance appears to be at odds with the subjectively perceived performance.
In the present study, we considered the difference between facial and auditory emotion recognition shown in the results. This may support, for example, the FFH, as mentioned before. Nevertheless, it should be noted that a large part of human emotion is communicated via the face and the voice, as discussed in the literature. To the best of our knowledge, this is the first clinical study which combines these two modalities in a clinical setting [65]. The mentioned factors (limitations such as deficits in attention, concentration and memory [64], besides facial paresis and impaired emotion recognition) influence both the study results and everyday communication in the patient groups. Although for stroke patients their survival is of primary importance [66], participation is also highly relevant, particularly in the post-acute and chronic phase [67]. Since both groups of patients showed a significant reduction in the accuracy of facial and auditory emotion recognition compared to healthy subjects, intervention recommendations for both groups and both modalities are required. Although there is limited evidence for the FFH [68], it can be used as an explanation for assessment and rehabilitation [69].

4.1. The Relevance of Assessment of Emotion Recognition

The described results not only provide evidence for the FFH and certain effects of stroke but also have implications for the treatment of patients with central facial paresis after stroke. As early as 2013, Dobel et al. [69] called for the examination of facial emotion recognition in patients with facial paresis using basic emotions. The present study supports this demand and once again advocates it.
Since the accuracy of facial emotion recognition can be impaired, especially in patients with facial paresis after stroke, appropriate assessment and therapy are recommended for this patient group. Deficits should be assessed because the performance limitations may have negative consequences for communication and may increase over time. If the performance of emotion recognition remains impaired, this can lead to the development of disorders such as alexithymia (the inability to recognise or describe one’s own emotions) [11,70]. For example, if sadness is not adequately interpreted, a patient may react defensively and thus not appropriately to a situation [6]. The effects of facial emotion recognition are therefore far-reaching and decisive for adequate social contact. The somewhat contradictory results for the objective measurement and subjective assessment of facial emotion recognition in participants with facial paresis require detailed and individual examination in clinical practice. It is not sufficient either to ask the patient for his or her opinion or to carry out an objective diagnosis alone; both should be done and the results compared.
In addition, the lack of insight into the disorder suggested by the present results (the discrepancy between clinical measurement and subjective assessment) must become a focus of treatment in order to show the patient the relevance of facial emotion recognition therapy. At the same time, the importance of considering the individual wishes and goals of the patient and including them in the sense of joint decision making [71] should not be underestimated. The basis for this is tripartite evidence-based practice [71,72]. This ensures not only the effectiveness and efficiency of therapy, but also therapy motivation and transfer into the patient’s everyday life [71].

4.2. Limitations of the Study

The composition of the sample may be considered a limiting factor of the study. A larger and more representative, homogeneous sample tested at the same time post-onset after stroke and subdivided according to the subtypes of central facial paresis (voluntary and involuntary central facial paresis [73]) would therefore be desirable for future studies. For a more precise observation of the lesion localization and comparability of patients, imaging with detailed description of affected brain areas would be useful. In addition, statistical adjustment for different stroke locations and lesion sizes would be beneficial, as differences in emotion recognition could depend on the hemisphere affected [43]. Despite the possibility of different lesion locations and lesion sizes, the results for facial emotion recognition showed significant differences between the patient groups. Since significant effects can already be observed in our sample, we expect similar or stronger effects to be observed with more carefully selected samples with stricter inclusion criteria in further studies. Furthermore, a strong and reliable test battery to assess cognitive capacity (see [74]) is needed to differentiate deficits in emotion recognition and limitations in general mental capacity after stroke. Since emotion perception depends on general mental capacity [74,75,76], any emotion perception test measures general mental capacity to some degree. In the present study, there were comparable numbers of patients with limitations in mental capacity and aphasia, as indicated by chi-squared tests. In future studies, comparability should be extended and improved by standardised diagnostics.
However, the significant positive correlation observed between objective facial emotion recognition accuracy and severity of facial paresis, calculated using the Sunnybrook Facial Grading System across all patients, points to facial paresis as the main differentiator between the two patient groups. Thus, the higher the accuracy of facial emotion recognition, the higher the score on the Sunnybrook Facial Grading System. That is, facial competence correlates with accuracy in facial emotion recognition: the lower the facial competence, the worse the accuracy in facial emotion recognition. Moreover, significant univariate and multivariate regressions documented the relation between facial emotion recognition accuracy and facial paresis. These results demonstrate the influence of facial paresis on facial emotion recognition once more, but only in terms of accuracy. No significant differences were detected with respect to objective facial emotion recognition accuracy and time taken between patients with left- or right-sided facial paresis. If one hemisphere is dominant in emotion processing [43], patients with lesions in this dominant hemisphere with contralateral facial paresis [25] could possibly be more affected. We can confirm neither this hypothesis nor previous research on facial palsy reporting that patients with left-sided facial palsy showed lower performance in facial emotion recognition than patients with right-sided facial palsy [29]. However, our results are in line with findings for patients with Parkinson’s disease, where facial asymmetry is not related to hemispheric dominance for emotion processing [77]. Further evidence is needed, then, to inspect possible differences in facial emotion recognition and expression depending on the side affected with facial palsy and on hemisphere.
Perfect comparability of the normative data with the sample data cannot be guaranteed—for instance, due to the age of the participants (e.g., the Montreal Affective Voices validation sample with an average age of 23.3 ± 3 years [45] vs. the patients with facial paresis with an average age of 62.6 ± 9.3 years and the patients without facial paresis with an average age of 58.4 ± 10.7 years). It must also be noted that only a small sample (n = 29) of normative data was available for the auditory emotion recognition assessment (Montreal Affective Voices) [45]. Furthermore, the measurement of auditory and facial emotion recognition is not completely comparable. Especially with regard to the time taken for emotion recognition, it should be noted, for example, that the response modes differed (selecting an option on screen vs. pointing to a surface) and that the numbers of items and response options were not identical. As a consequence, for further research, normative data from healthy individuals should be freshly collected, with comparability to the patient groups ensured. Moreover, measurement in facial and auditory emotion recognition tasks should be made even more comparable.
The separate presentation of facial and auditory items in emotion recognition assessments should also be critically questioned. Facial and auditory expressions are not necessarily independent as they can mutually influence their recognition. For example, a facial expression can be generated by moving the mouth while a vocal expression is also made [1]. However, a separation of the modalities, i.e., just visual or just auditory impressions, seemed to make sense in this study in order to differentiate and compare performances. In order to be able to answer the research question reliably, this seems unavoidable. At the same time, however, this separate type of emotion recognition is far removed from everyday life and thus reduces the external validity. Likewise, in favour of optimal experimental conditions, static photographs were used instead of everyday situations [78]. A person is able to show up to 8000 different emotional facial expressions with his or her face [17]. However, it should be critically noted that our study only examined emotion recognition with respect to basic emotions and thus minimized the requirements compared to non-verbal communication in everyday life. It should be noted here that basic emotions can be regarded as the basis for far more complex emotions or emotional states [8]. However, since the recognition of the comparatively primitive basic emotions [8] was assessed as limited in the present study, an even worse performance can be expected for more complex emotions.

5. Conclusions

From this study, it may be concluded that:
- After a stroke, participants with central facial paresis were significantly less accurate in visually recognising basic emotions compared with stroke patients without facial paresis and compared with a sample of healthy controls;
- Auditory emotion recognition in both stroke groups was less accurate than in the control sample;
- The facial emotion recognition accuracy of participants with central facial paresis was significantly worse than their auditory emotion recognition accuracy;
- Since visual emotion recognition was clearly worse than auditory emotion recognition in participants with facial paresis after stroke, facial mimicry probably plays an important role in communication with patients after stroke;
- The results of our observational study may indicate the overall effects of stroke on emotion recognition and support the FFH, which is a practical and appropriate model for implementation in clinical assessments and interventions;
- Future research should investigate patients with facial palsy without stroke to further explore the impact of facial feedback on emotion recognition.

Author Contributions

Conceptualization, A.-M.K., H.v.P. and S.H.; methodology, A.-M.K., H.v.P. and S.H.; software, A.-M.K.; validation, A.-M.K.; formal analysis, A.-M.K. and S.H.; investigation, A.-M.K.; resources, A.-M.K., H.v.P. and S.H.; data curation, A.-M.K. and S.H.; writing—original draft preparation, A.-M.K.; writing—review and editing, A.-M.K., H.v.P. and S.H.; visualization, A.-M.K.; supervision, H.v.P. and S.H.; project administration, A.-M.K.; funding acquisition, A.-M.K. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by Thüringer Universitäts- und Landesbibliothek Jena, Germany.

Institutional Review Board Statement

All subjects gave their informed consent for inclusion before they participated in the study. The study was conducted according to the guidelines of the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of the Medical Faculty at RWTH Aachen University, Germany (protocol code: EK 271/18; 11 December 2018).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to their having been collected as part of a larger research project that has not yet been completed.

Acknowledgments

Many thanks to all of the study participants and cooperation partners: Berufsfachschule für Logopädie an der staatlichen berufsbildenden Schule für Gesundheit und Soziales Jena, Klinikum Ingolstadt GmbH, Logopädie Sprechfreude, Dasing, Moritz Klinik GmbH & Co. KG, Bad Klosterlausnitz, Praxis für Sprach- und Stimmtherapie Hermine Gascho, Ingolstadt, Selbsthilfegruppe Aphasiker und Schlaganfall Jena des Landesverbandes Thüringen für die Rehabilitation der Aphasiker e. V., Beratungszentrum nach Schlaganfall und Hirnschädigung ZAMOR e. V. Ingolstadt and Uniklinik RWTH Aachen AöR.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Sociodemographic information on gender, age, education and handedness in the study group and control group.

| Sociodemographic Information | Study Group: Patients with Facial Paresis, n = 34 | Control Group: Patients without Facial Paresis, n = 29 |
| Gender | Male: n = 18; 53%. Female: n = 16; 47% | Male: n = 20; 69%. Female: n = 9; 31% |
| Age in years | Mean = 62.65 ± 9.26; Min. = 39; Max. = 81 | Mean = 58.38 ± 10.72; Min. = 35; Max. = 83 |
| Education | No school degree: n = 4; 11.77%. Sec. school certificate: n = 9; 26.47%. Intermediate secondary certificate: n = 12; 35.29%. High school: n = 9; 26.47% | No school degree: n = 0. Sec. school certificate: n = 6; 20.69%. Intermediate secondary certificate: n = 15; 51.72%. High school: n = 8; 27.59% |
| Handedness | Left: n = 0. Right: n = 33; 97.06%. Left and right: n = 1; 2.94% | Left: n = 1; 3.45%. Right: n = 27; 93.10%. Left and right: n = 1; 3.45% |

Note: n = number of participants.
Table A2. Lesion information: time post-onset at the examinations in this study, type of lesion (ischemic, hemorrhagic or both), affected hemisphere, quantity (number of lesions), limitations in general mental capacity after stroke and aphasia.

| Lesion | Study Group: Patients with Facial Paresis, n = 34 | Control Group: Patients without Facial Paresis, n = 29 |
| Time post-onset in days (years;months) | Mean = 1558 (4;3) ± 2112 (5;9); Min. = 5; Max. = 6361 (17;5) | Mean = 1359 (3;9) ± 2702 (7;5); Min. = 13; Max. = 11,398 (31;2) |
| Phase post-onset (acute: ≤6 weeks; post-acute: <1 year; chronic: ≥1 year) | Acute: n = 11; 32.35%. Post-acute: n = 6; 17.65%. Chronic: n = 17; 50.00% | Acute: n = 11; 37.93%. Post-acute: n = 3; 10.34%. Chronic: n = 15; 51.72% |
| Type | Ischemic: n = 27; 79.41%. Hemorrhagic: n = 5; 14.71%. Ischemic and hemorrhagic: n = 1; 2.94%. n.a.: n = 1; 2.94% | Ischemic: n = 21; 72.41%. Hemorrhagic: n = 6; 20.69%. Ischemic and hemorrhagic: n = 1; 3.45%. n.a.: n = 1; 3.45% |
| Hemisphere | Left: n = 12; 35.29%. Right: n = 13; 38.24%. Left and right: n = 0. n.a.: n = 9; 26.47% | Left: n = 15; 51.72%. Right: n = 6; 20.69%. Left and right: n = 2; 6.90%. n.a.: n = 6; 20.69% |
| Quantity | 1x: n = 22; 64.71%. 2x: n = 8; 23.53%. 3x: n = 1; 2.94%. 4x: n = 1; 2.94%. n.a.: n = 2; 5.88% | 1x: n = 25; 86.21%. 2x: n = 2; 6.90%. 3x: n = 1; 3.45%. 4x: n = 0. n.a.: n = 1; 3.45% |
| Limitations in general mental capacity after stroke | n = 16; 47.06% | n = 12; 41.38% |
| Aphasia | n = 6; 17.65% | n = 9; 31.03% |

Note: n.a. means no information was given. n = number of participants.
Table A3. Facial paresis information: diagnosis from the patients’ perspectives and from their therapists’ perspectives (as reported by the participants); diagnosis via the Sunnybrook Facial Grading System [52,53], carried out as part of this study by a logopaedic examiner, with severity classified according to the House–Brackmann Facial Nerve Grading System [79]; affected side of the face; time post-onset at the examination for this study; and therapy received prior to the examination in this study.

| Facial Paresis | Study Group: Patients with Facial Paresis, n = 34 | Control Group: Patients without Facial Paresis, n = 29 |
| Diagnosis of facial paresis from the patient’s perspective | Facial paresis: n = 21; 61.76% (left: n = 9; 26.47%; right: n = 12; 35.29%). No facial paresis: n = 13; 38.24% | Facial paresis: n = 10; 34.48% (left: n = 2; 6.90%; right: n = 8; 27.58%). No facial paresis: n = 19; 65.52% |
| Diagnosis of facial paresis from the therapist’s perspective (physiotherapy or speech and language therapy) | Facial paresis: n = 11; 32.35% (left: n = 4; 11.76%; right: n = 6; 17.65%; side n.a.: n = 1; 2.94%). No facial paresis: n = 2; 5.88%. n.a.: n = 21; 61.77% | Facial paresis: n = 0. No facial paresis: n = 6; 20.69%. n.a.: n = 23; 79.31% |
| Diagnosis via Sunnybrook Facial Grading System (total score 0–100); severity according to House–Brackmann | Mean = 73.12 ± 8.34; Min. = 54; Max. = 83. Grade II: n = 24; 70.59%. Grade III: n = 10; 29.41%. Left side: Grade II: n = 11; 61.11%; Grade III: n = 7; 38.89%. Right side: Grade II: n = 13; 81.25%; Grade III: n = 3; 18.75% | Mean = 91.21 ± 3.46; Min. = 87; Max. = 100. Grade I: n = 29; 100% |
| Time post-onset in days (years;months) | Mean = 827 (2;3) ± 1606 (4;5); Min. = 5; Max. = 5852 (16;0) | Mean = 2207 (6;1) ± 3709 (10;2); Min. = 35; Max. = 11,398 (31;2) |
| Phase post-onset (acute: ≤6 weeks; post-acute: <1 year; chronic: ≥1 year) | Acute: n = 14; 41.18%. Post-acute: n = 5; 14.71%. Chronic: n = 7; 20.59%. n.a.: n = 8; 23.53% | Acute: n = 3; 10.35%. Post-acute: n = 1; 3.45%. Chronic: n = 9; 31.03%. n.a.: n = 16; 55.17% |
| Non-pharmaceutical therapy at the time of the examination (current) | Yes: n = 9; 26.47%. No: n = 25; 73.53% | Yes: n = 0. No: n = 29 |
| Start | From the stroke to, at the latest, the post-acute phase | From the stroke to, at the latest, the post-acute phase |
| Frequency | Isolated therapy units up to 1–3x/week | Individual therapy units up to 2x/week |
| Duration | Max.: 3.5 months | Max.: 6 months |
| Therapist | 12x speech and language therapy; 2x physiotherapy; 1x physical therapy | 5x speech and language therapy; 1x physiotherapy; 1x n.a. |
| Content | Exercises for facial expression, oral motor skills and articulation; proprioceptive neuromuscular facilitation; massage | Exercises for facial expression, oral motor skills and articulation; stretching of M. buccinator |
| Self-exercises | Exercises for facial expression, oral motor skills and articulation; massage; sensitivity training | Exercises for facial expression and oral motor skills |

Note: n.a. means no information was given. n = number of participants.
Table A4. Summary of the results for objective (accuracy and time) and subjectively perceived success in emotion recognition.

| Emotion Recognition | Study Group: Patients with Facial Paresis, n = 34 | Control Group: Patients without Facial Paresis, n = 29 | Healthy Controls |
| Objective facial emotion recognition via Myfacetraining Program, accuracy in % | Mean = 27.77; SD = 11.04; Min. = 10.00; Max. = 48.00 | Mean = 40.79; SD = 15.59; Min. = 12.00; Max. = 64.00 | Mean = 71.11; SD = 7.53; Min. = 45.00; Max. = 88.00; n = 147 [46,47] |
| Objective facial emotion recognition via Myfacetraining Program, time in sec. | Mean = 3.14; SD = 0.47; Min. = 2.04; Max. = 3.86 | Mean = 3.19; SD = 0.34; Min. = 1.91; Max. = 3.86 | Mean = 3.34; SD = 0.66; Min. = 1.94; Max. = 5.58; n = 147 [46,47] |
| Objective auditory emotion recognition via MAVs, accuracy in % | Mean = 46.23; SD = 11.63; Min. = 21.67; Max. = 70.00 | Mean = 48.05; SD = 11.78; Min. = 23.34; Max. = 61.67 | Mean = 72.67; SD = 11.99; Min. = 56.00; Max. = 86.00; n = 29 [45] |
| Objective auditory emotion recognition via MAVs, time in sec. | Mean = 3.69; SD = 1.20; Min. = 2.25; Max. = 8.75 | Mean = 3.20; SD = 0.88; Min. = 1.80; Max. = 4.90 | n.a. [45] |
| Subjective facial emotion recognition via Self-Assessment Questionnaire Emotion Recognition Accuracy | Mean = −0.71; SD = 1.90; Min. = −6.00; Max. = 6.00 | Mean = −0.03; SD = 1.32; Min. = −2.00; Max. = 6.00 | n.a. |
| Subjective facial emotion recognition via Self-Assessment Questionnaire Emotion Recognition Time | Mean = −1.91; SD = 2.90; Min. = −6.00; Max. = 6.00 | Mean = −1.00; SD = 2.52; Min. = −6.00; Max. = 6.00 | n.a. |

Note: n.a. means no information was given. n = number of participants.
Table A5. Summary of facial paresis and general mental capacity information.

| | Study Group: Patients with Facial Paresis, n = 34 | Control Group: Patients without Facial Paresis, n = 29 |
| With limitations in general mental capacity | n = 16 | n = 12 |
| Without limitations in general mental capacity | n = 18 | n = 17 |
| Types of limitation in general mental capacity | Memory: n = 10. Concentration: n = 9. Slowdown: n = 3. Fatigue: n = 2. Complex thinking: n = 1. Suspected neglect: n = 1. Orientation in time: n = 1. Orientation in place: n = 1. Overall deterioration: n = 1. Acalculia: n = 0. Arousal: n = 0. Inner unrest: n = 0 | Memory: n = 8. Concentration: n = 5. Slowdown: n = 1. Fatigue: n = 2. Complex thinking: n = 0. Suspected neglect: n = 0. Orientation in time: n = 0. Orientation in place: n = 0. Overall deterioration: n = 0. Acalculia: n = 1. Arousal: n = 1. Inner unrest: n = 1 |

Note: n = number of participants. For limitations in general mental capacity, multiple deficit types per participant are possible; here, n describes the number of limitations per group.
Table A6. Summary of facial paresis and aphasia information.

| | Study Group: Patients with Facial Paresis, n = 34 | Control Group: Patients without Facial Paresis, n = 29 |
| With aphasia | n = 6 | n = 9 |
| Without aphasia | n = 28 | n = 20 |

Note: n = number of participants.
Table A7. Univariate regression analyses.

Dependent variable: accuracy of facial emotion recognition
| Predictor | Standardised Beta | 95.0% CI, Lower Bound | 95.0% CI, Upper Bound | p-Value |
| Diagnosis of facial paresis | −0.444 | −19.762 | −6.295 | <0.001 |

Dependent variable: time taken for facial emotion recognition
| Diagnosis of facial paresis | −0.053 | −0.253 | 0.166 | 0.680 |

Dependent variable: accuracy of auditory emotion recognition
| Diagnosis of facial paresis | −0.079 | −7.733 | 4.091 | 0.540 |

Dependent variable: time taken for auditory emotion recognition
| Diagnosis of facial paresis | 0.231 | −0.040 | 1.033 | 0.069 |
Table A8. Multivariate regression analyses.

Dependent variable: accuracy of facial emotion recognition
| Predictor | Standardised Beta | 95.0% CI, Lower Bound | 95.0% CI, Upper Bound | p-Value |
| Diagnosis of facial paresis | −0.353 | −16.920 | −3.787 | 0.003 |
| Sex | 0.022 | −6.306 | 7.615 | 0.851 |
| Age | −0.393 | −0.891 | −0.256 | <0.001 |
| Subjective judgement of accuracy | −0.014 | −2.359 | 2.110 | 0.911 |
| Subjective judgement of time taken | 0.032 | −1.197 | 1.542 | 0.802 |
| Limitations in general mental capacity | 0.054 | −5.213 | 8.392 | 0.641 |
| Time post-onset (acute, post-acute, chronic) | −0.227 | −7.417 | 0.128 | 0.058 |

Dependent variable: time taken for facial emotion recognition
| Diagnosis of facial paresis | −0.029 | −0.248 | 0.201 | 0.834 |
| Sex | −0.173 | −0.383 | 0.093 | 0.228 |
| Age | −0.186 | −0.018 | 0.003 | 0.167 |
| Subjective judgement of accuracy | 0.013 | −0.073 | 0.080 | 0.935 |
| Subjective judgement of time taken | 0.057 | −0.038 | 0.055 | 0.715 |
| Limitations in general mental capacity | 0.076 | −0.170 | 0.295 | 0.593 |
| Time post-onset (acute, post-acute, chronic) | −0.252 | −0.242 | 0.016 | 0.085 |

Dependent variable: accuracy of auditory emotion recognition
| Diagnosis of facial paresis | 0.015 | −4.900 | 5.596 | 0.895 |
| Sex | 0.082 | −3.638 | 7.488 | 0.491 |
| Age | −0.428 | −0.747 | −0.239 | <0.001 |
| Subjective judgement of accuracy | −0.160 | −2.894 | 0.678 | 0.219 |
| Subjective judgement of time taken | 0.106 | −0.646 | 1.542 | 0.416 |
| Limitations in general mental capacity | 0.068 | −3.859 | 7.015 | 0.563 |
| Time post-onset (acute, post-acute, chronic) | −0.374 | −7.750 | −1.720 | 0.003 |

Dependent variable: time taken for auditory emotion recognition
| Diagnosis of facial paresis | 0.227 | −0.074 | 1.052 | 0.088 |
| Sex | −0.050 | −0.706 | 0.489 | 0.717 |
| Age | 0.153 | −0.011 | 0.044 | 0.232 |
| Subjective judgement of accuracy | 0.184 | −0.073 | 0.310 | 0.220 |
| Subjective judgement of time taken | −0.033 | −0.131 | 0.104 | 0.825 |
| Limitations in general mental capacity | −0.173 | −0.959 | 0.209 | 0.203 |
| Time post-onset (acute, post-acute, chronic) | 0.205 | −0.083 | 0.565 | 0.141 |

Appendix B

Appendix B.1. Additional Information on Data Collection

Each patient was examined once. The patient was first informed about the study and about data privacy. After the declaration of informed consent, a case history (anamnesis) was taken (see Table A1, Table A2 and Table A3, Appendix A) before the examination was conducted. All data were collected by the same examiner, and all participants received the same standardised verbal instruction for the following tasks.

Appendix B.2. Facial Emotion Recognition: Myfacetraining (MFT) Program

The Myfacetraining (MFT) Program (CRAFTA Cranio Facial Therapy Academy, Hamburg, Germany) measured objective facial emotion recognition with respect to accuracy and time taken [47,49]. Portraits of people, each expressing one basic emotion, were presented on a Lenovo Yoga 500 14″ touchscreen device. Each person was first shown with a neutral expression (one second) and then with an emotional facial expression (a basic emotion). Six answer options, the six basic emotions, were displayed on the right side of the screen [47].
When an answer option was selected (in 85.7% (n = 54) of cases via touchscreen, in 6.35% (n = 4) via touch-pen due to hemiparesis, and in 7.94% (n = 5) via mouse due to hemiparesis), the program recorded the accuracy (right or wrong answer) as well as the reaction time (in seconds). Immediately afterwards, the next screen appeared. In a standardised test, a total of 42 images of three different adult women and three different adult men (one person per picture) was presented in the same order for all participants. Each basic emotion was shown seven times (six basic emotions × seven images = 42 images). The time limit for responding was 10 s; if no response was given within this time, the trial was marked as unanswered and the next emotion was presented. After testing, the program produced an overview of accuracy and time taken, both overall and per emotion, together with the incorrectly selected (confused) emotion, where applicable, for each of the 42 pictures [47].
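The following is a minimal sketch of the kind of per-emotion overview described above (accuracy, mean reaction time, and the confused emotion for wrong answers, with timeouts counted as unanswered). The data structure and example trials are illustrative only; this is not CRAFTA's actual implementation.

```python
# Aggregate trial records into the per-emotion summary described above.
from collections import defaultdict

# One tuple per picture: (target emotion, selected emotion or None, RT in s or None).
trials = [
    ("happiness", "happiness", 2.8),
    ("fear", "surprise", 4.1),     # wrong answer: the confused emotion is recorded
    ("anger", None, None),         # no response within 10 s -> unanswered
]

per_emotion = defaultdict(lambda: {"n": 0, "correct": 0, "times": [], "confused": []})
for target, selected, rt in trials:
    entry = per_emotion[target]
    entry["n"] += 1
    if selected == target:
        entry["correct"] += 1
    elif selected is not None:
        entry["confused"].append(selected)   # which emotion was chosen instead
    if rt is not None:
        entry["times"].append(rt)            # unanswered trials contribute no RT

for emotion, e in sorted(per_emotion.items()):
    accuracy = 100.0 * e["correct"] / e["n"]
    mean_rt = sum(e["times"]) / len(e["times"]) if e["times"] else float("nan")
    print(f"{emotion}: {accuracy:.0f}% correct, mean time {mean_rt:.2f} s, "
          f"confused with {e['confused'] or 'none'}")
```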
A pre-test with ten items was performed to ensure that the task was understood [48]. Questions about the test procedure were answered, but no assistance was given regarding the content of the test.
For the Myfacetraining Program, normative values from 147 healthy subjects are available. Accuracy (in percent): mean = 71.11 ± 7.53; min. = 45.00; max. = 88.00. Time (in seconds): mean = 3.34 ± 0.66; min. = 1.94; max. = 5.58 [46,47] (see also Figure 1 and Figure 3; Table A4, Appendix A).
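One way to compare an observed patient sample against such a norm when only the normative mean is available is a one-sample t-test, as sketched below; whether this matches the study's exact statistical procedure is not stated here, and the patient values shown are made up.

```python
# One-sample comparison of hypothetical patient accuracies (in %) against
# the published normative mean of 71.11%.
from scipy import stats

patient_accuracy = [52.4, 61.9, 47.6, 66.7, 57.1]  # hypothetical scores
t, p = stats.ttest_1samp(patient_accuracy, popmean=71.11)
print(f"t = {t:.2f}, p = {p:.4f}")
```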

Appendix B.3. Auditory Emotion Recognition: Montreal Affective Voices

As stimuli for auditory emotion recognition, a subset of the Montreal Affective Voices (MAVs) [45] was used. These are emotional, non-linguistic vocal expressions of the vowel /a/ (comparable to the a in apple, British English). Five women and five men each presented the six basic emotions with their voice once [45], so that a total of 60 items (10 persons × 6 basic emotions) was used in the present study.
For presenting the MAVs, software exists which, in addition to the accuracy of emotion recognition, also assesses the intensity of the emotion but does not record the time taken [80]. For the present study, which examined both the accuracy and the time taken for emotion recognition, the procedure therefore had to be adapted. For this purpose, an experiment specially programmed in PsychoPy, version 3.0.0b9 [50], was used, which both played the MAVs and recorded the selected response option and the reaction time. Each sound was presented once [80] via standard headphones [45]. The stimulus sequence was randomised once and then presented in this fixed order to all participants.
Each participant was asked to judge the emotion by selecting a response option [81]. Following the original software [80], the participant selected one of the response options (one of the six basic emotions or neutral/unknown) by pointing at a response surface (A4 size). Ten seconds were allowed for each response.
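As an illustration, the following is a minimal sketch of a PsychoPy-style trial loop of the kind described above: each affect burst is played once, a response is awaited for at most 10 s, and the selected option and reaction time are recorded. The stimulus file names and the key-to-option mapping are hypothetical; the original experiment script is not published.

```python
# Sketch of a timed auditory emotion-recognition trial loop in PsychoPy.
from psychopy import core, event, sound, visual

OPTIONS = {"1": "anger", "2": "disgust", "3": "fear", "4": "happiness",
           "5": "sadness", "6": "surprise", "0": "neutral/unknown"}

win = visual.Window(fullscr=False)   # a window is required for event handling
clock = core.Clock()
results = []

for wav in ["mav_f1_anger.wav", "mav_m1_fear.wav"]:   # placeholder stimuli
    burst = sound.Sound(wav)          # each sound is presented once
    clock.reset()
    burst.play()
    keys = event.waitKeys(maxWait=10.0, keyList=list(OPTIONS), timeStamped=clock)
    if keys is None:                  # timeout: trial counts as unanswered
        results.append((wav, None, None))
    else:
        key, rt = keys[0]             # selected option and reaction time
        results.append((wav, OPTIONS[key], rt))

win.close()
print(results)
```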
As for facial emotion recognition, a pre-test with ten items (randomised once, then presented in the same order for all participants) was performed. In addition, the examiner checked that the headphones fitted comfortably, and the volume was adjusted individually [45]. Questions about the test procedure were answered, but no assistance was given regarding the content of the test.
Standard values are available for the accuracy (in percent) of emotion recognition: mean = 72.67 ± 11.66; min. = 56.00; max. = 86.00 (see also Figure 2 and Table A4, Appendix A); however, no normative data were collected for the time taken [45]. As proposed by Belin et al. [45] and explained above, selected MAV items, adapted to the circumstances of this study, were used. The MAVs are explicitly recommended as material for comparing auditory with facial emotion recognition. They are particularly well-suited because only the auditory modality is addressed. Furthermore, the MAVs contain no linguistic information, which avoids distortion or aggravated conditions for patients with aphasia [45]; mild aphasia was not necessarily an exclusion criterion in this study (see Table 1).

Appendix B.4. Sunnybrook Facial Grading System for Diagnosing Facial Palsy

With the Sunnybrook Facial Grading System, each face was rated in three areas by comparing the affected with the intact side of the face. This yielded three values: (1) the Resting Symmetry Score (symmetry at rest), (2) the Voluntary Movement Score (symmetry of voluntary movements) and (3) the Synkinesis Score (synkinesis). From these three scores, a composite score (0–100 points) was calculated; the lower the composite score, the more pronounced the facial paresis or paralysis. The authors did not provide a recommendation for further classification by severity, nor a cut-off value at which facial palsy should be diagnosed [52,53]. For the present study, however, an unambiguous diagnosis of the presence of facial paresis was indispensable for assigning participants to the appropriate study or control group (with or without facial paresis). The severity classification of the present study was therefore based on the procedures of the House–Brackmann Facial Nerve Grading System [79] and the Facial Nerve Grading System 2.0 [55], in which the achievable total score is divided into six grades (grade I: normal function, up to grade VI: total paralysis) [55,79]. This classification was adopted in the present work: the maximum composite score of the Sunnybrook Facial Grading System (100 points) was divided by six into six equally sized ranges (100–84 points: normal function, no facial paresis; 83–67: mild facial paresis; 66–50: moderate facial paresis; 49–33: moderately severe facial paresis; 32–16: severe facial paresis; 15–0: complete paralysis). Once the composite score had been determined by the logopaedic examiner, the severity grade could be derived. By this definition, a facial paresis was considered present from grade II (≤83 points) onwards. This definition accepts a natural degree of facial asymmetry and is consistent with previous research [82].
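To make the banding explicit, the following is a minimal sketch of the severity mapping described above, with the Sunnybrook composite score divided into six equal ranges in analogy to the House–Brackmann grades. The function name and grade labels are illustrative, not taken from the study's materials.

```python
# Map a Sunnybrook composite score (0-100) to one of six severity grades.
def sunnybrook_grade(composite: int) -> str:
    if not 0 <= composite <= 100:
        raise ValueError("composite score must be between 0 and 100")
    bands = [
        (84, "grade I: normal function, no facial paresis"),
        (67, "grade II: mild facial paresis"),
        (50, "grade III: moderate facial paresis"),
        (33, "grade IV: moderately severe facial paresis"),
        (16, "grade V: severe facial paresis"),
        (0,  "grade VI: complete paralysis"),
    ]
    for lower_bound, label in bands:
        if composite >= lower_bound:
            return label

# From grade II (<= 83 points) onwards, a facial paresis is considered present.
assert sunnybrook_grade(84).startswith("grade I")
assert sunnybrook_grade(83).startswith("grade II")
assert sunnybrook_grade(10).startswith("grade VI")
```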

References

  1. Schirmer, A.; Adolphs, R. Emotion Perception from Face, Voice, and Touch: Comparisons and Convergence. Trends Cogn. Sci. 2017, 21, 216–228.
  2. Young, A.; Perrett, D.; Calder, A.; Sprengelmeyer, R.; Ekman, P. Facial Expressions of Emotion—Stimuli and Tests (FEEST); Thames Valley Test Company: Suffolk, UK, 2002.
  3. Knapp, M.L.; Hall, J.A.; Horgan, T.G. Nonverbal Communication in Human Interaction; Cengage Learning: Boston, MA, USA, 2013.
  4. Ekman, P.; Oster, H. Facial Expression of Emotion. Annu. Rev. Psychol. 1979, 30, 527–554.
  5. Diener, H.C. 2016. Available online: https://www.pschyrembel.de/Parese/K0GCP/doc/ (accessed on 25 June 2020).
  6. Radice-Neumann, D.; Zupan, B.; Tomita, M.; Willer, B. Training Emotional Processing in Persons with Brain Injury. J. Head Trauma Rehabil. 2009, 24, 313–323.
  7. Cattaneo, L.; Pavesi, G. The facial motor system. Neurosci. Biobehav. Rev. 2014, 38, 135–159.
  8. Levenson, R.W. Basic Emotion Questions. Emot. Rev. 2011, 3, 379–386.
  9. Ekman, P. Universal Facial Expressions of Emotion. Calif. Ment. Health Res. Dig. 1970, 8, 151–158.
  10. Ekman, P. An argument for basic emotions. Cogn. Emot. 1992, 6, 169–200.
  11. Dimberg, U.; Thunberg, M.; Elmehed, K. Unconscious Facial Reactions to Emotional Facial Expressions. Psychol. Sci. 2000, 11, 86–89.
  12. Boloorizadeh, P.; Tojari, F. Facial expression recognition: Age, gender and exposure duration impact. Procedia Soc. Behav. Sci. 2013, 84, 1369–1375.
  13. Williams, L.M.; Mathersul, D.; Palmer, D.M.; Gur, R.C.; Gur, R.E.; Gordon, E. Explicit identification and implicit recognition of facial emotions: I. Age effects in males and females across 10 decades. J. Clin. Exp. Neuropsychol. 2009, 31, 257–277.
  14. Palermo, R.; Coltheart, M. Photographs of facial expression: Accuracy, response times, and ratings of intensity. Behav. Res. Methods Instrum. Comput. 2004, 36, 634–638.
  15. Hampson, E.; van Anders, S.M.; Mullin, L.I. A female advantage in the recognition of emotional facial expressions: Test of an evolutionary hypothesis. Evol. Hum. Behav. 2006, 27, 401–416.
  16. Ruffman, T.; Henry, J.D.; Livingstone, V.; Phillips, L.H. A meta-analytic review of emotion recognition and aging: Implications for neuropsychological models of aging. Neurosci. Biobehav. Rev. 2008, 32, 863–881.
  17. von Piekartz, H.; Mohr, G. Reduction of head and face pain by challenging lateralization and basic emotions: A proposal for future assessment and rehabilitation strategies. J. Man. Manip. Ther. 2014, 22, 24–35.
  18. Lindquist, K.A. Emotions emerge from more basic psychological ingredients: A modern psychological constructionist model. Emot. Rev. 2013, 5, 356–368.
  19. Palomero-Gallagher, N.; Amunts, K. A short review on emotion processing: A lateralized network of neuronal networks. Brain Struct. Funct. 2022, 227, 673–684.
  20. Gainotti, G. A historical review of investigations on laterality of emotions in the human brain. J. Hist. Neurosci. 2019, 28, 23–41.
  21. Mohr, G.; Konnerth, V.; von Piekartz, H.J.M. Lateralitätserkennung und (emotionale) Expressionen des Gesichts—Beurteilung und Behandlung. In Kiefer, Gesichts- und Zervikalregion; Thieme: Stuttgart, Germany, 2015; pp. 494–512.
  22. Neal, D.T.; Chartrand, T.L. Embodied Emotion Perception: Amplifying and Dampening Facial Feedback Modulates Emotion Perception Accuracy. Soc. Psychol. Personal. Sci. 2011, 2, 673–678.
  23. Goldman, A.I.; Sripada, C.S. Simulationist models of face-based emotion recognition. Cognition 2005, 94, 193–213.
  24. Bartolome, G. Grundlagen der Funktionellen Dysphagietherapie (FDT): Restituierende Therapieverfahren. In Schluckstörungen: Diagnostik und Rehabilitation; Urban & Fischer: München, Germany, 2010; pp. 245–370.
  25. Neely, J.G. Central Causes of Facial Paralysis. In The Facial Nerve; Thieme: New York, NY, USA, 2014; pp. 129–136.
  26. Klingner, C.M.; Witte, O.W. Central Facial Palsy. In Facial Nerve Disorders and Diseases: Diagnosis and Management; Thieme: Stuttgart, Germany, 2016; pp. 358–369.
  27. Konnerth, V.; Mohr, G.; von Piekartz, H. Fähigkeit von Patienten mit einer peripheren Fazialisparese zur Erkennung von Emotionen—Eine Pilotstudie. Rehabilitation 2016, 55, 19–25.
  28. Storbeck, F.; Schlegelmilch, K.; Streitberger, K.-J.; Sommer, W.; Ploner, C.J. Delayed recognition of emotional facial expressions in Bell’s palsy. Cortex 2019, 120, 524–531.
  29. Korb, S.; Wood, A.; Banks, C.A.; Agoulnik, D.; Hadlock, T.A.; Niedenthal, P.M. Asymmetry of Facial Mimicry and Emotion Perception in Patients with Unilateral Facial Paralysis. JAMA Facial Plast. Surg. 2016, 18, 222–227.
  30. Kim, M.J.; Neta, M.; Davis, F.C.; Ruberry, E.J.; Dinescu, D.; Heatherton, T.F.; Stotland, M.A.; Whalen, P.J. Botulinum toxin-induced facial muscle paralysis affects amygdala responses to the perception of emotional expressions: Preliminary findings from an A-B-A design. Biol. Mood Anxiety Disord. 2014, 4, 1–8.
  31. Strack, F.; Martin, L.L.; Stepper, S. Inhibiting and facilitating conditions of the human smile: A nonobtrusive test of the facial feedback hypothesis. J. Personal. Soc. Psychol. 1988, 54, 768–777.
  32. Havas, D.A.; Glenberg, A.M.; Gutowski, K.A.; Lucarelli, M.J.; Davidson, R.J. Cosmetic Use of Botulinum Toxin-A Affects Processing of Emotional Language. Psychol. Sci. 2010, 21, 895–900.
  33. Niedenthal, P.M.; Brauer, M.; Halberstadt, J.B.; Innes-Ker, A.H. When did her smile drop? Facial mimicry and the influences of emotional state on the detection of change in emotional expression. Cogn. Emot. 2001, 15, 853–864.
  34. Keillor, J.M.; Barrett, A.M.; Crucian, G.P.; Kortenkamp, S.; Heilman, K.M. Emotional experience and perception in the absence of facial feedback. J. Int. Neuropsychol. Soc. 2002, 8, 130–135.
  35. Bogart, K.R.; Matsumoto, D. Facial mimicry is not necessary to recognize emotion: Facial expression recognition by people with Moebius syndrome. Soc. Neurosci. 2010, 5, 241–251.
  36. Calder, A.J.; Keane, J.; Cole, J.; Campbell, R.; Young, A.W. Facial Expression Recognition by People with Möbius Syndrome. Cogn. Neuropsychol. 2000, 17, 73–87.
  37. Kuriakose, D.; Xiao, Z. Pathophysiology and Treatment of Stroke: Present Status and Future Perspectives. Int. J. Mol. Sci. 2020, 21, 7609.
  38. Armstrong, M.J.; Okun, M.S. Diagnosis and Treatment of Parkinson Disease: A Review. JAMA 2020, 323, 548–560.
  39. Finkensieper, M.; Volk, G.F.; Guntinas-Lichius, O. Erkrankungen des Nervus facialis. Laryngo-Rhino-Otologie 2012, 91, 121–142.
  40. Bologna, M.; Fabbrini, G.; Marsili, L.; Defazio, G.; Thompson, P.D.; Berardelli, A. Facial bradykinesia. J. Neurol. Neurosurg. Psychiatry 2013, 84, 681–685.
  41. Bologna, M.; Berardelli, I.; Paparella, G.; Marsili, L.; Ricciardi, L.; Fabbrini, G.; Berardelli, A. Altered Kinematics of Facial Emotion Expression and Emotion Recognition Deficits Are Unrelated in Parkinson’s Disease. Front. Neurol. 2016, 7, 230.
  42. Marsili, L.; Agostino, R.; Bologna, M.; Belvisi, D.; Palma, A.; Fabbrini, G.; Berardelli, A. Bradykinesia of posed smiling and voluntary movement of the lower face in Parkinson’s disease. Parkinsonism Relat. Disord. 2014, 20, 370–375.
  43. Yuvaraj, R.; Murugappan, M.; Norlinah, M.I.; Sundaraj, K.; Khairiyah, M. Review of Emotion Recognition in Stroke Patients. Dement. Geriatr. Cogn. Disord. 2013, 36, 179–196.
  44. Vaughan, A.; Copley, A.; Miles, A. Physical rehabilitation of central facial palsy: A survey of current multidisciplinary practice. Int. J. Speech-Lang. Pathol. 2021, 1–10.
  45. Belin, P.; Fillion-Bilodeau, S.; Gosselin, F. The Montreal Affective Voices: A validated set of nonverbal affect bursts for research on auditory affective processing. Behav. Res. Methods 2008, 40, 531–539.
  46. Herzer, S.; Maigler, A. Eine Revision der Referenzwerte der sechs Basisemotionen des CRAFTA Face-Mirroring Programms. In Eine Querschnittstudie; Hochschule Osnabrück: Osnabrück, Germany, 2016.
  47. CRAFTA Cranio Facial Therapy Academy. Operating Guidelines CRAFTA Facemirroring Assessment and Treatment. Available online: https://www.myfacetraining.com/downloads/CRAFTA%20Operating%20Guidelines.pdf (accessed on 28 January 2019).
  48. von Piekartz, H.; Wallwork, S.B.; Mohr, G.; Butler, D.S.; Moseley, G.L. People with chronic facial pain perform worse than controls at a facial emotion recognition task, but it is not all about the emotion. J. Oral Rehabil. 2015, 42, 243–250.
  49. Myfacetraining. Available online: https://www.myfacetraining.com/ (accessed on 27 August 2019).
  50. Peirce, J.W.; MacAskill, M.R. Building Experiments in PsychoPy; Sage: London, UK, 2018.
  51. Coulson, S.E.; O’Dwyer, N.J.; Adams, R.D.; Croxson, G.R. Expression of Emotion and Quality of Life after Facial Nerve Paralysis. Otol. Neurotol. 2004, 25, 1014–1019.
  52. Ross, B.G.; Fradet, G.; Nedzelski, J.M. Development of a sensitive clinical facial grading system. Otolaryngol. Head Neck Surg. 1996, 114, 380–386.
  53. Neumann, T.; Lorenz, A.; Volk, G.F.; Hamzei, F.; Schulz, S.; Guntinas-Lichius, O. Validierung einer deutschen Version des Sunnybrook Facial Grading Systems. Laryngo-Rhino-Otologie 2017, 96, 168–174.
  54. Guntinas-Lichius, O.; Finkensieper, M. Grading. In Facial Nerve Disorders and Diseases: Diagnosis and Management; Thieme: Stuttgart, Germany, 2016; pp. 94–111.
  55. Fattah, A.; Gurusinghe, A.; Gavilan, J.; Hadlock, T.; Markus, J.; Marres, H.; Nduka, C.; Slattery, W.; Snyder-Warwick, A. Facial Nerve Grading Instruments: Systematic Review of the Literature and Suggestion for Uniformity. Plast. Reconstr. Surg. 2015, 135, 569–579.
  56. Akulov, M.A.; Orlova, A.S.; Usachev, D.J.; Shimansky, V.N.; Tanjashin, S.V.; Khatkova, S.E.; Yunosha-Shanyavskaya, A.V. IncobotulinumtoxinA treatment of facial nerve palsy after neurosurgery. J. Neurol. Sci. 2017, 381, 130–134.
  57. Beurskens, C.H.; Heymans, P.G. Mime therapy improves facial symmetry in people with long-term facial nerve paresis: A randomised controlled trial. Aust. J. Physiother. 2006, 52, 177–183.
  58. Goo, B.; Jeong, S.M.; Kim, J.U.; Park, Y.C.; Seo, B.K.; Baek, Y.H.; Yook, T.H.; Nam, S.S. Clinical efficacy and safety of thread-embedding acupuncture for treatment of the sequelae of Bell’s palsy: A protocol for a patient-assessor blinded, randomized, controlled, parallel clinical trial. Medicine 2019, 98, e14508.
  59. Kim, J.; Choi, J.Y. The effect of subthreshold continuous electrical stimulation on the facial function of patients with Bell’s palsy. Acta Oto-Laryngol. 2016, 136, 100–105.
  60. Kuttenreich, A.-M.; Rethfeldt, W.S.; von Piekartz, H. Autobiografische Erinnerungen bei Behandlung zentraler Fazialisparesen. Forum Logopädie 2018, 32, 6–13.
  61. Kwon, H.-J.; Choi, J.-Y.; Lee, M.S.; Kim, Y.-S.; Shin, B.-C.; Kim, J.-I. Acupuncture for the sequelae of Bell’s palsy: A randomized controlled trial. Trials 2015, 16, 246–253.
  62. Ton, G.; Lee, L.W.; Ng, H.P.; Liao, H.Y.; Chen, Y.H.; Tu, C.H.; Tseng, C.H.; Ho, W.C.; Lee, Y.C. Efficacy of laser acupuncture for patients with chronic Bell’s palsy: A study protocol for a randomized, double-blind, sham-controlled pilot trial. Medicine 2019, 98, e15120.
  63. Cambridge Dictionary. Quick and Dirty. 2020. Available online: https://dictionary.cambridge.org/de/worterbuch/englisch/quick-and-dirty (accessed on 9 September 2020).
  64. Deutsche Gesellschaft für Allgemeinmedizin und Familienmedizin (DEGAM). DEGAM-Leitlinie Nr. 8: Schlaganfall. 2012. Available online: https://www.awmf.org/uploads/tx_szleitlinien/053-011l_S3_Schlaganfall_2012-abgelaufen.pdf (accessed on 23 June 2018).
  65. Sánchez-Lozano, E.; Lopez-Otero, P.; Docio-Fernandez, L.; Argones-Rúa, E.; Alba-Castro, J.L. Audiovisual Three-Level Fusion for Continuous Estimation of Russell’s Emotion Circumplex. In Proceedings of the 3rd ACM International Workshop on Audio/Visual Emotion Challenge, Barcelona, Spain, 21 October 2013; pp. 31–40.
  66. Bundesarbeitsgemeinschaft für Rehabilitation. Arbeitshilfe für die Rehabilitation von Schlaganfallpatienten. 1998. Available online: https://www.bar-frankfurt.de/service/publikationen/produktdetails/produkt/65.html (accessed on 23 September 2019).
  67. World Health Organization (WHO). Internationale Klassifikation der Funktionsfähigkeit, Behinderung und Gesundheit (ICF). 2005. Available online: https://www.dimdi.de/dynamic/de/klassifikationen/downloads/?dir=icf (accessed on 18 September 2019).
  68. Coles, N.A.; Larsen, J.T.; Lench, H.C. A Meta-Analysis of the Facial Feedback Literature: Effects of Facial Feedback on Emotional Experience Are Small and Variable. Psychol. Bull. 2019, 145, 610–651.
  69. Dobel, C.; Miltner, W.H.R.; Witte, O.W.; Volk, G.F.; Guntinas-Lichius, O. Emotionale Auswirkung einer Fazialisparese. Laryngo-Rhino-Otologie 2013, 92, 9–23.
  70. Taylor, G.J.; Bagby, R.M. An overview of the alexithymia construct. In The Handbook of Emotional Intelligence: Theory, Development, Assessment, and Application at Home, School, and in the Workplace; Jossey-Bass: San Francisco, CA, USA, 2000; pp. 40–67.
  71. Beushausen, U.; Grötzbach, H. Evidenzbasierte Sprachtherapie: Grundlagen und Praxis; Elsevier: München, Germany, 2011.
  72. Dollaghan, C. The Handbook for Evidence-Based Practice in Communication Disorders; Paul H. Brookes: Baltimore, MD, USA, 2007.
  73. Gilden, D.H. Clinical Practice: Bell’s Palsy. N. Engl. J. Med. 2004, 351, 1323–1331.
  74. Hildebrandt, A.; Sommer, W.; Schacht, A.; Wilhelm, O. Perceiving and remembering emotional facial expressions—A basic facet of emotional intelligence. Intelligence 2015, 50, 52–67.
  75. Olderbak, S.; Semmler, M.; Doebler, P. Four-branch model of ability emotional intelligence with fluid and crystallized intelligence: A meta-analysis of relations. Emot. Rev. 2019, 11, 166–183.
  76. Schlegel, K.; Palese, T.; Mast, M.S.; Rammsayer, T.H.; Hall, J.A.; Murphy, N.A. A meta-analysis of the relationship between emotion recognition ability and intelligence. Cogn. Emot. 2020, 34, 329–351.
  77. Ricciardi, L.; Visco-Comandini, F.; Erro, R.; Morgante, F.; Volpe, D.; Kilner, J.; Edwards, M.J.; Bologna, M. Emotional facedness in Parkinson’s disease. J. Neural Transm. 2018, 125, 1819–1827.
  78. Rosenberg, H.; McDonald, S.; Rosenberg, J.; Westbrook, R.F. Measuring emotion perception following traumatic brain injury: The Complex Audio Visual Emotion Assessment Task (CAVEAT). Neuropsychol. Rehabil. 2019, 29, 232–250.
  79. House, J.W.; Brackmann, D.E. Facial nerve grading system. Otolaryngol. Head Neck Surg. 1985, 93, 146–147.
  80. Online Psychology Research. Montreal Affective Voices. Available online: https://experiments.psy.gla.ac.uk//index.php (accessed on 11 June 2018).
  81. Paquette, S.; Peretz, I.; Belin, P. The “Musical Emotional Bursts”: A validated set of musical affect bursts to investigate auditory affective processing. Front. Psychol. 2013, 4, 509.
  82. Miller, M.Q.; Hadlock, T.A.; Fortier, E.; Guarin, D.L. The Auto-eFACE: Machine Learning-Enhanced Program Yields Automated Facial Palsy Assessment Tool. Plast. Reconstr. Surg. 2021, 147, 467–474.
Figure 1. Accuracy of facial emotion recognition (mean, median, interquartile range). Participants after stroke with facial paresis performed significantly worse compared to healthy controls (p < 0.001) and compared to participants after stroke without facial paresis (p < 0.001). The data for healthy controls were not collected in this study but were taken from [46,47], so no information on the actual distribution of the data is available but only the mean as an indicator of the central tendency. Therefore, the figures only contain two box plots, not three.
Figure 2. Accuracy of auditory emotion recognition (mean, median, interquartile range). Participants after stroke with facial paresis performed significantly worse compared to healthy controls (p < 0.001) but did not differ significantly compared to participants after stroke without facial paresis (p = 0.540). The data for healthy controls were not collected in this study but were taken from [45], so no information on the actual distribution of the data is available but only the mean as an indicator of the central tendency. Therefore, the figures only contain two box plots, not three.
Figure 3. Average time taken for facial emotion recognition (mean, median, interquartile range). Participants after stroke with facial paresis performed significantly faster compared to healthy controls (p = 0.02) but did not differ significantly compared to participants after stroke without facial paresis (p = 0.68). The data for healthy controls were not collected in this study but were taken from [46,47], so no information on the actual distribution of the data is available but only the mean as an indicator of the central tendency. Therefore, the figures only contain two box plots, not three.
Figure 4. Average time taken for auditory emotion recognition (mean, median, interquartile range). Participants after stroke with facial paresis did not differ significantly compared to participants after stroke without facial paresis (p = 0.069).
Figure 5. Accuracy and time taken in subjective facial emotion recognition (mean, median, interquartile range) in participants after stroke with facial paresis. Participants felt significantly more restricted in terms of time compared to accuracy (p = 0.011).
Table 1. Inclusion and exclusion criteria.

| Inclusion Criteria | Exclusion Criteria |
|---|---|
| Adult persons (≥18 years) with or without unilateral central facial paresis after stroke (ischemic or hemorrhagic) | Children and adults with peripheral facial paresis |
| Acute, post-acute or chronic phase of stroke | Other neurological or psychological diseases |
| For the investigation: capacity for approximately 75 min (sitting for approximately 10 min); ability to choose answer options; communication skills needed to follow instructions and to answer questionnaires | For the investigation: impairment of general status, communication skills and/or ability to answer such that the investigation would not be possible |
| Normal or corrected visual and hearing ability | |
| Ability to consent | No ability to consent |