Article

Emotion Elicitation through Vibrotactile Stimulation as an Alternative for Deaf and Hard of Hearing People: An EEG Study

by Álvaro García López 1,*, Víctor Cerdán 2, Tomás Ortiz 3, José Manuel Sánchez Pena 1 and Ricardo Vergaz 1

1 GDAF-UC3M Department of Electronic Technology, Carlos III University of Madrid, Leganés, 28911 Madrid, Spain
2 Department of Applied Communication Sciences, Complutense University of Madrid, 28040 Madrid, Spain
3 Department of Psychiatry, Complutense University of Madrid, 28040 Madrid, Spain
* Author to whom correspondence should be addressed.
Electronics 2022, 11(14), 2196; https://doi.org/10.3390/electronics11142196
Submission received: 20 June 2022 / Revised: 6 July 2022 / Accepted: 10 July 2022 / Published: 13 July 2022
(This article belongs to the Section Computer Science & Engineering)

Abstract
Despite technological and accessibility advances, the performing arts and their cultural offerings remain inaccessible to many people. By using vibrotactile stimulation as an alternative channel, we explored a different way to enhance the emotional processes produced while watching audiovisual media and, thus, elicit a greater emotional reaction in hearing-impaired people. We recorded the brain activity of 35 participants with normal hearing and 8 participants with severe-to-total hearing loss. The results showed activation of the same areas both in participants with normal hearing while watching a video and in hearing-impaired participants while watching the same video with synchronized soft vibrotactile stimulation in both hands, delivered through a stimulation glove designed by the authors. These brain areas (bilateral middle frontal orbitofrontal, bilateral superior frontal gyrus, and left cingulum) have been reported as emotional and attentional areas. We conclude that vibrotactile stimulation can elicit the appropriate cortical activation while watching audiovisual media.

1. Introduction

People with disabilities face numerous daily barriers, not only architectural but also cultural ones. Article 30 of the United Nations Convention on the Rights of Persons with Disabilities establishes the right of persons with disabilities to participate on equal terms in cultural life [1]. If we remove these barriers, we ensure that everyone, regardless of their abilities, can fulfil their right to full development.
In our daily lives, audiovisual stimuli are essential; the absence of either of these channels (visual, auditory) entails the loss of a critical part of the information. A clear example is traditional cinema, designed to be enjoyed through these two channels [2,3], or the use of these channels to study the effect of controlled consciousness on visually induced motion sickness [4]. Part of this information comes from the music of the film, with the consequent generation of emotions both by the music's own emotional evocation [5] and by the setting of the scene [6].
On the other hand, many similarities between the auditory and touch channels can be found in terms of psychophysical characteristics, although their anatomy and physiology are significantly different [7]. This is due to the tight physical interrelationship between the properties of sound and vibration, such as pressure or frequency [8,9]. Okazaki et al. explored the frequency relationship of tactile and auditory stimuli, suggesting the existence of a mechanism that could link these stimuli, as their frequencies share the same harmonic structure [10]. Other studies have demonstrated the existence of a cross-modal area in the brain for auditory and tactile integration [11], or an intermodal (tactile and auditory) bias in the perception of sound intensity [12]. These channel relationships are so tight that vibrotactile stimulation, when presented in a synchronized way with an auditory stimulus, can improve auditory pitch discrimination tasks [13].
Based on this relationship, some investigators have developed devices which mix music and vibrotactile stimulation. The Emoti-Chair, for example, is a device that enhances entertainment through vibrotactile stimulation: during its tests, volunteers said that they felt the music [14]. Other studies have shown that vibrotactile stimulation, synchronized with music in a dance-like context, enhances the experience of deaf people owing to the musical beat synchronization [15], an effect proven in children as well [16].
Although numerous studies have tested different stimuli and vibrotactile devices in the context of music, this type of research is less abundant in the world of cinema. By testing the joint experience of watching horror films while receiving vibrotactile stimulation through a high-resolution continuous vibrotactile display, evidence of emotion augmentation was found [17], although it was based on the subjective emotional response of the participants, compiled through a self-reported survey. Moreover, vibrotactile stimulation generated by a haptic glove prototype has shown potential to enhance mood, as different combinations of intensity and frequency seem to elicit different moods [18].
While we preferred EEG measurements and their inversion procedures in our study, due to their efficiency and low cost, it is worth mentioning that there are also other human-machine interfaces (HMIs), such as brain-computer interfaces (BCIs) and eye-tracking-based methodologies [19]. These techniques have shown strong agreement between the results of cognitive psychological attention tests and the attention levels determined by BCI systems. Such was the case in an examination and comparison between an EEG-based attention test and a continuous performance test (CPT), as well as a Test of Variables of Attention (TOVA) [20], and in comparisons with other human-computer interaction eye-tracking tests [21,22,23,24]. Moreover, interesting results have also been obtained when applying deep learning techniques to inversion problems for eye tracking and other measurements [25,26,27,28,29,30].
Despite technological and accessibility advances, a great part of cultural offerings is still inaccessible for many people. The aim of this research was to obtain proof of whether multimodal stimulation, mixing vision and touch, increases emotional activation in people with hearing impairment during the viewing of a film. This proof is based on the simultaneous recording of brain activity by means of electroencephalography (EEG). The stimulation system is described in depth, both in terms of the hardware and stimulation procedure.

2. Materials and Methods

2.1. Participants

Two groups of participants were recruited. For the “control group”, volunteers were recruited from the medical school where the study was conducted, whereas for the “experimental group”, volunteers were recruited through different associations of deaf people. All volunteers were informed of the objective and general procedure of the study. Before the study, the subjects were asked about the following clinical conditions: whether they suffered from phobias, mental disorders, or psychiatric or neurological pathologies, and/or whether they consumed psychotropic substances. None of the participants reported any of these conditions.
The control group consisted of 34 people (23 females and 11 males) with self-reported normal hearing, aged between 18 and 57 (mean: 19.72, SD: 5.41). This sample size is in line with other similar EEG studies [31,32,33]. Nineteen of the participants in the control group had a high school diploma, eleven had a university degree, and four had a higher or postgraduate degree. Most of the participants were medical students at the Complutense University of Madrid.
For the experimental group, 8 people (7 female participants and 1 male participant) with self-reported hearing loss were recruited, aged between 19 and 60 (mean: 41.88, SD: 16.89). This sample is also in line with other similar EEG studies [34,35].
The self-reported hearing losses were classified within the categories appearing in Table 1, following the Audiometric Classification of Hearing Impairments of the International Bureau for Audiophonology [36].
Five participants had very severe hearing loss and wore hearing aids (one of them had a cochlear implant in one ear), and three participants had total hearing loss (one of them had a cochlear implant). Four of the participants from the experimental group had a high school diploma, three had a university degree, and one had a postgraduate degree.
This study is part of a line of research whose clinical trials were approved by the Clinical Research Ethics Committee (CEIC) of the San Carlos Clinical Hospital, Madrid (Spain), on 4 April 2019.

2.2. Materials

2.2.1. Stimuli

Studies that used audiovisual stimulation to analyze emotions with EEG, such as [37] and, most relevantly, that of Pereira et al. [38], showed that videos longer than one minute are preferable when trying to detect emotions through EEG. Following this conclusion, we used a film sequence that was 2:20 min long.
This sequence was shot by the authors. In it, a couple arrives in a room. They seem angry, and there is a tense atmosphere, as if they have had a serious argument. Sporadically, the woman pushes the man. The intention of the director (V. Cerdán) was to create a tense atmosphere to convey the uneasiness between the partners.
To check the validity of the film, 110 students from the Information Sciences Faculty of the Complutense University of Madrid were surveyed to evaluate the types of emotions transmitted by the film. A questionnaire was administered using a discrete model of emotions, which defines a space of limited, discrete, basic emotions, as well as some complex emotions [39,40]. Although the number of basic emotions is still a controversial topic, consensus has been reached on the following six: anger, fear, sadness, happiness, disgust, and surprise [41,42,43,44]. Moreover, emotions can be analyzed in terms of their valence (negative to positive, i.e., unpleasant to pleasant) and arousal (low to high) [39,40]. For example, Zhao et al. [40] categorized anxiety, fear, sadness, and anger as negative emotions, cataloguing them in terms of arousal. In this preliminary study, the students rated the film as transmitting a negative emotion, with an intensity of 4.3 out of 5. The sample rated the video with the following feelings: anxiety, 17%; fear, 37%; sadness, 34%; anger, 12%.
Vibrotactile stimuli were programmed on a pair of gloves specifically designed for the experiment and described in Section 2.2.2, which applied a soft tactile vibration to every finger and the palm, simulating a beat or a push. Each vibration was a 250 ms pulse train of a 1 kHz square signal. The vibration pattern was produced in both hands synchronously. Exact timing was mandatory, as well as a perfect fit of the glove to the user’s hand, to ensure correct haptic stimulation by the motors.
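For reference, the following minimal Python sketch reproduces the drive waveform described above (a 250 ms burst of a 1 kHz square signal, identical for both hands). The synthesis sample rate and 0/1 signal levels are assumptions for illustration only; in the actual system the waveform was generated by the glove electronics.

```python
import numpy as np

def vibration_burst(fs=10_000, carrier_hz=1_000, burst_ms=250):
    """Return one 250 ms burst of a 1 kHz square wave (0/1 levels).

    fs is an assumed synthesis rate for illustration; the real drive
    signal was produced by the glove hardware, not by this code.
    """
    t = np.arange(int(fs * burst_ms / 1000)) / fs
    # Square wave: high during the first half of each 1 ms carrier period.
    return ((t * carrier_hz) % 1.0 < 0.5).astype(float)

# The same burst drives both gloves, so stimulation stays synchronous.
left_hand = right_hand = vibration_burst()
print(left_hand.shape)  # (2500,) samples at 10 kHz for a 250 ms burst
```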

2.2.2. Hardware

The hardware consisted of two computers: a control computer that triggered the video, sent marks to the EEG amplifier, and synchronized the driving signal to the gloves, with the screen and speakers pointing to the participant; and a register computer that acquired the 64 channels of EEG data, using a custom-designed Neuroscan electrode cap and an ATI EEG system (Advantek SRL). The reference electrodes were placed on the mastoids, and the ground electrode was placed on the forehead. After acquisition, the data were re-referenced to the average reference and band-pass filtered between 0.05 and 30 Hz, at a sampling rate of 1000 Hz. Impedances were kept under 5 kΩ. The EEG recordings were taken in a small, electrically shielded (Faraday) room, with a comfortable armchair where the subjects sat during the tests.
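As an illustration of this acquisition stage, the short Python sketch below applies a 0.05–30 Hz band-pass filter at a 1000 Hz sampling rate and re-references a channels-by-samples array to the common average. The Butterworth design and filter order are assumptions, since the paper does not specify the filter implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(eeg, fs=1000.0, band=(0.05, 30.0), order=4):
    """Band-pass filter and average-reference an EEG array (channels x samples).

    The 0.05-30 Hz band and the 1000 Hz rate come from the paper; the
    Butterworth design and the filter order are assumptions of this sketch.
    """
    b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg, axis=1)       # zero-phase filtering
    return filtered - filtered.mean(axis=0)      # common average reference

eeg = np.random.randn(64, 10 * 1000)             # 64 channels, 10 s of toy data
clean = preprocess(eeg)
```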
For the vibrotactile stimulation, two haptic gloves were used, as shown in Figure 1. A golf glove type was selected due to its comfort and the easy installation of the stimulation motors. Each glove was an Inesis Golf Glove 100 fitted with Uxcell 1030 coin micromotors, one on each of the five finger pads and one on the palm, providing the haptic stimuli by vibration. The choice of these motor placement points was based on the Penfield homunculus, which maps the location of the organs and senses in the cerebral cortex, deforming the body according to the space that the cortex assigns to each organ: the larger the size, the greater the sensitivity of the tissue [45]. This higher sensitivity is due to a higher concentration of fast-adapting receptors, such as the Meissner and Pacini receptors, which are responsible for the uptake of vibrational stimuli [46,47,48,49]. The motors operate at 3 V DC and draw 70 mA. To generate the stimuli, an Arduino UNO rev3 was used, triggered by the control PC and synchronized with the viewing of the film.
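A minimal PC-side sketch of this triggering scheme is shown below, assuming the Arduino listens for a one-byte command over a serial link. The port name, baud rate, command byte, and the EEG-marking helper are hypothetical, as the study does not document the firmware protocol at this level.

```python
import time
import serial  # pyserial

# Hypothetical port and protocol; the actual Arduino firmware and the
# EEG marker interface used in the study are not described in the paper.
arduino = serial.Serial("/dev/ttyACM0", baudrate=115200, timeout=1)

def send_eeg_mark(code: int) -> None:
    """Placeholder for writing an event mark to the EEG amplifier."""
    print(f"EEG mark {code} at {time.time():.3f}")

def trigger_vibration(mark_code: int = 1) -> None:
    """Fire one 250 ms vibration burst on both gloves and mark the EEG."""
    send_eeg_mark(mark_code)   # mark first so the post-trigger window is aligned
    arduino.write(b"V")        # hypothetical one-byte command handled by the firmware

# Example: stimulus onsets (seconds from video start) chosen by the director.
stimulus_times = [12.4, 37.0, 58.2]
video_start = time.time()
for t_onset in stimulus_times:
    time.sleep(max(0.0, video_start + t_onset - time.time()))
    trigger_vibration()
```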

2.2.3. Procedure

Participants were first asked to complete a survey that included questions about their age, gender, education level, type and degree of hearing loss, and hearing aids. For this test, no information about the dominant hand was requested, as the system consisted of a pair of gloves and thus both hands were to be stimulated.
Each participant underwent a control test to verify that the vibration itself did not generate brain activity beyond the expected tactile recognition. This vibration consisted of a signal at a constant frequency of 2 Hz with a duty cycle of 10%, the 6 motors of each hand being active for 50 ms and off for 450 ms, with a total test duration of 3 min. The reason for using such a low duty cycle was twofold: to avoid discomfort over the long duration of the test, while remaining sufficient to maintain a constant tactile stimulus.
Each participant was invited to attend an individual session. All sessions were carried out during the COVID-19 pandemic, so hygiene and disinfection measures were taken. These measures required the use of a mask throughout the entire test, the use of hydroalcoholic gel, the disinfection of the material, couch, gloves, EEG cap, etc., with alcohol, as well as sufficient ventilation of the shielded room. To minimize the risk of contagion, only one additional person was allowed to enter this room. This person put the EEG cap on the subject’s head, verified the fitting of both gloves, and checked the correct state of the system. It was mandatory to ensure a correct fit, and that the twelve motors were adjusted to their respective fingertips or palms.
Once the participant’s preparation was completed, it was explained to them that they would be watching a series of videos. To do this, they were asked to sit down, emphasizing the need to stay comfortable and, above all, relaxed during the projection. After that, the room lights were turned off and the test began.
The stimuli were presented in random sequences to avoid the learning of any of the videos, and according to the following conditions:
  • Condition 1: This was a reference condition, intended to obtain a baseline emotional experience that would serve as a comparison for the experimental group. Participants in the control group watched the video with its soundtrack (audiovisual stimulation); no tactile stimulation was presented.
  • Condition 2: This condition used the same video as Condition 1 and was only for the experimental group. It presented only visual stimulation, with neither auditory nor tactile stimulation.
  • Condition 3: This condition was only for the experimental group, and it presented visual stimulation and tactile stimulation while watching the video.
In Condition 3, at each moment the director decided to insert a tactile stimulus, a mark was generated in the EEG record so that the 100 ms following each vibration trigger could be analyzed. All the stimulations were tracked using this method to evaluate the differences in brain activity among the different viewers. Once the EEG record was obtained, a SAM questionnaire was administered to evaluate the emotional experience of the viewers regarding the valence, arousal, and presence or intensity of emotions [43,50,51]. Additionally, each participant individually reported their subjective emotional experience for each condition of the same film on a 5-level Likert scale. The duration of the whole experiment, including the EEG instrumentation and glove setup and initialization, as well as the questionnaires, was around 60 min per participant. No payment was made to any participant for this study.
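A minimal sketch of this epoching step, assuming the vibration marks are available as sample indices into the continuous 1000 Hz recording:

```python
import numpy as np

def extract_epochs(eeg, event_samples, fs=1000, win_ms=100):
    """Cut the 100 ms window following each vibration mark.

    eeg: channels x samples array; event_samples: sample indices of the marks.
    Returns an array of shape (n_events, n_channels, win_samples).
    """
    win = int(fs * win_ms / 1000)
    epochs = [eeg[:, s:s + win] for s in event_samples if s + win <= eeg.shape[1]]
    return np.stack(epochs)

eeg = np.random.randn(64, 140 * 1000)   # 64 channels, 140 s of toy data
marks = [12_400, 37_000, 58_200]        # sample indices of the triggers
epochs = extract_epochs(eeg, marks)
print(epochs.shape)                     # (3, 64, 100)
```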
The recorded EEG data were re-referenced to the common average. A visual inspection of each recording was performed to clean the data of artifacts due to eye or muscle movements, and noisy channels were substituted by a linear interpolation of adjacent channels. Additionally, channels whose squared magnitude was higher than four standard deviations of their mean power were substituted with the mean of the adjacent channels [52]. The numerical values shown, according to the LORETA algorithm, correspond to µA/m (microamperes per meter), i.e., units of magnetic field strength.
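The channel-rejection criterion can be sketched as follows; “adjacent” channels are approximated here by index neighbours, whereas the study used spatially adjacent electrodes of the 64-channel cap.

```python
import numpy as np

def replace_noisy_channels(eeg):
    """Replace channels whose power exceeds mean + 4 SD with neighbour averages.

    eeg: channels x samples. Index neighbours stand in for the spatially
    adjacent electrodes used in the study (an assumption of this sketch).
    """
    power = (eeg ** 2).mean(axis=1)
    bad = np.where(power > power.mean() + 4 * power.std())[0]
    cleaned = eeg.copy()
    n = eeg.shape[0]
    for ch in bad:
        neighbours = [c for c in (ch - 1, ch + 1) if 0 <= c < n and c not in bad]
        if neighbours:
            cleaned[ch] = eeg[neighbours].mean(axis=0)
    return cleaned, bad

cleaned, bad = replace_noisy_channels(np.random.randn(64, 5000))
```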
Source localization was carried out by solving the EEG inverse problem using the traditional method of LORETA (low-resolution electromagnetic tomography) [53], identifying the sources of neural currents underlying the potentials recorded at the scalp level. After the sources were obtained, the corresponding brain map was generated, using the average brain atlas model of the Montreal Neurological Institute (MNI) as a reference [54].
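For readers who want to reproduce a comparable source estimate with open tools, the sketch below uses MNE-Python, which implements sLORETA (a standardized variant of LORETA) on the fsaverage template head in MNI space. This is not the toolchain used in the study; the evoked file name and the ad hoc noise model are placeholders.

```python
import os.path as op
import mne
from mne.datasets import fetch_fsaverage

# fsaverage provides a template head, BEM, and source space in MNI space.
fs_dir = fetch_fsaverage(verbose=False)
src = op.join(fs_dir, "bem", "fsaverage-ico-5-src.fif")
bem = op.join(fs_dir, "bem", "fsaverage-5120-5120-5120-bem-sol.fif")

# `evoked` is assumed to be the average of the 100 ms post-trigger epochs.
evoked = mne.read_evokeds("condition3-ave.fif", condition=0)  # hypothetical file
fwd = mne.make_forward_solution(evoked.info, trans="fsaverage", src=src,
                                bem=bem, eeg=True, meg=False)
noise_cov = mne.make_ad_hoc_cov(evoked.info)   # placeholder noise model
inv = mne.minimum_norm.make_inverse_operator(evoked.info, fwd, noise_cov)
stc = mne.minimum_norm.apply_inverse(evoked, inv, lambda2=1.0 / 9.0,
                                     method="sLORETA")  # LORETA-family solution
```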
Once the areas of maximum activation were located, statistical parametric maps (SPMs) were calculated using Hotelling’s T2 test on a voxel-by-voxel basis against zero to estimate statistically significant sources. Hotelling’s T2 test is a multivariate statistical analysis for hypothesis testing based on a covariance matrix; when applied to the EEG values registered over all the electrodes in neuroscientific experiments, it has been proven to detect brain areas with statistically significant activation compared with the average state. Statistically significant differences between conditions were located by calculating the SPM for Hotelling’s T2 test on a voxel-by-voxel basis for independent groups [55]. In this way, the resulting probability maps were obtained with thresholds at a false discovery rate (FDR) of q = 0.05 [56] and depicted as 3D activation images superimposed on the MNI brain model. Once the probability maps were obtained, we identified the anatomical structures whose cluster size was more than 10 voxels and greater than the threshold, according to the AAL atlas [57]. Local maxima were then located according to the MNI coordinate system.
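A simplified version of this voxel-wise test can be written as a one-sample Hotelling’s T2 test of the three current components against zero, followed by Benjamini-Hochberg FDR control at q = 0.05. The actual SPM machinery of [55,56] is more elaborate, so the sketch below is only illustrative, with toy data standing in for the subject-level source estimates.

```python
import numpy as np
from scipy.stats import f as f_dist

def hotelling_t2_vs_zero(samples):
    """One-sample Hotelling T2 test of a mean vector against zero.

    samples: (n_subjects, p) array, e.g. the p = 3 current components
    of one voxel across subjects. Returns (T2, p_value).
    """
    n, p = samples.shape
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)
    t2 = n * mean @ np.linalg.solve(cov, mean)
    f_stat = (n - p) / (p * (n - 1)) * t2
    return t2, f_dist.sf(f_stat, p, n - p)

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg: boolean mask of p-values significant at FDR q."""
    pvals = np.asarray(pvals)
    order = np.argsort(pvals)
    thresh = q * np.arange(1, len(pvals) + 1) / len(pvals)
    passed = pvals[order] <= thresh
    k = np.max(np.where(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros_like(pvals, dtype=bool)
    mask[order[:k]] = True
    return mask

# Toy example: 8 subjects, 500 voxels, 3 current components per voxel.
rng = np.random.default_rng(0)
data = rng.normal(size=(8, 500, 3))
pvals = [hotelling_t2_vs_zero(data[:, v, :])[1] for v in range(500)]
significant = fdr_bh(pvals, q=0.05)
```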

3. Results

To compare the different conditions in both groups, we considered up to eight maximum statistically significant activation peaks in each test, obtained with Hotelling’s T2 test against zero. In the following tables, the first column shows the AAL (Automated Anatomical Labeling) label corresponding to the activated zone; X, Y, and Z are the coordinates of the activation peak according to the average Montreal Neurological Institute (MNI) atlas; and Act. [µA/m] is the numerical value of the zone activation.
In Condition 1 (hearing participants, audiovisual stimulation with no tactile stimulation), as shown in Figure 2 and Table 2, maximum statistically significant activation was found in the bilateral middle frontal orbitofrontal, bilateral superior frontal gyrus, and left cingulum.
In Condition 2 (hearing-impaired participants and only visual stimulation), as shown in Figure 3 and Table 3, maximum statistically significant activation was found in the bilateral middle frontal orbitofrontal, left superior temporal lobe, bilateral superior frontal gyrus, left cingulum, and insula areas.
In Condition 3 (hearing-impaired participants with visual and vibrotactile stimulation), as shown in Figure 4 and Table 4, maximum statistically significant activation was found in the bilateral middle frontal orbitofrontal, bilateral superior frontal gyrus, and left cingulum. Remarkably, when comparing Figure 2 and Figure 4, the difference apparently lies only in the scale.

4. Discussion

The cingulate gyrus is an area related to self-regulation and reward mechanisms. Evidence has been found of coupling between the cingulate and cognitive and emotional areas during task processing (involving motor processes), with both areas activated at the same time [58].
Condition 2 (hearing-impaired participants with only visual stimulation) showed areas of stimulation similar to those of Condition 1 (bilateral middle frontal orbitofrontal, superior frontal gyrus, and cingulum), but also the superior temporal gyrus, where the main peaks appeared, which has been identified as a region for multisensory integration [59,60], and the medial temporal gyrus, which is associated with visual and auditory processing [6]. The insula also showed an important activation peak. The insula works both as a nerve center and as an integrator between sensory systems, and it is also active during the perception of affective sounds [61]. It therefore participates in emotional processing, translating affective auditory signals into subjective emotions [60]. Finally, it is involved in audiovisual integration tasks [62].
This result is consistent with previous experiments with hearing-impaired participants, in which these areas were triggered with high intensity when watching a muted video [63].
In Condition 3 (hearing-impaired participants and visual and vibrotactile stimulation), we again found peak activation in the bilateral middle frontal orbitofrontal, bilateral superior frontal gyrus, and left cingulum, which are all the same activation areas as those found in Condition 1, but with a remarkably higher statistically significant activation (see scale in Figure 4). The middle frontal gyrus is noteworthy because, as cognitive processes progress, emotions are increasingly processed in the right hemisphere [64,65].

5. Conclusions

This research has several limitations. Firstly, the size of the target sample was small. However, as previously mentioned, there are studies on the same subject with groups of a similar size. Secondly, none of the participants had previous training, and thus we cannot compare whether, in the case of trained sensory stimulation, the same results would be obtained.
Nevertheless, in light of the three experiments as a whole, the Condition 3 experiment in hearing-impaired people showed that the processes triggered by the visual experience alone (those revealed by Condition 2) became focused on the same areas activated by the full audiovisual experience (those revealed by the Condition 1 experiment), and indeed with a higher intensity.
Therefore, according to the obtained results, we can claim that multimodal stimulation, touch associated with audiovisual stimuli, not only reinforces attentional processes but is also linked with emotional processes related to the activity of frontal brain areas. This, combined with the limitations stated above, paves the way for a complete study of the neural-based emotional activations in hearing-impaired people when multimodal stimuli are used in audiovisual shows.

Author Contributions

Conceptualization and methodology, all authors; hardware and software, Á.G.L., R.V.; validation, T.O., V.C.; formal analysis, Á.G.L., V.C., T.O.; resources, T.O., J.M.S.P., R.V.; data curation, V.C., T.O., Á.G.L.; writing—original draft preparation, R.V., Á.G.L.; writing—review and editing, R.V., Á.G.L.; supervision, T.O., J.M.S.P.; project administration and funding acquisition, T.O., J.M.S.P., R.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Comunidad de Madrid and FSE/FEDER Program under grant SINFOTON2-CM (S2018/NMT-4326).

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and approved by the Clinical Research Ethics Committee (CEIC) of the San Carlos Clinical Hospital.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available in order to preserve the anonymity of the volunteers.

Acknowledgments

The authors would like to thank Revuelta, P. and all the volunteers who participated in the tests.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. UN. UN Convention on the Rights of Persons with Disabilities (CRPD). In Article 30—Participation in Cultural Life, Recreation, Leisure and Sport; UN: New York, NY, USA, 2022. [Google Scholar]
  2. Huang, R.S.; Chen, C.F.; Sereno, M.I. Spatiotemporal integration of looming visual and tactile stimuli near the face. Hum. Brain Mapp. 2018, 39, 1256–2176. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Akiba, H.T.; Costa, M.F.; Gomes, J.S.; Oda, E.; Simurro, P.B.; Dias, A.M. Neural Correlates of Preference: A Transmodal Validation Study. Front. Hum. Neurosci. 2019, 13, 73. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Sugiura, A.; Tanaka, K.; Ohta, K.; Kitamura, K.; Morisaki, S.; Takada, H. Effect of controlled consciousness on sense of presence and visually induced motion sickness while viewing stereoscopic movies. In International Conference on Universal Access in Human-Computer Interaction; Springer: Cham, Switzerland, 2018; pp. 122–131. [Google Scholar]
  5. Chatterjee, A.; Cardilo, E. Brain, Beauty, and Art: Essays Bringing Neuroaesthetics into Focus; Oxford University Press: Oxford, UK, 2021; ISBN 019751362X/9780197513620. [Google Scholar]
  6. Pehrs, C.; Deserno, L.; Bakels, J.-H.; Schlochtermeier, L.H.; Kappelhoff, H.; Jacobs, A.M.; Fritz, T.H.; Koelsch, S.; Kuchinke, L. How music alters a kiss: Superior temporal gyrus controls fusiform-amygdalar effective connectivity. Soc. Cogn. Affect. Neurosci. 2013, 9, 1770–1778. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Merchel, S.; Altinsoy, M.E. Psychophysical comparison of the auditory and tactile perception: A survey. J. Multimodal. User Interfaces 2020, 14, 271–283. [Google Scholar] [CrossRef]
  8. Sugita, Y.; Suzuki, Y. Audiovisual perception: Implicit estimation of sound-arrival time. Nature 2003, 421, 911. [Google Scholar] [CrossRef]
  9. Bresciani, J.-P.; Ernst, M.O.; Drewing, K.; Bouyer, G.; Maury, V.; Kheddar, A. Feeling what you hear: Auditory signals can modulate tactile tap perception. Exp. Brain Res. 2005, 162, 172–180. [Google Scholar] [CrossRef]
  10. Okazaki, R.; Hachisu, T.; Sato, M.; Fukushima, S.; Hayward, V.; Kajimoto, H. Judged Consonance of Tactile and Auditory Frequencies. In Proceedings of the 2013 World Haptics Conference (WHC), Daejeon, Korea, 14–17 April 2013; pp. 663–666. [Google Scholar]
  11. Huang, J.; Gamble, D.; Sarnlertsophon, K.; Wang, X.; Hsiao, S.; Goldreich, D. Feeling Music: Integration of Auditory and Tactile Inputs in Musical Meter Perception. PLoS ONE 2012, 7, e48496. [Google Scholar] [CrossRef] [Green Version]
  12. Schürmann, M.; Caetano, G.; Jousmäki, V.; Hari, R. Hands help hearing: Facilitatory audiotactile interaction at low sound-intensity levels. J. Acoust. Soc. Am. 2004, 115, 830–832. [Google Scholar] [CrossRef] [Green Version]
  13. Young, G.W.; Murphy, D.; Weeter, J. Haptics in music: The effects of vibrotactile stimulus in low frequency auditory difference detection tasks. IEEE Trans. Haptics 2016, 10, 135–139. [Google Scholar] [CrossRef]
  14. Baijal, A.; Kim, J.; Branje, C.; Russo, F.; Fels, D.I. Composing vibrotactile music: A multi-sensory experience with the emoti-chair. In Proceedings of the 2012 IEEE Haptics Symposium (Haptics), Vancouver, BC, Canada, 4–7 March 2012; pp. 509–515. [Google Scholar]
  15. Tranchant, P.; Shiell, M.M.; Giordano, M.; Nadeau, A.; Peretz, I.; Zatorre, R.J. Feeling the beat: Bouncing synchronization to vibrotactile music in hearing and early deaf people. Front. Neurosci. 2017, 11, 507. [Google Scholar] [CrossRef]
  16. Yao, L.; Shi, Y.; Chi, H.; Ji, X.; Ying, F. Music-touch shoes: Vibrotactile interface for hearing impaired dancers. In Proceedings of the Fourth International Conference on Tangible, Embedded, and Embodied Interaction, Cambridge, MA, USA, 24–27 January 2010; pp. 275–276. [Google Scholar]
  17. Branje, C.; Nespoil, G.; Russo, F.; Fels, D.I. The Effect of Vibrotactile Stimulation on the Emotional Response to Horror Films. Comput. Entertain. 2014, 11, 1–13. [Google Scholar] [CrossRef]
  18. Mazzoni, A.; Bryan-Kinns, N. Mood Glove: A haptic wearable prototype system to enhance mood music in film. Entertain. Comput. 2016, 17, 9–17. [Google Scholar] [CrossRef] [Green Version]
  19. Katona, J.; Ujbanyi, T.; Sziladi, G.; Kovari, A. Examine the effect of different web-based media on human brain waves. In Proceedings of the 8th IEEE International Conference on Cognitive Infocommunications, Debrecen, Hungary, 11–14 September 2017; pp. 407–412. [Google Scholar] [CrossRef]
  20. Katona, J.; Kovari, A. The evaluation of bci and pebl-based attention tests. Acta Polytech. Hung. 2018, 15, 225–249. [Google Scholar]
  21. Katona, J.; Kovari, A.; Heldal, I.; Costescu, C.; Rosan, A.; Demeter, R.; Thill, S.; Stefanut, T. Using eye-tracking to examine query syntax and method syntax comprehension in LINQ. In Proceedings of the 11th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Mariehamn, Finland, 23–25 September 2020; pp. 437–444. [Google Scholar]
  22. Katona, J. Measuring Cognition Load Using Eye-Tracking Parameters Based on Algorithm Description Tools. Sensors 2022, 22, 912. [Google Scholar] [CrossRef]
  23. Katona, J. Clean and dirty code comprehension by eye-tracking based evaluation using GP3 eye tracker. Acta Polytech. Hung. 2021, 18, 79–99. [Google Scholar] [CrossRef]
  24. Katona, J. Analyse the Readability of LINQ Code using an Eye-Tracking-based Evaluation. Acta Polytech. Hung. 2021, 18, 193–215. [Google Scholar] [CrossRef]
  25. Negi, A.; Kumar, K. Viability and Applicability of Deep Learning Approach for COVID-19 Preventive Measures Implementation. In Proceedings of the International Conference on Artificial Intelligence and Sustainable Engineering; Springer: Singapore, 2022; pp. 367–379. [Google Scholar]
  26. Sheikh, D.; Vansh, A.R.; Verma, H.; Chauhan, N.; Kumar, R.; Sharma, R.; Negi, P.C.; Awasthi, L.K. An ECG Heartbeat Classification Strategy using Deep Learning for Automated Cardiocare Application. In Proceedings of the 3rd International Conference on Advances in Computing, Communication Control and Networking (ICAC3N), Greater Noida, India, 17–18 December 2021; pp. 515–520. [Google Scholar]
  27. Yadav, A.; Verma, H.K.; Awasthi, L.K. Voting Classification Method with PCA and K-Means for Diabetic Prediction. In Innovations in Computer Science and Engineering; Springer: Singapore, 2021; pp. 651–656. [Google Scholar]
  28. Kumar, K.; Mishra, A.; Dahiya, S.; Kumar, A. A Technique for Human Upper Body Parts Movement Tracking. IETE J. Res. 2022, 1–10. [Google Scholar] [CrossRef]
  29. Negi, A.; Kumar, K. Classification and detection of citrus diseases using deep learning. In Data Science and Its Applications; Chapman and Hall/CRC: Boca Raton, FL, USA, 2012; pp. 63–85. [Google Scholar]
  30. Negi, A.; Kumar, K.; Chaudhari, N.S.; Singh, N.; Chauhan, P. Predictive analytics for recognizing human activities using residual network and fine-tuning. In International Conference on Big Data Analytics; Springer: Cham, Switzerland, 2021; pp. 296–310. [Google Scholar]
  31. Lee, G.; Kwon, M.; Kavuri Sri, S.; Lee, M. Emotion recognition based on 3D fuzzy visual and EEG features in movie clips. Neurocomputing 2014, 144, 560–568. [Google Scholar] [CrossRef]
  32. Pradhapan, P.; Velazquez, E.R.; Witteveen, J.A.; Tonoyan, Y.; Mihajlović, V. The Role of Features Types and Personalized Assessment in Detecting Affective State Using Dry Electrode EEG. Sensors 2020, 20, 6810. [Google Scholar] [CrossRef]
  33. Jalilifard, A.; Rastegarnia, A.; Pizzolato, E.B.; Islam, M.K. Classification of emotions induced by horror and relaxing movies using single-channel EEG recordings. Int. J. Electr. Comput. Eng. 2020, 10, 3826. [Google Scholar] [CrossRef]
  34. Bos, D.O. EEG-based emotion recognition. Influ. Vis. Audit. Stimuli 2006, 56, 1–17. [Google Scholar]
  35. Hassib, M.; Pfeiffer, M.; Rohs, S.S.M.; Alt, F. Emotion Actuator: Embodied Emotional Feedback through Electroencephalography and Electrical Muscle Stimulation. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems; Association for Computing Machinery: New York, NY, USA, 2017; pp. 6133–6146. [Google Scholar] [CrossRef]
  36. BIAP Rec_02-1_en Audiometric Classification of Hearing Impairments. Available online: https://www.biap.org/es/recommandations/recommendations/tc-02-classification/213-rec-02-1-en-audiometric-classification-of-hearing-impairments/file (accessed on 10 June 2022).
  37. Kang, D.; Kim, J.; Jang, D.-P.; Cho, Y.S.; Kim, S.-P. Investigation of engagement of viewers in movie trailers using electroencephalography. Brain-Comput. Interfaces 2015, 2, 193–201. [Google Scholar] [CrossRef]
  38. Pereira, E.T.; Gomes, H.M.; Veloso, L.R.; Mota, M.R.A. Empirical evidence relating EEG signal duration to emotion classification performance. IEEE Trans. Affect. Comput. 2021, 12, 154–164. [Google Scholar] [CrossRef]
  39. Barrett, L.F.; Mesquita, B.; Ochsner, K.N.; Gross, J.J. The experience of emotion. Annu. Rev. Psychol. 2007, 58, 373–403. [Google Scholar] [CrossRef] [Green Version]
  40. Zhao, G.; Zhang, Y.; Ge, Y. Frontal EEG Asymmetry and Middle Line Power Difference in Discrete Emotions. Front. Behav. Neurosci. 2018, 12, 225–235. [Google Scholar] [CrossRef] [Green Version]
  41. Ortony, A.; Turner, T.J. What’s basic about basic emotions? Psychol. Rev. 1990, 97, 315–331. [Google Scholar] [CrossRef]
  42. Panksepp, J. Affective consciousness in animals: Perspectives on dimensional and primary process emotion approaches. Proc. Biol. Sci. 2010, 277, 2905–2907. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Barrett, L.F. Was Darwin wrong about emotional expressions? Curr. Dir. Psychol. Sci. 2011, 20, 400–406. [Google Scholar] [CrossRef] [Green Version]
  44. Ekman, P.; Cordaro, D. What is meant by calling emotions basic. Emot. Rev. 2011, 3, 364–370. [Google Scholar] [CrossRef]
  45. Schott, G.D. Penfield’s homunculus: A note on cerebral cartography. J. Neurol. Neurosurg. Psychiatry 1993, 56, 329–333. [Google Scholar] [CrossRef] [Green Version]
  46. Lundborg, G.; Rosen, B. Sensory substitution in prosthetics. Hand Clin. 2001, 17, 481–488. [Google Scholar] [CrossRef]
  47. Schmidt, P.A.; Maël, E.; Würtz, R.P. A sensor for dynamic tactile information with applications in human–robot interaction and object exploration. Robot. Auton. Syst. 2006, 54, 1005–1014. [Google Scholar] [CrossRef]
  48. Yoon, M.J.; Yu, K.H. Psychophysical experiment of vibrotactile pattern recognition at fingertip. In Proceedings of the 2006 SICE-ICASE International Joint Conference, Busan, Korea, 18–21 October 2006; pp. 4601–4605. [Google Scholar]
  49. Velázquez, R. Wearable assistive devices for the blind. In Wearable and Autonomous Biomedical Devices and Systems for Smart Environment; Springer: Berlin/Heidelberg, Germany, 2010; pp. 331–349. [Google Scholar]
  50. Lang, P.J. The Cognitive Psychophysiology of Emotion: Anxiety and the Anxiety Disorders; Lawrence Erlbaum: Hillsdale, NJ, USA, 1985. [Google Scholar]
  51. Geethanjali, B.; Adalarasu, K.; Hemapraba, A.; Kumar, S.P.; Rajasekeran, R. Emotion analysis using SAM (Self-Assessment Manikin) scale. Biomed. Res. Tokyo 2017, 28, 18–24. [Google Scholar]
  52. Dmochowski, J.P.; Bezdek, M.A.; Abelson, B.P.; Johnson, J.S.; Schumacher, E.H.; Parra, L.C. Audience preferences are predicted by temporal reliability of neural processing. Nat. Commun. 2014, 5, 1–9. [Google Scholar] [CrossRef] [Green Version]
  53. Pascual-Marqui, R.D.; Michel, C.M.; Lehmann, D. Low resolution electromagnetic tomography: A new method for localizing electrical activity in the brain. Int. J. Psychophysiol. 1994, 18, 49–65. [Google Scholar] [CrossRef]
  54. Evans, A.; Collins, D.; Mills, S.; Brown, E.; Kelly, R.; Peters, T. 3D Statistical Neuroanatomical Models from 305 MRI Volumes. In Proceedings of the 1993 IEEE Conference Record Nuclear Science Symposium and Medical Imaging Conference, San Francisco, CA, USA, 31 October 1993; Volume 3, pp. 1813–1817. [Google Scholar]
  55. Carbonell, F.; Galan, L.; Valdes, P.; Worsley, K.; Biscay, R.J.; Diaz-Comas, L. Random field-union intersection tests for EEG/MEG imaging. NeuroImage 2004, 22, 268–276. [Google Scholar] [CrossRef]
  56. Lage-Castellanos, A.; Martínez-Montes, E.; Hernández-Cabrera, J.A.; Galán, L. False discovery rate and permutation test: An evaluation in ERP data analysis. Stat. Med. 2010, 29, 63–74. [Google Scholar] [CrossRef]
  57. Tzourio-Mazoyer, N.; Landeau, B.; Papathanassiou, D.; Crivello, F.; Etard, O.; Delcroix, N.; Mazoyer, B.; Joliot, M. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. NeuroImage 2002, 15, 273–289. [Google Scholar] [CrossRef]
  58. Pereira, M.G.; de Oliveira, L.; Erthal, F.S.; Joffily, M.; Mocaiber, I.F.; Volchan, E.; Pessoa, L. Emotion affects action: Midcingulate cortex as a pivotal node of interaction between negative emotion and motor signals. Cogn. Affect. Behav. Neurosci. 2010, 10, 94–106. [Google Scholar] [CrossRef]
  59. Menon, V.; Levitin, D.J. The rewards of music listening: Response and physiological connectivity of the mesolimbic system. NeuroImage 2005, 28, 175–184. [Google Scholar] [CrossRef]
  60. Kotz, S.A.; Kalberlah, C.; Bahlmann, J.; Friederici, A.D.; Haynes, J.-D. Predicting vocal emotion expressions from the human brain. Hum. Brain Mapp. 2013, 34, 1971–1981. [Google Scholar] [CrossRef]
  61. Mirz, F.; Gjedde, A.; Sødkilde-Jørgensen, H.; Pedersen, C.B. Functional brain imaging of tinnitus-like perception induced by aversive auditory stimuli. NeuroReport 2000, 11, 633–637. [Google Scholar] [CrossRef] [Green Version]
  62. Wildgruber, D.; Hertrich, I.; Riecker, A.; Erb, M.; Anders, S.; Grodd, W.; Ackermann, H. Distinct Frontal Regions Subserve Evaluation of Linguistic and Emotional Aspects of Speech Intonation. Cereb. Cortex 2004, 14, 1384–1389. [Google Scholar] [CrossRef] [Green Version]
  63. Revuelta, P.; Ortiz, T.; Lucía, M.J.; Ruiz, B.; Sánchez-Pena, J.M. Limitations of standard accessible captioning of sounds and music for deaf and hard of hearing people: An EEG study. Front. Integr. Neurosci. 2020, 14, 1. [Google Scholar] [CrossRef] [Green Version]
  64. Yuvaraj, R.; Murugappan, M.; Ibrahim, N.M.; Sundaraj, K.; Omar, M.I.; Mohamad, K.; Palaniappan, R.; Satiyan, M. Inter-hemispheric EEG coherence analysis in Parkinson’s disease: Assessing brain activity during emotion processing. J. Neural. Transm. 2015, 122, 237–252. [Google Scholar] [CrossRef]
  65. Machado, L.; Cantilino, A. A systematic review of the neural correlates of positive emotions. Rev. Bras. De Psiquiatr. 2016, 39, 172–179. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Haptic gloves.
Figure 2. Mean electrical maps for Condition 1 (hearing participants, audiovisual stimulation). Maximal intensity projection areas are displayed in yellow/red color. SPMs were computed based on a voxel-by-voxel Hotelling T2 test against zero.
Figure 3. Mean electrical maps for Condition 2 (hearing-impaired participants, visual stimulation). Maximal intensity projection areas are displayed in yellow/red color. SPMs were computed based on a voxel-by-voxel Hotelling T2 test against zero.
Figure 4. Mean electrical maps for Condition 3 (hearing-impaired participants, visual and vibrotactile stimulation). Maximal intensity projection areas are displayed in yellow/red color. SPMs were computed based on a voxel-by-voxel Hotelling T2 test against zero.
Table 1. BIAP Classification.
Classification | Loss | Comments
Normal or subnormal hearing | ≤20 dB | Mild tone disorder with no social consequences.
Mild hearing loss | 21–40 dB | Speech is perceived if the voice is normal; difficulties arise if the voice is low-pitched or distant from the subject.
Moderate hearing loss | 41–70 dB | Speech is perceived if the voice is loud; the subject better understands what is being said if he/she can see his/her interlocutor.
Severe hearing loss | 71–90 dB | Speech is perceived if the voice is loud and close to the ear; loud noises are also perceived.
Very severe hearing loss | 91–119 dB | Speech is not perceived; only loud noises are perceived.
Total—cophosis or anacusis | ≥120 dB | Nothing is perceived.
Table 2. Condition 1. Brain XYZ coordinates of the maximum in each of the activated areas.
AAL | X | Y | Z | Act. [µA/m]
Frontal_Mid_Orb_L | −2 | 54 | −4 | 2.631
Frontal_Mid_Orb_R | 2 | 54 | −4 | 2.590
Frontal_Sup_Medial_L | −2 | 54 | 0 | 2.586
Cingulum_Ant_L | −2 | 50 | 0 | 2.568
Frontal_Sup_Medial_R | 2 | 50 | 0 | 2.535
Table 3. Condition 2, with the same columns as Table 2.
AAL | X | Y | Z | Act. [µA/m]
Frontal_Mid_Orb_L | −2 | 50 | −4 | 2.407
Temporal_Sup_L | −50 | −2 | −4 | 2.401
Frontal_Mid_Orb_R | 2 | 50 | −4 | 2.39
Cingulum_Ant_L | −2 | 50 | 0 | 2.382
Frontal_Sup_Medial_R | 2 | 50 | 0 | 2.364
Frontal_Sup_Medial_L | −2 | 54 | 0 | 2.352
Temporal_Pole_Sup_L | −50 | 6 | 0 | 2.254
Insula_L | −46 | 6 | −4 | 2.249
Table 4. Condition 3, with the same columns as Table 2 and Table 3.
AAL | X | Y | Z | Act. [µA/m]
Frontal_Mid_Orb_L | −2 | 54 | −4 | 4.725
Frontal_Sup_Medial_L | −2 | 54 | 0 | 4.700
Frontal_Mid_Orb_R | 2 | 54 | −4 | 4.667
Cingulum_Ant_L | −2 | 50 | 0 | 4.598
Frontal_Sup_Medial_R | 2 | 50 | 0 | 4.547
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
