
Emotion Recognition and Biometric Authentication with Contactless Sensing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Electronic Sensors".

Deadline for manuscript submissions: 10 December 2024 | Viewed by 2922

Special Issue Editor


Prof. Dr. Eui Chul Lee
Guest Editor
Department of Intelligent Engineering Informatics for Human, Sangmyung University, Seoul 03016, Republic of Korea
Interests: computer vision; pattern recognition; biometrics

Special Issue Information

Dear Colleagues,

This Special Issue of Sensors explores the intersection of emotion recognition and biometric authentication through novel contactless sensing technologies. Emotions play a pivotal role in shaping human interactions and responses; when accurately recognized, they open opportunities in mental health monitoring, improved human–machine interaction, and personalized services. At the same time, biometric authentication has become a cornerstone of modern security systems, demanding continual innovation to improve accuracy and user experience. This Special Issue highlights research on novel techniques, state-of-the-art algorithms, and emerging methodologies for recognizing emotions and performing biometric authentication with non-contact sensors, including but not limited to cameras, microphones, and other unobtrusive devices. By bringing together emotion recognition and biometric authentication, this collection aims to catalyze the development of systems that are more natural, secure, and attuned to user preferences. Research addressing either emotion recognition or biometric authentication alone, without combining the two, is also welcome in this Special Issue.

Prof. Dr. Eui Chul Lee
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • emotion recognition
  • biometric authentication
  • contactless sensing
  • remote biosignal sensing
  • multimodal fusion

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

18 pages, 11917 KiB  
Article
Exploring Spectrogram-Based Audio Classification for Parkinson’s Disease: A Study on Speech Classification and Qualitative Reliability Verification
by Seung-Min Jeong, Seunghyun Kim, Eui Chul Lee and Han Joon Kim
Sensors 2024, 24(14), 4625; https://doi.org/10.3390/s24144625 - 17 Jul 2024
Viewed by 1018
Abstract
Patients with Parkinson's disease suffer from voice impairment. In this study, we introduce models to classify healthy speakers and Parkinson's patients using their speech. We used an AST (audio spectrogram transformer), a transformer-based speech classification model that has recently outperformed CNN-based models in many fields, and the CNN-based PSLA (pretraining, sampling, labeling, and aggregation), a high-performance model in the existing speech classification field. This study compares and analyzes the models from both quantitative and qualitative perspectives. First, quantitatively, PSLA outperformed AST by more than 4% in accuracy, and its AUC was also higher: 94.16% for AST versus 97.43% for PSLA. Furthermore, we qualitatively evaluated the models' ability to capture the acoustic features of Parkinson's disease through various CAM (class activation map)-based XAI (eXplainable AI) methods such as GradCAM and EigenCAM. For PSLA, we found that the model attends well to the muffled frequency bands of Parkinson's speech, and heatmap analysis of false positives and false negatives shows that the relevant speech features are also visually represented when the model makes incorrect predictions. The contribution of this paper is that we not only identified a suitable model for diagnosing Parkinson's disease from speech using two different types of models but also validated the model's predictions in practice.
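Both models compared in this paper (AST and PSLA) operate on spectrogram inputs rather than raw waveforms. As a rough, illustrative sketch of that shared preprocessing idea only (the authors' actual pipelines use mel filterbanks and learned transformer/CNN models, not this naive code), a log-magnitude spectrogram can be computed with a windowed DFT:

```python
import math

def log_spectrogram(signal, n_fft=64, hop=32):
    """Log-magnitude spectrogram via a naive windowed DFT.

    Illustrative only: real pipelines use FFTs and mel filterbanks;
    this shows how a waveform becomes a time-frequency image that a
    classifier such as AST or PSLA can consume.
    """
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / (n_fft - 1))
              for i in range(n_fft)]  # Hann window
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = [signal[start + i] * window[i] for i in range(n_fft)]
        bins = []
        for k in range(n_fft // 2 + 1):  # non-negative frequency bins
            re = sum(frame[i] * math.cos(2 * math.pi * k * i / n_fft)
                     for i in range(n_fft))
            im = -sum(frame[i] * math.sin(2 * math.pi * k * i / n_fft)
                      for i in range(n_fft))
            bins.append(math.log1p(math.hypot(re, im)))
        frames.append(bins)
    return frames  # list of time frames, each a list of frequency bins

# Synthetic 100 Hz tone sampled at 800 Hz (hypothetical toy input)
sr = 800
tone = [math.sin(2 * math.pi * 100 * i / sr) for i in range(sr)]
spec = log_spectrogram(tone)
# Frequency resolution is 800 / 64 = 12.5 Hz per bin, so the tone
# should concentrate energy in bin 100 / 12.5 = 8.
peak_bin = max(range(len(spec[0])), key=lambda k: sum(f[k] for f in spec))
print(peak_bin)  # 8
```

The resulting 2-D array is what gets treated as an image: AST patchifies it for a transformer, while PSLA feeds it to a CNN backbone.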

16 pages, 3452 KiB  
Article
Emotion Classification Based on Pulsatile Images Extracted from Short Facial Videos via Deep Learning
by Shlomi Talala, Shaul Shvimmer, Rotem Simhon, Michael Gilead and Yitzhak Yitzhaky
Sensors 2024, 24(8), 2620; https://doi.org/10.3390/s24082620 - 19 Apr 2024
Cited by 1 | Viewed by 1440
Abstract
Most human emotion recognition methods largely depend on classifying stereotypical facial expressions that represent emotions. However, such facial expressions do not necessarily correspond to actual emotional states and may instead reflect communicative intentions. In other cases, emotions are hidden, cannot be expressed, or may have lower arousal manifested by less pronounced facial expressions, as may occur during passive video viewing. This study improves an emotion classification approach developed in a previous study, which classifies emotions remotely from short facial video data without relying on stereotypical facial expressions or contact-based methods. In this approach, we remotely sense transdermal cardiovascular spatiotemporal facial patterns associated with different emotional states and analyze them via machine learning. In this paper, we propose several improvements, including better remote heart rate estimation via preliminary skin segmentation, an improved heartbeat peak and trough detection process, and better emotion classification accuracy obtained by employing an appropriate deep learning classifier that uses only RGB camera input data. We used the dataset obtained in the previous study, which contains facial videos of 110 participants who passively viewed 150 short videos eliciting five emotion types: amusement, disgust, fear, sexual arousal, and no emotion, while three cameras with different wavelength sensitivities (visible spectrum, near-infrared, and longwave infrared) recorded them simultaneously. From the short facial videos, we extracted unique high-resolution spatiotemporal, physiologically affected features and examined them as input features with different deep learning approaches. An EfficientNet-B0 model was able to classify participants' emotional states with an overall average accuracy of 47.36% using a single input spatiotemporal feature map obtained from a regular RGB camera.
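One of the improvements this paper lists is better detection of heartbeat peaks and troughs in the remotely sensed pulse signal. As a hedged sketch of the basic idea only (the paper's actual detection process is more elaborate and works on camera-derived signals, not a synthetic sine), a naive local-maximum peak detector applied to an idealized pulse wave recovers the heart rate:

```python
import math

def detect_peaks(signal, min_distance=5):
    """Naive local-maximum peak detector (illustrative sketch only).

    Keeps a sample that is higher than both neighbors, skipping
    candidates closer than min_distance samples to the last peak.
    """
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]:
            if not peaks or i - peaks[-1] >= min_distance:
                peaks.append(i)
    return peaks

# Hypothetical toy input: an idealized 1.2 Hz (72 bpm) pulse wave
# over 10 s of video sampled at 30 frames per second.
fps = 30
signal = [math.sin(2 * math.pi * 1.2 * n / fps) for n in range(fps * 10)]
peaks = detect_peaks(signal)

# Heart rate from the mean inter-peak interval
bpm = 60 * (len(peaks) - 1) / ((peaks[-1] - peaks[0]) / fps)
print(round(bpm))  # 72
```

Real remote photoplethysmography signals are far noisier, which is why the improved peak/trough detection and preliminary skin segmentation described in the abstract matter for the downstream spatiotemporal feature maps.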
