Biometrics Recognition Based on Sensor Technology

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (20 October 2023) | Viewed by 29446

Special Issue Editors


Dr. Yung-Hui Li
Guest Editor
AI Research Center, Hon Hai Research Institute, Taipei 114699, Taiwan
Interests: artificial intelligence; deep learning; computer vision; machine learning; biometrics recognition; biomedical signal processing; smart manufacturing

Prof. Dr. Wen-Huang Cheng
Guest Editor
Institute of Electronics, National Yang Ming Chiao Tung University, Hsinchu City 300093, Taiwan
Interests: multimedia; artificial intelligence; computer vision; machine learning; social media; financial technology

Dr. Ching-Chun Huang
Guest Editor
Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu City 300093, Taiwan
Interests: computer vision; image processing; machine learning; AIoT

Dr. Hong-Han Shuai
Guest Editor
Department of Electronics and Electrical Engineering, National Yang Ming Chiao Tung University, Hsinchu City 300093, Taiwan
Interests: multimedia analysis; deep learning

Special Issue Information

Dear Colleagues,

Biometric recognition identifies a person based on his or her biological or behavioral characteristics. Recently, driven by the adoption of the zero-trust principle in cybersecurity systems and by the FIDO Alliance, biometric recognition has become a key factor in the user authentication chain. In this Special Issue, we focus on biometric technology based on various sensors. Topics such as biometrics using 2D images, NIR images, 3D point clouds, or time-varying signals such as ECG or PPG are highly welcome.

The practicality of the research is highly valued in this Special Issue. Therefore, topics such as biometrics for mobile devices (smartphones or tablets), intelligent vehicles, or even metaverse environments are also welcome.

In addition, with the growing power of deep learning, deepfakes have recently become a major concern for cybersecurity and privacy. How to counter deepfake-based attacks is therefore also an important topic of this Special Issue.

Potential topics include but are not limited to the following:

  • Biometric recognition based on face, fingerprint, iris, voice, etc.
  • Biometric recognition with 3D point clouds
  • Biometric recognition with NIR images
  • Biometric recognition with biomedical signals (e.g., ECG, EEG, PPG)
  • Biometric recognition with behavioral characteristics
  • Biometric recognition on mobile devices (utilizing the various sensors of a mobile device)
  • Biometric recognition in intelligent vehicles
  • Biometric recognition in metaverse environments
  • Deepfake and anti-deepfake methods

Dr. Yung-Hui Li
Prof. Dr. Wen-Huang Cheng
Dr. Ching-Chun Huang
Dr. Hong-Han Shuai
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research


24 pages, 17622 KiB  
Article
Mathematical Camera Array Optimization for Face 3D Modeling Application
by Bashar Alsadik, Luuk Spreeuwers, Farzaneh Dadrass Javan and Nahuel Manterola
Sensors 2023, 23(24), 9776; https://doi.org/10.3390/s23249776 - 12 Dec 2023
Viewed by 1073
Abstract
Camera network design is a challenging task for many applications in photogrammetry, biomedical engineering, robotics, and industrial metrology, among other fields. Many driving factors are found in the camera network design including the camera specifications, object of interest, and type of application. One of the interesting applications is 3D face modeling and recognition which involves recognizing an individual based on facial attributes derived from the constructed 3D model. Developers and researchers still face difficulty in reaching the required high level of accuracy and reliability needed for image-based 3D face models. This is caused among many factors by the hardware limitations and imperfection of the cameras and the lack of proficiency in designing the ideal camera-system configuration. Accordingly, for precise measurements, we still need engineering-based techniques to ascertain the specific level of deliverables quality. In this paper, an optimal geometric design methodology of the camera network is presented by investigating different multi-camera system configurations composed of four up to eight cameras. A mathematical nonlinear constrained optimization technique is applied to solve the problem and each camera system configuration is tested for a facial 3D model where a quality assessment is applied to conclude the best configuration. The optimal configuration is found to be a 7-camera array, comprising a pentagon shape enclosing two additional cameras, offering high accuracy. For those who prioritize point density, a 9-camera array with a pentagon and quadrilateral arrangement in the X-Z plane is a viable choice. However, a 5-camera array offers a balance between accuracy and the number of cameras. Full article
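
As an editorial aside, the following minimal Python sketch illustrates the general idea of formulating camera placement as a nonlinear constrained optimization, here by spreading a hypothetical seven-camera arc in front of a face under an angular bound. The objective, constraint, and camera count are simplified stand-ins and do not reproduce the authors' quality metric or configurations.

    # Toy camera-array layout optimization (illustrative only; not the authors' method).
    import numpy as np
    from scipy.optimize import minimize

    N = 7                        # number of cameras to place
    FOV_LIMIT = np.radians(60)   # keep every camera within +/-60 deg of the face normal

    def spread_cost(angles):
        # Encourage evenly spread viewing directions: penalize small pairwise gaps.
        diffs = np.abs(angles[:, None] - angles[None, :])
        diffs = diffs[np.triu_indices(N, k=1)]
        return np.sum(1.0 / (diffs + 1e-3))   # large when two cameras nearly coincide

    x0 = np.linspace(-0.9 * FOV_LIMIT, 0.9 * FOV_LIMIT, N)   # initial guess: uniform arc
    res = minimize(spread_cost, x0, method="SLSQP",
                   bounds=[(-FOV_LIMIT, FOV_LIMIT)] * N)

    print("optimized camera angles (deg):", np.degrees(np.sort(res.x)).round(1))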

19 pages, 1545 KiB  
Article
Deep Learning-Based Wrist Vascular Biometric Recognition
by Felix Marattukalam, Waleed Abdulla, David Cole and Pranav Gulati
Sensors 2023, 23(6), 3132; https://doi.org/10.3390/s23063132 - 15 Mar 2023
Cited by 6 | Viewed by 3066
Abstract
The need for contactless vascular biometric systems has significantly increased. In recent years, deep learning has proven to be efficient for vein segmentation and matching. Palm and finger vein biometrics are well researched; however, research on wrist vein biometrics is limited. Wrist vein biometrics is promising because the wrist lacks the finger or palm patterns on the skin surface, which makes the image acquisition process easier. This paper presents a novel, low-cost, deep learning-based end-to-end contactless wrist vein biometric recognition system. The FYO wrist vein dataset was used to train a novel U-Net CNN structure to extract and segment wrist vein patterns effectively. The extracted images were evaluated to have a Dice coefficient of 0.723. A CNN and a Siamese neural network were implemented to match wrist vein images, obtaining the highest F1-score of 84.7%. The average matching time is less than 3 s on a Raspberry Pi. All the subsystems were integrated with the help of a designed GUI to form a functional end-to-end deep learning-based wrist biometric recognition system. Full article
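
For readers unfamiliar with the segmentation metric quoted above, the short Python sketch below computes the Dice coefficient for binary masks. It is the standard definition applied to invented toy masks, not the authors' evaluation code.

    # Dice coefficient for binary segmentation masks (standard definition, toy data).
    import numpy as np

    def dice_coefficient(pred, target, eps=1e-7):
        # Dice = 2 * |intersection| / (|pred| + |target|)
        pred, target = pred.astype(bool), target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

    pred = np.array([[1, 1, 0, 0]] * 4)     # hypothetical predicted vein mask
    target = np.array([[1, 0, 0, 0]] * 4)   # hypothetical ground-truth mask
    print(f"Dice = {dice_coefficient(pred, target):.3f}")   # 0.667 for this toy pair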

17 pages, 2389 KiB  
Article
Adaptive Spatial Transformation Networks for Periocular Recognition
by Diana Laura Borza, Ehsan Yaghoubi, Simone Frintrop and Hugo Proença
Sensors 2023, 23(5), 2456; https://doi.org/10.3390/s23052456 - 23 Feb 2023
Cited by 3 | Viewed by 2031
Abstract
Periocular recognition has emerged as a particularly valuable biometric identification method in challenging scenarios, such as partially occluded faces due to COVID-19 protective masks, in which face recognition might not be applicable. This work presents a periocular recognition framework based on deep learning, which automatically localises and analyses the most important areas in the periocular region. The main idea is to derive several parallel local branches from a neural network architecture, which in a semi-supervised manner learn the most discriminative areas in the feature map and solve the identification problem solely upon the corresponding cues. Here, each local branch learns a transformation matrix that allows for basic geometrical transformations (cropping and scaling), which is used to select a region of interest in the feature map, further analysed by a set of shared convolutional layers. Finally, the information extracted by the local branches and the main global branch are fused together for recognition. The experiments carried out on the challenging UBIRIS-v2 benchmark show that by integrating the proposed framework with various ResNet architectures, we consistently obtain an improvement in mAP of more than 4% over the “vanilla” architecture. In addition, extensive ablation studies were performed to better understand the behavior of the network and how the spatial transformation and the local branches influence the overall performance of the model. The proposed method can be easily adapted to other computer vision problems, which is also regarded as one of its strengths. Full article
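
The cropping-and-scaling step described above can be made concrete with a small PyTorch sketch: a 2x3 affine matrix restricted to scaling and translation is turned into a sampling grid that selects a region of the feature map. The matrix values and tensor sizes below are hypothetical; in the paper the matrix is predicted by each local branch rather than fixed by hand.

    # Selecting a feature-map region with a scale-and-translation affine matrix (sketch).
    import torch
    import torch.nn.functional as F

    feature_map = torch.randn(1, 64, 32, 32)      # (batch, channels, H, W), made-up size

    sx, sy, tx, ty = 0.5, 0.5, -0.5, -0.5         # hypothetical "learned" parameters
    theta = torch.tensor([[[sx, 0.0, tx],
                           [0.0, sy, ty]]])        # shape (1, 2, 3): crop the upper-left quadrant

    grid = F.affine_grid(theta, size=(1, 64, 16, 16), align_corners=False)
    roi = F.grid_sample(feature_map, grid, align_corners=False)

    print(roi.shape)   # torch.Size([1, 64, 16, 16]); this region would feed the shared conv layers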

29 pages, 3670 KiB  
Article
Audio-Visual Speech and Gesture Recognition by Sensors of Mobile Devices
by Dmitry Ryumin, Denis Ivanko and Elena Ryumina
Sensors 2023, 23(4), 2284; https://doi.org/10.3390/s23042284 - 17 Feb 2023
Cited by 56 | Viewed by 7120
Abstract
Audio-visual speech recognition (AVSR) is one of the most promising solutions for reliable speech recognition, particularly when audio is corrupted by noise. Additional visual information can be used for both automatic lip-reading and gesture recognition. Hand gestures are a form of non-verbal communication and can be used as a very important part of modern human–computer interaction systems. Currently, audio and video modalities are easily accessible by sensors of mobile devices. However, there is no out-of-the-box solution for automatic audio-visual speech and gesture recognition. This study introduces two deep neural network-based model architectures: one for AVSR and one for gesture recognition. The main novelty regarding audio-visual speech recognition lies in fine-tuning strategies for both visual and acoustic features and in the proposed end-to-end model, which considers three modality fusion approaches: prediction-level, feature-level, and model-level. The main novelty in gesture recognition lies in a unique set of spatio-temporal features, including those that consider lip articulation information. As there are no available datasets for the combined task, we evaluated our methods on two different large-scale corpora—LRW and AUTSL—and outperformed existing methods on both audio-visual speech recognition and gesture recognition tasks. We achieved AVSR accuracy for the LRW dataset equal to 98.76% and gesture recognition rate for the AUTSL dataset equal to 98.56%. The results obtained demonstrate not only the high performance of the proposed methodology, but also the fundamental possibility of recognizing audio-visual speech and gestures by sensors of mobile devices. Full article
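
Of the three fusion approaches mentioned, prediction-level fusion is the simplest to illustrate. The sketch below combines made-up audio and visual class probabilities with a fixed weight; it is not derived from the authors' models.

    # Prediction-level fusion of audio and visual class probabilities (toy example).
    import numpy as np

    def prediction_level_fusion(p_audio, p_video, w_audio=0.6):
        # Weighted average of per-class probabilities from the two modalities.
        fused = w_audio * p_audio + (1.0 - w_audio) * p_video
        return fused / fused.sum()               # renormalize to a distribution

    p_audio = np.array([0.30, 0.25, 0.20, 0.15, 0.10])   # noisy audio branch (hypothetical)
    p_video = np.array([0.05, 0.10, 0.70, 0.10, 0.05])   # confident visual branch (hypothetical)

    fused = prediction_level_fusion(p_audio, p_video)
    print("fused:", fused.round(3), "-> predicted class:", int(np.argmax(fused)))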

27 pages, 5361 KiB  
Article
Biometric-Based Key Generation and User Authentication Using Acoustic Characteristics of the Outer Ear and a Network of Correlation Neurons
by Alexey Sulavko
Sensors 2022, 22(23), 9551; https://doi.org/10.3390/s22239551 - 6 Dec 2022
Cited by 3 | Viewed by 2155
Abstract
Trustworthy AI applications such as biometric authentication must be implemented in a secure manner so that a malefactor is not able to take advantage of the knowledge and use it to make decisions. The goal of the present work is to increase the reliability of biometric-based key generation, which is used for remote authentication with the protection of biometric templates. Ear canal echograms were used as biometric images. Multilayer convolutional neural networks that belong to the autoencoder type were used to extract features from the echograms. A new class of neurons (correlation neurons) that analyzes correlations between features instead of feature values is proposed. A neuro-extractor model was developed to associate a feature vector with a cryptographic key or user password. An open dataset of ear canal echograms was used to test the performance of the proposed model. The following indicators were achieved: EER = 0.0238 (FRR = 0.093, FAR < 0.001), with a key length of 8192 bits. The proposed model is superior to known analogues in terms of key length and probability of erroneous decisions. The ear canal parameters are hidden from direct observation and photography. This fact creates additional difficulties for the synthesis of adversarial examples. Full article
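
To clarify the error rates quoted above (EER, FRR, FAR), the sketch below derives them from synthetic genuine and impostor score distributions. It only illustrates the standard definitions and does not use ear canal echogram data or the proposed neuro-extractor.

    # FAR, FRR and equal error rate (EER) from synthetic match scores (illustrative).
    import numpy as np

    rng = np.random.default_rng(0)
    genuine = rng.normal(0.8, 0.1, 1000)     # scores for same-user comparisons
    impostor = rng.normal(0.4, 0.1, 1000)    # scores for different-user comparisons

    thresholds = np.linspace(0.0, 1.0, 501)
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejection rate
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false acceptance rate

    i = np.argmin(np.abs(far - frr))          # operating point where the two rates cross
    print(f"EER ~ {(far[i] + frr[i]) / 2:.4f} at threshold {thresholds[i]:.2f}")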

24 pages, 6024 KiB  
Article
ST-DeepGait: A Spatiotemporal Deep Learning Model for Human Gait Recognition
by Latisha Konz, Andrew Hill and Farnoush Banaei-Kashani
Sensors 2022, 22(20), 8075; https://doi.org/10.3390/s22208075 - 21 Oct 2022
Cited by 6 | Viewed by 3360
Abstract
Human gait analysis presents an opportunity to study complex spatiotemporal data transpiring as co-movement patterns of multiple moving objects (i.e., human joints). Such patterns are acknowledged as movement signatures specific to an individual, offering the possibility to identify each individual based on unique gait patterns. We present a spatiotemporal deep learning model, dubbed ST-DeepGait, to featurize spatiotemporal co-movement patterns of human joints, and accordingly classify such patterns to enable human gait recognition. To this end, the ST-DeepGait model architecture is designed according to the spatiotemporal human skeletal graph in order to impose learning the salient local spatial dynamics of gait as they occur over time. Moreover, we employ a multi-layer RNN architecture to induce a sequential notion of gait cycles in the model. Our experimental results show that ST-DeepGait can achieve recognition accuracy rates over 90%. Furthermore, we qualitatively evaluate the model with the class embeddings to show interpretable separability of the features in geometric latent space. Finally, to evaluate the generalizability of our proposed model, we perform a zero-shot detection on 10 classes of data completely unseen during training and achieve a recognition accuracy rate of 88% overall. With this paper, we also contribute our gait dataset captured with an RGB-D sensor containing approximately 30 video samples of each subject for 100 subjects totaling 3087 samples. While we use human gait analysis as a motivating application to evaluate ST-DeepGait, we believe that this model can be simply adopted and adapted to study co-movement patterns of multiple moving objects in other applications such as in sports analytics and traffic pattern analysis. Full article
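
As a rough illustration of feeding per-frame joint coordinates to a multi-layer RNN for gait recognition, the PyTorch sketch below runs a plain GRU over flattened 3D joints. The joint count, layer sizes, and the omission of the skeletal-graph structuring are assumptions for illustration; this does not reproduce ST-DeepGait.

    # Minimal recurrent classifier over skeleton sequences (illustrative sketch).
    import torch
    import torch.nn as nn

    NUM_JOINTS, NUM_SUBJECTS = 25, 100        # hypothetical RGB-D joint count and gallery size

    class GaitRNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.rnn = nn.GRU(input_size=NUM_JOINTS * 3, hidden_size=128,
                              num_layers=2, batch_first=True)
            self.head = nn.Linear(128, NUM_SUBJECTS)

        def forward(self, x):                 # x: (batch, frames, joints * 3)
            _, h = self.rnn(x)                # h: (num_layers, batch, hidden)
            return self.head(h[-1])           # logits over enrolled subjects

    model = GaitRNN()
    clip = torch.randn(4, 60, NUM_JOINTS * 3)  # 4 clips of 60 frames of 3D joint coordinates
    print(model(clip).shape)                   # torch.Size([4, 100])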

18 pages, 6531 KiB  
Article
Embedding Biometric Information in Interpolated Medical Images with a Reversible and Adaptive Strategy
by Heng-Xiao Chi, Ji-Hwei Horng, Chin-Chen Chang and Yung-Hui Li
Sensors 2022, 22(20), 7942; https://doi.org/10.3390/s22207942 - 18 Oct 2022
Cited by 2 | Viewed by 1652
Abstract
How to hide messages in digital images so that messages cannot be discovered and tampered with is a compelling topic in the research area of cybersecurity. The interpolation-based reversible data hiding (RDH) scheme is especially useful for the application of medical image management. The biometric information of patients acquired by biosensors is embedded into an interpolated medical image for the purpose of authentication. The proposed scheme classifies pixel blocks into complex and smooth ones according to each block’s dynamic range of pixel values. For a complex block, the minimum-neighbor (MN) interpolation followed by DIM embedding is applied, where DIM denotes the difference between the block’s interpolated pixel values and the maximum pixel values. For a smooth block, the block mean (BM) interpolation followed by prediction error histogram (PEH) embedding and difference expansion (DE) embedding is applied. Compared with previous methods, this adaptive strategy ensures low distortion due to embedding for smooth blocks while it provides a good payload for complex blocks. Our scheme is suitable for both medical and general images. Experimental results confirm the effectiveness of the proposed scheme. Performance comparisons with state-of-the-art schemes are also given. The peak signal to noise ratio (PSNR) of the proposed scheme is 10.32 dB higher than that of the relevant works in the best case. Full article
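
Among the embedding steps listed above, classic difference expansion (DE) is easy to demonstrate in isolation. The sketch below hides and recovers one bit in a single pixel pair; overflow handling, block classification, and the interpolation stages of the proposed scheme are omitted.

    # Difference-expansion embedding on one pixel pair (classic DE, not the full scheme).
    def de_embed(x, y, bit):
        l = (x + y) // 2                  # integer average, preserved by the transform
        h = x - y                         # pixel difference
        h2 = 2 * h + bit                  # expand the difference and hide one bit
        return l + (h2 + 1) // 2, l - h2 // 2

    def de_extract(x2, y2):
        l = (x2 + y2) // 2
        h2 = x2 - y2
        bit, h = h2 & 1, h2 // 2          # recover the hidden bit and the original difference
        return bit, (l + (h + 1) // 2, l - h // 2)

    x, y = 142, 137
    x2, y2 = de_embed(x, y, 1)            # stego pair: (145, 134)
    bit, restored = de_extract(x2, y2)
    print("hidden bit:", bit, "| restored exactly:", restored == (x, y))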

Other


31 pages, 1641 KiB  
Systematic Review
Biometric Recognition: A Systematic Review on Electrocardiogram Data Acquisition Methods
by Teresa M. C. Pereira, Raquel C. Conceição, Vitor Sencadas and Raquel Sebastião
Sensors 2023, 23(3), 1507; https://doi.org/10.3390/s23031507 - 29 Jan 2023
Cited by 21 | Viewed by 6141
Abstract
In the last decades, researchers have shown the potential of using Electrocardiogram (ECG) as a biometric trait due to its uniqueness and hidden nature. However, despite the great number of approaches found in the literature, no agreement exists on the most appropriate methodology. This paper presents a systematic review of data acquisition methods, aiming to understand the impact of some variables from the data acquisition protocol of an ECG signal in the biometric identification process. We searched for papers on the subject using Scopus, defining several keywords and restrictions, and found a total of 121 papers. Data acquisition hardware and methods vary widely throughout the literature. We reviewed the intrusiveness of acquisitions, the number of leads used, and the duration of acquisitions. Moreover, by analyzing the literature, we can conclude that the preferable solutions include: (1) the use of off-the-person acquisitions as they bring ECG biometrics closer to viable, unconstrained applications; (2) the use of a one-lead setup; and (3) short-term acquisitions, as they require fewer contact points, which benefits user acceptance and allows faster acquisitions, resulting in a user-friendly biometric system. Thus, this paper reviews data acquisition methods, summarizes multiple perspectives, and highlights existing challenges and problems. In contrast, most reviews on ECG-based biometrics focus on feature extraction and classification methods. Full article
