Application of Neural Networks in Biosignal Process

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Bioelectronics".

Deadline for manuscript submissions: closed (31 December 2021) | Viewed by 61120

Special Issue Editors


Guest Editor
Faculty of Electronics, Telecommunications and Informatics, Multimedia Systems Department, Gdansk University of Technology, 80-233 Gdańsk, Poland
Interests: multimedia systems; sound and vision engineering; signal processing; biomedical engineering; artificial intelligence

Guest Editor
Multimedia Systems Department, Gdańsk University of Technology, Gabriela Narutowicza 11/12, 80-233 Gdańsk, Poland
Interests: processing of audio and video; computer animation; 3D visualization; inference methods; artificial intelligence; applications of rough sets theory; classification and perception of sounds and images; algorithms and methods of image analysis and understanding; applications of embedded systems

Special Issue Information

Dear Colleagues,

Biosignal acquisition and classification is a multistep process that is crucial for understanding and decision making in human-health-related applications. Researchers deal with multichannel signals such as EEG, 2D signals such as X-ray images or gaze-tracking heat maps, and 3D MRI scans. Depending on the nature of the signal, the applied sensor, and its location, such data can be noisy, unstructured, biased, or otherwise impaired. Current advancements in neural networks show their great applicability to supervised and unsupervised signal preprocessing and classification. Many phases of biosignal processing can be augmented with ANNs, deep learning, and many other machine-learning-based methods: signal denoising, unsupervised clustering, dimensionality reduction, latent feature extraction, classification, and compression are only a few examples of the many possible applications important for accurate and effective biosignal processing.

This Special Issue focuses on describing use cases of ANN in biosignal analysis, explaining innovative applications and new methods, and showing the benefits of neural networks in key phases of processing of signals or images.

Prof. Dr. Andrzej Czyżewski
Dr. Piotr Szczuko
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Neural networks
  • Deep learning
  • Unsupervised learning
  • Signal preprocessing
  • Feature extraction
  • Signal and image classification

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)


Research

23 pages, 12745 KiB  
Article
Objective Video Quality Assessment Method for Face Recognition Tasks
by Mikołaj Leszczuk, Lucjan Janowski, Jakub Nawała and Atanas Boev
Electronics 2022, 11(8), 1167; https://doi.org/10.3390/electronics11081167 - 7 Apr 2022
Cited by 4 | Viewed by 2266
Abstract
Nowadays, there are many metrics for overall Quality of Experience (QoE), both those with Full Reference (FR), such as Peak Signal-to-Noise Ratio (PSNR) or Structural Similarity (SSIM), and those with No Reference (NR), such as Video Quality Indicators (VQI), which are successfully used in video processing systems to evaluate videos whose quality is degraded by different processing scenarios. However, they are not suitable for video sequences used for recognition tasks (Target Recognition Videos, TRV). Therefore, correctly estimating the performance of the video processing pipeline in both manual and Computer Vision (CV) recognition tasks is still a major research challenge. There is a need for objective methods to evaluate video quality for recognition tasks. In response to this need, we show in this paper that it is possible to develop a new objective model for evaluating video quality for face recognition tasks. The model is trained, tested and validated on a representative set of image sequences. The set of degradation scenarios is based on the model of a digital camera and how the luminous flux reflected from the scene eventually becomes a digital image. The resulting degraded images are evaluated using a CV library for face recognition as well as VQI. The measured accuracy of the model, expressed as the F-measure, is 0.87. Full article
(This article belongs to the Special Issue Application of Neural Networks in Biosignal Process)
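For readers new to full-reference video quality indicators, the sketch below shows how PSNR and SSIM can be computed per frame and how a recognition-oriented quality decision can be scored with the F-measure. The frame data, the SSIM threshold, and the decision rule are illustrative assumptions only; they are not the authors' trained model.

```python
# Illustrative sketch only: FR quality indicators (PSNR, SSIM) on degraded
# frames, and an F-measure for a hypothetical "face recognizable" decision rule.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
reference = rng.random((10, 128, 128))                       # placeholder reference frames
degraded = np.clip(reference + 0.05 * rng.standard_normal(reference.shape), 0, 1)

psnr = [peak_signal_noise_ratio(r, d, data_range=1.0) for r, d in zip(reference, degraded)]
ssim = [structural_similarity(r, d, data_range=1.0) for r, d in zip(reference, degraded)]

# Hypothetical decision rule: predict "face recognizable" when SSIM is high enough.
predicted_ok = [s > 0.9 for s in ssim]
ground_truth_ok = [True] * 10                                # placeholder recognition outcomes
print("mean PSNR:", np.mean(psnr), "F-measure:", f1_score(ground_truth_ok, predicted_ok))
```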

27 pages, 4231 KiB  
Article
Time Series Segmentation Using Neural Networks with Cross-Domain Transfer Learning
by Pedro Matias, Duarte Folgado, Hugo Gamboa and André Carreiro
Electronics 2021, 10(15), 1805; https://doi.org/10.3390/electronics10151805 - 28 Jul 2021
Cited by 11 | Viewed by 6172
Abstract
Searching for characteristic patterns in time series is a topic addressed for decades by the research community. Conventional subsequence matching techniques usually rely on the definition of a target template pattern and a searching method for detecting similar patterns. However, the intrinsic variability of time series introduces changes in patterns, both morphologically and temporally, making such techniques less accurate than desired. Intending to improve segmentation performance, in this paper, we propose a Mask-based Neural Network (NN) capable of extracting patterns of interest from long time series without using any predefined template. The proposed NN has been validated, alongside a subsequence matching algorithm, on two datasets: clinical (electrocardiogram) and human activity (inertial sensors). Moreover, the reduced size of the latter dataset led to the application of transfer learning and data augmentation techniques to reach model convergence. The results show that the proposed model achieved better segmentation performance than the baseline in both domains, reaching average Precision and Recall scores of 99.0% and 97.5% (clinical domain) and 77.0% and 71.4% (human activity domain), establishing Neural Networks and Transfer Learning as promising alternatives for pattern searching in time series. Full article
(This article belongs to the Special Issue Application of Neural Networks in Biosignal Process)
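As a rough illustration of the mask-based segmentation idea, the sketch below defines a small convolutional network that outputs a per-sample probability mask over a fixed-length window. The window length, layer sizes, and toy data are assumptions, not the architecture evaluated in the paper.

```python
# Minimal sketch of a mask-style segmentation network for 1D signals: the model
# maps a fixed-length window to a per-timestep probability mask marking the
# pattern of interest.
import numpy as np
import tensorflow as tf

window = 500  # samples per window (placeholder)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.Conv1D(16, 9, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(32, 9, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(1, 1, activation="sigmoid"),      # per-timestep mask
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Toy data: random signals with a binary mask over high-amplitude samples.
x = np.random.randn(64, window, 1).astype("float32")
y = (np.abs(x) > 1.5).astype("float32")                      # placeholder masks
model.fit(x, y, epochs=1, batch_size=16, verbose=0)
```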

17 pages, 3955 KiB  
Article
Pneumonia Detection from Chest X-ray Images Based on Convolutional Neural Network
by Dejun Zhang, Fuquan Ren, Yushuang Li, Lei Na and Yue Ma
Electronics 2021, 10(13), 1512; https://doi.org/10.3390/electronics10131512 - 23 Jun 2021
Cited by 46 | Viewed by 10003
Abstract
Pneumonia has caused significant deaths worldwide, and detecting lung diseases such as atelectasis, cardiomegaly, and lung cancer is a challenging task, often due to the limited number of professional radiologists in hospital settings. In this paper, we develop a straightforward VGG-based model architecture with fewer layers. In addition, to tackle the inadequate contrast of chest X-ray images, which brings about ambiguous diagnoses, the Dynamic Histogram Enhancement technique is used to pre-process the images. The number of parameters of our model is reduced by 97.51% compared to VGG-16, 85.86% compared to Res-50, 83.94% compared to Xception, and 51.92% compared to DenseNet121, but is 4% higher than that of MobileNet. However, the proposed model’s performance (accuracy: 96.068%, AUC: 0.99107 with a 95% confidence interval of [0.984, 0.996], precision: 94.408%, recall: 90.823%, F1 score: 92.851%) is superior to the models mentioned above (VGG-16: accuracy, 94.359%, AUC: 0.98928; Res-50: accuracy, 92.821%, AUC, 0.98780; Xception: accuracy, 96.068%, AUC, 0.99623; DenseNet121: accuracy, 87.350%, AUC, 0.99347; MobileNet: accuracy, 95.473%, AUC, 0.99531). The original Pneumonia Classification Dataset on Kaggle was randomly split into training, validation, and test sets at ratios of 70%, 10%, and 20%. The model’s performance in pneumonia detection shows that the proposed VGG-based model could effectively classify normal and abnormal X-rays in practice, hence reducing the burden of radiologists. Full article
(This article belongs to the Special Issue Application of Neural Networks in Biosignal Process)
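The snippet below sketches the general recipe described in the abstract: contrast enhancement of the X-ray followed by a compact VGG-style classifier. CLAHE stands in for the Dynamic Histogram Enhancement step, and the layer configuration is an assumption; neither reflects the exact published model.

```python
# Rough sketch, not the paper's exact network: a reduced VGG-style CNN for
# binary chest X-ray classification, with CLAHE as a contrast-enhancement
# stand-in for the Dynamic Histogram Enhancement pre-processing.
import cv2
import numpy as np
import tensorflow as tf

def enhance(img_u8):
    """Contrast enhancement placeholder (CLAHE) for a grayscale X-ray."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(img_u8)

def build_small_vgg(input_shape=(224, 224, 1)):
    m = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(128, 3, activation="relu", padding="same"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),      # pneumonia vs. normal
    ])
    m.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return m

model = build_small_vgg()
x = np.stack([enhance((np.random.rand(224, 224) * 255).astype("uint8")) for _ in range(4)])
x = x[..., None].astype("float32") / 255.0                   # placeholder batch
model.predict(x, verbose=0)
```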

18 pages, 6508 KiB  
Article
Handwritten Character Recognition on Android for Basic Education Using Convolutional Neural Network
by Thi Thi Zin, Shin Thant, Moe Zet Pwint and Tsugunobu Ogino
Electronics 2021, 10(8), 904; https://doi.org/10.3390/electronics10080904 - 10 Apr 2021
Cited by 22 | Viewed by 5063
Abstract
An international initiative called Education for All (EFA) aims to create an environment in which everyone in the world can get an education. Especially in developing countries, many children lack access to a quality education. Therefore, we propose an offline self-learning application for primary-level students to learn written English and basic calculation. It can also be used as a supplement for teachers to make the learning environment more interactive and interesting. In our proposed system, handwritten characters or words written on tablets were saved as input images. Then, character segmentation was performed using our proposed segmentation methods. For character recognition, a Convolutional Neural Network (CNN) was used to recognize the segmented characters. To build our own dataset, handwritten data were collected from primary-level students in developing countries. The network model was trained on a high-end machine to reduce the workload on the Android tablet. Various types of classifiers (digits and special characters, uppercase letters, lowercase letters, etc.) were created in order to reduce incorrect classifications. According to our experimental results, the proposed system achieved 95.6% accuracy on 1000 randomly selected words and 98.7% for individual characters. Full article
(This article belongs to the Special Issue Application of Neural Networks in Biosignal Process)
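A minimal sketch of the server-side part of such a pipeline is given below: a small CNN classifier for segmented character images, exported to TensorFlow Lite for offline use on an Android tablet. The image size, class count, and layer sizes are placeholders, not the authors' trained classifiers.

```python
# Illustrative sketch: a small CNN for classifying segmented character images
# and conversion to TensorFlow Lite so it can run offline on an Android tablet.
import tensorflow as tf

num_classes = 26          # e.g., uppercase letters (placeholder)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# After training on the server, export the model for the Android app.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()
open("characters.tflite", "wb").write(tflite_bytes)
```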

19 pages, 6821 KiB  
Article
An Automated Method for Biometric Handwritten Signature Authentication Employing Neural Networks
by Mariusz Kurowski, Andrzej Sroczyński, Georgis Bogdanis and Andrzej Czyżewski
Electronics 2021, 10(4), 456; https://doi.org/10.3390/electronics10040456 - 12 Feb 2021
Cited by 14 | Viewed by 5824
Abstract
Handwriting biometrics applications in e-Security and e-Health are addressed in the conducted research. An automated analysis method for handwritten signature authentication based on the dynamic electronic representation of the signature was investigated. The developed algorithms are based on the dynamic analysis of electronically handwritten signatures employing neural networks. The signatures were acquired with the use of the designed electronic pen described in the paper. The triplet loss method was used to train a neural network suitable for writer-invariant signature verification. For each signature, the same neural network calculates a fixed-length latent space representation. The hand-corrected dataset containing 10,622 signatures was used to train and evaluate the proposed neural network. After training, the network was tested and evaluated against results found in the literature. The use of the triplet loss algorithm to train the neural network to generate embeddings has proven to give good results in aggregating similar signatures and separating them from signatures representing different people. Full article
(This article belongs to the Special Issue Application of Neural Networks in Biosignal Process)
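To illustrate the triplet-loss training scheme mentioned in the abstract, the sketch below defines a shared encoder producing fixed-length, L2-normalized embeddings and a standard triplet loss over anchor/positive/negative signature batches. Sequence length, channel layout, and margin are assumptions for demonstration.

```python
# Conceptual sketch of triplet-loss training for signature embeddings: a shared
# encoder maps a dynamic signature (sequence of pen samples) to a fixed-length
# vector; the loss pulls same-writer pairs together and pushes other writers apart.
import tensorflow as tf

def build_encoder(seq_len=256, channels=4, embedding_dim=64):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(seq_len, channels)),    # e.g., x, y, pressure, tilt
        tf.keras.layers.Conv1D(32, 5, activation="relu", padding="same"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(embedding_dim),
        tf.keras.layers.Lambda(lambda t: tf.math.l2_normalize(t, axis=-1)),
    ])

def triplet_loss(anchor, positive, negative, margin=0.2):
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
    return tf.reduce_mean(tf.maximum(pos_dist - neg_dist + margin, 0.0))

encoder = build_encoder()
a = tf.random.normal((8, 256, 4))   # anchor signatures (placeholder batch)
p = tf.random.normal((8, 256, 4))   # same writer
n = tf.random.normal((8, 256, 4))   # different writer
loss = triplet_loss(encoder(a), encoder(p), encoder(n))
```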

17 pages, 7793 KiB  
Article
Recognition of Drivers’ Activity Based on 1D Convolutional Neural Network
by Rafał J. Doniec, Szymon Sieciński, Konrad M. Duraj, Natalia J. Piaseczna, Katarzyna Mocny-Pachońska and Ewaryst J. Tkacz
Electronics 2020, 9(12), 2002; https://doi.org/10.3390/electronics9122002 - 25 Nov 2020
Cited by 17 | Viewed by 3414
Abstract
Background and objective: Driving a car is a complex activity that involves movements of the whole body. Many studies on drivers’ behavior are conducted to improve road traffic safety. Such studies involve the registration and processing of multiple signals, such as electroencephalography (EEG), electrooculography (EOG) and images of the driver’s face. In our research, we attempt to develop a classifier of scenarios related to learning to drive based on the data obtained in real road traffic conditions via smart glasses. In our approach, we try to minimize the number of signals used to recognize the activities performed while driving a car. Material and methods: We attempt to evaluate the drivers’ activities using both electrooculography (EOG) and a deep learning approach. To acquire the data, we used JINS MEME smart glasses equipped with 3-point EOG electrodes, a 3-axis accelerometer and a 3-axis gyroscope. Sensor data were acquired from 20 drivers (ten experienced and ten learner drivers) on the same 28.7 km route under real road conditions in southern Poland. The drivers performed several tasks while wearing the smart glasses, and the tasks were linked to the signal during the drive. For the recognition of four activities (parking, driving through a roundabout, city traffic and driving through an intersection), we used a one-dimensional convolutional neural network (1D CNN). Results: The maximum accuracy was 95.6% on the validation set and 99.8% on the training set. The results prove that the model based on a 1D CNN can classify the actions performed by drivers accurately. Conclusions: We have proved the feasibility of recognizing drivers’ activity based solely on EOG data, regardless of driving experience and style. Our findings may be useful in the objective assessment of driving skills and, thus, in improving driving safety. Full article
(This article belongs to the Special Issue Application of Neural Networks in Biosignal Process)
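The sketch below illustrates a generic 1D CNN classifier over windows of EOG samples for the four activities listed in the abstract. The window length, channel count, and layer sizes are illustrative assumptions rather than the trained model from the study.

```python
# Hedged sketch of a 1D CNN classifier for windows of EOG samples, covering four
# driving activities (parking, roundabout, city traffic, intersection).
import tensorflow as tf

window, channels, n_classes = 400, 3, 4   # placeholder: 3-point EOG, 4 activities

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, channels)),
    tf.keras.layers.Conv1D(32, 7, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, 7, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```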

15 pages, 1905 KiB  
Article
Self-Supervised Learning to Increase the Performance of Skin Lesion Classification
by Arkadiusz Kwasigroch, Michał Grochowski and Agnieszka Mikołajczyk
Electronics 2020, 9(11), 1930; https://doi.org/10.3390/electronics9111930 - 17 Nov 2020
Cited by 23 | Viewed by 3535
Abstract
To successfully train a deep neural network, a large amount of human-labeled data is required. Unfortunately, in many areas, collecting and labeling data is a difficult and tedious task. Several ways have been developed to mitigate the problem associated with the shortage of data, the most common of which is transfer learning. However, in many cases, the use of transfer learning as the only remedy is insufficient. In this study, we improve deep neural model training and increase the classification accuracy under a scarcity of data by the use of the self-supervised learning technique. Self-supervised learning allows an unlabeled dataset to be used for pretraining the network, as opposed to transfer learning, which requires labeled datasets. The pretrained network can then be fine-tuned using the annotated data. Moreover, we investigated the effect of combining the self-supervised learning approach with transfer learning. It is shown that this strategy outperforms network training from scratch or with transfer learning. The tests were conducted on a very important and sensitive application (skin lesion classification), but the presented approach can be applied to a broader family of applications, especially in the medical domain, where the scarcity of data is a real problem. Full article
(This article belongs to the Special Issue Application of Neural Networks in Biosignal Process)
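The sketch below outlines the general self-supervised recipe: pretrain an encoder on unlabeled images with a pretext task, then reuse it for supervised fine-tuning. The abstract does not name the pretext task; rotation prediction is used here purely as a stand-in, and all shapes are placeholders.

```python
# Sketch of the generic self-supervised workflow: pretext-task pretraining on
# unlabeled images, then fine-tuning on labeled lesion data.
import numpy as np
import tensorflow as tf

encoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
])

# --- Pretext stage: predict one of four rotations on unlabeled images ---
unlabeled = np.random.rand(32, 96, 96, 3).astype("float32")      # placeholder images
rot_labels = np.random.randint(0, 4, size=32)
rotated = np.stack([np.rot90(img, k) for img, k in zip(unlabeled, rot_labels)])
pretext = tf.keras.Sequential([encoder, tf.keras.layers.Dense(4, activation="softmax")])
pretext.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
pretext.fit(rotated, rot_labels, epochs=1, verbose=0)

# --- Fine-tuning stage: reuse the pretrained encoder on labeled lesions ---
classifier = tf.keras.Sequential([encoder, tf.keras.layers.Dense(1, activation="sigmoid")])
classifier.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```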

17 pages, 1972 KiB  
Article
Temporal Auditory Coding Features for Causal Speech Enhancement
by Iordanis Thoidis, Lazaros Vrysis, Dimitrios Markou and George Papanikolaou
Electronics 2020, 9(10), 1698; https://doi.org/10.3390/electronics9101698 - 16 Oct 2020
Cited by 5 | Viewed by 3109
Abstract
Perceptually motivated audio signal processing and feature extraction have played a key role in the determination of high-level semantic processes and the development of emerging systems and applications, such as mobile phone telecommunication and hearing aids. In the era of deep learning, speech enhancement methods based on neural networks have seen great success, mainly operating on the log-power spectra. Although these approaches surpass the need for exhaustive feature extraction and selection, it is still unclear whether they target the important sound characteristics related to speech perception. In this study, we propose a novel set of auditory-motivated features for single-channel speech enhancement by fusing temporal envelope and temporal fine structure information in the context of vocoder-like processing. A causal gated recurrent unit (GRU) neural network is employed to recover the low-frequency amplitude modulations of speech. Experimental results indicate that the proposed system achieves considerable gains for normal-hearing and hearing-impaired listeners, in terms of objective intelligibility and quality metrics. The proposed auditory-motivated feature set achieved better objective intelligibility results compared to the conventional log-magnitude spectrogram features, while mixed results were observed for simulated listeners with hearing loss. Finally, we demonstrate that the proposed analysis/synthesis framework provides satisfactory reconstruction accuracy of speech signals. Full article
(This article belongs to the Special Issue Application of Neural Networks in Biosignal Process)
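The sketch below illustrates the two ingredients named in the abstract: a temporal-envelope feature obtained with the Hilbert transform and a causal (unidirectional) GRU predicting a per-frame gain. Sampling rate, model size, and the band-splitting stage are simplified assumptions.

```python
# Minimal sketch: Hilbert-transform temporal envelope as an input feature and a
# causal GRU that predicts a per-frame enhancement gain.
import numpy as np
import tensorflow as tf
from scipy.signal import hilbert

fs = 16000
noisy = np.random.randn(fs).astype("float32")             # placeholder 1-s noisy signal
envelope = np.abs(hilbert(noisy))                         # temporal envelope feature

frames = envelope.reshape(1, -1, 1).astype("float32")     # (batch, time, features)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 1)),
    tf.keras.layers.GRU(64, return_sequences=True),       # unidirectional: no future context
    tf.keras.layers.Dense(1, activation="sigmoid"),       # per-frame gain/mask
])
model.compile(optimizer="adam", loss="mse")
gain = model.predict(frames, verbose=0)
```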

14 pages, 2902 KiB  
Article
Automatic ECG Diagnosis Using Convolutional Neural Network
by Roberta Avanzato and Francesco Beritelli
Electronics 2020, 9(6), 951; https://doi.org/10.3390/electronics9060951 - 8 Jun 2020
Cited by 86 | Viewed by 16232
Abstract
Cardiovascular disease (CVD) is the most common class of chronic and life-threatening diseases and is, therefore, considered one of the main causes of mortality. The proposed new neural architecture, based on convolutional neural networks (CNN), provides a solution for the development of automatic heart disease diagnosis systems using electrocardiogram (ECG) signals. More specifically, ECG signals were passed directly to a properly trained CNN network. The database consisted of more than 4000 ECG signal instances extracted from outpatient ECG examinations obtained from 47 subjects: 25 males and 22 females. The confusion matrix derived from the testing dataset indicated 99% accuracy for the “normal” class. For the “atrial premature beat” class, ECG segments were correctly classified 100% of the time. Finally, for the “premature ventricular contraction” class, ECG segments were correctly classified 96% of the time. In total, there was an average classification accuracy of 98.33%. The sensitivity (SNS) and the specificity (SPC) were, respectively, 98.33% and 98.35%. The new approach, based on deep learning and, in particular, on a CNN, achieved excellent performance in automatic recognition and, therefore, prevention of cardiovascular diseases. Full article
(This article belongs to the Special Issue Application of Neural Networks in Biosignal Process)
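As a minimal illustration of feeding raw ECG segments directly to a CNN, the sketch below defines a small 1D CNN with three output classes matching those named in the abstract. The segment length and filter sizes are assumptions made only for the example.

```python
# Illustrative sketch of a 1D CNN fed with raw ECG segments and trained to
# separate three classes: normal, atrial premature beat, premature ventricular contraction.
import tensorflow as tf

segment_len, n_classes = 1000, 3

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(segment_len, 1)),
    tf.keras.layers.Conv1D(16, 11, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, 11, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```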

11 pages, 3630 KiB  
Article
OCT Image Restoration Using Non-Local Deep Image Prior
by Wenshi Fan, Hancheng Yu, Tianming Chen and Sheng Ji
Electronics 2020, 9(5), 784; https://doi.org/10.3390/electronics9050784 - 11 May 2020
Cited by 17 | Viewed by 3915
Abstract
In recent years, convolutional neural networks (CNN) have been widely used in image denoising for their high performance. One difficulty in applying the CNN to medical image denoising, such as speckle reduction in optical coherence tomography (OCT) images, is that a large amount of high-quality data is required for training, which is an inherent limitation for OCT despeckling. Recently, deep image prior (DIP) networks have been proposed for image restoration without pre-training, since the CNN structures have the intrinsic ability to capture the low-level statistics of a single image. However, the DIP has difficulty finding a good balance between maintaining details and suppressing speckle noise. Inspired by DIP, in this paper, a sorted non-local statistic, which measures the signal autocorrelation in the differences between the constructed image and the input image, is proposed for OCT image restoration. By adding this sorted non-local statistic as a regularization loss in the DIP learning, more low-level image statistics are captured by the CNN in the process of OCT image restoration. The experimental results demonstrate the superior performance of the proposed method over other state-of-the-art despeckling methods, in terms of objective metrics and visual quality. Full article
(This article belongs to the Special Issue Application of Neural Networks in Biosignal Process)
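The sketch below illustrates the deep-image-prior fitting loop with an added regularization term, as described in the abstract. Total variation is used here only as a stand-in for the proposed sorted non-local statistic, and the network, step count, and weight are illustrative assumptions.

```python
# Toy sketch of the deep-image-prior idea: a randomly initialized CNN is fitted
# to a single noisy OCT image from a fixed random input, with an extra
# regularization term added to the reconstruction loss (TV as a stand-in).
import tensorflow as tf

noisy = tf.random.uniform((1, 128, 128, 1))               # placeholder noisy OCT image
z = tf.random.normal((1, 128, 128, 8))                    # fixed network input

net = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 3, padding="same", activation="sigmoid"),
])
opt = tf.keras.optimizers.Adam(1e-3)
lam = 1e-5                                                # regularization weight (assumed)

for step in range(200):                                   # early stopping acts as the prior
    with tf.GradientTape() as tape:
        restored = net(z, training=True)
        loss = tf.reduce_mean(tf.square(restored - noisy))
        loss += lam * tf.reduce_mean(tf.image.total_variation(restored))
    grads = tape.gradient(loss, net.trainable_variables)
    opt.apply_gradients(zip(grads, net.trainable_variables))
```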
