Recent Advances in Machine Learning and Explainable Artificial Intelligence in Biomedical Data Mining, and Disease Diagnosis Frameworks

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: 31 December 2024 | Viewed by 8749

Special Issue Editor


Prof. Dr. Amad Zafar
Guest Editor
Department of Artificial Intelligence and Robotics, Sejong University, Seoul 05006, Republic of Korea
Interests: biomedical signal/image processing; computer-aided diagnosis; brain imaging; brain–computer interface; machine learning; artificial intelligence; EEG; fNIRS

Special Issue Information

Dear Colleagues,

The rapid evolution of artificial intelligence, data analytics, and technology has created new avenues for personalized healthcare. This Special Issue focuses on the most recent advances in machine learning (ML) and explainable artificial intelligence (XAI) for biomedical data mining and disease diagnosis frameworks. It covers the application of advanced ML methods, such as deep learning and ensemble learning, to the analysis of intricate biomedical datasets, with a particular focus on disease diagnosis and prognosis. Another central theme is explainable AI in healthcare applications: XAI techniques aim to make the decision-making process of AI systems more transparent and understandable. Potential topics include, but are not limited to, the following: supervised and unsupervised learning, deep learning, XAI in healthcare, system modelling and system design, confidentiality and privacy of health data, biometrics, digital technologies, data mining, computer-aided diagnosis, and brain–computer interfaces. This Special Issue aims to bring together original research and review papers on current breakthroughs in ML and XAI in healthcare.
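As a loose illustration of the kind of XAI technique within scope, the following minimal sketch trains a standard classifier on a synthetic tabular dataset and explains it with permutation feature importance, a simple model-agnostic method; the data and feature names are placeholders and the example is not drawn from any contribution to this issue.

```python
# Minimal sketch: train a classifier on synthetic tabular "biomedical" features
# and explain it with permutation feature importance (model-agnostic XAI).
# Feature names are hypothetical placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, n_informative=4,
                           random_state=0)
feature_names = [f"biomarker_{i}" for i in range(X.shape[1])]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# How much does test accuracy drop when each feature is shuffled?
result = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
ranking = sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name}: {score:.3f}")
```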

Prof. Dr. Amad Zafar
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • EEG
  • fNIRS
  • MRI
  • X-rays
  • biomedical signal and image processing
  • machine learning
  • explainable artificial intelligence
  • biomedical data mining
  • computer-aided diagnosis
  • brain–computer interfaces
  • healthcare

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

22 pages, 2282 KiB  
Article
Emotion Recognition Using EEG Signals and Audiovisual Features with Contrastive Learning
by Ju-Hwan Lee, Jin-Young Kim and Hyoung-Gook Kim
Bioengineering 2024, 11(10), 997; https://doi.org/10.3390/bioengineering11100997 - 3 Oct 2024
Viewed by 948
Abstract
Multimodal emotion recognition has emerged as a promising approach to capture the complex nature of human emotions by integrating information from various sources such as physiological signals, visual behavioral cues, and audio-visual content. However, current methods often struggle with effectively processing redundant or conflicting information across modalities and may overlook implicit inter-modal correlations. To address these challenges, this paper presents a novel multimodal emotion recognition framework which integrates audio-visual features with viewers’ EEG data to enhance emotion classification accuracy. The proposed approach employs modality-specific encoders to extract spatiotemporal features, which are then aligned through contrastive learning to capture inter-modal relationships. Additionally, cross-modal attention mechanisms are incorporated for effective feature fusion across modalities. The framework, comprising pre-training, fine-tuning, and testing phases, is evaluated on multiple datasets of emotional responses. The experimental results demonstrate that the proposed multimodal approach, which combines audio-visual features with EEG data, is highly effective in recognizing emotions, highlighting its potential for advancing emotion recognition systems. Full article
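A rough, hypothetical sketch of the contrastive alignment step described in this abstract is given below; the encoders, feature dimensions, and loss form (an InfoNCE-style pairwise loss) are placeholders and do not reproduce the authors' architecture.

```python
# Hypothetical sketch of contrastive alignment between EEG and audio-visual
# embeddings: paired samples are positives, all other pairs in the batch are
# negatives. Encoders and dimensions are placeholders, not the authors' model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleEncoder(nn.Module):
    def __init__(self, in_dim, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, emb_dim))
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)   # unit-length embeddings

def contrastive_loss(z_eeg, z_av, temperature=0.1):
    logits = z_eeg @ z_av.t() / temperature        # similarity of every EEG/AV pair
    targets = torch.arange(z_eeg.size(0))          # the matching pair is the target
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

eeg_enc, av_enc = SimpleEncoder(in_dim=310), SimpleEncoder(in_dim=512)
eeg_batch, av_batch = torch.randn(32, 310), torch.randn(32, 512)  # dummy features
loss = contrastive_loss(eeg_enc(eeg_batch), av_enc(av_batch))
loss.backward()
```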

13 pages, 2660 KiB  
Article
Enhancing Oral Squamous Cell Carcinoma Detection Using Histopathological Images: A Deep Feature Fusion and Improved Haris Hawks Optimization-Based Framework
by Amad Zafar, Majdi Khalid, Majed Farrash, Thamir M. Qadah, Hassan Fareed M. Lahza and Seong-Han Kim
Bioengineering 2024, 11(9), 913; https://doi.org/10.3390/bioengineering11090913 - 12 Sep 2024
Viewed by 3551
Abstract
Oral cancer, also known as oral squamous cell carcinoma (OSCC), is one of the most prevalent types of cancer and caused 177,757 deaths worldwide in 2020, as reported by the World Health Organization. Early detection and identification of OSCC are highly correlated with survival rates. Therefore, this study presents an automatic image-processing-based machine learning approach for OSCC detection. Histopathological images were used to compute deep features using various pretrained models. Based on the classification performance, the best features (ResNet-101 and EfficientNet-b0) were merged using the canonical correlation feature fusion approach, resulting in an enhanced classification performance. Additionally, the binary-improved Haris Hawks optimization (b-IHHO) algorithm was used to eliminate redundant features and further enhance the classification performance, leading to a high classification rate of 97.78% for OSCC. The b-IHHO trained the k-nearest neighbors model with an average feature vector size of only 899. A comparison with other wrapper-based feature selection approaches showed that the b-IHHO results were statistically more stable, reliable, and significant (p < 0.01). Moreover, comparisons with those other state-of-the-art (SOTA) approaches indicated that the b-IHHO model offered better results, suggesting that the proposed framework may be applicable in clinical settings to aid doctors in OSCC detection. Full article
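The sketch below illustrates only the general wrapper-based feature selection loop with a k-NN fitness function; a plain random search stands in for the binary-improved Haris Hawks optimizer, and the feature matrix is synthetic, so it is a simplified assumption rather than the authors' implementation.

```python
# Simplified stand-in for wrapper-based binary feature selection with a k-NN
# fitness function; a random search replaces the metaheuristic to show only
# the evaluation loop. The "deep feature" matrix is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=50, n_informative=10,
                           random_state=0)

def fitness(mask):
    # Fitness = cross-validated k-NN accuracy on the selected feature subset.
    if mask.sum() == 0:
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X[:, mask], y, cv=5).mean()

best_mask, best_fit = None, -1.0
for _ in range(200):                      # search loop (metaheuristic simplified away)
    mask = rng.random(X.shape[1]) < 0.5   # candidate binary feature mask
    f = fitness(mask)
    if f > best_fit:
        best_mask, best_fit = mask, f

print(f"selected {best_mask.sum()} features, CV accuracy {best_fit:.3f}")
```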

21 pages, 4198 KiB  
Article
Discriminant Input Processing Scheme for Self-Assisted Intelligent Healthcare Systems
by Mohamed Medani, Shtwai Alsubai, Hong Min, Ashit Kumar Dutta and Mohd Anjum
Bioengineering 2024, 11(7), 715; https://doi.org/10.3390/bioengineering11070715 - 14 Jul 2024
Cited by 1 | Viewed by 811
Abstract
Modern technology and analysis of emotions play a crucial role in enabling intelligent healthcare systems to provide diagnostics and self-assistance services based on observation. However, precise data predictions and computational models are critical for these systems to perform their jobs effectively. Traditionally, healthcare monitoring has been the primary emphasis. However, there were a couple of negatives, including the pattern feature generating the method’s scalability and reliability, which was tested with different data sources. This paper delves into the Discriminant Input Processing Scheme (DIPS), a crucial instrument for resolving challenges. Data-segmentation-based complex processing techniques allow DIPS to merge many emotion analysis streams. The DIPS recommendation engine uses segmented data characteristics to sift through inputs from the emotion stream for patterns. The recommendation is more accurate and flexible since DIPS uses transfer learning to identify similar data across different streams. With transfer learning, this study can be sure that the previous recommendations and data properties will be available in future data streams, making the most of them. Data utilization ratio, approximation, accuracy, and false rate are some of the metrics used to assess the effectiveness of the advised approach. Self-assisted intelligent healthcare systems that use emotion-based analysis and state-of-the-art technology are crucial when managing healthcare. This study improves healthcare management’s accuracy and efficiency using computational models like DIPS to guarantee accurate data forecasts and recommendations. Full article
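Since the DIPS pipeline is not specified in implementation detail here, the sketch below shows only the generic idea of carrying a model trained on one data stream over to a new, segmented stream through incremental updates; the data, model, and chunking are illustrative assumptions.

```python
# Generic sketch of reusing a model trained on one data stream on a new stream
# via incremental updates; not the DIPS implementation, just the transfer idea.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

# One synthetic dataset split into a source stream and a mildly shifted new stream.
X, y = make_classification(n_samples=1400, n_features=20, n_informative=6,
                           random_state=1)
X_a, y_a = X[:1000], y[:1000]                 # source stream
X_b, y_b = X[1000:] + 0.3, y[1000:]           # new stream with a small feature shift
X_adapt, y_adapt = X_b[:200], y_b[:200]       # early segments of the new stream
X_test, y_test = X_b[200:], y_b[200:]         # later segments held out for scoring

clf = SGDClassifier(random_state=0)
clf.partial_fit(X_a, y_a, classes=np.unique(y_a))   # learn on the source stream
print("new stream, before adaptation:", clf.score(X_test, y_test))

# Adapt incrementally as segmented data arrive from the new stream.
for X_chunk, y_chunk in zip(np.array_split(X_adapt, 5), np.array_split(y_adapt, 5)):
    clf.partial_fit(X_chunk, y_chunk)
print("new stream, after adaptation: ", clf.score(X_test, y_test))
```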

20 pages, 3470 KiB  
Article
Overt Word Reading and Visual Object Naming in Adults with Dyslexia: Electroencephalography Study in Transparent Orthography
by Maja Perkušić Čović, Igor Vujović, Joško Šoda, Marijan Palmović and Maja Rogić Vidaković
Bioengineering 2024, 11(5), 459; https://doi.org/10.3390/bioengineering11050459 - 4 May 2024
Viewed by 1409
Abstract
The study aimed to investigate overt reading and naming processes in adult people with dyslexia (PDs) in shallow (transparent) language orthography. The results of adult PDs are compared with those of adult healthy controls (HCs). Comparisons are made in three time windows: the pre-lexical (150–260 ms), lexical (280–700 ms), and post-lexical (750–1000 ms) stages of processing. Twelve PDs and HCs performed overt reading and naming tasks under EEG recording. The word reading and naming task consisted of sparse neighborhoods with closed phonemic onset (words/objects sharing the same onset). For the analysis of the mean ERP amplitude in the pre-lexical, lexical, and post-lexical time windows, a mixed-design ANOVA was performed with the right (F4, FC2, FC6, C4, T8, CP2, CP6, P4) and left (F3, FC5, FC1, T7, C3, CP5, CP1, P7, P3) electrode sites as within-subject factors and group (PD vs. HC) as the between-subject factor. Behavioral response latency results revealed significantly prolonged reading latency in PDs compared with HCs, while no difference was detected in naming response latency. ERP differences were found between PDs and HCs in the right hemisphere’s pre-lexical time window (160–200 ms) for word reading aloud. For visual object naming aloud, ERP differences were found between PDs and HCs in the right hemisphere’s post-lexical time window (900–1000 ms). The present study demonstrated different distributions of the electric field at the scalp in specific time windows between the two groups in the right hemisphere in both word reading and visual object naming aloud, suggesting alternative processing strategies in adult PDs. These results indirectly support the view that adult PDs in shallow language orthography probably rely on the grapho-phonological route during overt word reading and have difficulties with phoneme and word retrieval during overt visual object naming in adulthood. Full article
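A hedged sketch of the analysis pattern described above, mean ERP amplitude in a fixed time window followed by a mixed-design ANOVA, is shown below; the data are simulated, channel-level preprocessing is omitted, and the pingouin package is assumed to be available.

```python
# Sketch: average ERP amplitude in a time window per subject and hemisphere,
# then a mixed-design ANOVA with hemisphere as the within-subject factor and
# group (PD vs. HC) as the between-subject factor. Data are simulated.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
fs, t0, t1 = 500, 0.160, 0.200              # sampling rate (Hz), pre-lexical window (s)
times = np.arange(0, 1.0, 1 / fs)
win = (times >= t0) & (times <= t1)

rows = []
for subj in range(24):                       # 12 PD + 12 HC, simulated
    group = "PD" if subj < 12 else "HC"
    for hemi in ("left", "right"):
        erp = rng.normal(0, 1, times.size)             # stand-in for an averaged ERP
        rows.append({"subject": subj, "group": group, "hemisphere": hemi,
                     "mean_amp": erp[win].mean()})      # mean amplitude in the window

df = pd.DataFrame(rows)
aov = pg.mixed_anova(data=df, dv="mean_amp", within="hemisphere",
                     subject="subject", between="group")
print(aov[["Source", "F", "p-unc"]])
```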

18 pages, 4713 KiB  
Article
Attention-ProNet: A Prototype Network with Hybrid Attention Mechanisms Applied to Zero Calibration in Rapid Serial Visual Presentation-Based Brain–Computer Interface
by Baiwen Zhang, Meng Xu, Yueqi Zhang, Sicheng Ye and Yuanfang Chen
Bioengineering 2024, 11(4), 347; https://doi.org/10.3390/bioengineering11040347 - 2 Apr 2024
Viewed by 1490
Abstract
The rapid serial visual presentation-based brain–computer interface (RSVP-BCI) system achieves the recognition of target images by extracting event-related potential (ERP) features from electroencephalogram (EEG) signals and then building target classification models. Currently, how to reduce the training and calibration time for classification models across different subjects is a crucial issue in the practical application of RSVP. To address this issue, a zero-calibration (ZC) method termed Attention-ProNet, which involves meta-learning with a prototype network integrating multiple attention mechanisms, was proposed in this study. In particular, multiscale attention mechanisms were used for efficient EEG feature extraction. Furthermore, a hybrid attention mechanism was introduced to enhance model generalization, and attempts were made to incorporate suitable data augmentation and channel selection methods to develop an innovative and high-performance ZC RSVP-BCI decoding model algorithm. The experimental results demonstrated that our method achieved a balance accuracy (BA) of 86.33% in the decoding task for new subjects. Moreover, appropriate channel selection and data augmentation methods further enhanced the performance of the network by affording an additional 2.3% increase in BA. The model generated by the meta-learning prototype network Attention-ProNet, which incorporates multiple attention mechanisms, allows for the efficient and accurate decoding of new subjects without the need for recalibration or retraining. Full article
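The minimal sketch below shows only the prototypical-network classification step underlying this kind of zero-calibration approach: class prototypes are computed from support-set embeddings and query EEG epochs are assigned to the nearest prototype. The toy encoder and tensor shapes are placeholders, not the Attention-ProNet model.

```python
# Prototypical-network classification step: prototypes = mean support embeddings
# per class; queries are scored by negative squared distance to each prototype.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 128, 256), nn.ReLU(),
                        nn.Linear(256, 64))   # toy encoder for 64-ch x 128-sample epochs

def prototype_logits(support_x, support_y, query_x, n_classes=2):
    z_s, z_q = encoder(support_x), encoder(query_x)
    protos = torch.stack([z_s[support_y == c].mean(dim=0) for c in range(n_classes)])
    return -torch.cdist(z_q, protos) ** 2      # higher logit = closer prototype

support_x = torch.randn(20, 64, 128)           # labelled epochs from other subjects
support_y = torch.arange(20) % 2               # alternating target/non-target labels
query_x = torch.randn(8, 64, 128)              # unlabelled epochs from a new subject
pred = prototype_logits(support_x, support_y, query_x).argmax(dim=1)
print(pred)
```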
