Artificial Intelligence in Biomedical Diagnosis and Prognosis

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: closed (15 January 2024) | Viewed by 25853

Special Issue Editor


Prof. Dr. Tae-Seong Kim
Guest Editor
Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University, Yongin 17104, Republic of Korea
Interests: AI deep learning; machine learning; pattern recognition; brain engineering; biomedical imaging/signal analysis; robot intelligence

Special Issue Information

Dear Colleagues,

Recently, the fields of biomedical diagnosis and prognosis have been revolutionized by artificial intelligence (AI). These fields are advancing alongside AI methodologies, especially deep learning methods for disease detection, segmentation, classification, diagnosis, and even prognosis, which improve their accuracy and reliability. AI methodologies are becoming pervasive in medical image analysis, computer-aided diagnosis, clinical decision support, health monitoring, and even disease prognosis systems.

This Special Issue aims to share novel ideas and work from researchers and technical experts in the fields of biomedical diagnosis and prognosis.

This Special Issue is dedicated to high-quality, original research papers in the overlapping fields of: 

  • Medical diagnosis and prognosis;
  • Medical deep learning;
  • Medical AI;
  • Pervasive AI in biomedicine;
  • Explainable AI for diagnosis and prognosis;
  • Medical image analysis;
  • Health monitoring systems;
  • Clinical decision support systems;
  • Computer-aided diagnosis systems;
  • Robotics for medical diagnosis and prognosis.

Prof. Dr. Tae-Seong Kim
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical diagnosis
  • medical prognosis
  • medical AI
  • machine learning
  • deep learning
  • pervasive AI medical systems

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.


Published Papers (10 papers)


Research


21 pages, 4117 KiB  
Article
COVID-19 Detection and Diagnosis Model on CT Scans Based on AI Techniques
by Maria-Alexandra Zolya, Cosmin Baltag, Dragoș-Vasile Bratu, Simona Coman and Sorin-Aurel Moraru
Bioengineering 2024, 11(1), 79; https://doi.org/10.3390/bioengineering11010079 - 14 Jan 2024
Cited by 1 | Viewed by 1640
Abstract
The end of 2019 saw the outbreak of a new coronavirus and the ensuing spread of COVID-19, with long-lived and persistent repercussions. The problem addressed here comes from the field of medical imaging, where a standardized reporting system based on pulmonary CT could offer a solution, in part to relieve overworked doctors. The aim is essentially to solve a classification problem using deep learning techniques: determining whether a patient suffers from COVID-19 or viral pneumonia, or is healthy from a pulmonary point of view. The methodology was empirical: in the initial data-processing stage, the lung cavity is extracted from the CT scans, a less explored approach, followed by data augmentation. Next, a CNN was developed for two scenarios: a binary classification (COVID and non-COVID patients) and a three-class classification that also addresses viral pneumonia. To obtain an efficient version, architectural changes were made gradually, with four databases involved in this process. Furthermore, given the availability of pre-trained models, transfer learning was employed by incorporating the linear classifier from our own convolutional network into an existing model, with much more promising results. The experiments encompassed several models, including MobileNetV1, ResNet50, DenseNet201, VGG16, and VGG19. In a more in-depth analysis using the CAM technique, MobileNetV1 stood out for its accuracy in detecting possible pulmonary anomalies; interestingly, it is not among the most used models in the literature.
As a result, the following evaluation metrics were reached: loss (0.0751), accuracy (0.9744), precision (0.9758), recall (0.9742), AUC (0.9902), and F1 score (0.9750), from 1161 samples allocated to each of the three classes. Full article
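The evaluation metrics quoted above all derive from a multi-class confusion matrix. A minimal sketch of how such per-class precision, recall, and F1 values are computed (the counts below are hypothetical, not the paper's results):

```python
import numpy as np

def per_class_metrics(cm):
    """Per-class precision, recall, and F1 from a square confusion matrix.

    cm[i, j] = number of samples of true class i predicted as class j.
    """
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)   # column sums = predicted counts per class
    recall = tp / cm.sum(axis=1)      # row sums = true counts per class
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical 3-class example (COVID / viral pneumonia / healthy)
cm = [[95, 3, 2],
      [4, 90, 6],
      [1, 2, 97]]
p, r, f = per_class_metrics(cm)
print(np.round(p, 3), np.round(r, 3), np.round(f, 3))
```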
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Diagnosis and Prognosis)

18 pages, 4649 KiB  
Article
An AI-Enabled Bias-Free Respiratory Disease Diagnosis Model Using Cough Audio
by Tabish Saeed, Aneeqa Ijaz, Ismail Sadiq, Haneya Naeem Qureshi, Ali Rizwan and Ali Imran
Bioengineering 2024, 11(1), 55; https://doi.org/10.3390/bioengineering11010055 - 5 Jan 2024
Cited by 4 | Viewed by 1931
Abstract
Cough-based diagnosis of respiratory diseases (RDs) using artificial intelligence (AI) has attracted considerable attention, yet many existing studies overlook confounding variables in their predictive models. These variables can distort the relationship between cough recordings (input data) and RD status (output variable), leading to biased associations and unrealistic model performance. To address this gap, we propose the Bias-Free Network (RBF-Net), an end-to-end solution that effectively mitigates the impact of confounders in the training data distribution. RBF-Net ensures accurate and unbiased RD diagnosis, and its relevance is demonstrated here by incorporating a COVID-19 dataset. This approach aims to enhance the reliability of AI-based RD diagnosis models in the face of the challenges posed by confounding variables. A hybrid of Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) networks is proposed for the feature encoder module of RBF-Net. An additional bias predictor is incorporated in the classification scheme to formulate a conditional Generative Adversarial Network (c-GAN) that helps decorrelate the impact of confounding variables from RD prediction. The merit of RBF-Net is demonstrated by comparing classification performance with a state-of-the-art (SoTA) deep learning (DL) model (CNN-LSTM) after training on different unbalanced COVID-19 datasets, created from a large-scale proprietary cough dataset. RBF-Net proved its robustness against extremely biased training scenarios by achieving test set accuracies of 84.1%, 84.6%, and 80.5% for the confounding variables gender, age, and smoking status, respectively. RBF-Net outperforms the CNN-LSTM model's test set accuracies by 5.5%, 7.7%, and 8.2%, respectively. Full article

22 pages, 1674 KiB  
Article
A Boundary-Enhanced Liver Segmentation Network for Multi-Phase CT Images with Unsupervised Domain Adaptation
by Swathi Ananda, Rahul Kumar Jain, Yinhao Li, Yutaro Iwamoto, Xian-Hua Han, Shuzo Kanasaki, Hongjie Hu and Yen-Wei Chen
Bioengineering 2023, 10(8), 899; https://doi.org/10.3390/bioengineering10080899 - 28 Jul 2023
Cited by 1 | Viewed by 1616
Abstract
Multi-phase computed tomography (CT) images have gained significant popularity in the diagnosis of hepatic disease. There are several challenges in the liver segmentation of multi-phase CT images. (1) Annotation: due to the distinct contrast enhancements observed in different phases (i.e., each phase is considered a different domain), annotating all phase images in multi-phase CT images for liver or tumor segmentation is a task that consumes substantial time and labor resources. (2) Poor contrast: some phase images may have poor contrast, making it difficult to distinguish the liver boundary. In this paper, we propose a boundary-enhanced liver segmentation network for multi-phase CT images with unsupervised domain adaptation. The first contribution is that we propose DD-UDA, a dual discriminator-based unsupervised domain adaptation, for liver segmentation on multi-phase images without multi-phase annotations, effectively tackling the annotation problem. To improve accuracy by reducing distribution differences between the source and target domains, we perform domain adaptation at two levels by employing two discriminators, one at the feature level and the other at the output level. The second contribution is that we introduce an additional boundary-enhanced decoder to the encoder–decoder backbone segmentation network to effectively recognize the boundary region, thereby addressing the problem of poor contrast. In our study, we employ the public LiTS dataset as the source domain and our private MPCT-FLLs dataset as the target domain. The experimental findings validate the efficacy of our proposed methods, producing substantially improved results when tested on each phase of the multi-phase CT image even without the multi-phase annotations. 
As evaluated on the MPCT-FLLs dataset, the existing baseline (UDA) method achieved IoU scores of 0.785, 0.796, and 0.772 for the PV, ART, and NC phases, respectively, while our proposed approach exhibited superior performance, surpassing both the baseline and other state-of-the-art methods. Notably, our method achieved remarkable IoU scores of 0.823, 0.811, and 0.800 for the PV, ART, and NC phases, respectively, emphasizing its effectiveness in achieving accurate image segmentation. Full article
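The IoU scores above are the standard Jaccard overlap between predicted and ground-truth masks. A minimal sketch on synthetic binary masks (not the MPCT-FLLs data):

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-Union (Jaccard index) of two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:          # both masks empty: define IoU as 1
        return 1.0
    return np.logical_and(pred, gt).sum() / union

# Synthetic example: two overlapping 6x6 squares on a 10x10 grid
pred = np.zeros((10, 10), dtype=bool); pred[2:8, 2:8] = True
gt = np.zeros((10, 10), dtype=bool); gt[4:10, 4:10] = True
print(round(iou(pred, gt), 3))   # overlap 4x4 = 16 px, union 56 px
```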

16 pages, 8774 KiB  
Article
A New Breakpoint to Classify 3D Voxels in MRI: A Space Transform Strategy with 3t2FTS-v2 and Its Application for ResNet50-Based Categorization of Brain Tumors
by Hasan Koyuncu and Mücahid Barstuğan
Bioengineering 2023, 10(6), 629; https://doi.org/10.3390/bioengineering10060629 - 23 May 2023
Cited by 1 | Viewed by 2093
Abstract
Three-dimensional (3D) image analyses are frequently applied to perform classification tasks, and 3D-based machine learning systems generally follow one of two designs: a 3D-based deep learning model or a 3D-based task-specific framework. However, except for a new approach named 3t2FTS, a promising feature transform operating from 3D to two-dimensional (2D) space has not been efficiently investigated for classification applications in 3D magnetic resonance imaging (3D MRI). In other words, no state-of-the-art feature transform strategy is available that achieves high accuracy and allows 2D-based deep learning models to be adapted for 3D MRI-based classification. With this aim, this paper presents a new version of the 3t2FTS approach (3t2FTS-v2) to apply a transfer learning model for tumor categorization of 3D MRI data. For performance evaluation, the BraTS 2017/2018 dataset is used, which involves high-grade glioma (HGG) and low-grade glioma (LGG) samples in four different sequences/phases. 3t2FTS-v2 effectively transforms features from 3D to 2D space by using two textural feature sets: first-order statistics (FOS) and the gray-level run length matrix (GLRLM). In 3t2FTS-v2, the normalization analyses differ from 3t2FTS so as to transform the spatial information more accurately, apart from the use of GLRLM features. The ResNet50 architecture is preferred for the HGG/LGG classification due to its remarkable performance in tumor grading. As a result, the proposed model achieves 99.64% accuracy in classifying the 3D data, demonstrating the importance of 3t2FTS-v2, which can be utilized not only for tumor grading but also for whole-brain tissue-based disease classification. Full article
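First-order statistics of the kind used in the 3t2FTS family are computed directly from voxel intensities and their histogram. The sketch below shows generic FOS features (mean, variance, skewness, kurtosis, histogram entropy) on random data; these are common textbook definitions, not necessarily the exact 3t2FTS-v2 formulation:

```python
import numpy as np

def first_order_stats(img, bins=32):
    """A few first-order statistical (FOS) features of an image:
    mean, variance, skewness, kurtosis, and histogram entropy."""
    x = np.asarray(img, dtype=float).ravel()
    mu, var = x.mean(), x.var()
    std = np.sqrt(var)
    skew = np.mean(((x - mu) / std) ** 3)
    kurt = np.mean(((x - mu) / std) ** 4)
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                          # ignore empty bins
    entropy = -np.sum(p * np.log2(p))
    return {"mean": mu, "variance": var, "skewness": skew,
            "kurtosis": kurt, "entropy": entropy}

# Synthetic "MRI slice": Gaussian intensities, mean 100, std 15
rng = np.random.default_rng(0)
feats = first_order_stats(rng.normal(100, 15, size=(64, 64)))
print({k: round(v, 2) for k, v in feats.items()})
```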

18 pages, 14858 KiB  
Article
Augmented Reality Surgical Navigation System Integrated with Deep Learning
by Shin-Yan Chiou, Li-Sheng Liu, Chia-Wei Lee, Dong-Hyun Kim, Mohammed A. Al-masni, Hao-Li Liu, Kuo-Chen Wei, Jiun-Lin Yan and Pin-Yuan Chen
Bioengineering 2023, 10(5), 617; https://doi.org/10.3390/bioengineering10050617 - 20 May 2023
Cited by 6 | Viewed by 4839
Abstract
Most current surgical navigation methods rely on optical navigators with images displayed on an external screen. However, minimizing distractions during surgery is critical and the spatial information displayed in this arrangement is non-intuitive. Previous studies have proposed combining optical navigation systems with augmented reality (AR) to provide surgeons with intuitive imaging during surgery, through the use of planar and three-dimensional imagery. However, these studies have mainly focused on visual aids and have paid relatively little attention to real surgical guidance aids. Moreover, the use of augmented reality reduces system stability and accuracy, and optical navigation systems are costly. Therefore, this paper proposed an augmented reality surgical navigation system based on image positioning that achieves the desired system advantages with low cost, high stability, and high accuracy. This system also provides intuitive guidance for the surgical target point, entry point, and trajectory. Once the surgeon uses the navigation stick to indicate the position of the surgical entry point, the connection between the surgical target and the surgical entry point is immediately displayed on the AR device (tablet or HoloLens glasses), and a dynamic auxiliary line is shown to assist with incision angle and depth. Clinical trials were conducted for EVD (extra-ventricular drainage) surgery, and surgeons confirmed the system’s overall benefit. A “virtual object automatic scanning” method is proposed to achieve a high accuracy of 1 ± 0.1 mm for the AR-based system. Furthermore, a deep learning-based U-Net segmentation network is incorporated to enable automatic identification of the hydrocephalus location by the system. The system achieves improved recognition accuracy, sensitivity, and specificity of 99.93%, 93.85%, and 95.73%, respectively, representing a significant improvement from previous studies. Full article
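The entry-point/target guidance described above reduces to basic vector geometry: the planned insertion line, its depth, and the angle of the current tool axis against that line. A hedged sketch with hypothetical coordinates in millimetres (not the paper's registration pipeline):

```python
import numpy as np

def trajectory(entry, target):
    """Planned insertion direction (unit vector) and depth (mm)."""
    v = np.asarray(target, float) - np.asarray(entry, float)
    depth = np.linalg.norm(v)
    return v / depth, depth

def angle_to_plan(tool_dir, plan_dir):
    """Angle (degrees) between the current tool axis and the planned line."""
    c = np.dot(tool_dir, plan_dir) / (np.linalg.norm(tool_dir) * np.linalg.norm(plan_dir))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# Hypothetical entry and target points in a common (registered) frame
d, depth = trajectory(entry=[0, 0, 0], target=[30, 0, 40])   # depth = 50 mm
print(round(depth, 1), round(angle_to_plan([0, 0, 1], d), 1))
```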

37 pages, 7355 KiB  
Article
CNN-Based Identification of Parkinson’s Disease from Continuous Speech in Noisy Environments
by Paul Faragó, Sebastian-Aurelian Ștefănigă, Claudia-Georgiana Cordoș, Laura-Ioana Mihăilă, Sorin Hintea, Ana-Sorina Peștean, Michel Beyer, Lăcrămioara Perju-Dumbravă and Robert Radu Ileșan
Bioengineering 2023, 10(5), 531; https://doi.org/10.3390/bioengineering10050531 - 26 Apr 2023
Cited by 8 | Viewed by 2291
Abstract
Parkinson’s disease is a progressive neurodegenerative disorder caused by dopaminergic neuron degeneration. Parkinsonian speech impairment is one of the earliest presentations of the disease and, along with tremor, is suitable for pre-diagnosis. It is defined by hypokinetic dysarthria and accounts for respiratory, phonatory, articulatory, and prosodic manifestations. The topic of this article targets artificial-intelligence-based identification of Parkinson’s disease from continuous speech recorded in a noisy environment. The novelty of this work is twofold. First, the proposed assessment workflow performed speech analysis on samples of continuous speech. Second, we analyzed and quantified Wiener filter applicability for speech denoising in the context of Parkinsonian speech identification. We argue that the Parkinsonian features of loudness, intonation, phonation, prosody, and articulation are contained in the speech, speech energy, and Mel spectrograms. Thus, the proposed workflow follows a feature-based speech assessment to determine the feature variation ranges, followed by speech classification using convolutional neural networks. We report the best classification accuracies of 96% on speech energy, 93% on speech, and 92% on Mel spectrograms. We conclude that the Wiener filter improves both feature-based analysis and convolutional-neural-network-based classification performances. Full article
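A frequency-domain Wiener filter of the kind evaluated above applies a per-bin gain derived from an estimate of the noise power spectrum. The sketch below is a simple frame-wise variant on a synthetic tone-in-noise signal, not the paper's exact denoising pipeline:

```python
import numpy as np

def wiener_denoise(noisy, noise_psd, frame=256):
    """Very simple frame-wise spectral Wiener filter.

    Gain per frequency bin: H = max(S_xx - S_nn, 0) / S_xx, where S_xx is
    the noisy-frame power spectrum and S_nn the noise power estimate.
    """
    out = np.zeros_like(noisy, dtype=float)
    for start in range(0, len(noisy) - frame + 1, frame):
        seg = noisy[start:start + frame]
        X = np.fft.rfft(seg)
        S_xx = np.abs(X) ** 2
        H = np.maximum(S_xx - noise_psd, 0.0) / np.maximum(S_xx, 1e-12)
        out[start:start + frame] = np.fft.irfft(H * X, n=frame)
    return out

# Synthetic "speech": a 220 Hz tone buried in white noise at fs = 8 kHz
rng = np.random.default_rng(1)
t = np.arange(4096) / 8000.0
clean = np.sin(2 * np.pi * 220 * t)
noise = 0.5 * rng.standard_normal(t.size)
noisy = clean + noise
noise_psd = np.abs(np.fft.rfft(noise[:256])) ** 2   # oracle noise estimate
den = wiener_denoise(noisy, noise_psd)
err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((den - clean) ** 2)
print(err_after < err_before)
```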

18 pages, 4530 KiB  
Article
Comparison of CT- and MRI-Based Quantification of Tumor Heterogeneity and Vascularity for Correlations with Prognostic Biomarkers and Survival Outcomes: A Single-Center Prospective Cohort Study
by Hyo-Young Kim, Min-Sun Bae, Bo-Kyoung Seo, Ji-Young Lee, Kyu-Ran Cho, Ok-Hee Woo, Sung-Eun Song and Jaehyung Cha
Bioengineering 2023, 10(5), 504; https://doi.org/10.3390/bioengineering10050504 - 22 Apr 2023
Cited by 1 | Viewed by 2048
Abstract
Background: Tumor heterogeneity and vascularity can be noninvasively quantified using histogram and perfusion analyses on computed tomography (CT) and magnetic resonance imaging (MRI). We compared the association of histogram and perfusion features with histological prognostic factors and progression-free survival (PFS) in breast cancer patients on low-dose CT and MRI. Methods: This prospective study enrolled 147 women diagnosed with invasive breast cancer who simultaneously underwent contrast-enhanced MRI and CT before treatment. We extracted histogram and perfusion parameters from each tumor on MRI and CT, assessed associations between imaging features and histological biomarkers, and estimated PFS using the Kaplan–Meier analysis. Results: Out of 54 histogram and perfusion parameters, entropy on T2- and postcontrast T1-weighted MRI and postcontrast CT, and perfusion (blood flow) on CT were significantly associated with the status of subtypes, hormone receptors, and human epidermal growth factor receptor 2 (p < 0.05). Patients with high entropy on postcontrast CT showed worse PFS than patients with low entropy (p = 0.053) and high entropy on postcontrast CT negatively affected PFS in the Ki67-positive group (p = 0.046). Conclusions: Low-dose CT histogram and perfusion analysis were comparable to MRI, and the entropy of postcontrast CT could be a feasible parameter to predict PFS in breast cancer patients. Full article
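The PFS curves above come from the Kaplan–Meier method, which steps the survival estimate down at each observed event time. A minimal estimator on hypothetical follow-up data (not the study's cohort):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times:  follow-up time per patient
    events: 1 if progression/death observed, 0 if censored
    Returns (event_times, S(t)) as stepwise estimates.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    n_at_risk = len(times)
    s, out_t, out_s = 1.0, [], []
    i = 0
    while i < len(order):
        tt = times[order[i]]
        d = n = 0
        while i < len(order) and times[order[i]] == tt:
            n += 1
            d += events[order[i]]
            i += 1
        if d:                               # step down only at event times
            s *= 1 - d / n_at_risk
            out_t.append(tt)
            out_s.append(s)
        n_at_risk -= n
    return out_t, out_s

# Hypothetical follow-up times in months; 0 = censored observation
t, s = kaplan_meier([6, 7, 7, 9, 12, 15], [1, 0, 1, 1, 0, 1])
print(t, [round(x, 3) for x in s])
```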

29 pages, 5992 KiB  
Article
Blockchain-Federated and Deep-Learning-Based Ensembling of Capsule Network with Incremental Extreme Learning Machines for Classification of COVID-19 Using CT Scans
by Hassaan Malik, Tayyaba Anees, Ahmad Naeem, Rizwan Ali Naqvi and Woong-Kee Loh
Bioengineering 2023, 10(2), 203; https://doi.org/10.3390/bioengineering10020203 - 3 Feb 2023
Cited by 17 | Viewed by 2708
Abstract
Due to the rapid rate of SARS-CoV-2 dissemination, an effective strategy must be employed to identify and isolate COVID-19 cases. One of the most significant obstacles researchers must overcome in detecting COVID-19 is the rapid propagation of the virus, in addition to the dearth of trustworthy testing models; this remains among the most difficult problems for clinicians to deal with. The use of AI in image processing has made the formerly insurmountable challenge of detecting COVID-19 cases more manageable. In practice, there is also the difficulty of sharing data between hospitals while honoring the privacy concerns of the organizations. When training a global deep learning (DL) model, it is crucial to handle fundamental concerns such as user privacy and collaborative model development. For this study, a novel framework is designed that compiles information from five different databases (several hospitals) and trains a global model using blockchain-based federated learning (FL). The data are validated through the use of blockchain technology (BCT), and FL trains the model on a global scale while maintaining the secrecy of the organizations. The proposed framework is divided into three parts. First, we provide a method of data normalization that can handle the diversity of data collected from five different sources using several computed tomography (CT) scanners. Second, to categorize COVID-19 patients, we ensemble the capsule network (CapsNet) with incremental extreme learning machines (IELMs). Third, we provide a strategy for interactively training a global model using BCT and FL while maintaining anonymity. Extensive tests were undertaken employing chest CT scans, comparing the classification performance of the proposed model to that of five DL algorithms for predicting COVID-19 while protecting the privacy of the data for a variety of users.
Our findings indicate improved effectiveness in identifying COVID-19 patients, with an accuracy of 98.99%. Thus, our model provides substantial aid to medical practitioners in their diagnosis of COVID-19. Full article
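The canonical aggregation step in federated learning is FedAvg: a sample-count-weighted average of each client's parameters. The paper's exact BCT/FL scheme may differ, but the sketch below illustrates the aggregation idea with hypothetical hospital weights:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: combine clients' parameter lists, weighting
    each client by its local sample count (the FedAvg aggregation rule)."""
    total = sum(client_sizes)
    agg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for a, w in zip(agg, weights):
            a += (n / total) * w          # in-place weighted accumulation
    return agg

# Three hypothetical hospitals, each with one small weight matrix
w1 = [np.array([[1.0, 2.0]])]
w2 = [np.array([[3.0, 4.0]])]
w3 = [np.array([[5.0, 6.0]])]
g = fedavg([w1, w2, w3], client_sizes=[100, 100, 200])
print(g[0])   # weighted by 100/400, 100/400, 200/400
```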

Review


16 pages, 1572 KiB  
Review
Where Does Auto-Segmentation for Brain Metastases Radiosurgery Stand Today?
by Matthew Kim, Jen-Yeu Wang, Weiguo Lu, Hao Jiang, Strahinja Stojadinovic, Zabi Wardak, Tu Dan, Robert Timmerman, Lei Wang, Cynthia Chuang, Gregory Szalkowski, Lianli Liu, Erqi Pollom, Elham Rahimy, Scott Soltys, Mingli Chen and Xuejun Gu
Bioengineering 2024, 11(5), 454; https://doi.org/10.3390/bioengineering11050454 - 3 May 2024
Cited by 1 | Viewed by 1744
Abstract
Detection and segmentation of brain metastases (BMs) play a pivotal role in diagnosis, treatment planning, and follow-up evaluations for effective BM management. Given the rising prevalence of BM cases and its predominantly multiple onsets, automated segmentation is becoming necessary in stereotactic radiosurgery. It not only alleviates the clinician’s manual workload and improves clinical workflow efficiency but also ensures treatment safety, ultimately improving patient care. Recent strides in machine learning, particularly in deep learning (DL), have revolutionized medical image segmentation, achieving state-of-the-art results. This review aims to analyze auto-segmentation strategies, characterize the utilized data, and assess the performance of cutting-edge BM segmentation methodologies. Additionally, we delve into the challenges confronting BM segmentation and share insights gleaned from our algorithmic and clinical implementation experiences. Full article
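Segmentation performance in this literature is most often reported as the Dice similarity coefficient, 2|A∩B| / (|A| + |B|). A minimal sketch on synthetic masks:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient of two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:           # both masks empty: define Dice as 1
        return 1.0
    return 2 * np.logical_and(pred, gt).sum() / denom

# Synthetic example: two overlapping 4x4 squares on an 8x8 grid
pred = np.zeros((8, 8), dtype=bool); pred[1:5, 1:5] = True   # 16 px
gt = np.zeros((8, 8), dtype=bool); gt[3:7, 3:7] = True       # 16 px
print(dice(pred, gt))   # overlap 2x2 = 4 px
```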

26 pages, 1901 KiB  
Review
Fuzzy Cognitive Map Applications in Medicine over the Last Two Decades: A Review Study
by Ioannis D. Apostolopoulos, Nikolaos I. Papandrianos, Nikolaos D. Papathanasiou and Elpiniki I. Papageorgiou
Bioengineering 2024, 11(2), 139; https://doi.org/10.3390/bioengineering11020139 - 30 Jan 2024
Cited by 4 | Viewed by 2677
Abstract
Fuzzy Cognitive Maps (FCMs) have become an invaluable tool for healthcare providers because they can capture intricate associations among variables and generate precise predictions. FCMs have demonstrated their utility in diverse medical applications, from disease diagnosis to treatment planning and prognosis prediction. Their ability to model complex relationships between symptoms, biomarkers, risk factors, and treatments has enabled healthcare providers to make informed decisions, leading to better patient outcomes. This review article provides a thorough synopsis of using FCMs within the medical domain. A systematic examination of pertinent literature spanning the last two decades forms the basis of this overview, specifically delineating the diverse applications of FCMs in medical realms, including decision-making, diagnosis, prognosis, treatment optimisation, risk assessment, and pharmacovigilance. The limitations inherent in FCMs are also scrutinised, and avenues for potential future research and application are explored. Full article
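An FCM inference step updates each concept's activation as a squashed weighted sum of the influences on it. The sketch below uses a hypothetical three-concept map and one common update rule with self-memory (conventions vary across the FCM literature):

```python
import math

def fcm_step(activations, W, f=lambda x: 1 / (1 + math.exp(-x))):
    """One Fuzzy Cognitive Map update:
    A_i(t+1) = f(A_i(t) + sum_j W[j][i] * A_j(t)), with sigmoid squashing."""
    n = len(activations)
    return [f(activations[i] + sum(W[j][i] * activations[j]
                                   for j in range(n) if j != i))
            for i in range(n)]

# Hypothetical 3-concept map: symptom -> risk factor -> diagnosis
W = [[0.0, 0.7, 0.0],
     [0.0, 0.0, 0.8],
     [0.0, 0.0, 0.0]]
state = [1.0, 0.0, 0.0]       # symptom initially active
for _ in range(5):            # iterate toward a fixed point
    state = fcm_step(state, W)
print([round(a, 3) for a in state])
```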
