BioMedInformatics, Volume 4, Issue 3 (September 2024) – 26 articles

Cover Story: In modern cancer genomics, panels analyze roughly 500 genes, although the human genome contains about 20,000. While whole genome sequencing (WGS) now costs around $1,000, interpreting the vast data it produces remains a challenge. If tumor panels fail to identify actionable genes, extra genomic data may not aid treatment decisions. However, advances in ML and DL architectures offer new insights. AI has enabled progress in fields such as Radiomics, Pathomics, and Surgomics, integrating them with WGS to create AI-augmented tumor boards that improve treatment by minimizing ineffective therapies. Advanced imaging also aids surgical decisions. AI-driven multi-omics integration (the combination of genomics, transcriptomics, and proteomics) promises better diagnosis and treatment, with deep learning poised to revolutionize biomarker identification.
10 pages, 926 KiB  
Article
Cross-National Analysis of Opioid Prescribing Patterns: Enhancements and Insights from the OralOpioids R Package in Canada and the United States
by Ankona Banerjee, Kenneth Nobleza, Duc T. Nguyen and Erik Stricker
BioMedInformatics 2024, 4(3), 2107-2116; https://doi.org/10.3390/biomedinformatics4030112 - 16 Sep 2024
Viewed by 711
Abstract
Background: The opioid crisis remains a significant public health challenge in North America, highlighted by the substantial need for tools to analyze and understand opioid potency and prescription patterns. Methods: The OralOpioids package automates the retrieval, processing, and analysis of opioid data from Health Canada’s Drug Product Database (DPD) and the U.S. Food and Drug Administration’s (FDA) National Drug Code (NDC) database. It includes functions such as load_Opioid_Table, which integrates country-specific data processing and Morphine Equivalent Dose (MED) calculations, providing a comprehensive dataset for analysis. The package facilitates a comprehensive examination of opioid prescriptions, allowing researchers to identify high-risk opioids and patterns that could inform policy and healthcare practices. Results: The integration of MED calculations with Canadian and U.S. data provides a robust tool for assessing opioid potency and prescribing practices. The OralOpioids R package is an essential tool for public health researchers, enabling a detailed analysis of North American opioid prescriptions. Conclusions: By providing easy access to opioid potency data and supporting cross-national studies, the package plays a critical role in addressing the opioid crisis. It suggests a model for similar tools that could be adapted for global use, enhancing our capacity to manage and mitigate opioid misuse effectively. Full article
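The package's load_Opioid_Table couples drug records with Morphine Equivalent Dose (MED) conversions. As a rough illustration of that calculation step (this is not the package's actual R code; the conversion factors below are commonly cited morphine-milligram-equivalent values used here as placeholders), a minimal Python sketch:

```python
# Minimal sketch of a Morphine Equivalent Dose (MED) calculation, analogous to the
# per-drug conversion performed by load_Opioid_Table. Factors are illustrative placeholders.
MME_FACTORS = {"morphine": 1.0, "oxycodone": 1.5, "hydromorphone": 4.0, "codeine": 0.15}

def med_per_unit(ingredient: str, strength_mg: float) -> float:
    """Return the morphine-equivalent dose (mg) for one dosage unit."""
    return strength_mg * MME_FACTORS[ingredient.lower()]

print(med_per_unit("Oxycodone", 10))  # 15.0 mg morphine equivalent
```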

64 pages, 6249 KiB  
Review
Pulmonary Nodule Detection, Segmentation and Classification Using Deep Learning: A Comprehensive Literature Review
by Ioannis Marinakis, Konstantinos Karampidis and Giorgos Papadourakis
BioMedInformatics 2024, 4(3), 2043-2106; https://doi.org/10.3390/biomedinformatics4030111 - 13 Sep 2024
Cited by 1 | Viewed by 1176
Abstract
Lung cancer is a leading cause of cancer-related deaths worldwide, emphasizing the significance of early detection. Computer-aided diagnostic systems have emerged as valuable tools for aiding radiologists in the analysis of medical images, particularly in the context of lung cancer screening. A typical pipeline for lung cancer diagnosis involves pulmonary nodule detection, segmentation, and classification. Although traditional machine learning methods have been deployed in previous years with great success, this literature review focuses on state-of-the-art deep learning methods. The objective is to extract key insights and methodologies from deep learning studies that exhibit high experimental results in this domain. This paper delves into the databases utilized, preprocessing steps applied, data augmentation techniques employed, and proposed methods deployed in studies with exceptional outcomes. The reviewed studies predominantly harness cutting-edge deep learning methodologies, encompassing traditional convolutional neural networks (CNNs) and advanced variants such as 3D CNNs, alongside other innovative approaches such as Capsule networks and transformers. The methods examined in these studies reflect the continuous evolution of deep learning techniques for pulmonary nodule detection, segmentation, and classification. The methodologies, datasets, and techniques discussed here collectively contribute to the development of more efficient computer-aided diagnostic systems, empowering radiologists and healthcare professionals in the fight against this deadly disease. Full article
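Many of the reviewed detection and classification pipelines operate on volumetric CT patches with 3D CNNs. A minimal, hypothetical PyTorch sketch of such a classifier (the architecture, patch size, and layer widths are illustrative assumptions, not taken from any reviewed study):

```python
import torch
import torch.nn as nn

class NoduleClassifier3D(nn.Module):
    """Toy 3D CNN for benign/malignant classification of CT patches."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8 * 8, num_classes)

    def forward(self, x):                 # x: (batch, 1, 32, 32, 32) CT patches
        return self.classifier(self.features(x).flatten(1))

logits = NoduleClassifier3D()(torch.randn(2, 1, 32, 32, 32))  # -> shape (2, 2)
```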

21 pages, 831 KiB  
Review
Computational Strategies to Enhance Cell-Free Protein Synthesis Efficiency
by Iyappan Kathirvel and Neela Gayathri Ganesan
BioMedInformatics 2024, 4(3), 2022-2042; https://doi.org/10.3390/biomedinformatics4030110 - 10 Sep 2024
Viewed by 981
Abstract
Cell-free protein synthesis (CFPS) has emerged as a powerful tool for protein production, with applications ranging from basic research to biotechnology and pharmaceutical development. However, enhancing the efficiency of CFPS systems remains a crucial challenge for realizing their full potential. Computational strategies offer promising avenues for optimizing CFPS efficiency by providing insights into complex biological processes and enabling rational design approaches. This review provides a comprehensive overview of the computational approaches aimed at enhancing CFPS efficiency. The introduction outlines the significance of CFPS and the role of computational methods in addressing efficiency limitations. It discusses mathematical modeling and simulation-based approaches for predicting protein synthesis kinetics and optimizing CFPS reactions. The review also delves into the design of DNA templates, including codon optimization strategies and mRNA secondary structure prediction tools, to improve protein synthesis efficiency. Furthermore, it explores computational techniques for engineering cell-free transcription and translation machinery, such as the rational design of expression systems and the predictive modeling of ribosome dynamics. The predictive modeling of metabolic pathways and the energy utilization in CFPS systems is also discussed, highlighting metabolic flux analysis and resource allocation strategies. Machine learning and artificial intelligence approaches are being increasingly employed for CFPS optimization, including neural network models, deep learning algorithms, and reinforcement learning for adaptive control. This review presents case studies showcasing successful CFPS optimization using computational methods and discusses applications in synthetic biology, biotechnology, and pharmaceuticals. The challenges and limitations of current computational approaches are addressed, along with future perspectives and emerging trends, such as the integration of multi-omics data and advances in high-throughput screening. The conclusion summarizes key findings, discusses implications for future research directions and applications, and emphasizes opportunities for interdisciplinary collaboration. This review offers valuable insights and prospects regarding computational strategies to enhance CFPS efficiency. It serves as a comprehensive resource, consolidating current knowledge in the field and guiding further advancements. Full article
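One of the DNA-template design steps discussed, codon optimization, can be illustrated with a toy "most frequent codon" substitution. The usage table below is a tiny hypothetical fragment, not a real organism's codon table:

```python
# Toy codon optimization: replace each amino acid by its most-used codon.
# The usage table is a hypothetical fragment for illustration only.
CODON_USAGE = {
    "M": {"ATG": 1.00},
    "K": {"AAA": 0.74, "AAG": 0.26},
    "F": {"TTT": 0.58, "TTC": 0.42},
}

def optimize(protein: str) -> str:
    return "".join(max(CODON_USAGE[aa], key=CODON_USAGE[aa].get) for aa in protein)

print(optimize("MKF"))  # ATGAAATTT
```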

20 pages, 2695 KiB  
Article
Optimizing Lung Condition Categorization through a Deep Learning Approach to Chest X-ray Image Analysis
by Theodora Sanida, Maria Vasiliki Sanida, Argyrios Sideris and Minas Dasygenis
BioMedInformatics 2024, 4(3), 2002-2021; https://doi.org/10.3390/biomedinformatics4030109 - 10 Sep 2024
Viewed by 872
Abstract
Background: Evaluating chest X-rays is a complex and high-demand task due to the intrinsic challenges associated with diagnosing a wide range of pulmonary conditions. Therefore, advanced methodologies are required to categorize multiple conditions from chest X-ray images accurately. Methods: This study introduces an optimized deep learning approach designed for the multi-label categorization of chest X-ray images, covering a broad spectrum of conditions, including lung opacity, normative pulmonary states, COVID-19, bacterial pneumonia, viral pneumonia, and tuberculosis. An optimized deep learning model based on the modified VGG16 architecture with SE blocks was developed and applied to a large dataset of chest X-ray images. The model was evaluated against state-of-the-art techniques using metrics such as accuracy, F1-score, precision, recall, and area under the curve (AUC). Results: The modified VGG16-SE model demonstrated superior performance across all evaluated metrics. The model achieved an accuracy of 98.49%, an F1-score of 98.23%, a precision of 98.41%, a recall of 98.07% and an AUC of 98.86%. Conclusion: This study provides an effective deep learning approach for categorizing chest X-rays. The model’s high performance across various lung conditions suggests its potential for integration into clinical workflows, enhancing the accuracy and speed of pulmonary disease diagnosis. Full article
(This article belongs to the Special Issue Editor-in-Chief's Choices in Biomedical Informatics)
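The squeeze-and-excitation (SE) blocks added to the modified VGG16 re-weight feature channels after a convolutional stage. A minimal PyTorch sketch of a generic SE block (the reduction ratio and placement are assumptions, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Generic squeeze-and-excitation channel re-weighting block."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                     # x: (N, C, H, W) feature maps
        w = self.fc(x.mean(dim=(2, 3)))       # squeeze: global average pool per channel
        return x * w.view(*w.shape, 1, 1)     # excite: rescale each channel

out = SEBlock(512)(torch.randn(1, 512, 14, 14))   # e.g. after a late VGG16 stage
```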

23 pages, 1660 KiB  
Article
Using Large Language Models for Microbiome Findings Reports in Laboratory Diagnostics
by Thomas Krause, Laura Glau, Patrick Newels, Thoralf Reis, Marco X. Bornschlegl, Michael Kramer and Matthias L. Hemmje
BioMedInformatics 2024, 4(3), 1979-2001; https://doi.org/10.3390/biomedinformatics4030108 - 5 Sep 2024
Viewed by 944
Abstract
Background: Advancements in genomic technologies are rapidly evolving, with the potential to transform laboratory diagnostics by enabling high-throughput analysis of complex biological data, such as microbiome data. Large Language Models (LLMs) have shown significant promise in extracting actionable insights from vast datasets, but their application in generating microbiome findings reports with clinical interpretations and lifestyle recommendations has not been explored yet. Methods: This article introduces an innovative framework that utilizes LLMs to automate the generation of findings reports in the context of microbiome diagnostics. The proposed model integrates LLMs within an event-driven, workflow-based architecture, designed to enhance scalability and adaptability in clinical laboratory environments. Special focus is given to aligning the model with clinical standards and regulatory guidelines such as the In-Vitro Diagnostic Regulation (IVDR) and the guidelines published by the High-Level Expert Group on Artificial Intelligence (HLEG AI). The implementation of this model was demonstrated through a prototype called “MicroFlow”. Results: The implementation of MicroFlow indicates the viability of automating findings report generation using LLMs. Initial evaluation by laboratory expert users indicated that the integration of LLMs is promising, with the generated reports being plausible and useful, although further testing on real-world data is necessary to assess the model’s accuracy and reliability. Conclusions: This work presents a potential approach for using LLMs to support the generation of findings reports in microbiome diagnostics. While the initial results seem promising, further evaluation and refinement are needed to ensure the model’s effectiveness and adherence to clinical standards. Future efforts will focus on improvements based on feedback from laboratory experts and comprehensive testing on real patient data. Full article
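The report-generation step in a workflow of this kind ultimately reduces to turning structured microbiome results into a constrained prompt for the LLM. A deliberately generic Python sketch of that step (the function name and prompt template are hypothetical; MicroFlow's actual implementation is not described in this abstract):

```python
# Hypothetical prompt-assembly step for an LLM-based microbiome findings report.
def build_report_prompt(sample_id, taxa_abundances):
    findings = "\n".join(f"- {taxon}: {abundance:.1%} relative abundance"
                         for taxon, abundance in sorted(taxa_abundances.items()))
    return (
        f"You are drafting a microbiome findings report for sample {sample_id}.\n"
        f"Observed taxa:\n{findings}\n"
        "Summarize clinically relevant findings and cautious lifestyle advice; "
        "do not state diagnoses."
    )

print(build_report_prompt("S-001", {"Akkermansia muciniphila": 0.021, "Escherichia coli": 0.004}))
```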

30 pages, 27197 KiB  
Article
Finite Element Analysis of the Bearing Component of Total Ankle Replacement Implants during the Stance Phase of the Gait Cycle
by Timothy S. Jain, Mohammad Noori, Joseph J. Rencis, Amanda Anderson, Naudereh Noori and Scott Hazelwood
BioMedInformatics 2024, 4(3), 1949-1978; https://doi.org/10.3390/biomedinformatics4030107 - 3 Sep 2024
Viewed by 845
Abstract
Total ankle arthroplasty (TAA) is a motion-preserving treatment for end-stage ankle arthritis. An effective tool for analyzing these implants’ mechanical performance and longevity in silico is finite element analysis (FEA). An FEA in ABAQUS was used to statically analyze the mechanical behavior of the ultra-high-molecular-weight polyethylene (UHMWPE) bearing component at varying dorsiflexion/plantarflexion ankle angles and axial loading conditions during the stance phase of the gait cycle for a single cycle. The von Mises stress and contact pressure were examined on the articulating surface of the bearing component in two newly installed fixed-bearing TAA implants (Wright Medical INBONE II and Exactech Vantage). Six different FEA models of variable ankle compressive load levels and ankle angle positions, for the varying subphases of the stance phase of the gait cycle, were created. The components in these models were constrained to be conducive to the bone–implant interface, where implant loosening occurs. Our results showed that the von Mises stress and contact pressure distributions increased as the compressive load increased. The highest stress was noted at dorsiflexion angles > 15°, in areas where the UHMWPE liner was thinnest, at the edges of the talar and UHMWPE components, and during the terminal stance phase of the gait cycle. This static structural analysis highlighted that these failure regions are susceptible to yielding and wear and indicated stress magnitudes that are in agreement (within 25%) with those in previous static structural TAA FEAs. The mechanical wear of the UHMWPE bearing component in TAA can lead to aseptic loosening and peri-implant cyst formation over time, requiring surgical revision. This study provides ankle replacement manufacturers and orthopedic surgeons with a better understanding of the stress response and contact pressure sustained by TAA implants, which is critical to optimizing implant longevity and improving patient care. Full article
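For reference, the von Mises stress evaluated on the bearing surface is the standard equivalent stress; in terms of principal stresses it is

\sigma_{v} = \sqrt{\tfrac{1}{2}\left[(\sigma_1-\sigma_2)^2 + (\sigma_2-\sigma_3)^2 + (\sigma_3-\sigma_1)^2\right]},

and yielding is predicted where \sigma_{v} approaches the yield stress of the UHMWPE liner.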

15 pages, 8050 KiB  
Article
Diffusion-Based Image Synthesis or Traditional Augmentation for Enriching Musculoskeletal Ultrasound Datasets
by Benedek Balla, Atsuhiro Hibi and Pascal N. Tyrrell
BioMedInformatics 2024, 4(3), 1934-1948; https://doi.org/10.3390/biomedinformatics4030106 - 29 Aug 2024
Viewed by 880
Abstract
Background: Machine learning models can provide quick and reliable assessments in place of medical practitioners. With over 50 million adults in the United States suffering from osteoarthritis, there is a need for models capable of interpreting musculoskeletal ultrasound images. However, machine learning requires lots of data, which poses significant challenges in medical imaging. Therefore, we explore two strategies for enriching a musculoskeletal ultrasound dataset independent of these limitations: traditional augmentation and diffusion-based image synthesis. Methods: First, we generate augmented and synthetic images to enrich our dataset. Then, we compare the images qualitatively and quantitatively, and evaluate their effectiveness in training a deep learning model for detecting thickened synovium and knee joint recess distension. Results: Our results suggest that synthetic images exhibit some anatomical fidelity, diversity, and help a model learn representations consistent with human opinion. In contrast, augmented images may impede model generalizability. Finally, a model trained on synthetically enriched data outperforms models trained on un-enriched and augmented datasets. Conclusions: We demonstrate that diffusion-based image synthesis is preferable to traditional augmentation. Our study underscores the importance of leveraging dataset enrichment strategies to address data scarcity in medical imaging and paves the way for the development of more advanced diagnostic tools. Full article
(This article belongs to the Section Imaging Informatics)
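The traditional-augmentation arm of the comparison can be illustrated with a standard torchvision pipeline; the specific transforms and parameters below are generic assumptions rather than the study's exact settings:

```python
from torchvision import transforms

# Generic ultrasound-style augmentation pipeline (illustrative parameters).
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])
# augmented = augment(pil_image)  # apply to each PIL training image
```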

32 pages, 994 KiB  
Article
ORASIS-MAE Harnesses the Potential of Self-Learning from Partially Annotated Clinical Eye Movement Records
by Alae Eddine El Hmimdi, Themis Palpanas and Zoï Kapoula
BioMedInformatics 2024, 4(3), 1902-1933; https://doi.org/10.3390/biomedinformatics4030105 - 26 Aug 2024
Viewed by 553
Abstract
Self-supervised learning (SSL) has gained significant attention in the past decade for its capacity to utilize non-annotated datasets to learn meaningful data representations. In the medical domain, the challenge of constructing large annotated datasets presents a significant limitation, rendering SSL an ideal approach to address this constraint. In this study, we introduce a novel pretext task tailored to stimulus-driven eye movement data, along with a denoising task to improve the robustness against simulated eye tracking failures. Our proposed task aims to capture both the characteristics of the pilot (brain) and the motor (eye) by learning to reconstruct the eye movement position signal using up to 12.5% of the unmasked eye movement signal patches, along with the entire REMOBI target signal. Thus, the encoder learns a high-dimensional representation using a multivariate time series of length 8192 points, corresponding to approximately 40 s. We evaluate the learned representation on screening eight distinct groups of pathologies, including dyslexia, reading disorder, and attention deficit disorder, across four datasets of varying complexity and size. Furthermore, we explore various head architecture designs along with different transfer learning methods, demonstrating promising results with improvements of up to approximately 15%, leading to an overall macro F1 score of 61% and 61.5% on the Saccade and the Vergence datasets, respectively. Notably, our method achieves macro F1 scores of 64.7%, 66.1%, and 61.1% for screening dyslexia, reading disorder, and attention deficit disorder, respectively, on clinical data. These findings underscore the potential of self-learning algorithms in pathology screening, particularly in domains involving complex data such as stimulus-driven eye movement analysis. Full article
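The pretext task keeps only about 12.5% of the eye-movement signal patches visible and asks the encoder to reconstruct the rest. A small NumPy sketch of that masking step (the 12.5% ratio and 8192-point length follow the abstract; the patch length and everything else are illustrative):

```python
import numpy as np

def mask_patches(signal, patch_len=64, keep_ratio=0.125):
    """Split a (length, channels) signal into patches and hide all but keep_ratio of them."""
    n_patches = signal.shape[0] // patch_len
    patches = signal[: n_patches * patch_len].reshape(n_patches, patch_len, -1)
    keep = np.random.rand(n_patches) < keep_ratio          # visible patches
    masked = patches.copy()
    masked[~keep] = 0.0                                     # hidden patches to reconstruct
    return masked.reshape(-1, signal.shape[1]), keep

x = np.random.randn(8192, 2)                 # ~40 s multivariate eye-movement record
masked_x, visible = mask_patches(x)
```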

1 pages, 143 KiB  
Retraction
RETRACTED: Sankar et al. Utilizing Generative Adversarial Networks for Acne Dataset Generation in Dermatology. BioMedInformatics 2024, 4, 1059–1070
by Aravinthan Sankar, Kunal Chaturvedi, Al-Akhir Nayan, Mohammad Hesam Hesamian, Ali Braytee and Mukesh Prasad
BioMedInformatics 2024, 4(3), 1901; https://doi.org/10.3390/biomedinformatics4030104 - 12 Aug 2024
Viewed by 629
Abstract
The journal retracts the article, “Utilizing Generative Adversarial Networks for Acne Dataset Generation in Dermatology” [...] Full article
17 pages, 2426 KiB  
Article
Approaches to Extracting Patterns of Service Utilization for Patients with Complex Conditions: Graph Community Detection vs. Natural Language Processing Clustering
by Jonas Bambi, Hanieh Sadri, Ken Moselle, Ernie Chang, Yudi Santoso, Joseph Howie, Abraham Rudnick, Lloyd T. Elliott and Alex Kuo
BioMedInformatics 2024, 4(3), 1884-1900; https://doi.org/10.3390/biomedinformatics4030103 - 9 Aug 2024
Cited by 1 | Viewed by 2390
Abstract
Background: As patients interact with a healthcare service system, patterns of service utilization (PSUs) emerge. These PSUs are embedded in the sparse high-dimensional space of longitudinal cross-continuum health service encounter data. Once extracted, PSUs can provide quality assurance/quality improvement (QA/QI) efforts with the information required to optimize service system structures and functions. This may improve outcomes for complex patients with chronic diseases. Method: Working with longitudinal cross-continuum encounter data from a regional health service system, various pattern detection analyses were conducted, employing (1) graph community detection algorithms, (2) natural language processing (NLP) clustering, and (3) a hybrid NLP–graph method. Result: These approaches produced similar PSUs, as determined from a clinical perspective by clinical subject matter experts and service system operations experts. Conclusions: The similarity in the results provides validation for the methodologies. Moreover, the results stress the need to engage with clinical or service system operations experts, both in providing the taxonomies and ontologies of the service system, the cohort definitions, and determining the level of granularity that produces the most clinically meaningful results. Finally, the uniqueness of each approach provides an opportunity to take advantage of the various analytical capabilities that each approach brings, which will be further explored in our future research. Full article
(This article belongs to the Section Clinical Informatics)
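The graph-based arm of the comparison treats shared service encounters as weighted edges between patients and extracts communities as candidate patterns of service utilization. A minimal NetworkX sketch (the toy graph and the specific community algorithm are illustrative; the study's service-system graph and parameters are not reproduced here):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy patient-similarity graph: edge weight = number of shared service encounters.
G = nx.Graph()
G.add_weighted_edges_from([
    ("p1", "p2", 5), ("p2", "p3", 4), ("p1", "p3", 3),   # one utilization pattern
    ("p4", "p5", 6), ("p5", "p6", 2),                     # another pattern
])

communities = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in communities])   # candidate patterns of service utilization (PSUs)
```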

19 pages, 4934 KiB  
Article
Cinco de Bio: A Low-Code Platform for Domain-Specific Workflows for Biomedical Imaging Research
by Colm Brandon, Steve Boßelmann, Amandeep Singh, Stephen Ryan, Alexander Schieweck, Eanna Fennell, Bernhard Steffen and Tiziana Margaria
BioMedInformatics 2024, 4(3), 1865-1883; https://doi.org/10.3390/biomedinformatics4030102 - 9 Aug 2024
Cited by 2 | Viewed by 815
Abstract
Background: In biomedical imaging research, experimental biologists generate vast amounts of data that require advanced computational analysis. Breakthroughs in experimental techniques, such as multiplex immunofluorescence tissue imaging, enable detailed proteomic analysis, but most biomedical researchers lack the programming and Artificial Intelligence (AI) expertise to leverage these innovations effectively. Methods: Cinco de Bio (CdB) is a web-based, collaborative low-code/no-code modelling and execution platform designed to address this challenge. It is designed along Model-Driven Development (MDD) and Service-Orientated Architecture (SOA) to enable modularity and scalability, and it is underpinned by formal methods to ensure correctness. The pre-processing of immunofluorescence images illustrates the ease of use and ease of modelling with CdB in comparison with the current, mostly manual, approaches. Results: CdB simplifies the deployment of data processing services that may use heterogeneous technologies. User-designed models support both a collaborative and user-centred design for biologists. Domain-Specific Languages for the Application domain (A-DSLs) are supported through data and process ontologies/taxonomies. They allow biologists to effectively model workflows in the terminology of their field. Conclusions: Comparative analysis of similar platforms in the literature illustrates the superiority of CdB along a number of comparison dimensions. We are expanding the platform’s capabilities and applying it to other domains of biomedical research. Full article
(This article belongs to the Special Issue Feature Papers in Computational Biology and Medicine)

30 pages, 1329 KiB  
Review
Understanding and Therapeutic Application of Immune Response in Major Histocompatibility Complex (MHC) Diversity Using Multimodal Artificial Intelligence
by Yasunari Matsuzaka and Ryu Yashiro
BioMedInformatics 2024, 4(3), 1835-1864; https://doi.org/10.3390/biomedinformatics4030101 - 5 Aug 2024
Viewed by 1142
Abstract
Human Leukocyte Antigen (HLA) acts like a device that monitors the internal environment of the body. T lymphocytes immediately recognize HLA molecules expressed on the surface of another individual's cells and attack them; this helps defeat microorganisms but is also one of the causes of rejection in organ transplants performed between people with unmatched HLA types. Over 2850 and 3580 different polymorphisms have been reported worldwide for HLA-A and HLA-B, respectively. HLA genes are associated with the risk of developing a variety of diseases, including autoimmune diseases, and play an important role in pathological conditions. By using multi-task learning, a deep learning approach that simultaneously predicts the gene sequences of multiple HLA genes, it is possible to improve accuracy and shorten execution time. Some new systems use convolutional neural networks (CNNs), which consist of many layers and can learn complex correlations between SNP information and HLA gene sequences from HLA imputation reference data that serve as training data. The learned model can then output predicted HLA gene sequences with high accuracy using SNP information as input. To investigate which part of the input information surrounding the HLA gene is used for prediction, the learned model was visualized; predictions drew not only on a small number of nearby SNPs but also on many SNPs distributed over a wider area. Conventional methods learn well from nearby SNPs but poorly from SNPs at distant locations, and the newer systems appear to have improved prediction accuracy because this limitation was overcome. HLA genes are involved in the onset of a variety of diseases and are attracting attention as an important area for elucidating pathological conditions and realizing personalized medicine. Multi-task learning was applied to two different HLA imputation reference panels, a Japanese panel (n = 1118) and the Type 1 Diabetes Genetics Consortium panel (n = 5122). Through 10-fold cross-validation on these panels, multi-task learning achieved higher imputation accuracy than conventional methods, especially for imputing low-frequency and rare HLA alleles. The increased prediction accuracy of HLA gene sequences is expected to improve the reliability of HLA analysis, including integrated analysis across different ancestral populations, and to contribute greatly to the identification of HLA gene sequences associated with diseases and the further elucidation of pathological conditions. Full article
(This article belongs to the Special Issue Feature Papers on Methods in Biomedical Informatics)
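The multi-task learning described here shares one encoder over the SNP input and attaches a separate output head per HLA gene. A schematic PyTorch sketch (layer sizes, SNP count, and allele counts are placeholders, not the published model):

```python
import torch
import torch.nn as nn

class MultiTaskHLAImputer(nn.Module):
    """Shared SNP encoder with one classification head per HLA gene (toy sizes)."""
    def __init__(self, n_snps=1000, alleles_per_gene=None):
        super().__init__()
        alleles_per_gene = alleles_per_gene or {"HLA-A": 100, "HLA-B": 150}
        self.encoder = nn.Sequential(nn.Linear(n_snps, 256), nn.ReLU())
        self.heads = nn.ModuleDict({g: nn.Linear(256, k) for g, k in alleles_per_gene.items()})

    def forward(self, snps):                       # snps: (batch, n_snps)
        z = self.encoder(snps)
        return {gene: head(z) for gene, head in self.heads.items()}

out = MultiTaskHLAImputer()(torch.randn(4, 1000))  # dict of per-gene allele logits
```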

13 pages, 1060 KiB  
Article
A Computational Approach to Demonstrate the Control of Gene Expression via Chromosomal Access in Colorectal Cancer
by Caleb J. Pecka, Ishwor Thapa, Amar B. Singh and Dhundy Bastola
BioMedInformatics 2024, 4(3), 1822-1834; https://doi.org/10.3390/biomedinformatics4030100 - 2 Aug 2024
Viewed by 885
Abstract
Background: Improved technologies for chromatin accessibility sequencing such as ATAC-seq have increased our understanding of gene regulation mechanisms, particularly in disease conditions such as cancer. Methods: This study introduces a computational tool that quantifies and establishes connections between chromatin accessibility, transcription factor binding, transcription factor mutations, and gene expression using publicly available colorectal cancer data. The tool has been packaged using a workflow management system to allow biologists and researchers to reproduce the results of this study. Results: We present compelling evidence linking chromatin accessibility to gene expression, with particular emphasis on SNP mutations and the accessibility of transcription factor genes. Furthermore, we have identified significant upregulation of key transcription factor interactions in colon cancer patients, including the apoptotic regulation facilitated by E2F1, MYC, and MYCN, as well as activation of the BCL-2 protein family facilitated by TP73. Conclusion: This study demonstrates the effectiveness of the computational tool in linking chromatin accessibility to gene expression and highlights significant transcription factor interactions in colorectal cancer. The code for this project is openly available on GitHub. Full article

15 pages, 2611 KiB  
Article
ELIPF: Explicit Learning Framework for Pre-Emptive Forecasting, Early Detection and Curtailment of Idiopathic Pulmonary Fibrosis Disease
by Tagne Poupi Theodore Armand, Md Ariful Islam Mozumder, Kouayep Sonia Carole, Opeyemi Deji-Oloruntoba, Hee-Cheol Kim and Simeon Okechukwu Ajakwe
BioMedInformatics 2024, 4(3), 1807-1821; https://doi.org/10.3390/biomedinformatics4030099 - 1 Aug 2024
Viewed by 788
Abstract
(1) Background: Among lung diseases, idiopathic pulmonary fibrosis (IPF) appears to be the most common type and causes scarring (fibrosis) of the lungs. IPF disease patients are recommended to undergo lung transplants, or they may witness progressive and irreversible lung damage that will subsequently lead to death. In cases of irreversible damage, it becomes important to predict the patient’s mortality status. Traditional healthcare does not provide sophisticated tools for such predictions. Still, because artificial intelligence has effectively shown its capability to manage crucial healthcare situations, it is possible to predict patients’ mortality using machine learning techniques. (2) Methods: This research proposed a soft voting ensemble model applied to the top 30 best-fit clinical features to predict mortality risk for patients with idiopathic pulmonary fibrosis. Five machine learning algorithms were used for it, namely random forest (RF), support vector machine (SVM), gradient boosting machine (GBM), XGboost (XGB), and multi-layer perceptron (MLP). (3) Results: A soft voting ensemble method applied with the combined results of the classifiers showed an accuracy of 79.58%, sensitivity of 86%, F1-score of 84%, prediction error of 0.19, and responsiveness of 0.47. (4) Conclusions: Our proposed model will be helpful for physicians to make the right decision and keep track of the disease, thus reducing the mortality risk, improving the overall health condition of patients, and managing patient stratification. Full article
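A soft voting ensemble of the kind described averages predicted class probabilities across the base learners. A minimal scikit-learn sketch with three of the five listed algorithms (hyperparameters and the synthetic data are placeholders standing in for the 30 selected clinical features):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=30, random_state=0)  # stand-in data
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0)),
                ("gbm", GradientBoostingClassifier(random_state=0))],
    voting="soft",               # average class probabilities across learners
)
ensemble.fit(X, y)
print(ensemble.predict_proba(X[:2]))
```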

24 pages, 566 KiB  
Review
Recent Computational Approaches in Understanding the Links between Molecular Stress and Cancer Metastasis
by Eugenia Papadaki, Petros Paplomatas, Panagiotis Vlamos and Aristidis G. Vrahatis
BioMedInformatics 2024, 4(3), 1783-1806; https://doi.org/10.3390/biomedinformatics4030098 - 31 Jul 2024
Viewed by 927
Abstract
In the modern era of medicine, advancements in data science and biomedical technologies have revolutionized our understanding of diseases. Cancer, being a complex disease, has particularly benefited from the wealth of molecular data available, which can now be analyzed using cutting-edge artificial intelligence (AI) and information science methods. In this context, recent studies have increasingly recognized chronic stress as a significant factor in cancer progression. Utilizing computational methods to address this matter has demonstrated encouraging advancements, providing a hopeful outlook in our efforts to combat cancer. This review focuses on recent computational approaches in understanding the molecular links between stress and cancer metastasis. Specifically, we explore the utilization of single-cell data, an innovative technique in DNA sequencing that allows for detailed analysis. Additionally, we explore the application of AI and data mining techniques to these complex and large-scale datasets. Our findings underscore the potential of these computational pipelines to unravel the intricate relationship between stress and cancer metastasis. However, it is important to note that this field is still in its early stages, and we anticipate a proliferation of similar approaches in the near future, further advancing our understanding and treatment of cancer. Full article

10 pages, 2432 KiB  
Article
Replies to Queries in Gynecologic Oncology by Bard, Bing and the Google Assistant
by Edward J. Pavlik, Dharani D. Ramaiah, Taylor A. Rives, Allison L. Swiecki-Sikora and Jamie M. Land
BioMedInformatics 2024, 4(3), 1773-1782; https://doi.org/10.3390/biomedinformatics4030097 - 24 Jul 2024
Cited by 1 | Viewed by 791
Abstract
When women receive a diagnosis of a gynecologic malignancy, they can have questions about their diagnosis or treatment that can result in voice queries to virtual assistants for more information. Recent advancement in artificial intelligence (AI) has transformed the landscape of medical information accessibility. The Google virtual assistant (VA) outperformed Siri, Alexa and Cortana in voice queries presented prior to the explosive implementation of AI in early 2023. The efforts presented here focus on determining if advances in AI in the last 12 months have improved the accuracy of Google VA responses related to gynecologic oncology. Previous questions were utilized to form a common basis for queries prior to 2023 and responses in 2024. Correct answers were obtained from the UpToDate medical resource. Responses related to gynecologic oncology were obtained using Google VA, as well as the generative AI chatbots Google Bard/Gemini and Microsoft Bing-Copilot. The AI narrative responses varied in length and positioning of answers within the response. Google Bard/Gemini achieved an 87.5% accuracy rate, while Microsoft Bing-Copilot reached 83.3%. In contrast, the Google VA’s accuracy in audible responses improved from 18% prior to 2023 to 63% in 2024. While the accuracy of the Google VA has improved in the last year, it underperformed Google Bard/Gemini and Microsoft Bing-Copilot so there is considerable room for further improved accuracy. Full article
(This article belongs to the Special Issue Feature Papers in Computational Biology and Medicine)

16 pages, 1024 KiB  
Review
Should AI-Powered Whole-Genome Sequencing Be Used Routinely for Personalized Decision Support in Surgical Oncology—A Scoping Review
by Kokiladevi Alagarswamy, Wenjie Shi, Aishwarya Boini, Nouredin Messaoudi, Vincent Grasso, Thomas Cattabiani, Bruce Turner, Roland Croner, Ulf D. Kahlert and Andrew Gumbs
BioMedInformatics 2024, 4(3), 1757-1772; https://doi.org/10.3390/biomedinformatics4030096 - 24 Jul 2024
Cited by 1 | Viewed by 1179
Abstract
In this scoping review, we delve into the transformative potential of artificial intelligence (AI) in addressing challenges inherent in whole-genome sequencing (WGS) analysis, with a specific focus on its implications in oncology. Unveiling the limitations of existing sequencing technologies, the review illuminates how AI-powered methods emerge as innovative solutions to surmount these obstacles. The evolution of DNA sequencing technologies, progressing from Sanger sequencing to next-generation sequencing, sets the backdrop for AI’s emergence as a potent ally in processing and analyzing the voluminous genomic data generated. Particularly, deep learning methods play a pivotal role in extracting knowledge and discerning patterns from the vast landscape of genomic information. In the context of oncology, AI-powered methods exhibit considerable potential across diverse facets of WGS analysis, including variant calling, structural variation identification, and pharmacogenomic analysis. This review underscores the significance of multimodal approaches in diagnoses and therapies, highlighting the importance of ongoing research and development in AI-powered WGS techniques. Integrating AI into the analytical framework empowers scientists and clinicians to unravel the intricate interplay of genomics within the realm of multi-omics research, paving the way for more successful personalized and targeted treatments. Full article
(This article belongs to the Special Issue Feature Papers in Applied Biomedical Data Science)

12 pages, 1796 KiB  
Article
Transfer-Learning Approach for Enhanced Brain Tumor Classification in MRI Imaging
by Amarnath Amarnath, Ali Al Bataineh and Jeremy A. Hansen
BioMedInformatics 2024, 4(3), 1745-1756; https://doi.org/10.3390/biomedinformatics4030095 - 22 Jul 2024
Cited by 1 | Viewed by 1161
Abstract
Background: Intracranial neoplasm, often referred to as a brain tumor, is an abnormal growth or mass of tissues in the brain. The complexity of the brain and the associated diagnostic delays cause significant stress for patients. This study aims to enhance the efficiency of MRI analysis for brain tumors using deep transfer learning. Methods: We developed and evaluated the performance of five pre-trained deep learning models—ResNet50, Xception, EfficientNetV2-S, ResNet152V2, and VGG16—using a publicly available MRI scan dataset to classify images as glioma, meningioma, pituitary, or no tumor. Various classification metrics were used for evaluation. Results: Our findings indicate that these models can improve the accuracy of MRI analysis for brain tumor classification, with the Xception model achieving the highest performance with a test F1 score of 0.9817, followed by EfficientNetV2-S with a test F1 score of 0.9629. Conclusions: Implementing pre-trained deep learning models can enhance MRI accuracy for detecting brain tumors. Full article
(This article belongs to the Section Computational Biology and Medicine)
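Transfer learning of this kind reuses an ImageNet-pretrained backbone and trains only a new classification head. A condensed Keras sketch with the Xception backbone (the input size, pooling, and optimizer settings are generic assumptions, not the study's exact configuration):

```python
import tensorflow as tf

base = tf.keras.applications.Xception(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                                   # freeze pretrained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(4, activation="softmax"),      # glioma / meningioma / pituitary / no tumor
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```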

20 pages, 1519 KiB  
Article
Flow Analysis of Mastectomy Patients Using Length of Stay: A Single-Center Study
by Teresa Angela Trunfio and Giovanni Improta
BioMedInformatics 2024, 4(3), 1725-1744; https://doi.org/10.3390/biomedinformatics4030094 - 19 Jul 2024
Viewed by 755
Abstract
Background: Malignant breast cancer is the most common cancer affecting women worldwide. The COVID-19 pandemic appears to have slowed the diagnostic process, leading to an enhanced use of invasive approaches such as mastectomy. The increased use of a surgical procedure pushes towards an objective analysis of patient flow with measurable quality indicators such as length of stay (LOS) in order to optimize it. Methods: In this work, different regression and classification models were implemented to analyze the total LOS as a function of a set of independent variables (age, gender, pre-op LOS, discharge ward, year of discharge, type of procedure, presence of hypertension, diabetes, cardiovascular disease, respiratory disease, secondary tumors, and surgery with complications) extracted from the discharge records of patients undergoing mastectomy at the ‘San Giovanni di Dio e Ruggi d’Aragona’ University Hospital of Salerno (Italy) in the years 2011–2021. In addition, the impact of COVID-19 was assessed by statistically comparing data from patients discharged in 2018–2019 with those discharged in 2020–2021. Results: The results obtained generally show the good performance of the regression models in characterizing the particular case studies. Among the models, the best at predicting the LOS from the set of variables described above was polynomial regression, with an R2 value above 0.689. The classification algorithms that operated on a LOS divided into 3 arbitrary classes also proved to be good tools, reaching 79% accuracy with the voting classifier. Among the independent variables, both implemented models showed that the ward of discharge, year of discharge, type of procedure and complications during surgery had the greatest impact on LOS. The final focus to assess the impact of COVID-19 showed a statistically significant increase in surgical complications. Conclusion: Through this study, it was possible to validate the use of regression and classification models to characterize the total LOS of mastectomy patients. LOS proves to be an excellent indicator of performance, and through its analysis with advanced methods, such as machine learning algorithms, it is possible to understand which of the demographic and organizational variables collected have a significant impact and thus build simple predictors to support healthcare management. Full article
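The best-performing regression reported, polynomial regression on the discharge-record variables, can be sketched as a scikit-learn pipeline; the polynomial degree and the toy feature matrix below are assumptions for illustration only:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

X = np.random.rand(200, 5)                     # stand-in for encoded discharge-record variables
y = 2 + 3 * X[:, 0] + X[:, 1] ** 2 + 0.1 * np.random.randn(200)   # synthetic LOS values

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, y)
print(round(model.score(X, y), 3))             # R^2 on the synthetic data
```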

12 pages, 4106 KiB  
Article
Drug Repurposing for Amyotrophic Lateral Sclerosis Based on Gene Expression Similarity and Structural Similarity: A Cheminformatics, Genomic and Network-Based Analysis
by Katerina Kadena and Eleftherios Ouzounoglou
BioMedInformatics 2024, 4(3), 1713-1724; https://doi.org/10.3390/biomedinformatics4030093 - 18 Jul 2024
Viewed by 752
Abstract
Background: Amyotrophic Lateral Sclerosis (ALS) is a devastating neurological disorder with increasing prevalence rates. Currently, only 8 FDA-approved drugs and 44 clinical trials exist for ALS treatment, highlighting the gap in disease-specific treatment. Drug repurposing, an alternative approach, is gaining huge importance. This study aims to identify potential repurposable compounds using gene expression analysis and structural similarity approaches. Methods: GSE833 and GSE3307 were analysed to retrieve Differentially Expressed Genes (DEGs), which were used to identify compounds reversing the gene signatures from LINCS. SMILES of ALS-specific FDA-approved and clinical trial compounds were used to retrieve structurally similar drugs from DrugBank. A Drug–Target Network (DTN) was constructed for the identified compounds to retrieve drug targets, which were further subjected to functional enrichment analysis. Results: GSE833 yielded 13 significant upregulated and 5 downregulated DEGs, whereas GSE3307 yielded 280 and 430, respectively. Gene expression similarity identified 213 approved drugs. Structural similarity analysis of 44 compounds resulted in 411 approved and investigational compounds. A DTN was constructed for 266 compounds to identify drug targets. Functional enrichment analysis highlighted the neuroinflammatory response, cAMP signaling, PI3K-AKT signaling, and oxidative stress pathways. A preliminary relevancy check identified a previous association of 105 compounds with ALS research, validating the approach, with 172 potential repurposable compounds. Full article
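The structural-similarity arm of such a screen typically compares molecular fingerprints of candidate compounds against reference drugs. A short RDKit sketch of a Tanimoto comparison (the SMILES shown are generic placeholders, aspirin versus salicylic acid, not ALS drugs, and the cutoff is an assumed value):

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Placeholder SMILES standing in for a reference drug and a candidate compound.
ref = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")    # aspirin
cand = Chem.MolFromSmiles("Oc1ccccc1C(=O)O")          # salicylic acid

fp_ref = AllChem.GetMorganFingerprintAsBitVect(ref, radius=2, nBits=2048)
fp_cand = AllChem.GetMorganFingerprintAsBitVect(cand, radius=2, nBits=2048)
similarity = DataStructs.TanimotoSimilarity(fp_ref, fp_cand)
print(round(similarity, 2), similarity > 0.5)   # keep candidates above an assumed cutoff
```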

10 pages, 3736 KiB  
Article
Chauhan Weighted Trajectory Analysis Reduces Sample Size Requirements and Expedites Time-to-Efficacy Signals in Advanced Cancer Clinical Trials
by Utkarsh Chauhan, Daylen Mackey and John R. Mackey
BioMedInformatics 2024, 4(3), 1703-1712; https://doi.org/10.3390/biomedinformatics4030092 - 11 Jul 2024
Viewed by 846
Abstract
(1) Background: As Kaplan–Meier (KM) analysis is limited to single unidirectional endpoints, most advanced cancer randomized clinical trials (RCTs) are powered for either progression-free survival (PFS) or overall survival (OS). This discards efficacy information carried by partial responses, complete responses, and stable disease that frequently precede progressive disease and death. Chauhan Weighted Trajectory Analysis (CWTA) is a generalization of KM that simultaneously assesses multiple rank-ordered endpoints. We hypothesized that CWTA could use this efficacy information to reduce sample size requirements and expedite efficacy signals in advanced cancer trials. (2) Methods: We performed 100-fold and 1000-fold simulations of solid tumor systemic therapy RCTs with health statuses rank-ordered from complete response (Stage 0) to death (Stage 4). At increments of the sample size and hazard ratio, we compared KM PFS and OS with CWTA for (i) sample size requirements to achieve a power of 0.8 and (ii) the time to first significant efficacy signal. (3) Results: CWTA consistently demonstrated greater power, and it reduced the sample size requirements by 18% to 35% compared to KM PFS and 14% to 20% compared to KM OS. CWTA also expedited time-to-efficacy signals by 2- to 6-fold. (4) Conclusions: CWTA, by incorporating all efficacy signals in the cancer treatment trajectory, provides a clinically relevant reduction in the required sample size and meaningfully expedites the efficacy signals of cancer treatments compared to KM PFS and KM OS. Using CWTA rather than KM as the primary trial outcome has the potential to meaningfully reduce the numbers of patients, trial duration, and costs to evaluate therapies in advanced cancer. Full article
(This article belongs to the Section Medical Statistics and Data Science)

11 pages, 2127 KiB  
Article
Automated Classification of Collateral Circulation for Ischemic Stroke in Cone-Beam CT Images Using VGG11: A Deep Learning Approach
by Nur Hasanah Ali, Abdul Rahim Abdullah, Norhashimah Mohd Saad, Ahmad Sobri Muda and Ervina Efzan Mhd Noor
BioMedInformatics 2024, 4(3), 1692-1702; https://doi.org/10.3390/biomedinformatics4030091 - 8 Jul 2024
Viewed by 713
Abstract
Background: Ischemic stroke poses significant challenges in diagnosis and treatment, necessitating efficient and accurate methods for assessing collateral circulation, a critical determinant of patient prognosis. Manual classification of collateral circulation in ischemic stroke using traditional imaging techniques is labor-intensive and prone to subjectivity. This study presented the automated classification of collateral circulation patterns in cone-beam CT (CBCT) images, utilizing the VGG11 architecture. Methods: The study utilized a dataset of CBCT images from ischemic stroke patients, accurately labeled with their respective collateral circulation status. To ensure uniformity and comparability, image normalization was executed during the preprocessing phase to standardize pixel values to a consistent scale or range. Then, the VGG11 model is trained using an augmented dataset and classifies collateral circulation patterns. Results: Performance evaluation of the proposed approach demonstrates promising results, with the model achieving an accuracy of 58.32%, a sensitivity of 75.50%, a specificity of 44.10%, a precision of 52.70%, and an F1 score of 62.10% in classifying collateral circulation patterns. Conclusions: This approach automates classification, potentially reducing diagnostic delays and improving patient outcomes. It also lays the groundwork for future research in using deep learning for better stroke diagnosis and management. This study is a significant advancement toward developing practical tools to assist doctors in making informed decisions for ischemic stroke patients. Full article
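Adapting VGG11 to collateral-circulation grading amounts to swapping the final classifier layer for one sized to the number of collateral classes. A minimal torchvision sketch (the two-class head, untrained weights, and input size are placeholder assumptions, not the study's exact setup):

```python
import torch
from torchvision import models

model = models.vgg11(weights=None)               # backbone; the study's initialization is not specified
model.classifier[6] = torch.nn.Linear(4096, 2)   # placeholder: 2 collateral grades
x = torch.randn(1, 3, 224, 224)                  # a preprocessed CBCT slice
print(model(x).shape)                            # torch.Size([1, 2])
```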

20 pages, 742 KiB  
Article
Ensemble of HMMs for Sequence Prediction on Multivariate Biomedical Data
by Richard Fechner, Jens Dörpinghaus, Robert Rockenfeller and Jennifer Faber
BioMedInformatics 2024, 4(3), 1672-1691; https://doi.org/10.3390/biomedinformatics4030090 - 3 Jul 2024
Viewed by 763
Abstract
Background: Biomedical data are usually collections of longitudinal data assessed at certain points in time. Clinical observations assess the presence and severity of symptoms, which are the basis for the description and modeling of disease progression. Deciphering potential underlying unknowns from the distinct observations would substantially improve the understanding of pathological cascades. Hidden Markov Models (HMMs) have been successfully applied to the processing of possibly noisy continuous signals. We apply ensembles of HMMs to categorically distributed multivariate time series data, leaving space for expert domain knowledge in the prediction process. Methods: We use an ensemble of HMMs to predict the loss of free walking ability as one major clinical deterioration in the most common autosomal dominantly inherited ataxia disorder worldwide. Results: We present a prediction pipeline that processes data paired with a configuration file, enabling us to train, validate and query an ensemble of HMMs. In particular, we provide a theoretical and practical framework for multivariate time-series inference based on HMMs that includes constructing multiple HMMs, each to predict a particular observable variable. Our analysis is conducted on pseudo-data, but also on biomedical data based on Spinocerebellar ataxia type 3 disease. Conclusions: We find that the model shows promising results for the data we tested. The strength of this approach is that HMMs are well understood, probabilistic and interpretable models, setting it apart from most Deep Learning approaches. We publish all code and evaluation pseudo-data in an open-source repository. Full article
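Each HMM in such an ensemble can model one categorical observable over visits; hmmlearn's CategoricalHMM (named MultinomialHMM in older releases) is one possible building block. A toy sketch, with invented states, symbols, and data, not the paper's pipeline:

```python
import numpy as np
from hmmlearn.hmm import CategoricalHMM  # requires hmmlearn >= 0.3

# Toy categorical sequence (e.g., a symptom severity grade 0-2 across visits).
obs = np.array([[0, 0, 1, 1, 2, 2, 2, 1, 2, 2]]).T      # shape (n_samples, 1)

model = CategoricalHMM(n_components=2, random_state=0, n_iter=50)
model.fit(obs)
hidden = model.predict(obs)        # decoded hidden progression states
print(hidden)
```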

34 pages, 3635 KiB  
Article
Machine Learning for Extraction of Image Features Associated with Progression of Geographic Atrophy
by Janan Arslan and Kurt Benke
BioMedInformatics 2024, 4(3), 1638-1671; https://doi.org/10.3390/biomedinformatics4030089 - 2 Jul 2024
Viewed by 863
Abstract
Background: Several studies have investigated various features and models in order to understand the growth and progression of the ocular disease geographic atrophy (GA). Commonly assessed features include age, sex, smoking, alcohol consumption, sedentary lifestyle, hypertension, and diabetes. There have been inconsistencies regarding which features correlate with GA progression. Chief amongst these inconsistencies is whether the investigated features are readily available for analysis across various ophthalmic institutions. Methods: In this study, we focused our attention on the association of fundus autofluorescence (FAF) imaging features and GA progression. Our method included feature extraction using radiomic processes and feature ranking by machine learning incorporating the algorithm XGBoost to determine the best-ranked features. This led to the development of an image-based linear mixed-effects model, which was designed to account for slope change based on within-subject variability and inter-eye correlation. Metrics used to assess the linear mixed-effects model included marginal and conditional R2, Pearson’s correlation coefficient (r), root mean square error (RMSE), mean error (ME), mean absolute error (MAE), mean absolute deviation (MAD), the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and log-likelihood. Results: We developed a linear mixed-effects model with 15 image-based features. The model results were as follows: R2 = 0.96, r = 0.981, RMSE = 1.32, ME = −7.3 × 10^−15, MAE = 0.94, MAD = 0.999, AIC = 2084.93, BIC = 2169.97, and log-likelihood = −1022.46. Conclusions: The advantage of our method is that it relies on the inherent properties of the image itself, rather than the availability of clinical or demographic data. Thus, the image features discovered in this study are universally and readily available across the board. Full article
(This article belongs to the Special Issue Editor's Choices Series for Methods in Biomedical Informatics Section)
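The two-stage workflow outlined above (radiomic feature ranking followed by a mixed-effects growth model) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the column names (patient_id, visit_years, lesion_area, feat_*) and the simulated data are assumptions, and xgboost and statsmodels are used here as stand-ins for whatever tooling the study actually employed.

```python
# Illustrative sketch: rank image features with XGBoost, then fit a linear
# mixed-effects model with a random intercept and slope over time per patient.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from xgboost import XGBRegressor

rng = np.random.default_rng(1)
n_subjects, n_visits = 40, 6
df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_subjects), n_visits),
    "visit_years": np.tile(np.arange(n_visits) * 0.5, n_subjects),
})
feature_cols = [f"feat_{i}" for i in range(30)]
for col in feature_cols:
    df[col] = rng.normal(size=len(df))
df["lesion_area"] = 2 + 1.5 * df["visit_years"] + df["feat_0"] + rng.normal(0, 0.5, len(df))

# Stage 1: rank radiomic features by XGBoost importance and keep the top 15.
xgb = XGBRegressor(n_estimators=200, max_depth=3, random_state=0)
xgb.fit(df[feature_cols], df["lesion_area"])
top15 = [c for _, c in sorted(zip(xgb.feature_importances_, feature_cols), reverse=True)[:15]]

# Stage 2: linear mixed-effects model using the top-ranked features as fixed
# effects, with per-subject random intercepts and slopes over time.
formula = "lesion_area ~ visit_years + " + " + ".join(top15)
mixed = smf.mixedlm(formula, df, groups=df["patient_id"], re_formula="~visit_years")
result = mixed.fit()
print(result.summary())
```

A per-eye grouping term could be nested within patients to mirror the inter-eye correlation mentioned in the abstract; the sketch keeps a single grouping level for brevity.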
18 pages, 4357 KiB  
Article
Harnessing Immunoinformatics for Precision Vaccines: Designing Epitope-Based Subunit Vaccines against Hepatitis E Virus
by Elijah Kolawole Oladipo, Emmanuel Oluwatobi Dairo, Comfort Olukemi Bamigboye, Ayodeji Folorunsho Ajayi, Olugbenga Samson Onile, Olumuyiwa Elijah Ariyo, Esther Moradeyo Jimah, Olubukola Monisola Oyawoye, Julius Kola Oloke, Bamidele Abiodun Iwalokun, Olumide Faith Ajani and Helen Onyeaka
BioMedInformatics 2024, 4(3), 1620-1637; https://doi.org/10.3390/biomedinformatics4030088 - 26 Jun 2024
Cited by 1 | Viewed by 1526
Abstract
Background/Objectives: Hepatitis E virus (HEV) is an RNA virus transmitted mainly through fecally contaminated water. HEV infection is a serious threat to public health globally, particularly in developing countries, with Africa among the most severely affected regions. An Africa-focused vaccine is needed to actively prevent HEV infection. Methods: This study developed an in silico epitope-based subunit vaccine, incorporating CTL, HTL, and BL epitopes with suitable linkers and adjuvants. Results: The in silico-designed vaccine construct proved immunogenic, non-allergenic, and non-toxic and displayed appropriate physicochemical properties with high solubility. The 3D structure was modeled and docked against Toll-like receptors 2, 3, 4, 6, 8, and 9, which showed stable binding, and molecular dynamics simulation indicated a steady interaction. Furthermore, immune simulation predicted that the designed vaccine would elicit immune responses when administered to humans. Lastly, codon adaptation for the E. coli K12 strain produced an optimal GC content and a high CAI value, followed by in silico cloning into the pET28b(+) vector. Conclusions: Overall, these results suggest that the designed epitope-based subunit vaccine is a promising preventive candidate against HEV, although validation via in vitro and in vivo approaches is required to confirm this. Full article
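As a generic sketch of how an epitope-based subunit construct is assembled and screened, the snippet below joins placeholder CTL, HTL, and B-cell epitopes with commonly used linkers and computes basic physicochemical properties with Biopython. The epitope sequences, adjuvant, and linker scheme are placeholders and do not reproduce the construct reported in this article.

```python
# Generic sketch of assembling an epitope-based subunit construct and checking
# basic physicochemical properties. All sequences below are placeholders, not
# the HEV construct described in the article.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

adjuvant = "MHHHHHHAPPHALS"                  # placeholder adjuvant/tag sequence
ctl_epitopes = ["YLLPRRGPRL", "FLPSDFFPSV"]  # placeholder CTL epitopes
htl_epitopes = ["PKYVKQNTLKLAT"]             # placeholder HTL epitope
bl_epitopes = ["DPAFKSWLVQ"]                 # placeholder B-cell epitope

# Join epitope classes with linkers commonly used in such designs:
# EAAAK after the adjuvant, AAY between CTL, GPGPG between HTL, KK between BL epitopes.
construct = (
    adjuvant + "EAAAK"
    + "AAY".join(ctl_epitopes) + "GPGPG"
    + "GPGPG".join(htl_epitopes) + "KK"
    + "KK".join(bl_epitopes)
)

analysis = ProteinAnalysis(construct)
print("length:", len(construct))
print("molecular weight:", round(analysis.molecular_weight(), 1))
print("theoretical pI:", round(analysis.isoelectric_point(), 2))
print("instability index:", round(analysis.instability_index(), 2))
print("GRAVY:", round(analysis.gravy(), 3))
```

Steps such as allergenicity and toxicity screening, TLR docking, codon adaptation, and in silico cloning rely on dedicated external servers and tools and are not reproduced in this sketch.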
31 pages, 4382 KiB  
Article
AR Platform for Indoor Navigation: New Potential Approach Extensible to Older People with Cognitive Impairment
by Luigi Bibbò, Alessia Bramanti, Jatin Sharma and Francesco Cotroneo
BioMedInformatics 2024, 4(3), 1589-1619; https://doi.org/10.3390/biomedinformatics4030087 - 24 Jun 2024
Cited by 1 | Viewed by 1534
Abstract
Background: Cognitive decline is one of the most significant health problems affecting older people. Because the incidence of dementia rises with age, Alzheimer's disease (AD), its most prevalent form, is expected to become more common. Patients with dementia struggle with daily activities and rely on family members or caregivers. Aging also tends to erode orientation and navigation skills, making autonomous walking difficult, particularly for individuals with Mild Cognitive Impairment (MCI) or AD. This loss is felt most acutely when older people move from familiar surroundings to nursing homes or residential facilities, which typically requires a caregiver's constant presence to keep the patient from wandering without a defined destination or entering dangerous situations. Methods: An indoor navigation system can support older patients in moving about without relying on caregivers. The aim of this study was to verify whether technology normally used for video games can be applied to develop such a system, an approach for which there is no evidence in the literature. Results: We developed an easy-to-use solution that can be extended to patients with MCI, easing the caregivers' workload and improving patient safety. The method used Unity with the Vuforia augmented reality platform to produce an AR application (APK) for smartphones. Conclusions: The model differs from traditional techniques in that it does not use arrows or labels to identify the desired destination. The solution was tested in the laboratory with staff members; no animals were involved. Destinations were reached successfully with an error of 2%, and scores on a set of evaluation parameters for the use of the model were all close to the maximum expected values. Future developments include testing the application with a predefined protocol in a real-world environment with MCI patients. Full article
Previous Issue
Next Issue