Topic Editors

Dr. Mizuho Nishio
Department of Radiology, Kobe University Hospital, 7-5-2 Kusunokicho, Chuo-ku, Kobe 650-0017, Japan
Dr. Koji Fujimoto
Department of Advanced Imaging in Medical Magnetic Resonance, Kyoto University, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto 606-8507, Japan

Deep Learning for Medical Image Analysis and Medical Natural Language Processing

Abstract submission deadline: closed (20 August 2024)
Manuscript submission deadline: closed (20 November 2024)
Viewed by 33,084

Topic Information

Dear Colleagues,

This Special Issue focuses on the application of deep learning to medical image analysis and medical natural language processing. We welcome original papers and review papers related to the topics below. In particular, this Special Issue welcomes papers in which medical image analysis and medical natural language processing are combined in multimodal deep learning.
Research Topics:

  • Cutting-edge deep learning methodologies and algorithms for medical image analysis and medical natural language processing.
  • Clinical applications of deep learning to medical image analysis and medical natural language processing, with a focus on cancer diagnosis and treatment.
  • Open-source deep learning software used for cancer diagnosis and treatment.
  • Open data for medical image analysis and medical natural language processing that are useful for the development and validation of deep learning.
  • Reproducibility and validation studies of open-source deep learning software used for cancer diagnosis and treatment.

Dr. Mizuho Nishio
Dr. Koji Fujimoto
Topic Editors

Keywords

  • deep learning
  • medical image analysis
  • natural language processing
  • medical imaging
  • cancer

Participating Journals

Journal Name       Abbreviation  Impact Factor  CiteScore  Launched Year  First Decision (median)  APC
Applied Sciences   applsci       2.5            5.3        2011           18.4 days                CHF 2400
Cancers            cancers       4.5            8.0        2009           17.4 days                CHF 2900
Diagnostics        diagnostics   3.0            4.7        2011           20.3 days                CHF 2600
Tomography         tomography    2.2            2.7        2015           23.8 days                CHF 2400

Preprints.org is a multidisciplinary platform providing preprint services, dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics cooperates with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of these benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with a time-stamped preprint;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit, and Europe PMC.

Published Papers (14 papers)

20 pages, 3238 KiB  
Article
Enhanced Disc Herniation Classification Using Grey Wolf Optimization Based on Hybrid Feature Extraction and Deep Learning Methods
by Yasemin Sarı and Nesrin Aydın Atasoy
Tomography 2025, 11(1), 1; https://doi.org/10.3390/tomography11010001 - 26 Dec 2024
Viewed by 611
Abstract
Due to the increasing number of people working at computers in professional settings, the incidence of lumbar disc herniation is rising. Background/Objectives: Early diagnosis and treatment of lumbar disc herniation are much more likely to yield favorable results, allowing the hernia to be treated before it develops further. The aim of this study was to classify lumbar disc herniations in a computer-aided, fully automated manner using magnetic resonance images (MRIs). Methods: This study presents a hybrid method integrating a residual network (ResNet50), grey wolf optimization (GWO), and machine learning classifiers such as the multi-layer perceptron (MLP) and support vector machine (SVM) to improve classification performance. The proposed approach begins with feature extraction using ResNet50, a deep convolutional neural network known for its robust feature representations. ResNet50's residual connections allow for effective training and high-quality feature extraction from input images. Following feature extraction, the GWO algorithm, inspired by the social hierarchy and hunting behavior of grey wolves, is employed to optimize the feature set by selecting the most relevant features. Finally, the optimized feature set is fed into machine learning classifiers (MLP and SVM) for classification. The use of various activation functions (e.g., ReLU, identity, logistic, and tanh) in the MLP and various kernel functions (e.g., linear, RBF, sigmoid, and polynomial) in the SVM allows for a thorough evaluation of the classifiers' performance. Results: The proposed methodology demonstrates significant improvements in metrics such as accuracy, precision, recall, and F1 score, outperforming traditional approaches in several cases. These results highlight the effectiveness of combining deep learning-based feature extraction with optimization and machine learning classifiers. Conclusions: Compared with other methods, such as capsule networks (CapsNet), EfficientNetB6, and DenseNet169, the proposed ResNet50-GWO-SVM approach achieved superior performance across all metrics, including accuracy, precision, recall, and F1 score, demonstrating its robustness and effectiveness in classification tasks. Full article
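The pipeline described above (deep feature extraction, metaheuristic feature selection, then a classical classifier) can be sketched with a toy binary grey wolf optimizer. This is an illustrative sketch only, not the authors' implementation: the fitness function, feature count, and parameters are invented stand-ins for the real ResNet50 features and classifier accuracy.

```python
import random

random.seed(0)

N_FEATURES = 20
INFORMATIVE = set(range(5))  # toy ground truth: features 0-4 carry the signal

def fitness(mask):
    # Reward selecting informative features, lightly penalize subset size
    hits = sum(1 for i, m in enumerate(mask) if m and i in INFORMATIVE)
    return hits - 0.1 * sum(mask)

def binarize(wolf):
    # Threshold a continuous wolf position into a binary feature mask
    return [1 if v > 0.5 else 0 for v in wolf]

def gwo_select(n_wolves=8, n_iter=50):
    wolves = [[random.random() for _ in range(N_FEATURES)] for _ in range(n_wolves)]
    for t in range(n_iter):
        # Rank wolves; the three best (alpha, beta, delta) lead the pack
        ranked = sorted(wolves, key=lambda w: fitness(binarize(w)), reverse=True)
        leaders = ranked[:3]
        a = 2 - 2 * t / n_iter  # exploration coefficient decays to 0
        new_wolves = []
        for w in wolves:
            pos = []
            for j in range(N_FEATURES):
                x = 0.0
                for leader in leaders:
                    r1, r2 = random.random(), random.random()
                    A, C = 2 * a * r1 - a, 2 * r2
                    x += leader[j] - A * abs(C * leader[j] - w[j])
                pos.append(min(1.0, max(0.0, x / 3)))  # average of leader pulls
            new_wolves.append(pos)
        wolves = new_wolves
    return binarize(max(wolves, key=lambda w: fitness(binarize(w))))

mask = gwo_select()
```

In the paper's setting, the fitness would instead be the validation accuracy of an MLP or SVM trained on the ResNet50 features selected by each wolf's mask.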

18 pages, 685 KiB  
Article
An Efficient Deep Learning Approach for Malaria Parasite Detection in Microscopic Images
by Sorio Boit and Rajvardhan Patil
Diagnostics 2024, 14(23), 2738; https://doi.org/10.3390/diagnostics14232738 - 5 Dec 2024
Viewed by 935
Abstract
Background: Malaria is a life-threatening disease spread by infected mosquitoes, affecting both humans and animals. Its symptoms range from mild to severe, including fever, muscle discomfort, coma, and kidney failure. Accurate diagnosis is crucial but challenging, relying on expert technicians to examine blood smears under a microscope. Conventional methods are inefficient, while machine learning approaches struggle with complex tasks and require extensive feature engineering. Deep learning, however, excels in complex tasks and automatic feature extraction. Objective: This paper presents EDRI, a novel hybrid deep learning model that integrates multiple architectures for malaria detection from red blood cell images. The EDRI model is designed to capture diverse features and leverage multi-scale analysis. Methods: The proposed EDRI model is trained and evaluated on the NIH Malaria dataset, comprising 27,558 labeled microscopic red blood cell images. Results: Experiments demonstrate its effectiveness, achieving an accuracy of 97.68% in detecting malaria, making it a valuable tool for clinicians and public health professionals. Conclusions: The results demonstrate the ability of the proposed model to detect malaria parasites in red blood cell images, offering a robust tool for rapid and reliable malaria diagnosis. Full article

15 pages, 5093 KiB  
Article
Automated Distal Radius and Ulna Skeletal Maturity Grading from Hand Radiographs with an Attention Multi-Task Learning Method
by Xiaowei Liu, Rulan Wang, Wenting Jiang, Zhaohua Lu, Ningning Chen and Hongfei Wang
Tomography 2024, 10(12), 1915-1929; https://doi.org/10.3390/tomography10120139 - 28 Nov 2024
Viewed by 746
Abstract
Background: Assessment of skeletal maturity is a common clinical practice to investigate adolescent growth and endocrine disorders. The distal radius and ulna (DRU) maturity classification is a practical and easy-to-use scheme that was designed for adolescent idiopathic scoliosis clinical management and presents high sensitivity in predicting the growth peak and cessation among adolescents. However, time-consuming and error-prone manual assessment limits DRU in clinical application. Methods: In this study, we propose a multi-task learning framework with an attention mechanism for the joint segmentation and classification of the distal radius and ulna in hand X-ray images. The proposed framework consists of two sub-networks: an encoder–decoder structure with attention gates for segmentation and a slight convolutional network for classification. Results: With a transfer learning strategy, the proposed framework improved DRU segmentation and classification over the single task learning counterparts and previously reported methods, achieving an accuracy of 94.3% and 90.8% for radius and ulna maturity grading. Findings: Our automatic DRU assessment platform covers the whole process of growth acceleration and cessation during puberty. Upon incorporation into advanced scoliosis progression prognostic tools, clinical decision making will be potentially improved in the conservative and operative management of scoliosis patients. Full article

11 pages, 2527 KiB  
Article
Exploring Multilingual Large Language Models for Enhanced TNM Classification of Radiology Report in Lung Cancer Staging
by Hidetoshi Matsuo, Mizuho Nishio, Takaaki Matsunaga, Koji Fujimoto and Takamichi Murakami
Cancers 2024, 16(21), 3621; https://doi.org/10.3390/cancers16213621 - 26 Oct 2024
Cited by 2 | Viewed by 1078
Abstract
Background/Objectives: This study aimed to investigate the accuracy of Tumor, Node, Metastasis (TNM) classification based on radiology reports using GPT3.5-turbo (GPT3.5) and the utility of multilingual large language models (LLMs) in both Japanese and English. Methods: Utilizing GPT3.5, we developed a system to automatically generate TNM classifications from chest computed tomography reports for lung cancer and evaluate its performance. We statistically analyzed the impact of providing full or partial TNM definitions in both languages using a generalized linear mixed model. Results: The highest accuracy was attained with full TNM definitions and radiology reports in English (M = 94%, N = 80%, T = 47%, and TNM combined = 36%). Providing definitions for each of the T, N, and M factors statistically improved their respective accuracies (T: odds ratio [OR] = 2.35, p < 0.001; N: OR = 1.94, p < 0.01; M: OR = 2.50, p < 0.001). Japanese reports exhibited decreased N and M accuracies (N accuracy: OR = 0.74 and M accuracy: OR = 0.21). Conclusions: This study underscores the potential of multilingual LLMs for automatic TNM classification in radiology reports. Even without additional model training, performance improvements were evident with the provided TNM definitions, indicating LLMs’ relevance in radiology contexts. Full article
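The prompting strategy evaluated in this study, supplying TNM definitions together with the report, can be sketched as follows. The definitions, prompt wording, and parsing below are simplified placeholders, not the actual prompts or the full staging definitions used in the paper.

```python
import re

# Simplified placeholder definitions; the study supplied full or partial
# TNM definitions for lung cancer (not reproduced here).
TNM_DEFINITIONS = {
    "T": "T1: tumor <=3 cm; T2: >3-5 cm; T3: >5-7 cm; T4: >7 cm or local invasion",
    "N": "N0: no nodal involvement; N1: ipsilateral hilar; N2: ipsilateral mediastinal; N3: contralateral",
    "M": "M0: no distant metastasis; M1: distant metastasis",
}

def build_prompt(report, factors=("T", "N", "M")):
    """Compose a prompt pairing a radiology report with TNM definitions."""
    defs = "\n".join(TNM_DEFINITIONS[f] for f in factors)
    return (
        "Using the TNM definitions below, classify the lung cancer stage "
        "from the CT report. Answer in the form 'T# N# M#'.\n\n"
        f"Definitions:\n{defs}\n\nReport:\n{report}"
    )

def parse_tnm(answer):
    """Extract (T, N, M) from a model answer such as 'T2a N1 M0'."""
    m = re.search(r"T(\w+)\s+N(\w+)\s+M(\w+)", answer)
    return m.groups() if m else None
```

The composed prompt would be sent to the LLM of choice (GPT3.5-turbo in the study); `parse_tnm` then recovers the three factors from the free-text answer for comparison against the reference classification.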

21 pages, 37600 KiB  
Article
A Multi-Hierarchical Complementary Feature Interaction Network for Accelerated Multi-Modal MR Imaging
by Haotian Zhang, Qiaoyu Ma, Yiran Qiu and Zongying Lai
Appl. Sci. 2024, 14(21), 9764; https://doi.org/10.3390/app14219764 - 25 Oct 2024
Viewed by 787
Abstract
Magnetic resonance (MR) imaging is widely used in the clinical field due to its non-invasiveness, but the long scanning time remains a bottleneck to its wider adoption. Using the complementary information between multi-modal images to accelerate imaging provides a novel and effective MR fast imaging solution. However, previous techniques mostly use simple fusion methods and fail to fully exploit the potentially sharable knowledge. In this study, we introduce a novel multi-hierarchical complementary feature interaction network (MHCFIN) to realize joint reconstruction of multi-modal MR images from undersampled data and thus accelerate multi-modal imaging. First, multiple attention mechanisms are integrated with a dual-branch encoder–decoder network to represent shared and complementary features of the different modalities. In the decoding stage, the multi-modal feature interaction module (MMFIM) acts as a bridge between the two branches, realizing complementary knowledge transfer between modalities through cross-level fusion. The single-modal feature fusion module (SMFFM) carries out multi-scale feature representation and optimization of the single modality, preserving finer anatomical details. Extensive experiments were conducted under different sampling patterns and acceleration factors. The results show that the proposed method achieves clear improvements over existing state-of-the-art reconstruction methods, both visually and quantitatively. Full article

24 pages, 1240 KiB  
Article
Hospital Re-Admission Prediction Using Named Entity Recognition and Explainable Machine Learning
by Safaa Dafrallah and Moulay A. Akhloufi
Diagnostics 2024, 14(19), 2151; https://doi.org/10.3390/diagnostics14192151 - 27 Sep 2024
Viewed by 709
Abstract
Early hospital readmission refers to unplanned emergency admission of patients within 30 days of discharge. Predicting early readmission risk before discharge can help to reduce the cost of readmissions for hospitals and decrease the death rate for Intensive Care Unit patients. In this paper, we propose a novel approach for prediction of unplanned hospital readmissions using discharge notes from the MIMIC-III database. This approach is based on first extracting relevant information from clinical reports using a pretrained Named Entity Recognition model called BioMedical-NER, which is built on Bidirectional Encoder Representations from Transformers architecture, with the extracted features then used to train machine learning models to predict unplanned readmissions. Our proposed approach achieves better results on clinical reports compared to the state-of-the-art methods, with an average precision of 88.4% achieved by the Gradient Boosting algorithm. In addition, explainable Artificial Intelligence techniques are applied to provide deeper comprehension of the predictive results. Full article
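The core idea above, turning NER output over a discharge note into fixed-length features for a classifier, can be sketched in a few lines. The entity labels and the sample note below are hypothetical illustrations, not the actual label set of the BioMedical-NER model or data from MIMIC-III.

```python
from collections import Counter

# Hypothetical entity labels; a clinical NER model would define its own set
ENTITY_LABELS = ["Disease_disorder", "Medication", "Sign_symptom", "Lab_value"]

def entities_to_features(entities):
    """Map NER output (list of (text, label) pairs) to a fixed-length count vector."""
    counts = Counter(label for _, label in entities)
    return [counts.get(lbl, 0) for lbl in ENTITY_LABELS]

# Hypothetical NER output for one discharge note
note_entities = [
    ("heart failure", "Disease_disorder"),
    ("furosemide", "Medication"),
    ("dyspnea", "Sign_symptom"),
    ("heart failure", "Disease_disorder"),
]
features = entities_to_features(note_entities)
```

Vectors built this way, one per discharge note, would then be fed to a classifier such as gradient boosting to predict 30-day readmission.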

21 pages, 24110 KiB  
Article
Magnifying Networks for Histopathological Images with Billions of Pixels
by Neofytos Dimitriou, Ognjen Arandjelović and David J. Harrison
Diagnostics 2024, 14(5), 524; https://doi.org/10.3390/diagnostics14050524 - 1 Mar 2024
Viewed by 1352
Abstract
Amongst the other benefits conferred by the shift from traditional to digital pathology is the potential to use machine learning for diagnosis, prognosis, and personalization. A major challenge in the realization of this potential emerges from the extremely large size of digitized images, which are often in excess of 100,000 × 100,000 pixels. In this paper, we tackle this challenge head-on by diverging from the existing approaches in the literature—which rely on the splitting of the original images into small patches—and introducing magnifying networks (MagNets). By using an attention mechanism, MagNets identify the regions of the gigapixel image that benefit from an analysis on a finer scale. This process is repeated, resulting in an attention-driven coarse-to-fine analysis of only a small portion of the information contained in the original whole-slide images. Importantly, this is achieved using minimal ground truth annotation, namely, using only global, slide-level labels. The results from our tests on the publicly available Camelyon16 and Camelyon17 datasets demonstrate the effectiveness of MagNets—as well as the proposed optimization framework—in the task of whole-slide image classification. Importantly, MagNets process at least five times fewer patches from each whole-slide image than any of the existing end-to-end approaches. Full article
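The attention-driven coarse-to-fine step at the heart of MagNets, analyzing only the most promising regions of a gigapixel slide at finer scale, can be sketched as a top-k selection over tile attention scores. The scores and grid below are invented for illustration; the actual networks learn these attention weights end-to-end.

```python
def select_regions(attn, k):
    """Return indices of the k highest-attention tiles (one coarse-to-fine step)."""
    return sorted(range(len(attn)), key=lambda i: attn[i], reverse=True)[:k]

# Hypothetical attention scores over a 4x4 grid of tiles at the coarse level;
# only the top-2 tiles would be magnified and re-analysed at the next scale.
attn = [0.01, 0.40, 0.02, 0.03,
        0.05, 0.02, 0.01, 0.30,
        0.02, 0.01, 0.01, 0.02,
        0.03, 0.02, 0.01, 0.04]
top_tiles = select_regions(attn, 2)
```

Applying this selection recursively at each magnification level is what lets the model touch only a small fraction of the whole-slide image while still using only slide-level labels.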

16 pages, 4554 KiB  
Article
Identifying Diabetic Retinopathy in the Human Eye: A Hybrid Approach Based on a Computer-Aided Diagnosis System Combined with Deep Learning
by Şükran Yaman Atcı, Ali Güneş, Metin Zontul and Zafer Arslan
Tomography 2024, 10(2), 215-230; https://doi.org/10.3390/tomography10020017 - 5 Feb 2024
Cited by 9 | Viewed by 2125
Abstract
Diagnosing and screening for diabetic retinopathy is a well-known issue in the biomedical field. A component of computer-aided diagnosis that has advanced significantly over the past few years as a result of the development and effectiveness of deep learning is the use of medical imagery from a patient’s eye to identify the damage caused to blood vessels. Issues with unbalanced datasets, incorrect annotations, a lack of sample images, and improper performance evaluation measures have negatively impacted the performance of deep learning models. Using three benchmark datasets of diabetic retinopathy, we conducted a detailed comparison study comparing various state-of-the-art approaches to address the effect caused by class imbalance, with precision scores of 93%, 89%, 81%, 76%, and 96%, respectively, for normal, mild, moderate, severe, and DR phases. The analyses of the hybrid modeling, including CNN analysis and SHAP model derivation results, are compared at the end of the paper, and ideal hybrid modeling strategies for deep learning classification models for automated DR detection are identified. Full article

13 pages, 2117 KiB  
Article
Real-Time Protozoa Detection from Microscopic Imaging Using YOLOv4 Algorithm
by İdris Kahraman, İsmail Rakıp Karaş and Muhammed Kamil Turan
Appl. Sci. 2024, 14(2), 607; https://doi.org/10.3390/app14020607 - 10 Jan 2024
Viewed by 3901
Abstract
Protozoa detection and classification from freshwaters using microscopic imaging are critical components of environmental monitoring, parasitology, biological research, and scientific study. Bacterial and parasitic contamination of water plays an important role in public health. Conventional methods often rely on manual identification, resulting in time-consuming analyses and limited scalability. In this study, we propose a real-time protozoa detection framework using the YOLOv4 algorithm, a state-of-the-art deep learning model known for its exceptional speed and accuracy. Our dataset consists of protozoa species such as Bdelloid Rotifera, Stylonychia Pustulata, Paramecium, Hypotrich Ciliate, Colpoda, Lepocinclis Acus, and Clathrulina Elegans, which occur in freshwaters and differ in shape, size, and movement. A distinctive feature of our work is the creation of a dataset by forming different cultures from various water sources, such as rainwater and puddles. Our network architecture is carefully tailored to optimize the detection of protozoa, ensuring precise localization and classification of individual organisms. To validate our approach, extensive experiments were conducted using real-world microscopic image datasets. The results demonstrate that the YOLOv4-based model achieves outstanding detection accuracy and significantly outperforms traditional methods in terms of speed and precision. The real-time capabilities of our framework enable rapid analysis of large-scale datasets, making it highly suitable for dynamic environments and time-sensitive applications. Furthermore, we introduce a user-friendly interface that allows researchers and environmental professionals to effortlessly deploy our YOLOv4-based protozoa detection tool. The model achieved an F1-score of 0.95, a precision of 0.92, a sensitivity of 0.98, a mAP of 0.9752, and an overall accuracy of 97%. A desktop application was then developed for testing the model. The proposed framework's speed and accuracy have significant implications for various fields, ranging from a support tool for parasitology studies to water quality assessments, offering a powerful tool to enhance our understanding and preservation of ecosystems. Full article

12 pages, 895 KiB  
Article
Artificial Intelligence and Panendoscopy—Automatic Detection of Clinically Relevant Lesions in Multibrand Device-Assisted Enteroscopy
by Francisco Mendes, Miguel Mascarenhas, Tiago Ribeiro, João Afonso, Pedro Cardoso, Miguel Martins, Hélder Cardoso, Patrícia Andrade, João P. S. Ferreira, Miguel Mascarenhas Saraiva and Guilherme Macedo
Cancers 2024, 16(1), 208; https://doi.org/10.3390/cancers16010208 - 1 Jan 2024
Cited by 2 | Viewed by 1990
Abstract
Device-assisted enteroscopy (DAE) is capable of evaluating the entire gastrointestinal tract, identifying multiple lesions. Nevertheless, DAE’s diagnostic yield is suboptimal. Convolutional neural networks (CNN) are multi-layer architecture artificial intelligence models suitable for image analysis, but there is a lack of studies about their application in DAE. Our group aimed to develop a multidevice CNN for panendoscopic detection of clinically relevant lesions during DAE. In total, 338 exams performed in two specialized centers were retrospectively evaluated, with 152 single-balloon enteroscopies (Fujifilm®, Porto, Portugal), 172 double-balloon enteroscopies (Olympus®, Porto, Portugal) and 14 motorized spiral enteroscopies (Olympus®, Porto, Portugal); then, 40,655 images were divided in a training dataset (90% of the images, n = 36,599) and testing dataset (10% of the images, n = 4066) used to evaluate the model. The CNN’s output was compared to an expert consensus classification. The model was evaluated by its sensitivity, specificity, positive (PPV) and negative predictive values (NPV), accuracy and area under the precision recall curve (AUC-PR). The CNN had an 88.9% sensitivity, 98.9% specificity, 95.8% PPV, 97.1% NPV, 96.8% accuracy and an AUC-PR of 0.97. Our group developed the first multidevice CNN for panendoscopic detection of clinically relevant lesions during DAE. The development of accurate deep learning models is of utmost importance for increasing the diagnostic yield of DAE-based panendoscopy. Full article
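The operating characteristics reported above (sensitivity, specificity, PPV, NPV, accuracy) all derive from a 2×2 confusion matrix. A minimal helper, with invented counts rather than the study's test set, shows how each is computed:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-test metrics from 2x2 confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),            # true positive rate
        "specificity": tn / (tn + fp),            # true negative rate
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Invented counts for illustration (not the study's data)
metrics = diagnostic_metrics(tp=90, fp=5, tn=95, fn=10)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on lesion prevalence in the test set, which is why the paper also reports the prevalence-robust area under the precision-recall curve.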

18 pages, 32324 KiB  
Article
DTR-GAN: An Unsupervised Bidirectional Translation Generative Adversarial Network for MRI-CT Registration
by Aolin Yang, Tiejun Yang, Xiang Zhao, Xin Zhang, Yanghui Yan and Chunxia Jiao
Appl. Sci. 2024, 14(1), 95; https://doi.org/10.3390/app14010095 - 21 Dec 2023
Cited by 2 | Viewed by 1680
Abstract
Medical image registration is a fundamental and indispensable element in medical image analysis, which can establish spatial consistency among corresponding anatomical structures across various medical images. Since images with different modalities exhibit different features, it remains a challenge to find their exact correspondence. Most of the current methods based on image-to-image translation cannot fully leverage the available information, which will affect the subsequent registration performance. To solve the problem, we develop an unsupervised multimodal image registration method named DTR-GAN. Firstly, we design a multimodal registration framework via a bidirectional translation network to transform the multimodal image registration into a unimodal registration, which can effectively use the complementary information of different modalities. Then, to enhance the quality of the transformed images in the translation network, we design a multiscale encoder–decoder network that effectively captures both local and global features in images. Finally, we propose a mixed similarity loss to encourage the warped image to be closer to the target image in deep features. We extensively evaluate methods for MRI-CT image registration tasks of the abdominal cavity with advanced unsupervised multimodal image registration approaches. The results indicate that DTR-GAN obtains a competitive performance compared to other methods in MRI-CT registration. Compared with DFR, DTR-GAN has not only obtained performance improvements of 2.35% and 2.08% in the dice similarity coefficient (DSC) of MRI-CT registration and CT-MRI registration on the Learn2Reg dataset but has also decreased the average symmetric surface distance (ASD) by 0.33 mm and 0.12 mm on the Learn2Reg dataset. Full article
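The dice similarity coefficient (DSC) used above to compare registration methods measures the overlap of two binary segmentation masks. A minimal sketch on toy flattened masks (not the study's evaluation code):

```python
def dice(a, b):
    """Dice similarity coefficient between two flat binary masks (lists of 0/1)."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

# Toy masks flattened to lists; the masks overlap on a single voxel
dsc = dice([1, 1, 0, 0], [1, 0, 0, 0])
```

DSC ranges from 0 (no overlap) to 1 (perfect overlap), so the reported gains of 2.35% and 2.08% mean the warped anatomical structures overlap the target structures more closely after DTR-GAN registration.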

13 pages, 5402 KiB  
Review
Assessment of Computed Tomography Perfusion Research Landscape: A Topic Modeling Study
by Burak B. Ozkara, Mert Karabacak, Konstantinos Margetis, Vivek S. Yedavalli, Max Wintermark and Sotirios Bisdas
Tomography 2023, 9(6), 2016-2028; https://doi.org/10.3390/tomography9060158 - 1 Nov 2023
Viewed by 3993
Abstract
The number of scholarly articles continues to rise. The continuous increase in scientific output poses a challenge for researchers, who must devote considerable time to collecting and analyzing these results. The topic modeling approach emerges as a novel response to this need. Considering the swift advancements in computed tomography perfusion (CTP), we deem it essential to launch an initiative focused on topic modeling. We conducted a comprehensive search of the Scopus database from 1 January 2000 to 16 August 2023, to identify relevant articles about CTP. Using the BERTopic model, we derived a group of topics along with their respective representative articles. For the 2020s, linear regression models were used to identify and interpret trending topics. From the most to the least prevalent, the topics that were identified include “Tumor Vascularity”, “Stroke Assessment”, “Myocardial Perfusion”, “Intracerebral Hemorrhage”, “Imaging Optimization”, “Reperfusion Therapy”, “Postprocessing”, “Carotid Artery Disease”, “Seizures”, “Hemorrhagic Transformation”, “Artificial Intelligence”, and “Moyamoya Disease”. The model provided insights into the trends of the current decade, highlighting “Postprocessing” and “Artificial Intelligence” as the most trending topics. Full article
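The trend analysis described above, fitting linear regression models to topic frequency over the 2020s, amounts to testing whether each topic's yearly share has a positive slope. A stdlib sketch with invented yearly shares (not the study's data):

```python
def trend_slope(years, freqs):
    """Ordinary least-squares slope of topic frequency over publication year."""
    n = len(years)
    mean_y, mean_f = sum(years) / n, sum(freqs) / n
    num = sum((y - mean_y) * (f - mean_f) for y, f in zip(years, freqs))
    den = sum((y - mean_y) ** 2 for y in years)
    return num / den

# Hypothetical yearly share of an "Artificial Intelligence" topic in the 2020s;
# a positive slope marks the topic as trending
years = [2020, 2021, 2022, 2023]
ai_share = [0.04, 0.07, 0.11, 0.16]
slope = trend_slope(years, ai_share)
```

In the study, BERTopic first assigns each article to a topic; slopes fitted per topic then identify "Postprocessing" and "Artificial Intelligence" as the fastest-growing ones.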

13 pages, 1034 KiB  
Article
Real-World Evidence on the Clinical Characteristics and Management of Patients with Chronic Lymphocytic Leukemia in Spain Using Natural Language Processing: The SRealCLL Study
by Javier Loscertales, Pau Abrisqueta-Costa, Antonio Gutierrez, José Ángel Hernández-Rivas, Rafael Andreu-Lapiedra, Alba Mora, Carolina Leiva-Farré, María Dolores López-Roda, Ángel Callejo-Mellén, Esther Álvarez-García and José Antonio García-Marco
Cancers 2023, 15(16), 4047; https://doi.org/10.3390/cancers15164047 - 10 Aug 2023
Cited by 4 | Viewed by 3711
Abstract
The SRealCLL study aimed to obtain real-world evidence on the clinical characteristics and treatment patterns of patients with chronic lymphocytic leukemia (CLL) using natural language processing (NLP). Electronic health records (EHRs) from seven Spanish hospitals (January 2016–December 2018) were analyzed using EHRead® technology, based on NLP and machine learning. A total of 534 CLL patients were assessed. No treatment was detected in 270 (50.6%) patients (watch-and-wait, W&W). First-line (1L) treatment was identified in 230 (43.1%) patients and relapsed/refractory (2L) treatment was identified in 58 (10.9%). The median age ranged from 71 to 75 years, with a uniform male predominance (54.8–63.8%). The main comorbidities included hypertension (W&W: 35.6%; 1L: 38.3%; 2L: 39.7%), diabetes mellitus (W&W: 24.4%; 1L: 24.3%; 2L: 31%), cardiac arrhythmia (W&W: 16.7%; 1L: 17.8%; 2L: 17.2%), heart failure (W&W 16.3%, 1L 17.4%, 2L 17.2%), and dyslipidemia (W&W: 13.7%; 1L: 18.7%; 2L: 19.0%). The most common antineoplastic treatment was ibrutinib in 1L (64.8%) and 2L (62.1%), followed by bendamustine + rituximab (12.6%), obinutuzumab + chlorambucil (5.2%), rituximab + chlorambucil (4.8%), and idelalisib + rituximab (3.9%) in 1L and venetoclax (15.5%), idelalisib + rituximab (6.9%), bendamustine + rituximab (3.5%), and venetoclax + rituximab (3.5%) in 2L. This study expands the information available on patients with CLL in Spain, describing the diversity in patient characteristics and therapeutic approaches in clinical practice. Full article

24 pages, 11744 KiB  
Review
A Review of Machine Learning Techniques for the Classification and Detection of Breast Cancer from Medical Images
by Reem Jalloul, H. K. Chethan and Ramez Alkhatib
Diagnostics 2023, 13(14), 2460; https://doi.org/10.3390/diagnostics13142460 - 24 Jul 2023
Cited by 20 | Viewed by 6968
Abstract
Cancer is an incurable disease based on unregulated cell division. Breast cancer is the most prevalent cancer in women worldwide, and early detection can lower death rates. Medical images can be used to find important information for locating and diagnosing breast cancer. The best information for identifying and diagnosing breast cancer comes from medical pictures. This paper reviews the history of the discipline and examines how deep learning and machine learning are applied to detect breast cancer. The classification of breast cancer, using several medical imaging modalities, is covered in this paper. Numerous medical imaging modalities’ classification systems for tumors, non-tumors, and dense masses are thoroughly explained. The differences between various medical image types are initially examined using a variety of study datasets. Following that, numerous machine learning and deep learning methods exist for diagnosing and classifying breast cancer. Finally, this review addressed the challenges of categorization and detection and the best results of different approaches. Full article
