Artificial Intelligence (AI) and Machine Learning (ML) in Medical Imaging Informatics towards Diagnostic Decision Making

A special issue of Healthcare (ISSN 2227-9032). This special issue belongs to the section "Artificial Intelligence in Medicine".

Deadline for manuscript submissions: closed (31 May 2023) | Viewed by 47168

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editor


Dr. Md Mahmudur Rahman
Guest Editor
Department of Computer Science, Morgan State University, Baltimore, MD 21251, USA
Interests: computer vision; image processing; information retrieval; machine learning; deep learning; classification, retrieval and interpretation of medical images; medical caption generation; explainable AI

Special Issue Information

Dear Colleagues,

Medical imaging informatics and image-based medical diagnosis are among the most important service areas in the healthcare sector. A large number of medical images of various modalities (CT, MRI, X-ray, ultrasound, etc.) are generated by hospitals and clinics every day. Such images constitute an important source of anatomical and functional information for the diagnosis of diseases, medical research, and education. According to the Society for Imaging Informatics in Medicine (SIIM), “Imaging informatics touches every aspect of the imaging chain from image creation and acquisition to image distribution and management, to image storage and retrieval, to image processing, analysis and understanding, to image visualization and data navigation, to image interpretation, reporting, and communications. The field serves as the integrative catalyst for these processes and forms a bridge with imaging and other medical disciplines.”

The field has emerged as one of the fastest-growing research areas in recent years, driven by the evolution of techniques in radiology, molecular imaging, anatomical imaging, and functional imaging, as well as advancements in imaging biomarker generation. In particular, over the past decade, research in this field has increasingly been dominated by Artificial Intelligence (AI) and Machine Learning (ML). Currently, substantial efforts are being devoted to enriching medical imaging applications with Deep Learning (DL) for detection, segmentation, diagnosis, annotation, summarization, and prediction.

This Special Issue (SI) invites manuscripts (research, review, and case studies) on ongoing progress and related developments in AI- and ML-based (especially DL-based) medical imaging informatics that influence human health and healthcare systems through the diagnostic decision-making process. The SI covers topics across the spectrum of medical imaging informatics, considering both the breadth of imaging modalities (e.g., optical and molecular, in addition to traditional diagnostic modalities) and the diversity of specialties that depend on imaging information (e.g., radiology, dermatology, pathology, and surgery).

Research areas may include (but are not limited to) the following:

  • AI/ML-based classification and object (concept) detection in medical imaging;
  • Pattern recognition and reasoning for specific diseases in medical imaging;
  • AI/ML-based disease detection and diagnosis;
  • ML/DL in radiology;
  • Skin cancer diagnosis in dermoscopic images;
  • AI/ML in oncology;
  • AI-based analysis of histopathological images;
  • Retinal imaging and image analysis;
  • AI-based peripheral blood smear image analysis;
  • Application of ML to multimodal medical data;
  • AI/ML-based decision support systems (DSSs);
  • AI/ML-based screening systems;
  • DL-based biomedical image classification;
  • Multi-class and multi-label classification of biomedical images;
  • ML/DL-based biomedical image retrieval systems;
  • DL-based automatic medical image annotation and summarization;
  • Explainable AI in medical imaging and DSSs;
  • Addressing bias in biomedical image data and imaging informatics.

Dr. Md Mahmudur Rahman
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Healthcare is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical imaging
  • medical imaging informatics
  • decision support system
  • diagnostic aid
  • computer-aided diagnostic (CAD) systems
  • AI-based screening system
  • medical image classification
  • biomedical image retrieval
  • medical image annotation
  • biomedical image summarization

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (13 papers)

Research

16 pages, 5630 KiB  
Article
An International Non-Inferiority Study for the Benchmarking of AI for Routine Radiology Cases: Chest X-ray, Fluorography and Mammography
by Kirill Arzamasov, Yuriy Vasilev, Anton Vladzymyrskyy, Olga Omelyanskaya, Igor Shulkin, Darya Kozikhina, Inna Goncharova, Pavel Gelezhe, Yury Kirpichev, Tatiana Bobrovskaya and Anna Andreychenko
Healthcare 2023, 11(12), 1684; https://doi.org/10.3390/healthcare11121684 - 8 Jun 2023
Cited by 1 | Viewed by 2026
Abstract
An international reader study was conducted to gauge the average diagnostic accuracy of radiologists interpreting chest X-ray images, including those from fluorography and mammography, and to establish requirements for stand-alone radiological artificial intelligence (AI) models. The retrospective studies in the datasets were labelled as containing or not containing target pathological findings based on a consensus of two experienced radiologists, and the results of a laboratory test and follow-up examination, where applicable. A total of 204 radiologists from 11 countries with varying levels of experience assessed the dataset with a 5-point Likert scale via a web platform. Eight commercial radiological AI models analyzed the same dataset. The AUROC was 0.87 (95% CI 0.83–0.90) for AI versus 0.96 (95% CI 0.94–0.97) for radiologists. Sensitivity and specificity were 0.71 (95% CI 0.64–0.78) and 0.93 (95% CI 0.89–0.96) for AI versus 0.91 (95% CI 0.86–0.95) and 0.90 (95% CI 0.85–0.94) for radiologists. The overall diagnostic accuracy of radiologists was superior to that of AI for chest X-ray and mammography. However, the accuracy of AI was noninferior to the least experienced radiologists for mammography and fluorography, and to all radiologists for chest X-ray. Therefore, an AI-based first reading could be recommended to reduce the workload burden of radiologists for the most common radiological studies such as chest X-ray and mammography. Full article
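
For readers who want to see how this kind of benchmarking works at a high level, the sketch below (illustrative only, not the authors' code) compares a model's AUROC against a pooled reader AUROC with a prespecified non-inferiority margin; the labels, scores, and the 0.05 margin are hypothetical.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Hypothetical labels and scores standing in for one of the study datasets.
    y_true = rng.integers(0, 2, size=500)                      # 0 = no finding, 1 = target finding
    ai_scores = 0.6 * y_true + rng.normal(0.3, 0.25, 500)      # toy AI model outputs
    reader_scores = 0.8 * y_true + rng.normal(0.2, 0.20, 500)  # toy pooled Likert ratings

    auc_ai = roc_auc_score(y_true, ai_scores)
    auc_readers = roc_auc_score(y_true, reader_scores)

    MARGIN = 0.05  # hypothetical non-inferiority margin on AUROC
    non_inferior = auc_ai >= auc_readers - MARGIN
    print(f"AI AUROC={auc_ai:.3f}, reader AUROC={auc_readers:.3f}, non-inferior={non_inferior}")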

17 pages, 5641 KiB  
Article
Deep Learning-Based Prediction of Diabetic Retinopathy Using CLAHE and ESRGAN for Enhancement
by Ghadah Alwakid, Walaa Gouda and Mamoona Humayun
Healthcare 2023, 11(6), 863; https://doi.org/10.3390/healthcare11060863 - 15 Mar 2023
Cited by 36 | Viewed by 7290
Abstract
Vision loss can be avoided if diabetic retinopathy (DR) is diagnosed and treated promptly. The five main DR stages are none, moderate, mild, proliferative, and severe. In this study, a deep learning (DL) model is presented that diagnoses all five stages of DR with greater accuracy than previous methods. The suggested method presents two scenarios: case 1 with image enhancement using a contrast limited adaptive histogram equalization (CLAHE) filtering algorithm in conjunction with an enhanced super-resolution generative adversarial network (ESRGAN), and case 2 without image enhancement. Augmentation techniques were then performed to generate a balanced dataset utilizing the same parameters for both cases. Using Inception-V3 applied to the Asia Pacific Tele-Ophthalmology Society (APTOS) datasets, the developed model achieved an accuracy of 98.7% for case 1 and 80.87% for case 2, which is higher than that of existing methods for detecting the five stages of DR. It was demonstrated that using CLAHE and ESRGAN improves a model’s performance and learning ability. Full article
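
As a rough illustration of the case 1 pipeline (not the published implementation), the sketch below applies CLAHE to the luminance channel and builds an Inception-V3 transfer-learning head for the five DR grades; the ESRGAN super-resolution step is omitted, and the layer sizes and hyperparameters are assumptions.

    import cv2
    import tensorflow as tf

    def clahe_enhance(bgr_image):
        """Apply CLAHE to the luminance channel of a fundus image (sketch)."""
        lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        enhanced = cv2.merge((clahe.apply(l), a, b))
        return cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR)

    # Transfer-learning head on Inception-V3 for five DR severity grades (sizes assumed).
    base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                             input_shape=(299, 299, 3), pooling="avg")
    base.trainable = False
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(5, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])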

16 pages, 8694 KiB  
Article
Automatic Detection and Measurement of Renal Cysts in Ultrasound Images: A Deep Learning Approach
by Yurie Kanauchi, Masahiro Hashimoto, Naoki Toda, Saori Okamoto, Hasnine Haque, Masahiro Jinzaki and Yasubumi Sakakibara
Healthcare 2023, 11(4), 484; https://doi.org/10.3390/healthcare11040484 - 7 Feb 2023
Cited by 9 | Viewed by 3551
Abstract
Ultrasonography is widely used for diagnosis of diseases in internal organs because it is nonradioactive, noninvasive, real-time, and inexpensive. In ultrasonography, a set of measurement markers is placed at two points to measure organs and tumors, then the position and size of the target finding are measured on this basis. Among the measurement targets of abdominal ultrasonography, renal cysts occur in 20–50% of the population regardless of age. Therefore, the frequency of measurement of renal cysts in ultrasound images is high, and the effect of automating measurement would be high as well. The aim of this study was to develop a deep learning model that can automatically detect renal cysts in ultrasound images and predict the appropriate position of a pair of salient anatomical landmarks to measure their size. The deep learning model adopted fine-tuned YOLOv5 for detection of renal cysts and fine-tuned UNet++ for prediction of saliency maps, representing the position of salient landmarks. Ultrasound images were input to YOLOv5, and images cropped inside the bounding box and detected from the input image by YOLOv5 were input to UNet++. For comparison with human performance, three sonographers manually placed salient landmarks on 100 unseen items of the test data. These salient landmark positions annotated by a board-certified radiologist were used as the ground truth. We then evaluated and compared the accuracy of the sonographers and the deep learning model. Their performances were evaluated using precision–recall metrics and the measurement error. The evaluation results show that the precision and recall of our deep learning model for detection of renal cysts are comparable to standard radiologists; the positions of the salient landmarks were predicted with an accuracy close to that of the radiologists, and in a shorter time. Full article
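
The two-stage design can be approximated with off-the-shelf components, as in the hedged sketch below: a stock YOLOv5 model stands in for the paper's fine-tuned detector and a segmentation_models_pytorch UNet++ stands in for the saliency-map network; the model choices, input sizes, and 256×256 resize are assumptions, not the authors' settings.

    import torch
    import segmentation_models_pytorch as smp

    # Stage 1: stock YOLOv5 as a stand-in for the fine-tuned cyst detector.
    detector = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

    # Stage 2: UNet++ predicting a landmark saliency map from each cropped cyst.
    landmark_net = smp.UnetPlusPlus(encoder_name="resnet34", in_channels=3, classes=1)
    landmark_net.eval()

    def measure_cysts(image_rgb):
        """Detect cysts, then predict a saliency map per crop (illustrative pipeline)."""
        results = detector(image_rgb)  # image_rgb: HxWx3 uint8 numpy array
        saliency_maps = []
        with torch.no_grad():
            for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
                crop = image_rgb[int(y1):int(y2), int(x1):int(x2)]
                t = torch.from_numpy(crop).permute(2, 0, 1).float().unsqueeze(0) / 255.0
                t = torch.nn.functional.interpolate(t, size=(256, 256))  # assumed input size
                saliency_maps.append(torch.sigmoid(landmark_net(t)))
        return saliency_maps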

19 pages, 20290 KiB  
Article
Lung and Infection CT-Scan-Based Segmentation with 3D UNet Architecture and Its Modification
by Mohammad Hamid Asnawi, Anindya Apriliyanti Pravitasari, Gumgum Darmawan, Triyani Hendrawati, Intan Nurma Yulita, Jadi Suprijadi and Farid Azhar Lutfi Nugraha
Healthcare 2023, 11(2), 213; https://doi.org/10.3390/healthcare11020213 - 10 Jan 2023
Cited by 6 | Viewed by 3437
Abstract
COVID-19 is a disease that has spread across the world since December 2019. This disease has a negative impact on individuals, governments, and even the global economy, which has caused the WHO to declare COVID-19 a PHEIC (Public Health Emergency of International Concern). Until now, there has been no medicine that can completely cure COVID-19. Therefore, to prevent the spread and reduce the negative impact of COVID-19, an accurate and fast test is needed. The use of chest radiography imaging technology, such as CXR and CT-scan, plays a significant role in the diagnosis of COVID-19. In this study, CT-scan segmentation is carried out using the 3D version of the most recommended segmentation algorithm for biomedical images, namely 3D UNet, and three other architectures derived from 3D UNet modifications, namely 3D ResUNet, 3D VGGUNet, and 3D DenseUNet. These four architectures are used in two cases of segmentation: binary-class segmentation, where each architecture segments the lung area from a CT scan; and multi-class segmentation, where each architecture segments the lung and infection areas from a CT scan. Before entering the model, the dataset is first preprocessed by applying a min–max scaler to scale the pixel values to a range of zero to one, and the CLAHE method is also applied to eliminate intensity inhomogeneity and noise from the data. Of the four models tested in this study, surprisingly, the original 3D UNet produced the most satisfactory results compared to the other three architectures, although it required more iterations to reach its best results. For the binary-class segmentation case, 3D UNet produced IoU scores, Dice scores, and accuracy of 94.32%, 97.05%, and 99.37%, respectively. For the case of multi-class segmentation, 3D UNet produced IoU scores, Dice scores, and accuracy of 81.58%, 88.61%, and 98.78%, respectively. The use of a 3D segmentation architecture will be very helpful for medical personnel because, apart from helping the process of diagnosing someone with COVID-19, they can also find out the severity of the disease through 3D infection projections. Full article
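
For reference, the min–max scaling and the reported Dice/IoU metrics can be written compactly as below; this is a generic sketch of the preprocessing and evaluation steps, not the authors' code.

    import numpy as np

    def minmax_scale(volume):
        """Scale a CT volume to [0, 1], as in the preprocessing step."""
        v = volume.astype(np.float32)
        return (v - v.min()) / (v.max() - v.min() + 1e-7)

    def dice_and_iou(pred_mask, true_mask, eps=1e-7):
        """Dice score and IoU for binary 3D segmentation volumes."""
        pred = pred_mask.astype(bool)
        true = true_mask.astype(bool)
        inter = np.logical_and(pred, true).sum()
        dice = (2 * inter + eps) / (pred.sum() + true.sum() + eps)
        iou = (inter + eps) / (np.logical_or(pred, true).sum() + eps)
        return dice, iou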

16 pages, 11971 KiB  
Article
Artificial-Intelligence-Based Decision Making for Oral Potentially Malignant Disorder Diagnosis in Internet of Medical Things Environment
by Rana Alabdan, Abdulrahman Alruban, Anwer Mustafa Hilal and Abdelwahed Motwakel
Healthcare 2023, 11(1), 113; https://doi.org/10.3390/healthcare11010113 - 30 Dec 2022
Cited by 10 | Viewed by 2156
Abstract
Oral cancer is considered one of the most common cancer types in several countries. Early-stage identification is essential for better prognosis, treatment, and survival. To enhance precision medicine, Internet of Medical Things (IoMT) and deep learning (DL) models can be developed for automated oral cancer classification to improve the detection rate and decrease cancer-specific mortality. This article focuses on the design of an optimal Inception-Deep Convolution Neural Network for Oral Potentially Malignant Disorder Detection (OIDCNN-OPMDD) technique in the IoMT environment. The presented OIDCNN-OPMDD technique mainly concentrates on identifying and classifying oral cancer by using an IoMT device-based data collection process. In this study, the feature extraction and classification processes are performed using the IDCNN model, which integrates the Inception module with DCNN. To enhance the classification performance of the IDCNN model, the moth flame optimization (MFO) technique is employed. The experimental results of the OIDCNN-OPMDD technique are investigated, and the results are inspected under specific measures. The experimental outcome pointed out the enhanced performance of the OIDCNN-OPMDD model over other DL models. Full article

16 pages, 5317 KiB  
Article
Equilibrium Optimization Algorithm with Ensemble Learning Based Cervical Precancerous Lesion Classification Model
by Rasha A. Mansouri and Mahmoud Ragab
Healthcare 2023, 11(1), 55; https://doi.org/10.3390/healthcare11010055 - 25 Dec 2022
Cited by 8 | Viewed by 2499
Abstract
Recently, artificial intelligence (AI) with deep learning (DL) and machine learning (ML) has been extensively used to automate labor-intensive and time-consuming work and to help in prognosis and diagnosis. AI’s role in biomedical and biological imaging is an emerging field of research and reveals future trends. Cervical cell (CCL) classification is crucial in screening cervical cancer (CC) at an earlier stage. Unlike the traditional classification method, which depends on hand-engineered or crafted features, convolution neural network (CNN) usually categorizes CCLs through learned features. Moreover, the latent correlation of images might be disregarded in CNN feature learning and thereby influence the representative capability of the CNN feature. This study develops an equilibrium optimizer with ensemble learning-based cervical precancerous lesion classification on colposcopy images (EOEL-PCLCCI) technique. The presented EOEL-PCLCCI technique mainly focuses on identifying and classifying cervical cancer on colposcopy images. In the presented EOEL-PCLCCI technique, the DenseNet-264 architecture is used for the feature extractor, and the EO algorithm is applied as a hyperparameter optimizer. An ensemble of weighted voting classifications, namely long short-term memory (LSTM) and gated recurrent unit (GRU), is used for the classification process. A widespread simulation analysis is performed on a benchmark dataset to depict the superior performance of the EOEL-PCLCCI approach, and the results demonstrated the betterment of the EOEL-PCLCCI algorithm over other DL models. Full article
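
The final weighted-voting step can be illustrated with the short sketch below; the two probability matrices would come from the trained LSTM and GRU heads, and the 0.6/0.4 weights are hypothetical placeholders rather than the paper's tuned values.

    import numpy as np

    def weighted_vote(prob_lstm, prob_gru, w_lstm=0.6, w_gru=0.4):
        """Weighted soft voting over two classifiers' class probabilities (sketch).

        prob_lstm, prob_gru: arrays of shape (n_samples, n_classes) produced by the
        LSTM and GRU heads; the weights are assumed, not the study's values.
        """
        combined = w_lstm * np.asarray(prob_lstm) + w_gru * np.asarray(prob_gru)
        return combined.argmax(axis=1)  # predicted class index per sample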

18 pages, 5590 KiB  
Article
Melanoma Detection Using Deep Learning-Based Classifications
by Ghadah Alwakid, Walaa Gouda, Mamoona Humayun and Najm Us Sama
Healthcare 2022, 10(12), 2481; https://doi.org/10.3390/healthcare10122481 - 8 Dec 2022
Cited by 48 | Viewed by 5044
Abstract
One of the most prevalent cancers worldwide is skin cancer, and it is becoming more common as the population ages. As a general rule, the earlier skin cancer can be diagnosed, the better. As a result of the success of deep learning (DL) algorithms in other industries, there has been a substantial increase in automated diagnosis systems in healthcare. This work proposes DL as a method for extracting a lesion zone with precision. First, the image is enhanced using Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN) to improve the image’s quality. Then, segmentation is used to segment Regions of Interest (ROI) from the full image. We employed data augmentation to rectify the data disparity. The image is then analyzed with a convolutional neural network (CNN) and a modified version of Resnet-50 to classify skin lesions. This analysis utilized an unequal sample of seven kinds of skin cancer from the HAM10000 dataset. With an accuracy of 0.86, a precision of 0.84, a recall of 0.86, and an F-score of 0.86, the proposed CNN-based Model outperformed the earlier study’s results by a significant margin. The study culminates with an improved automated method for diagnosing skin cancer that benefits medical professionals and patients. Full article
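
A generic stand-in for a "modified ResNet-50" classifier over the seven HAM10000 classes is sketched below (assuming torchvision ≥ 0.13); the frozen backbone, dropout rate, and augmentation choices are illustrative assumptions, not the authors' exact modification.

    import torch.nn as nn
    from torchvision import models, transforms

    # Generic stand-in: freeze the ImageNet backbone and replace the final
    # layer with a 7-way head for the HAM10000 lesion classes.
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    for p in backbone.parameters():
        p.requires_grad = False
    backbone.fc = nn.Sequential(
        nn.Dropout(0.3),
        nn.Linear(2048, 7),  # ResNet-50 feature size -> 7 lesion classes
    )

    # Simple augmentation pipeline to help counter class imbalance (illustrative only).
    train_tfms = transforms.Compose([
        transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
        transforms.RandomHorizontalFlip(),
        transforms.ColorJitter(brightness=0.1, contrast=0.1),
        transforms.ToTensor(),
    ])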

12 pages, 2494 KiB  
Article
Using Deep Neural Network Approach for Multiple-Class Assessment of Digital Mammography
by Shih-Yen Hsu, Chi-Yuan Wang, Yi-Kai Kao, Kuo-Ying Liu, Ming-Chia Lin, Li-Ren Yeh, Yi-Ming Wang, Chih-I Chen and Feng-Chen Kao
Healthcare 2022, 10(12), 2382; https://doi.org/10.3390/healthcare10122382 - 27 Nov 2022
Cited by 2 | Viewed by 1934
Abstract
According to Health Promotion Administration statistics from the Ministry of Health and Welfare in Taiwan, over ten thousand women have breast cancer every year. Mammography is widely used to detect breast cancer. However, it is limited by the operator’s technique, the cooperation of the subjects, and the subjective interpretation by the physician. This results in inconsistent identification. Therefore, this study explores the use of a deep neural network algorithm for the classification of mammography images. In the experimental design, a retrospective study was used to collect imaging data from actual clinical cases. The mammography images were collected and classified according to the Breast Imaging Reporting and Data System (BI-RADS). In terms of model building, a fully convolutional dense connection network (FC-DCN) is used for the network backbone. All the images were obtained through image preprocessing, a data augmentation method, and transfer learning technology to build a mammography image classification model. The research results show that the model’s accuracy, sensitivity, and specificity were 86.37%, 100%, and 72.73%, respectively. Based on the FC-DCN framework, the model can effectively reduce the number of training parameters and successfully obtain a reasonable image classification model for mammography. Full article
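
The reported accuracy, sensitivity, and specificity can be computed from a confusion matrix as in the simplified sketch below; the binary framing here is an assumption for illustration, whereas the study itself classifies BI-RADS categories.

    from sklearn.metrics import confusion_matrix

    def screening_metrics(y_true, y_pred):
        """Accuracy, sensitivity, and specificity for a binary screening decision."""
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        sensitivity = tp / (tp + fn)  # recall on the suspicious/positive class
        specificity = tn / (tn + fp)
        return accuracy, sensitivity, specificity

    print(screening_metrics([0, 0, 0, 1, 1], [0, 1, 0, 1, 1]))  # (0.8, 1.0, 0.666...)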

14 pages, 3412 KiB  
Article
Dysarthria Speech Detection Using Convolutional Neural Networks with Gated Recurrent Unit
by Dong-Her Shih, Ching-Hsien Liao, Ting-Wei Wu, Xiao-Yin Xu and Ming-Hung Shih
Healthcare 2022, 10(10), 1956; https://doi.org/10.3390/healthcare10101956 - 7 Oct 2022
Cited by 13 | Viewed by 2950
Abstract
In recent years, due to the rise in the population and aging, the prevalence of neurological diseases is also increasing year by year. Among these patients with Parkinson’s disease, stroke, cerebral palsy, and other neurological symptoms, dysarthria often appears. If these dysarthria patients are not quickly detected and treated, it is easy to cause difficulties in disease course management. When the symptoms worsen, they can also affect the patient’s psychology and physiology. Most of the past studies on dysarthria detection used machine learning or deep learning models as classification models. This study proposes an integrated CNN-GRU model with convolutional neural networks and gated recurrent units to detect dysarthria. The experimental results show that the CNN-GRU model proposed in this study has the highest accuracy of 98.38%, which is superior to other research models. Full article
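
A generic CNN-GRU of the kind described can be assembled in a few lines, as in the sketch below; the spectrogram input shape, layer sizes, and binary output are assumptions rather than the paper's exact configuration.

    import tensorflow as tf
    from tensorflow.keras import layers

    # Generic CNN-GRU: 2D convolutions over (time, mel) spectrogram patches,
    # then a GRU across the time axis. Shapes and layer sizes are hypothetical.
    N_FRAMES, N_MELS = 128, 40

    inputs = tf.keras.Input(shape=(N_FRAMES, N_MELS, 1))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D(pool_size=(1, 2))(x)   # pool only along the mel axis
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=(1, 2))(x)
    x = layers.Reshape((N_FRAMES, -1))(x)          # (time, features) for the GRU
    x = layers.GRU(64)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # dysarthric vs. control

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])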

16 pages, 4317 KiB  
Article
Customized Deep Learning Classifier for Detection of Acute Lymphoblastic Leukemia Using Blood Smear Images
by Niranjana Sampathila, Krishnaraj Chadaga, Neelankit Goswami, Rajagopala P. Chadaga, Mayur Pandya, Srikanth Prabhu, Muralidhar G. Bairy, Swathi S. Katta, Devadas Bhat and Sudhakara P. Upadya
Healthcare 2022, 10(10), 1812; https://doi.org/10.3390/healthcare10101812 - 20 Sep 2022
Cited by 44 | Viewed by 4757
Abstract
Acute lymphoblastic leukemia (ALL) is a rare type of blood cancer caused by the overproduction of lymphocytes by the bone marrow in the human body. It is one of the common types of cancer in children and has a fair chance of being cured. However, it may also occur in adults, and the chances of a cure are slim if it is diagnosed at a later stage. To aid in the early detection of this deadly disease, an intelligent method to screen the white blood cells is proposed in this study. The proposed intelligent deep learning algorithm uses the microscopic images of blood smears as the input data. This algorithm is implemented with a convolutional neural network (CNN) to distinguish leukemic cells from healthy blood cells. The custom ALLNET model was trained and tested using the microscopic images available as open-source data. The model training was carried out on Google Colaboratory using an Nvidia Tesla P-100 GPU. Maximum accuracy of 95.54%, specificity of 95.81%, sensitivity of 95.91%, F1-score of 95.43%, and precision of 96% were obtained by this accurate classifier. The proposed technique may be used during pre-screening to detect leukemia cells during complete blood count (CBC) and peripheral blood tests. Full article

9 pages, 1938 KiB  
Article
Rapid Polyp Classification in Colonoscopy Using Textural and Convolutional Features
by Chung-Ming Lo, Yu-Hsuan Yeh, Jui-Hsiang Tang, Chun-Chao Chang and Hsing-Jung Yeh
Healthcare 2022, 10(8), 1494; https://doi.org/10.3390/healthcare10081494 - 8 Aug 2022
Cited by 14 | Viewed by 3246
Abstract
Colorectal cancer is the leading cause of cancer-associated morbidity and mortality worldwide. One of the causes of developing colorectal cancer is untreated colon adenomatous polyps. Clinically, polyps are detected in colonoscopy and the malignancies are determined according to the biopsy. To provide a quick and objective assessment to gastroenterologists, this study proposed a quantitative polyp classification via various image features in colonoscopy. The collected image database was composed of 1991 images including 1053 hyperplastic polyps and 938 adenomatous polyps and adenocarcinomas. From each image, textural features were extracted and combined in machine learning classifiers and machine-generated features were automatically selected in deep convolutional neural networks (DCNN). The DCNNs included AlexNet, Inception-V3, ResNet-101, and DenseNet-201. AlexNet trained from scratch achieved the best performance of 96.4% accuracy which is better than transfer learning and textural features. Using the prediction models, the malignancy level of polyps can be evaluated during a colonoscopy to provide a rapid treatment plan. Full article
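
The textural-feature branch can be illustrated with GLCM (Haralick-style) features and a stand-in SVM classifier, as sketched below (assuming scikit-image ≥ 0.19 for graycomatrix); the distances, angles, and chosen properties are illustrative, not the study's exact settings.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19
    from sklearn.svm import SVC

    def glcm_features(gray_image):
        """Haralick-style GLCM texture features from a uint8 grayscale polyp image."""
        glcm = graycomatrix(gray_image, distances=[1, 2], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        props = ("contrast", "homogeneity", "energy", "correlation")
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    # X = np.stack([glcm_features(img) for img in images]); y: 0 = hyperplastic, 1 = adenomatous
    # clf = SVC(kernel="rbf", probability=True).fit(X, y)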

11 pages, 1573 KiB  
Article
Osteoporosis Pre-Screening Using Ensemble Machine Learning in Postmenopausal Korean Women
by Youngihn Kwon, Juyeon Lee, Joo Hee Park, Yoo Mee Kim, Se Hwa Kim, Young Jun Won and Hyung-Yong Kim
Healthcare 2022, 10(6), 1107; https://doi.org/10.3390/healthcare10061107 - 14 Jun 2022
Cited by 18 | Viewed by 2692
Abstract
As osteoporosis is a degenerative disease related to postmenopausal aging, early diagnosis is vital. This study used data from the Korea National Health and Nutrition Examination Surveys to predict a patient’s risk of osteoporosis using machine learning algorithms. Data from 1431 postmenopausal women aged 40–69 years were used, including 20 features affecting osteoporosis, chosen by feature importance and recursive feature elimination. Random Forest (RF), AdaBoost, and Gradient Boosting (GBM) machine learning algorithms were each used to train three models: A, checkup features; B, survey features; and C, both checkup and survey features, respectively. Of the three models, Model C generated the best outcomes with an accuracy of 0.832 for RF, 0.849 for AdaBoost, and 0.829 for GBM. Its area under the receiver operating characteristic curve (AUROC) was 0.919 for RF, 0.921 for AdaBoost, and 0.908 for GBM. By utilizing multiple feature selection methods, the ensemble models of this study achieved excellent results with an AUROC score of 0.921 with AdaBoost, which is 0.1–0.2 higher than those of the best performing models from recent studies. Our model can be further improved as a practical medical tool for the early diagnosis of osteoporosis after menopause. Full article
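
A minimal sketch of this kind of tabular pipeline, with recursive feature elimination down to 20 features followed by an AdaBoost ensemble, is shown below on synthetic data; the feature counts and hyperparameters are assumptions, and the real study used Korea National Health and Nutrition Examination Survey checkup and survey variables.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.feature_selection import RFE
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    # Synthetic stand-in for the checkup + survey features used in the study.
    X, y = make_classification(n_samples=1431, n_features=40, n_informative=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

    # Recursive feature elimination down to 20 features, then a boosted ensemble.
    model = make_pipeline(
        RFE(estimator=LogisticRegression(max_iter=1000), n_features_to_select=20),
        AdaBoostClassifier(n_estimators=200, random_state=0),
    )
    model.fit(X_tr, y_tr)
    print("AUROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))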

Review

30 pages, 1272 KiB  
Review
Artificial Intelligence Applied to Pancreatic Imaging: A Narrative Review
by Maria Elena Laino, Angela Ammirabile, Ludovica Lofino, Lorenzo Mannelli, Francesco Fiz, Marco Francone, Arturo Chiti, Luca Saba, Matteo Agostino Orlandi and Victor Savevski
Healthcare 2022, 10(8), 1511; https://doi.org/10.3390/healthcare10081511 - 11 Aug 2022
Cited by 2 | Viewed by 2503
Abstract
The diagnosis, evaluation, and treatment planning of pancreatic pathologies usually require the combined use of different imaging modalities, mainly, computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET). Artificial intelligence (AI) has the potential to transform the clinical practice of medical imaging and has been applied to various radiological techniques for different purposes, such as segmentation, lesion detection, characterization, risk stratification, or prediction of response to treatments. The aim of the present narrative review is to assess the available literature on the role of AI applied to pancreatic imaging. Up to now, the use of computer-aided diagnosis (CAD) and radiomics in pancreatic imaging has proven to be useful for both non-oncological and oncological purposes and represents a promising tool for personalized approaches to patients. Although great developments have occurred in recent years, it is important to address the obstacles that still need to be overcome before these technologies can be implemented into our clinical routine, mainly considering the heterogeneity among studies. Full article