Image Analysis and Machine Learning in Cancers

A special issue of Cancers (ISSN 2072-6694). This special issue belongs to the section "Methods and Technologies Development".

Deadline for manuscript submissions: closed (15 September 2024) | Viewed by 19731

Special Issue Editors


Guest Editor
Southern Alberta Institute of Technology (SAIT), Calgary, AB, Canada
Interests: medical imaging (mammography and digital breast tomosynthesis); machine learning and computer vision

Guest Editor
Dipartimento di Ingegneria Elettronica, University of Rome Tor Vergata, Rome, Italy
Interests: image analysis; machine learning; medical applications

Special Issue Information

Dear Colleagues,

Over the years, we have seen tremendous advances in image analysis and machine learning (ML) techniques for cancer detection. This progress has been driven mainly by better equipment capturing higher-quality data, the availability of public datasets, and advances in computer technology that enable methods that were not feasible years ago. For example, we have seen many papers dealing with super-resolution imaging, fusing images from different modalities for a better diagnosis, and even generating additional data from the limited amounts available.

Image processing techniques are the basis of all artificial intelligence (AI)-based systems. Pre-processing methods have been shown to have a substantial impact on the subsequent steps of an ML/AI pipeline. It is therefore common to see papers proposing new methods to enhance images even further in pursuit of better results.
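To make the role of pre-processing concrete: a minimal NumPy sketch of global histogram equalization, a simpler cousin of the tile-based CLAHE used by several papers in this issue (the toy low-contrast "image" and all parameters below are illustrative, not taken from any of the published studies):

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Global histogram equalization: spread intensities over the full range.

    A simplified stand-in for contrast-enhancement steps such as CLAHE,
    which additionally operates on local tiles and clips the histogram.
    """
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level through the normalized cumulative distribution.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1)),
                  0, levels - 1).astype(np.uint8)
    return lut[img]

# A low-contrast toy "image": values squeezed into [100, 120].
rng = np.random.default_rng(0)
img = rng.integers(100, 121, size=(64, 64), dtype=np.uint8)
enhanced = equalize_histogram(img)
```

After equalization, the narrow intensity band is stretched across the full 0–255 range, which is exactly the kind of change that downstream feature extractors and classifiers are sensitive to.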

Machine learning approaches, also known as conventional approaches, have been essential in pushing the boundaries of cancer detection and diagnosis. Because they can work with limited datasets and on more modest computers than deep learning approaches, many such methods have been proposed since the popularization of AI. Nowadays, these methods compete on equal footing with DL-based ones in terms of accuracy, specificity, and sensitivity.

Deep learning (DL) techniques, aided by advances in and greater accessibility of powerful hardware, have played an important role in this scenario. Since their basis lies in image analysis and machine learning, recent advances have shown many possibilities for providing an accurate diagnosis while reducing patient follow-ups. However, DL models are known for their lack of explainability, although some studies have proposed ways to overcome this.
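One widely used, model-agnostic route to the explainability mentioned above is occlusion sensitivity: mask image regions one at a time and see how much the model's score drops. A minimal sketch, with a hypothetical toy "model" standing in for a trained network:

```python
import numpy as np

def occlusion_map(model, img, patch=4, baseline=0.0):
    """Model-agnostic occlusion sensitivity: slide a masking patch over the
    image and record how much the model's score drops at each location."""
    base_score = model(img)
    h, w = img.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = img.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = base_score - model(occluded)
    return heat

# Toy "model": scores the mean intensity of the top-left quadrant,
# so occluding that region should drop the score the most.
def toy_model(img):
    return img[:8, :8].mean()

img = np.ones((16, 16))
heat = occlusion_map(toy_model, img, patch=8)
```

High values in `heat` mark regions the model relies on, giving a crude but readable saliency map without any access to the model's internals.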

This Special Issue is dedicated to covering the most recent advances in image analysis and machine learning techniques toward a better detection/diagnosis of cancers in general.

Dr. Helder C. R. De Oliveira
Dr. Arianna Mencattini
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Cancers is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2900 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.


Keywords

  • cancer detection and diagnosis system
  • machine learning
  • deep learning
  • medical imaging analysis
  • few-shot deep learning
  • attention segmentation
  • feature extraction
  • probabilistic models
  • explainability
  • image fusion
  • generative adversarial network

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)


Research

27 pages, 6796 KiB  
Article
A Hybrid Deep Learning and Machine Learning Approach with Mobile-EfficientNet and Grey Wolf Optimizer for Lung and Colon Cancer Histopathology Classification
by Raquel Ochoa-Ornelas, Alberto Gudiño-Ochoa and Julio Alberto García-Rodríguez
Cancers 2024, 16(22), 3791; https://doi.org/10.3390/cancers16223791 - 11 Nov 2024
Viewed by 744
Abstract
Background: Lung and colon cancers are among the most prevalent and lethal malignancies worldwide, underscoring the urgent need for advanced diagnostic methodologies. This study aims to develop a hybrid deep learning and machine learning framework for the classification of Colon Adenocarcinoma, Colon Benign Tissue, Lung Adenocarcinoma, Lung Benign Tissue, and Lung Squamous Cell Carcinoma from histopathological images. Methods: Current approaches primarily rely on the LC25000 dataset, which, due to image augmentation, lacks the generalizability required for real-time clinical applications. To address this, Contrast Limited Adaptive Histogram Equalization (CLAHE) was applied to enhance image quality, and 1000 new images from the National Cancer Institute GDC Data Portal were introduced into the Colon Adenocarcinoma, Lung Adenocarcinoma, and Lung Squamous Cell Carcinoma classes, replacing augmented images to increase dataset diversity. A hybrid feature extraction model combining MobileNetV2 and EfficientNetB3 was optimized using the Grey Wolf Optimizer (GWO), resulting in the Lung and Colon histopathological classification technique (MEGWO-LCCHC). Cross-validation and hyperparameter tuning with Optuna were performed on various machine learning models, including XGBoost, LightGBM, and CatBoost. Results: The MEGWO-LCCHC technique achieved high classification accuracy, with the lightweight DNN model reaching 94.8%, LightGBM at 93.9%, XGBoost at 93.5%, and CatBoost at 93.3% on the test set. Conclusions: The findings suggest that our approach enhances classification performance and offers improved generalizability for real-world clinical applications. The proposed MEGWO-LCCHC framework shows promise as a robust tool in cancer diagnostics, advancing the application of AI in oncology. Full article
(This article belongs to the Special Issue Image Analysis and Machine Learning in Cancers)
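The Grey Wolf Optimizer used in the MEGWO-LCCHC pipeline above is a population-based metaheuristic. This is a compact, generic sketch (not the authors' code), minimizing a toy sphere function; population size, bounds, and iteration count are illustrative:

```python
import numpy as np

def grey_wolf_optimizer(f, dim, n_wolves=20, iters=100, lb=-5.0, ub=5.0, seed=0):
    """Minimal Grey Wolf Optimizer: the pack moves toward the three best
    wolves found so far (alpha, beta, delta)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(f, 1, X)
        leaders = X[np.argsort(fitness)[:3]]        # alpha, beta, delta
        a = 2.0 - 2.0 * t / iters                   # exploration -> exploitation
        candidates = np.zeros((3, n_wolves, dim))
        for k, leader in enumerate(leaders):
            r1 = rng.random((n_wolves, dim))
            r2 = rng.random((n_wolves, dim))
            A, C = 2 * a * r1 - a, 2 * r2
            D = np.abs(C * leader - X)              # distance to the leader
            candidates[k] = leader - A * D
        X = np.clip(candidates.mean(axis=0), lb, ub)
    fitness = np.apply_along_axis(f, 1, X)
    best = np.argmin(fitness)
    return X[best], fitness[best]

sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = grey_wolf_optimizer(sphere, dim=5)
```

In a pipeline like MEGWO-LCCHC, `f` would instead score a candidate feature subset or hyperparameter vector by validation performance.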

18 pages, 1849 KiB  
Article
Comparative Analysis of Deep Learning Methods on CT Images for Lung Cancer Specification
by Muruvvet Kalkan, Mehmet S. Guzel, Fatih Ekinci, Ebru Akcapinar Sezer and Tunc Asuroglu
Cancers 2024, 16(19), 3321; https://doi.org/10.3390/cancers16193321 - 28 Sep 2024
Viewed by 1226
Abstract
Background: Lung cancer is the leading cause of cancer-related deaths worldwide, ranking first in men and second in women. Due to its aggressive nature, early detection and accurate localization of tumors are crucial for improving patient outcomes. This study aims to apply advanced deep learning techniques to identify lung cancer in its early stages using CT scan images. Methods: Pre-trained convolutional neural networks (CNNs), including MobileNetV2, ResNet152V2, InceptionResNetV2, Xception, VGG-19, and InceptionV3, were used for lung cancer detection. Once the disease was identified, the tumor’s region was segmented using models such as UNet, SegNet, and InceptionUNet. Results: The InceptionResNetV2 model achieved the highest detection accuracy of 98.5%, while UNet produced the best segmentation results, with a Jaccard index of 95.3%. Conclusions: The study demonstrates the effectiveness of deep learning models, particularly InceptionResNetV2 and UNet, in both detecting and segmenting lung cancer, showing significant potential for aiding early diagnosis and treatment. Future work could focus on refining these models and exploring their application in other medical domains. Full article
(This article belongs to the Special Issue Image Analysis and Machine Learning in Cancers)
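The Jaccard index reported above (and the Dice score used elsewhere in this issue) are the standard overlap metrics for segmentation masks. A small reference implementation on toy binary masks:

```python
import numpy as np

def jaccard_index(pred, target):
    """Intersection over union of two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0

def dice_score(pred, target):
    """Dice similarity coefficient; related to Jaccard by D = 2J / (1 + J)."""
    pred, target = pred.astype(bool), target.astype(bool)
    total = pred.sum() + target.sum()
    return 2 * np.logical_and(pred, target).sum() / total if total else 1.0

pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
# intersection = 2, union = 4 -> Jaccard = 0.5; Dice = 2*2 / (3+3) = 2/3
```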

20 pages, 3457 KiB  
Article
Non-Invasive Endometrial Cancer Screening through Urinary Fluorescent Metabolome Profile Monitoring and Machine Learning Algorithms
by Monika Švecová, Katarína Dubayová, Anna Birková, Peter Urdzík and Mária Mareková
Cancers 2024, 16(18), 3155; https://doi.org/10.3390/cancers16183155 - 14 Sep 2024
Viewed by 849
Abstract
Endometrial cancer is becoming increasingly common, highlighting the need for improved diagnostic methods that are both effective and non-invasive. This study investigates the use of urinary fluorescence spectroscopy as a potential diagnostic tool for endometrial cancer. Urine samples were collected from endometrial cancer patients (n = 77), patients with benign uterine tumors (n = 23), and control gynecological patients attending regular checkups or follow-ups (n = 96). These samples were analyzed using synchronous fluorescence spectroscopy to measure the total fluorescent metabolome profile, and specific fluorescence ratios were created to differentiate between control, benign, and malignant samples. These spectral markers demonstrated potential clinical applicability with AUC as high as 80%. Partial Least Squares Discriminant Analysis (PLS-DA) was employed to reduce data dimensionality and enhance class separation. Additionally, machine learning models, including Random Forest (RF), Logistic Regression (LR), Support Vector Machine (SVM), and Stochastic Gradient Descent (SGD), were utilized to distinguish between controls and endometrial cancer patients. PLS-DA achieved an overall accuracy of 79% and an AUC of 90%. These promising results indicate that urinary fluorescence spectroscopy, combined with advanced machine learning models, has the potential to revolutionize endometrial cancer diagnostics, offering a rapid, accurate, and non-invasive alternative to current methods. Full article
(This article belongs to the Special Issue Image Analysis and Machine Learning in Cancers)
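The AUC values quoted above are the probability that a randomly chosen positive sample scores higher than a randomly chosen negative one. That definition can be computed directly via the Mann-Whitney U statistic; a self-contained sketch with made-up scores:

```python
import numpy as np

def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive sample scores higher than a negative one."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    # Count wins (and half-credit ties) over all positive/negative pairs.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.35, 0.4, 0.1, 0.2]   # e.g. fluorescence-ratio markers
labels = [1, 1, 1, 0, 0, 0]                # 1 = cancer, 0 = control
# 8 of the 9 positive/negative pairs are correctly ordered -> AUC = 8/9
```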

25 pages, 5119 KiB  
Article
AI Applied to Volatile Organic Compound (VOC) Profiles from Exhaled Breath Air for Early Detection of Lung Cancer
by Manuel Vinhas, Pedro M. Leitão, Bernardo S. Raimundo, Nuno Gil, Pedro D. Vaz and Fernando Luis-Ferreira
Cancers 2024, 16(12), 2200; https://doi.org/10.3390/cancers16122200 - 12 Jun 2024
Viewed by 1410
Abstract
Volatile organic compounds (VOCs) are an increasingly meaningful method for the early detection of various types of cancers, including lung cancer, through non-invasive methods. Traditional cancer detection techniques such as biopsies, imaging, and blood tests, though effective, often involve invasive procedures or are costly, time consuming, and painful. Recent advancements in technology have led to the exploration of VOC detection as a promising non-invasive and comfortable alternative. VOCs are organic chemicals that have a high vapor pressure at room temperature, making them readily detectable in breath, urine, and skin. The present study leverages artificial intelligence (AI) and machine learning algorithms to enhance classification accuracy and efficiency in detecting lung cancer through VOC analysis collected from exhaled breath air. Unlike other studies that primarily focus on identifying specific compounds, this study takes an agnostic approach, maximizing detection efficiency over the identification of specific compounds focusing on the overall compositional profiles and their differences across groups of patients. The results reported hereby uphold the potential of AI-driven techniques in revolutionizing early cancer detection methodologies towards their implementation in a clinical setting. Full article
(This article belongs to the Special Issue Image Analysis and Machine Learning in Cancers)
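The "agnostic" profile-level approach described above, classifying whole compositional profiles rather than named compounds, can be illustrated with a deliberately simple nearest-centroid rule on synthetic data (the channel count, group shift, and classifier are all invented for illustration; the paper's actual models are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
n_features = 50                      # one intensity per (hypothetical) VOC channel

# Synthetic compositional profiles: cases shifted on a subset of channels.
controls = rng.normal(0.0, 1.0, size=(40, n_features))
cases = rng.normal(0.0, 1.0, size=(40, n_features))
cases[:, :10] += 1.5                 # a group-level difference, no compound named

X = np.vstack([controls, cases])
y = np.array([0] * 40 + [1] * 40)

# Standardize, then classify by nearest class centroid -- an intentionally
# "agnostic" rule that uses the whole profile rather than specific compounds.
X = (X - X.mean(0)) / X.std(0)
centroids = np.stack([X[y == c].mean(0) for c in (0, 1)])
pred = np.argmin(((X[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```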

20 pages, 14990 KiB  
Article
Three-Dimension Epithelial Segmentation in Optical Coherence Tomography of the Oral Cavity Using Deep Learning
by Chloe Hill, Jeanie Malone, Kelly Liu, Samson Pak-Yan Ng, Calum MacAulay, Catherine Poh and Pierre Lane
Cancers 2024, 16(11), 2144; https://doi.org/10.3390/cancers16112144 - 5 Jun 2024
Cited by 3 | Viewed by 1348
Abstract
This paper aims to simplify the application of optical coherence tomography (OCT) for the examination of subsurface morphology in the oral cavity and reduce barriers towards the adoption of OCT as a biopsy guidance device. The aim of this work was to develop automated software tools for the simplified analysis of the large volume of data collected during OCT. Imaging and corresponding histopathology were acquired in-clinic using a wide-field endoscopic OCT system. An annotated dataset (n = 294 images) from 60 patients (34 male and 26 female) was assembled to train four unique neural networks. A deep learning pipeline was built using convolutional and modified u-net models to detect the imaging field of view (network 1), detect artifacts (network 2), identify the tissue surface (network 3), and identify the presence and location of the epithelial–stromal boundary (network 4). The area under the curve of the image and artifact detection networks was 1.00 and 0.94, respectively. The Dice similarity score for the surface and epithelial–stromal boundary segmentation networks was 0.98 and 0.83, respectively. Deep learning (DL) techniques can identify the location and variations in the epithelial surface and epithelial–stromal boundary in OCT images of the oral mucosa. Segmentation results can be synthesized into accessible en face maps to allow easier visualization of changes. Full article
(This article belongs to the Special Issue Image Analysis and Machine Learning in Cancers)
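The en face maps mentioned above collapse a 3D OCT volume into a 2D image, for instance by mapping the depth of a detected boundary relative to the tissue surface. A toy sketch of that projection step (the volume, flat surface, and linear boundary below are synthetic; the paper's surface and boundary come from its neural networks):

```python
import numpy as np

def en_face_depth_map(volume, surface):
    """Collapse a 3D OCT volume (x, y, depth) into a 2D en face map:
    depth of the strongest sub-surface reflection, measured in voxels
    relative to the detected tissue surface."""
    deepest = volume.argmax(axis=2)            # brightest voxel along depth
    return deepest - surface

# Toy volume: a bright boundary layer whose depth varies across the field.
x, y, z = 8, 8, 32
volume = np.zeros((x, y, z))
surface = np.zeros((x, y), dtype=int)          # flat tissue surface at depth 0
boundary = 5 + np.arange(x)[:, None] + np.zeros((1, y), dtype=int)
volume[np.arange(x)[:, None], np.arange(y)[None, :], boundary] = 1.0
thickness = en_face_depth_map(volume, surface)
```

The resulting `thickness` map is the kind of accessible 2D visualization the authors synthesize from their segmentation outputs.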

35 pages, 5607 KiB  
Article
Radiomics Machine Learning Analysis of Clear Cell Renal Cell Carcinoma for Tumour Grade Prediction Based on Intra-Tumoural Sub-Region Heterogeneity
by Abeer J. Alhussaini, J. Douglas Steele, Adel Jawli and Ghulam Nabi
Cancers 2024, 16(8), 1454; https://doi.org/10.3390/cancers16081454 - 10 Apr 2024
Cited by 2 | Viewed by 3328
Abstract
Background: Renal cancers are among the top ten causes of cancer-specific mortality, of which the ccRCC subtype is responsible for most cases. The grading of ccRCC is important in determining tumour aggressiveness and clinical management. Objectives: The objectives of this research were to predict the WHO/ISUP grade of ccRCC pre-operatively and characterise the heterogeneity of tumour sub-regions using radiomics and ML models, including comparison with pre-operative biopsy-determined grading in a sub-group. Methods: Data were obtained from multiple institutions across two countries, including 391 patients with pathologically proven ccRCC. For analysis, the data were separated into four cohorts. Cohorts 1 and 2 included data from the respective institutions from the two countries, cohort 3 was the combined data from cohorts 1 and 2, and cohort 4 was a subset of cohort 1, for which both the biopsy and the subsequent histology from resection (partial or total nephrectomy) were available. 3D image segmentation was carried out to derive a voxel of interest (VOI) mask. Radiomics features were then extracted from the contrast-enhanced images, and the data were normalised. The Pearson correlation coefficient and the XGBoost model were used to reduce the dimensionality of the features. Thereafter, 11 ML algorithms were implemented to predict the ccRCC grade and characterise the heterogeneity of sub-regions in the tumours. Results: For cohort 1, the 50% tumour core and 25% tumour periphery exhibited the best performance, with average AUCs of 77.9% and 78.6%, respectively. The 50% tumour core presented the highest performance in cohorts 2 and 3, with average AUC values of 87.6% and 76.9%, respectively. With the 25% periphery, cohort 4 showed AUC values of 95.0% and 80.0% for grade prediction when using internal and external validation, respectively, while biopsy histology had an AUC of 31.0% for the classification with the final grade of resection histology as a reference standard. The CatBoost classifier was the best for each of the four cohorts, with average AUCs of 80.0%, 86.5%, 77.0%, and 90.3% for cohorts 1, 2, 3, and 4, respectively. Conclusions: Radiomics signatures combined with ML have the potential to predict the WHO/ISUP grade of ccRCC with performance superior to pre-operative biopsy. Moreover, tumour sub-regions contain useful information that should be analysed independently when determining the tumour grade. It is therefore possible to distinguish the grade of ccRCC pre-operatively to improve patient care and management. Full article
(This article belongs to the Special Issue Image Analysis and Machine Learning in Cancers)
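The Pearson-correlation step in the radiomics pipeline above is typically a redundancy filter: drop features that are nearly collinear with features already kept. A generic sketch of that idea (the 0.85 threshold and the synthetic features are illustrative, not the paper's settings):

```python
import numpy as np

def drop_correlated_features(X, threshold=0.85):
    """Greedy redundancy filter: keep a feature only if its absolute Pearson
    correlation with every already-kept feature stays below the threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in kept):
            kept.append(j)
    return kept

rng = np.random.default_rng(0)
base = rng.normal(size=200)
X = np.column_stack([
    base,                                    # feature 0
    base * 2.0 + rng.normal(0, 0.01, 200),   # near-duplicate of feature 0
    rng.normal(size=200),                    # independent feature
])
kept = drop_correlated_features(X)           # the duplicate is dropped
```

In a full pipeline the surviving columns would then go to a model-based selector (XGBoost feature importance, in the paper) before classification.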

22 pages, 1094 KiB  
Article
Improving Skin Lesion Segmentation with Self-Training
by Aleksandra Dzieniszewska, Piotr Garbat and Ryszard Piramidowicz
Cancers 2024, 16(6), 1120; https://doi.org/10.3390/cancers16061120 - 11 Mar 2024
Cited by 1 | Viewed by 1702
Abstract
Skin lesion segmentation plays a key role in the diagnosis of skin cancer; it can be a component in both traditional algorithms and end-to-end approaches. The quality of segmentation directly impacts the accuracy of classification; however, attaining optimal segmentation necessitates a substantial amount of labeled data. Semi-supervised learning allows for employing unlabeled data to enhance the results of the machine learning model. In the case of medical image segmentation, acquiring detailed annotation is time-consuming and costly and requires skilled individuals so the utilization of unlabeled data allows for a significant mitigation of manual segmentation efforts. This study proposes a novel approach to semi-supervised skin lesion segmentation using self-training with a Noisy Student. This approach allows for utilizing large amounts of available unlabeled images. It consists of four steps—first, training the teacher model on labeled data only, then generating pseudo-labels with the teacher model, training the student model on both labeled and pseudo-labeled data, and lastly, training the student* model on pseudo-labels generated with the student model. In this work, we implemented DeepLabV3 architecture as both teacher and student models. As a final result, we achieved a mIoU of 88.0% on the ISIC 2018 dataset and a mIoU of 87.54% on the PH2 dataset. The evaluation of the proposed approach shows that Noisy Student training improves the segmentation performance of neural networks in a skin lesion segmentation task while using only small amounts of labeled data. Full article
(This article belongs to the Special Issue Image Analysis and Machine Learning in Cancers)
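The four-step self-training loop described above can be sketched with any trainable model; here a deliberately tiny nearest-centroid classifier stands in for the paper's DeepLabV3, and the data are synthetic two-cluster points rather than lesion images:

```python
import numpy as np

def fit_centroids(X, y):
    """'Train' a nearest-centroid classifier: one mean vector per class."""
    return np.stack([X[y == c].mean(0) for c in np.unique(y)])

def predict(centroids, X):
    return np.argmin(((X[:, None, :] - centroids) ** 2).sum(-1), axis=1)

rng = np.random.default_rng(0)
labeled_X = np.vstack([rng.normal(-2, 1, (5, 2)), rng.normal(2, 1, (5, 2))])
labeled_y = np.array([0] * 5 + [1] * 5)
unlabeled_X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])

# Step 1: teacher trained on the small labeled set only.
teacher = fit_centroids(labeled_X, labeled_y)
# Step 2: teacher generates pseudo-labels for the unlabeled pool.
pseudo_y = predict(teacher, unlabeled_X)
# Step 3: student trained on labeled + pseudo-labeled data (injecting noise
# here, e.g. input augmentation, is what makes it a "Noisy" Student).
student = fit_centroids(np.vstack([labeled_X, unlabeled_X]),
                        np.concatenate([labeled_y, pseudo_y]))
# Step 4 would iterate: the student becomes the next round's teacher.
```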

16 pages, 4516 KiB  
Article
Dung Beetle Optimization with Deep Feature Fusion Model for Lung Cancer Detection and Classification
by Mohammad Alamgeer, Nuha Alruwais, Haya Mesfer Alshahrani, Abdullah Mohamed and Mohammed Assiri
Cancers 2023, 15(15), 3982; https://doi.org/10.3390/cancers15153982 - 5 Aug 2023
Cited by 14 | Viewed by 2021
Abstract
Lung cancer is the main cause of cancer deaths worldwide, and an important reason for these deaths is late diagnosis and poor prognosis. With the accelerated improvement of deep learning (DL) approaches, DL can be effectively and widely applied to several real-world healthcare applications, such as medical image interpretation and disease analysis. Medical imaging devices are vital in primary-stage lung tumor analysis and in monitoring lung tumors during treatment. Many medical imaging modalities, such as computed tomography (CT), chest X-ray (CXR), molecular imaging, magnetic resonance imaging (MRI), and positron emission tomography (PET), are widely analyzed for lung cancer detection. This article presents a new dung beetle optimization modified deep feature fusion model for lung cancer detection and classification (DBOMDFF-LCC) technique. The presented DBOMDFF-LCC technique mainly depends upon feature fusion and a hyperparameter tuning process. To accomplish this, the DBOMDFF-LCC technique uses a feature fusion process comprising three DL models, namely residual network (ResNet), densely connected network (DenseNet), and Inception-ResNet-v2. Furthermore, the DBO approach was employed for the optimal hyperparameter selection of the three DL approaches. For lung cancer detection, the DBOMDFF-LCC system utilizes a long short-term memory (LSTM) approach. The performance of the DBOMDFF-LCC technique was evaluated on a medical dataset using different evaluation metrics, and the extensive comparative results highlighted its superior performance in lung cancer classification. Full article
(This article belongs to the Special Issue Image Analysis and Machine Learning in Cancers)
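The deep feature fusion at the heart of the model above is, in its simplest form, concatenation of per-image embeddings from several backbones into one long descriptor. A sketch with random stand-in embeddings (the dimensions are typical for these architectures but illustrative here):

```python
import numpy as np

rng = np.random.default_rng(0)
n_images = 4

# Stand-ins for per-image embeddings from three backbones (ResNet,
# DenseNet, Inception-ResNet-v2 in the paper); values are random.
resnet_feats = rng.normal(size=(n_images, 2048))
densenet_feats = rng.normal(size=(n_images, 1024))
inception_feats = rng.normal(size=(n_images, 1536))

# Fusion by concatenation: one long descriptor per image, which the
# downstream classifier (an LSTM in the paper) then consumes.
fused = np.concatenate([resnet_feats, densenet_feats, inception_feats], axis=1)
```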

13 pages, 2075 KiB  
Article
Machine Learning Integrating 99mTc Sestamibi SPECT/CT and Radiomics Data Achieves Optimal Characterization of Renal Oncocytic Tumors
by Michail E. Klontzas, Emmanouil Koltsakis, Georgios Kalarakis, Kiril Trpkov, Thomas Papathomas, Apostolos H. Karantanas and Antonios Tzortzakakis
Cancers 2023, 15(14), 3553; https://doi.org/10.3390/cancers15143553 - 9 Jul 2023
Cited by 6 | Viewed by 3251
Abstract
The increasing evidence of oncocytic renal tumors positive in 99mTc Sestamibi Single Photon Emission Tomography/Computed Tomography (SPECT/CT) examination calls for the development of diagnostic tools to differentiate these tumors from more aggressive forms. This study combined radiomics analysis with the uptake of 99mTc Sestamibi on SPECT/CT to differentiate benign renal oncocytic neoplasms from renal cell carcinoma. A total of 57 renal tumors were prospectively collected. Histopathological analysis and radiomics data extraction were performed. XGBoost classifiers were trained using the radiomics features alone and combined with the results from the visual evaluation of 99mTc Sestamibi SPECT/CT examination. The combined SPECT/radiomics model achieved higher accuracy (95%) with an area under the curve (AUC) of 98.3% (95% CI 93.7–100%) than the radiomics-only model (71.67%) with an AUC of 75% (95% CI 49.7–100%) and visual evaluation of 99mTc Sestamibi SPECT/CT alone (90.8%) with an AUC of 90.8% (95% CI 82.5–99.1%). The positive predictive values of the SPECT/radiomics, radiomics-only, and 99mTc Sestamibi SPECT/CT-only models were 100%, 85.71%, and 85%, respectively, whereas the negative predictive values were 85.71%, 55.56%, and 94.6%, respectively. Feature importance analysis revealed that 99mTc Sestamibi uptake was the most influential attribute in the combined model. This study highlights the potential of combining radiomics analysis with 99mTc Sestamibi SPECT/CT to improve the preoperative characterization of benign renal oncocytic neoplasms. The proposed SPECT/radiomics classifier outperformed the visual evaluation of 99mTc Sestamibi SPECT/CT and the radiomics-only model, demonstrating that the integration of 99mTc Sestamibi SPECT/CT and radiomics data provides improved diagnostic performance, with minimal false positive and false negative results. Full article
(This article belongs to the Special Issue Image Analysis and Machine Learning in Cancers)
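The positive and negative predictive values quoted above come straight from the confusion matrix. A small reference implementation with toy binary predictions (the labels below are invented, not the study's data):

```python
import numpy as np

def predictive_values(y_true, y_pred):
    """Positive/negative predictive value from binary predictions."""
    y_true, y_pred = np.asarray(y_true, bool), np.asarray(y_pred, bool)
    tp = np.sum(y_pred & y_true)
    fp = np.sum(y_pred & ~y_true)
    tn = np.sum(~y_pred & ~y_true)
    fn = np.sum(~y_pred & y_true)
    ppv = tp / (tp + fp)     # of all positive calls, how many were right
    npv = tn / (tn + fn)     # of all negative calls, how many were right
    return float(ppv), float(npv)

# 1 = malignant, 0 = benign oncocytic neoplasm (toy predictions).
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 0, 1, 1]
ppv, npv = predictive_values(y_true, y_pred)   # TP=3, FP=1, TN=3, FN=1
```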

19 pages, 15340 KiB  
Article
Al-Biruni Earth Radius Optimization with Transfer Learning Based Histopathological Image Analysis for Lung and Colon Cancer Detection
by Rayed AlGhamdi, Turky Omar Asar, Fatmah Y. Assiri, Rasha A. Mansouri and Mahmoud Ragab
Cancers 2023, 15(13), 3300; https://doi.org/10.3390/cancers15133300 - 23 Jun 2023
Cited by 10 | Viewed by 2066
Abstract
An early diagnosis of lung and colon cancer (LCC) is critical for improved patient outcomes and effective treatment. Histopathological image (HSI) analysis has emerged as a robust tool for cancer diagnosis. HSI analysis for an LCC diagnosis includes the analysis and examination of tissue samples obtained from the LCC to recognize lesions or cancerous cells. It has a significant role in the staging and diagnosis of this tumor, which aids in the prognosis and treatment planning, but a manual analysis of the image is subject to human error and is also time-consuming. Therefore, a computer-aided approach is needed for the detection of LCC using HSI. Transfer learning (TL) leverages pretrained deep learning (DL) algorithms that have been trained on a larger dataset to extract related features from the HSI, which are then used for training a classifier for a tumor diagnosis. This manuscript offers the design of the Al-Biruni Earth Radius Optimization with Transfer Learning-based Histopathological Image Analysis for Lung and Colon Cancer Detection (BERTL-HIALCCD) technique. The purpose of the study is to detect LCC effectually in histopathological images. To execute this, the BERTL-HIALCCD method follows the concepts of computer vision (CV) and transfer learning for accurate LCC detection. When using the BERTL-HIALCCD technique, an improved ShuffleNet model is applied for the feature extraction process, and its hyperparameters are chosen by the BER system. For the effectual recognition of LCC, a deep convolutional recurrent neural network (DCRNN) model is applied. Finally, the coati optimization algorithm (COA) is exploited for the parameter choice of the DCRNN approach. For examining the efficacy of the BERTL-HIALCCD technique, a comprehensive group of experiments was conducted on a large dataset of histopathological images. The experimental outcomes demonstrate that the combination of the BER and COA algorithms attains an improved performance in cancer detection over the compared models. Full article
(This article belongs to the Special Issue Image Analysis and Machine Learning in Cancers)
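The transfer-learning recipe described above (a pretrained backbone kept frozen, with only a small classifier head trained on the new task) can be sketched without any deep-learning framework. Here the "pretrained" extractor is mocked as a fixed random projection, and the head is a logistic regression fitted by gradient descent; everything, including the data, is synthetic and illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" feature extractor, mocked as a fixed linear projection;
# in the paper this role is played by an improved ShuffleNet.
W_frozen = rng.normal(size=(64, 8))
extract = lambda X: X @ W_frozen               # weights are never updated

# Toy two-class "image" data flattened to 64-dim vectors.
X0 = rng.normal(-1.0, 1.0, (50, 64))
X1 = rng.normal(+1.0, 1.0, (50, 64))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# Only the small classifier head is trained -- the essence of transfer
# learning: reuse frozen features, fit a cheap task-specific head.
feats = extract(X)
feats = (feats - feats.mean(0)) / feats.std(0)  # standardize head inputs
w, b = np.zeros(feats.shape[1]), 0.0
for _ in range(500):                            # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    w -= 0.5 * feats.T @ (p - y) / len(y)
    b -= 0.5 * (p - y).mean()
accuracy = ((1.0 / (1.0 + np.exp(-(feats @ w + b))) > 0.5) == y).mean()
```

In the real pipeline, `W_frozen` would be millions of pretrained convolutional weights and the head a full classifier, but the division of labor is the same.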
