Equilibrium Optimization-Based Ensemble CNN Framework for Breast Cancer Multiclass Classification Using Histopathological Image
Abstract
1. Introduction
- An original ensemble model based on CNN models is proposed for classifying eight breast cancer types.
- A novel CNN model called MultiHisNet is proposed to classify breast tumors.
- The results of the best 20 of 110 custom CNN models with different architectures are reported, showing how the different architectures behave on the challenges posed by the BreakHis dataset.
- Optimum ensemble model weights are determined by the Equilibrium Optimizer (EO) algorithm (a weighted-fusion sketch follows this list).
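The last highlight describes the core of the framework: each base CNN contributes a softmax probability vector, and the Equilibrium Optimizer searches for the fusion weights that maximize validation accuracy. The sketch below illustrates only that weighted soft-voting objective; the data are synthetic placeholders and a plain random search stands in for the Equilibrium Optimizer, so nothing here should be read as the paper's implementation.

```python
import numpy as np

def weighted_vote(prob_list, weights):
    """Fuse per-model softmax outputs with weights normalized to sum to 1 (soft voting)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, prob_list))

def ensemble_accuracy(weights, prob_list, y_true):
    """Objective maximized by the weight search: accuracy of the fused prediction."""
    fused = weighted_vote(prob_list, weights)
    return float(np.mean(np.argmax(fused, axis=1) == y_true))

# Synthetic stand-ins: three models' (n_samples, 8) softmax outputs and true labels.
rng = np.random.default_rng(0)
y_val = rng.integers(0, 8, size=200)
val_probs = [rng.dirichlet(np.ones(8), size=200) for _ in range(3)]

# A plain random search replaces the Equilibrium Optimizer used in the paper.
best_w, best_acc = None, -1.0
for _ in range(2000):
    w = rng.random(3)
    acc = ensemble_accuracy(w, val_probs, y_val)
    if acc > best_acc:
        best_w, best_acc = w / w.sum(), acc
print("best weights:", np.round(best_w, 2), "val accuracy:", best_acc)
```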
2. Background
3. Materials and Methods
3.1. Dataset
3.2. Proposed Framework
3.3. Transfer Learning
3.4. Proposed Custom Model Development
3.5. Ensemble Model
Equilibrium Optimizer
4. Results and Discussion
4.1. Results of Transfer Learning
4.2. Results of Custom Model
4.3. Detailed Analysis of the Best-Performing Models
4.4. Results of the Ensemble Model
4.5. Discussion
4.5.1. Comparison of the Models
4.5.2. Comparison with Similar Studies
5. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. WHO. WHO Launches New Roadmap on Breast Cancer. Available online: https://www.who.int/news/item/03-02-2023-who-launches-new-roadmap-on-breast-cancer (accessed on 1 June 2024).
2. Yersal, O.; Barutca, S. Biological subtypes of breast cancer: Prognostic and therapeutic implications. World J. Clin. Oncol. 2014, 5, 412.
3. Kaya, M.; Çetin-Kaya, Y. A novel ensemble learning framework based on a genetic algorithm for the classification of pneumonia. Eng. Appl. Artif. Intell. 2024, 133, 108494.
4. Çetin-Kaya, Y.; Kaya, M. A Novel Ensemble Framework for Multi-Classification of Brain Tumors Using Magnetic Resonance Imaging. Diagnostics 2024, 14, 383.
5. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
6. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
7. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; Van Der Laak, J.A.; Van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88.
8. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights Into Imaging 2018, 9, 611–629.
9. Kaya, M. Feature fusion-based ensemble CNN learning optimization for automated detection of pediatric pneumonia. Biomed. Signal Process. Control 2024, 87, 105472.
10. Kaya, M.; Çetin-Kaya, Y. A Novel Deep Learning Architecture Optimization for Multiclass Classification of Alzheimer’s Disease Level. IEEE Access 2024, 12, 46562–46581.
11. Boumaraf, S.; Liu, X.; Zheng, Z.; Ma, X.; Ferkous, C. A new transfer learning based approach to magnification dependent and independent classification of breast cancer in histopathological images. Biomed. Signal Process. Control 2021, 63, 102192.
12. Spanhol, F.A.; Oliveira, L.S.; Petitjean, C.; Heutte, L. Breast cancer histopathological image classification using convolutional neural networks. In Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 2560–2567.
13. Vikranth, C.S.; Jagadeesh, B.; Rakesh, K.; Mohammad, D.; Krishna, S.; AS, R.A. Computer assisted diagnosis of breast cancer using histopathology images and convolutional neural networks. In Proceedings of the 2022 2nd International Conference on Artificial Intelligence and Signal Processing (AISP), Vijayawada, India, 12–14 February 2022; pp. 1–6.
14. Xu, X.; An, M.; Zhang, J.; Liu, W.; Lu, L. A High-Precision Classification Method of Mammary Cancer Based on Improved DenseNet Driven by an Attention Mechanism. Comput. Math. Methods Med. 2022, 2022, 8585036.
15. Yari, Y.; Nguyen, H.; Nguyen, T.V. Accuracy improvement in binary and multi-class classification of breast histopathology images. In Proceedings of the 2020 IEEE Eighth International Conference on Communications and Electronics (ICCE), Phu Quoc Island, Vietnam, 13–15 January 2021; pp. 376–381.
16. Zaalouk, A.M.; Ebrahim, G.A.; Mohamed, H.K.; Hassan, H.M.; Zaalouk, M.M. A deep learning computer-aided diagnosis approach for breast cancer. Bioengineering 2022, 9, 391.
17. Mewada, H. Extended Deep-Learning Network for Histopathological Image-Based Multiclass Breast Cancer Classification Using Residual Features. Symmetry 2024, 16, 507.
18. Mewada, H.; Al-Asad, J.F.; Patel, A.; Chaudhari, J.; Mahant, K.; Vala, A. Multi-Channel Local Binary Pattern Guided Convolutional Neural Network for Breast Cancer Classification. Open Biomed. Eng. J. 2021, 15, 132–140.
19. Han, Z.; Wei, B.; Zheng, Y.; Yin, Y.; Li, K.; Li, S. Breast cancer multi-classification from histopathological images with structured deep learning model. Sci. Rep. 2017, 7, 4172.
20. Mewada, H.K.; Patel, A.V.; Hassaballah, M.; Alkinani, M.H.; Mahant, K. Spectral–spatial features integrated convolution neural network for breast cancer classification. Sensors 2020, 20, 4747.
21. Umer, M.J.; Sharif, M.; Kadry, S.; Alharbi, A. Multi-class classification of breast cancer using 6b-net with deep feature fusion and selection method. J. Pers. Med. 2022, 12, 683.
22. Garg, S.; Singh, P. Transfer learning based lightweight ensemble model for imbalanced breast cancer classification. IEEE/ACM Trans. Comput. Biol. Bioinform. 2022, 20, 1529–1539.
23. Kassani, S.H.; Kassani, P.H.; Wesolowski, M.J.; Schneider, K.A.; Deters, R. Classification of histopathological biopsy images using ensemble of deep learning networks. arXiv 2019, arXiv:1909.11870.
24. He, Z.; Lin, M.; Xu, Z.; Yao, Z.; Chen, H.; Alhudhaif, A.; Alenezi, F. Deconv-transformer (DecT): A histopathological image classification model for breast cancer based on color deconvolution and transformer architecture. Inf. Sci. 2022, 608, 1093–1112.
25. Tummala, S.; Kim, J.; Kadry, S. BreaST-Net: Multi-class classification of breast cancer from histopathological images using ensemble of swin transformers. Mathematics 2022, 10, 4109.
26. Feldman, V. Does learning require memorization? A short tale about a long tail. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, Chicago, IL, USA, 22–26 June 2020; pp. 954–959.
27. Zhang, C.; Bengio, S.; Hardt, M.; Recht, B.; Vinyals, O. Understanding deep learning (still) requires rethinking generalization. Commun. ACM 2021, 64, 107–115.
28. Lakshmi Priya, C.V.; Biju, V.G.; Vinod, B.R.; Ramachandran, S. Deep learning approaches for breast cancer detection in histopathology images: A review. Cancer Biomark. 2024, 40, 1–25.
29. Spanhol, F.A.; Oliveira, L.S.; Petitjean, C.; Heutte, L. A dataset for breast cancer histopathological image classification. IEEE Trans. Biomed. Eng. 2015, 63, 1455–1462.
30. Zerouaoui, H.; Idri, A. Deep hybrid architectures for binary classification of medical breast cancer images. Biomed. Signal Process. Control 2022, 71, 103226.
31. Srikantamurthy, M.M.; Rallabandi, V.S.; Dudekula, D.B.; Natarajan, S.; Park, J. Classification of benign and malignant subtypes of breast cancer histopathology imaging using hybrid CNN-LSTM based transfer learning. BMC Med. Imaging 2023, 23, 19.
32. He, H.; Garcia, E.A. Learning from imbalanced data. IEEE Trans. Knowl. Data Eng. 2009, 21, 1263–1284.
33. Sun, Y.; Wong, A.K.; Kamel, M.S. Classification of imbalanced data: A review. Int. J. Pattern Recognit. Artif. Intell. 2009, 23, 687–719.
34. Hua, B.-S.; Tran, M.-K.; Yeung, S.-K. Pointwise convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 984–993.
35. Redmon, J. Darknet: Open Source Neural Networks in C. Available online: https://pjreddie.com/darknet/ (accessed on 1 June 2024).
36. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767.
37. Sharma, P.; Nayak, D.R.; Balabantaray, B.K.; Tanveer, M.; Nayak, R. A survey on cancer detection via convolutional neural networks: Current challenges and future directions. Neural Netw. 2023, 169, 637–659.
38. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
39. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
40. Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Wu, E. Squeeze-and-Excitation Networks. arXiv 2017, arXiv:1709.01507.
41. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
42. Ganaie, M.A.; Hu, M.; Malik, A.K.; Tanveer, M.; Suganthan, P.N. Ensemble deep learning: A review. Eng. Appl. Artif. Intell. 2022, 115, 105151.
43. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl. Based Syst. 2020, 191, 105190.
44. Rai, R.; Dhal, K.G. Recent developments in equilibrium optimizer algorithm: Its variants and applications. Arch. Comput. Methods Eng. 2023, 30, 3791–3844.
45. Kiziloluk, S.; Sert, E.; Hammad, M.; Tadeusiewicz, R.; Pławiak, P. EO-CNN: Equilibrium Optimization-Based hyperparameter tuning for enhanced pneumonia and COVID-19 detection using AlexNet and DarkNet19. Biocybern. Biomed. Eng. 2024, 44, 635–650.
46. Wang, C.; Xiao, F.; Zhang, W.; Huang, S.; Zhang, W.; Zou, P. Transfer Learning and Attention Mechanism for Breast Cancer Classification. In Proceedings of the 2021 17th International Conference on Computational Intelligence and Security (CIS), Chengdu, China, 19–22 November 2021; pp. 75–79.
Reference | Method | Classification | Magnification | Performance (Accuracy %)
---|---|---|---|---
Spanhol et al. [12] | Transfer Learning | Binary | MD | 80.8–85.6%
Zerouaoui et al. [30] | Feature Extraction + ML algorithms | Binary | MD | 91.73–93.93%
Mewada et al. [20] | Custom CNN | Binary | MD | 97.02–97.58%
Garg and Singh [22] | Ensemble Model | Binary | MD | 96.84–98.78%
Han et al. [19] | Custom CNN + Transfer Learning | Binary | MD | 92.9–96.9%
 | | Multiclass | MD | 92.8–93.9%
Boumaraf et al. [11] | Transfer Learning | Binary | MD | 98.84%
 | | Binary | MI | 98.42%
 | | Multiclass | MD | 92.15%
 | | Multiclass | MI | 92.03%
Yari et al. [15] | Transfer Learning | Binary | MD | 97.12–99.05%
 | | Binary | MI | 99.01%
 | | Multiclass | MD | 94.23–97.96%
 | | Multiclass | MI | 94.33%
Vikranth et al. [13] | Transfer Learning | Binary | MD | 98%
 | | Binary | MI | 97%
 | | Multiclass | MD | 86–91%
 | | Multiclass | MI | 92%
Zaalouk et al. [16] | Transfer Learning | Binary | MD | 99.42–100%
 | | Binary | MI | 98.99%
 | | Multiclass | MD | 90.22–97.01%
 | | Multiclass | MI | 93.32%
Xu et al. [14] | Transfer Learning | Binary | MD | 99.05–99.89%
 | | Multiclass | MD | 94.36–98.41%
Mewada [17] | Transfer Learning | Binary | MD | 94.65–100%
 | | Multiclass | MD | 96.76–97.59%
Umer et al. [21] | Ensemble Model (6B-Net) | Multiclass | MI | 90.1%
He et al. [24] | Transformers | Binary | MI | 93.02%
Tummala et al. [25] | Ensemble of SwinT | Binary | MI | 99.6%
 | | Multiclass | MD | 92.6–96.0%
 | | Multiclass | MI | 93.4%
Srikantamurthy et al. [31] | CNN + LSTM | Binary | MD | 98.07–99.75%
 | | Multiclass | MD | 88.04–96.3%

MD: magnification-dependent; MI: magnification-independent.
Class | Train | Test | Total |
---|---|---|---|
Adenosis (A) | 355 | 89 | 444 |
Ductal Carcinoma (DC) | 2761 | 690 | 3451 |
Fibroadenoma (F) | 811 | 203 | 1014 |
Lobular Carcinoma (LC) | 501 | 125 | 626 |
Mucinous Carcinoma (MC) | 634 | 158 | 792 |
Papillary Carcinoma (PC) | 448 | 112 | 560 |
Phyllodes Tumor (PT) | 362 | 91 | 453 |
Tubular Adenoma (TA) | 455 | 114 | 569 |
Total | 6327 | 1582 | 7909 |
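The class distribution above is markedly imbalanced: ductal carcinoma alone accounts for 2761 of the 6327 training images, while adenosis has only 355. How the study compensates for this is not stated in the table; one common option, shown here purely as a hedged illustration, is to derive inverse-frequency class weights from the training counts and pass them to the loss function.

```python
# Training counts taken from the table above (BreakHis, eight classes).
train_counts = {"A": 355, "DC": 2761, "F": 811, "LC": 501,
                "MC": 634, "PC": 448, "PT": 362, "TA": 455}

total = sum(train_counts.values())            # 6327
n_classes = len(train_counts)                 # 8

# Inverse-frequency weights; a perfectly balanced class would get weight 1.0.
class_weights = {c: total / (n_classes * n) for c, n in train_counts.items()}
for name, w in sorted(class_weights.items()):
    print(f"{name}: {w:.2f}")                 # e.g. DC ~ 0.29, A ~ 2.23
```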
Hyperparameter | Range | Best |
---|---|---|
Dense Layer 1 | 128, 256, 512, 1024, 2048 | 512 |
Dense Layer 2 | 128, 256, 512, 1024, 2048 | 256 |
Dropout | 0.2, 0.3, 0.4, 0.5 | 0.2 |
Optimization algorithm | Adam, SGDNesterov | Adam |
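The best values in this table define the classification head placed on top of the convolutional feature extractor: a 512-unit dense layer, dropout of 0.2, a 256-unit dense layer, and the Adam optimizer. A minimal Keras sketch is given below; the global average pooling layer, input size, backbone choice, and default learning rate are assumptions for illustration rather than settings reported in the table.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_head(feature_extractor, num_classes=8):
    """Attach the tuned head: GAP -> Dense(512) -> Dropout(0.2) -> Dense(256) -> softmax."""
    x = layers.GlobalAveragePooling2D()(feature_extractor.output)
    x = layers.Dense(512, activation="relu")(x)    # Dense Layer 1 (best: 512)
    x = layers.Dropout(0.2)(x)                     # Dropout (best: 0.2)
    x = layers.Dense(256, activation="relu")(x)    # Dense Layer 2 (best: 256)
    out = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(feature_extractor.input, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(),   # best optimizer: Adam
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Example with a hypothetical backbone (DenseNet169 appears among the transfer models).
backbone = tf.keras.applications.DenseNet169(include_top=False, input_shape=(224, 224, 3))
model = build_head(backbone)
```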
Layer | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10
---|---|---|---|---|---|---|---|---|---|---
CNN Block1 | √ | √ | √ | √ | √ | √ | √ | √ | √ | √
CNN Block2 | √ | √ | √ | √ | √ | √ | √ | √ | √ | √
CNN Block3 | √ | √ | √ | √ | √ | √ | √ | √ | √ | √
CNN Block4 | √ | √ | √ | √ | √ | √ | √ | √ | √ | √
CNN Block5 | √ | √ | √ | √ | √ | √ | √ | √ | √ | √
CNN Block6 | - | - | - | - | - | - | √ | √ | √ | √
CNN Block7 | - | - | - | - | - | - | - | - | - | √
Flatten | GAP | GAP | GAP | GAP | GAP | GAP | GAP | GAP | GAP | GAP
Dense1 | 512 | 512 | 512 | 512 | 1024 | 512 | 512 | 512 | 512 | 512
Dropout | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.3 | 0.2 | 0.4 | 0.2 | 0.2
Dense2 | 256 | 256 | 256 | 256 | 256 | 128 | 128 | 128 | 96 | 128
Layer | C11 | C12 | C13 | C14 | C15 | C16 | C17 | C18 | C19 | C20
---|---|---|---|---|---|---|---|---|---|---
CNN Block1 | √ | √ | √ | √ | √ | √ | √ | √ | √ | √
CNN Block2 | √ | √ | √ | √ | √ | √ | √ | √ | √ | √
CNN Block3 | √ | √ | √ | √ | √ | √ | √ | √ | √ | √
CNN Block4 | √ | √ | √ | √ | √ | √ | √ | √ | √ | √
CNN Block5 | √ | √ | √ | √ | √ | √ | √ | √ | √ | √
CNN Block6 | - | √ | √ | √ | √ | √ | √ | √ | √ | √
Flatten | GAP | GAP | GAP | GAP | GAP | GAP | GAP | GAP | GAP | GAP
Dense1 | 512 | 512 | 512 | 512 | 512 | 512 | 512 | 512 | 512 | 512
Dropout | 0.4 | 0.4 | 0.4 | 0.3 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4
Dense2 | 128 | 128 | 128 | 128 | 128 | 128 | 128 | 128 | 128 | 128
Model | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) |
---|---|---|---|---|
DenseNet121 | 92.35 | 90.89 | 94.27 | 92.32 |
DenseNet169 | 93.49 | 92.17 | 93.96 | 92.95 |
DenseNet201 | 93.11 | 91.60 | 94.57 | 92.88 |
EfficientNetV2B0 | 85.27 | 83.35 | 87.14 | 84.66 |
EfficientNetV2B3 | 91.21 | 89.83 | 90.44 | 89.98 |
EfficientNetV2M | 91.72 | 90.13 | 93.31 | 91.52 |
EfficientNetV2S | 90.27 | 89.14 | 90.11 | 89.47 |
InceptionV3 | 89.38 | 87.32 | 89.22 | 87.99 |
InceptionResNetV2 | 91.40 | 89.88 | 91.43 | 90.52 |
MobileNetV2 | 90.77 | 88.58 | 91.00 | 89.54 |
RegNetX008 | 93.68 | 92.04 | 94.55 | 93.19 |
RegNetY008 | 93.11 | 91.42 | 93.72 | 92.42 |
ResNet50 | 90.90 | 89.07 | 92.20 | 90.34 |
ResNet101 | 92.04 | 90.96 | 90.63 | 90.74 |
ResNet152 | 91.15 | 90.38 | 90.21 | 90.20 |
ResNetRS50 | 92.86 | 90.76 | 93.15 | 91.85 |
ResNetRS100 | 91.97 | 91.02 | 91.69 | 91.20 |
VGG16 | 81.73 | 77.34 | 80.64 | 78.61 |
VGG19 | 89.13 | 86.76 | 89.82 | 87.93 |
Xception | 92.79 | 91.87 | 92.48 | 92.10 |
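The accuracy, precision, recall, and F1 columns in the result tables of this section can be reproduced from per-class predictions on the test split. The snippet below assumes macro averaging, which is not stated explicitly here, so treat it as one plausible way to compute the reported metrics rather than the study's exact procedure.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def summarize(y_true, y_pred):
    """Accuracy plus (assumed) macro-averaged precision/recall/F1, reported in percent."""
    acc = accuracy_score(y_true, y_pred)
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)
    return {k: round(100 * v, 2)
            for k, v in {"accuracy": acc, "precision": p, "recall": r, "f1": f1}.items()}

# Hypothetical usage with integer labels over the 8 BreakHis classes:
# summarize(y_test, model.predict(x_test).argmax(axis=1))
```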
Model | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) |
---|---|---|---|---|
C1 | 87.73 | 84.68 | 87.22 | 85.69 |
C2 | 91.91 | 89.72 | 92.70 | 91.07 |
C3 | 91.02 | 88.84 | 91.61 | 89.97 |
C4 | 91.40 | 89.51 | 92.28 | 90.68 |
C5 | 92.41 | 90.25 | 93.51 | 91.61 |
C6 | 92.79 | 90.90 | 93.89 | 92.20 |
C7 | 92.98 | 91.71 | 93.27 | 92.33 |
C8 | 92.04 | 90.21 | 91.70 | 90.85 |
C9 | 92.41 | 91.10 | 92.92 | 91.79 |
C10 | 93.62 | 92.32 | 94.36 | 93.19 |
Model | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) |
---|---|---|---|---|
C11 | 93.05 | 91.76 | 93.44 | 92.47 |
C12 | 93.81 | 93.13 | 93.46 | 93.28 |
C13 | 92.16 | 90.60 | 92.76 | 91.50 |
C14 | 93.68 | 92.71 | 94.40 | 93.34 |
C15 | 94.06 | 93.21 | 94.86 | 93.90 |
C16 | 94.18 | 92.69 | 95.11 | 93.76 |
C17 | 93.43 | 91.84 | 94.19 | 92.80 |
C18 | 94.31 | 92.92 | 95.28 | 93.97 |
C19 | 94.37 | 93.26 | 94.77 | 93.91 |
C20 | 94.69 | 93.43 | 95.48 | 94.33 |
Block | Filters (Size, Number) |
---|---|
Block A | |
Block B | |
Block C | |
Block D | |
Block E |
Type of Model | Model (Accuracy) | Abbreviation |
---|---|---|
Best Transfer Learning Models | DenseNet169 (93.49%) | T2 |
 | RegNetX008 (93.68%) | T1 |
Best Custom Models | C19 (94.37%) | C2 |
 | MultiHisNet (94.69%) | C1 |

Note: in the ensemble experiments, C1 and C2 denote the two best custom models (MultiHisNet and C19, respectively) and are distinct from the model identifiers C1–C20 used in the architecture tables above.
Ensemble Model | Models | Weights | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) |
---|---|---|---|---|---|---|
E1 | C1, C2, T2 | 0.31, 0.20, 0.49 | 96.46 | 96.92 | 96.46 | 96.58 |
E2 | C1, C2, T1 | 0.32, 0.29, 0.39 | 96.46 | 96.87 | 96.46 | 96.57 |
E3 | C2, T1, T2 | 0.41, 0.38, 0.21 | 96.52 | 96.92 | 96.52 | 96.63 |
Proposed Ensemble Model | C1, T1, T2 | 0.50, 0.18, 0.32 | 96.71 | 97.07 | 96.71 | 96.81 |
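In the last row, the EO-selected weights 0.50, 0.18, and 0.32 are applied to the softmax outputs of C1 (MultiHisNet), T1 (RegNetX008), and T2 (DenseNet169). At inference time the fusion reduces to the few lines below; the probability arrays and model objects are placeholders, not the study's code.

```python
import numpy as np

# EO-selected weights for the proposed ensemble (C1, T1, T2), taken from the table above.
weights = np.array([0.50, 0.18, 0.32])

def fuse(prob_c1, prob_t1, prob_t2):
    """Weighted soft voting over three (n_samples, 8) softmax matrices."""
    fused = weights[0] * prob_c1 + weights[1] * prob_t1 + weights[2] * prob_t2
    return fused.argmax(axis=1)   # predicted class index per sample

# Hypothetical usage:
# y_pred = fuse(multihisnet.predict(x), regnetx008.predict(x), densenet169.predict(x))
```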
CNN | Residual Connection | Channel and Spatial Attention Module | Ensemble | Best Performance (Accuracy %) |
---|---|---|---|---|
√ | - | - | - | 92.98 |
√ | √ | - | - | 93.62 |
√ | √ | √ | - | 94.69 |
√ | √ | √ | √ | 96.71 |
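The ablation rows indicate that residual connections, a channel-and-spatial attention module (in the spirit of CBAM [41]), and ensembling each add roughly one accuracy point on top of the plain CNN. The Keras sketch below shows how such a residual block with channel and spatial attention can be wired; the filter count, kernel sizes, and reduction ratio are illustrative assumptions and not the MultiHisNet configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def channel_attention(x, ratio=8):
    """Channel attention: shared MLP over global average- and max-pooled descriptors."""
    ch = x.shape[-1]
    mlp1 = layers.Dense(ch // ratio, activation="relu")
    mlp2 = layers.Dense(ch)
    avg = mlp2(mlp1(layers.GlobalAveragePooling2D()(x)))
    mx = mlp2(mlp1(layers.GlobalMaxPooling2D()(x)))
    scale = layers.Reshape((1, 1, ch))(layers.Activation("sigmoid")(layers.Add()([avg, mx])))
    return layers.Multiply()([x, scale])

def spatial_attention(x):
    """Spatial attention: 7x7 conv over the channel-wise mean and max maps."""
    avg = layers.Lambda(lambda t: tf.reduce_mean(t, axis=-1, keepdims=True))(x)
    mx = layers.Lambda(lambda t: tf.reduce_max(t, axis=-1, keepdims=True))(x)
    scale = layers.Conv2D(1, 7, padding="same", activation="sigmoid")(
        layers.Concatenate()([avg, mx]))
    return layers.Multiply()([x, scale])

def residual_attention_block(x, filters):
    """Conv-BN-ReLU x2 with an identity/projection shortcut, followed by attention."""
    shortcut = x
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.ReLU()(layers.BatchNormalization()(y))
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(layers.Add()([y, shortcut]))   # residual connection
    y = channel_attention(y)
    y = spatial_attention(y)
    return y

# Example: apply one block to a 128x128 RGB input (shapes are illustrative).
inp = layers.Input(shape=(128, 128, 3))
out = residual_attention_block(inp, filters=32)
block_model = tf.keras.Model(inp, out)
```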
Research | Model | Classes | Dataset Split | Data Augmentation | Accuracy (%) |
---|---|---|---|---|---|
Boumaraf et al. [11] | Transfer Learning (ResNet-18) | 8 | 80–20 | Yes | 92.03 |
Yari et al. [15] | Transfer Learning (ResNet-50) | 8 | 75–20–5 | Yes | 94.33
Vikranth et al. [13] | Transfer Learning (DenseNet201, ResNet50, and MobileNetV2) 1 | 8 | - | Yes | 92 |
Zaalouk et al. [16] | Transfer Learning (Xception, DenseNet201, InceptionResNetV2, VGG19, and ResNet152) 2 | 8 | 70–20–10 | Yes | 93.32
Umer et al. [21] | Ensemble of features | 8 | 70–30 | - | 90.1 |
Tummala et al. [25] | Ensemble of SwinT | 8 | 70–30 | No | 93.4 |
Wang et al. [46] | Ensemble of transfer learning models | 8 | - | Yes | 94.11 |
Proposed Ensemble Model | Ensemble of custom and transfer learning models | 8 | 80–20 | Yes | 96.71 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).