Deep Learning Approaches in Histopathology
Simple Summary
Abstract
1. Introduction
2. Deep Learning Applications in Tumor Pathology
2.1. Diagnosis of Tumor
2.2. Classification of Tumor
2.3. Grading of Tumor
2.4. Staging of Tumor
2.5. Assessment of Pathological Attributes
2.6. Assessment of Biomarkers
2.7. Assessment of Genetic Modifications
2.8. Prognosis Prediction
2.9. Different Algorithm Models for Tumor Detection
3. Expectations and Challenges
3.1. Model Validation
3.2. Algorithm Elucidation
3.3. Histopathology and Computing Model
3.4. Pathologists’ Responsibility
3.5. Clinicians’ Responsibility
3.6. Regulations
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- McCarthy, J.; Minsky, M.L.; Rochester, N.; Shannon, C.E. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. AI Mag. 2006, 27, 12–14.
- El-Sherif, D.M.; Abouzid, M.; Elzarif, M.T.; Ahmed, A.A.; Albakri, A.; Alshehri, M.M. Telehealth and Artificial Intelligence Insights into Healthcare during the COVID-19 Pandemic. Healthcare 2022, 10, 385.
- Du, X.L.; Li, W.B.; Hu, B.J. Application of Artificial Intelligence in Ophthalmology. Int. J. Ophthalmol. 2018, 11, 1555–1561.
- Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks. Nature 2017, 542, 115–118.
- Prewitt, J.M.S.; Mendelsohn, M.L. The Analysis of Cell Images. Ann. N. Y. Acad. Sci. 1966, 128, 1035–1053.
- Tomczak, K.; Czerwińska, P.; Wiznerowicz, M. The Cancer Genome Atlas (TCGA): An Immeasurable Source of Knowledge. Wspolczesna Onkol. 2015, 1A, A68–A77.
- Gutman, D.A.; Cobb, J.; Somanna, D.; Park, Y.; Wang, F.; Kurc, T.; Saltz, J.H.; Brat, D.J.; Cooper, L.A.D. Cancer Digital Slide Archive: An Informatics Resource to Support Integrated in Silico Analysis of TCGA Pathology Data. J. Am. Med. Inform. Assoc. 2013, 20, 1091–1098.
- Liu, Y.; Sun, Y.; Broaddus, R.; Liu, J.; Sood, A.K.; Shmulevich, I.; Zhang, W. Integrated Analysis of Gene Expression and Tumor Nuclear Image Profiles Associated with Chemotherapy Response in Serous Ovarian Carcinoma. PLoS ONE 2012, 7, e36383.
- Lecun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444.
- Jain, R.K.; Mehta, R.; Dimitrov, R.; Larsson, L.G.; Musto, P.M.; Hodges, K.B.; Ulbright, T.M.; Hattab, E.M.; Agaram, N.; Idrees, M.T.; et al. Atypical Ductal Hyperplasia: Interobserver and Intraobserver Variability. Mod. Pathol. 2011, 24, 917–923.
- Shmatko, A.; Ghaffari Laleh, N.; Gerstung, M.; Kather, J.N. Artificial Intelligence in Histopathology: Enhancing Cancer Research and Clinical Oncology. Nat. Cancer 2022, 3, 1026–1038.
- Xie, Y.; He, M.; Ma, T.; Tian, W. Optimal Distributed Parallel Algorithms for Deep Learning Framework Tensorflow. Appl. Intell. 2022, 52, 3880–3900.
- Barbieri, A.L.; Fadare, O.; Fan, L.; Singh, H.; Parkash, V. Challenges in Communication from Referring Clinicians to Pathologists in the Electronic Health Record Era. J. Pathol. Inform. 2018, 9, 8.
- Wulczyn, E.; Steiner, D.F.; Xu, Z.; Sadhwani, A.; Wang, H.; Flament-Auvigne, I.; Mermel, C.H.; Chen, P.H.C.; Liu, Y.; Stumpe, M.C. Deep Learning-Based Survival Prediction for Multiple Cancer Types Using Histopathology Images. PLoS ONE 2020, 15, e0233678.
- Syrykh, C.; Abreu, A.; Amara, N.; Siegfried, A.; Maisongrosse, V.; Frenois, F.X.; Martin, L.; Rossi, C.; Laurent, C.; Brousset, P. Accurate Diagnosis of Lymphoma on Whole-Slide Histopathology Images Using Deep Learning. npj Digit. Med. 2020, 3, 1–8.
- Araujo, T.; Aresta, G.; Castro, E.; Rouco, J.; Aguiar, P.; Eloy, C.; Polonia, A.; Campilho, A. Classification of Breast Cancer Histology Images Using Convolutional Neural Networks. PLoS ONE 2017, 12, e0177544.
- Bejnordi, B.E.; Zuidhof, G.; Balkenhol, M.; Hermsen, M.; Bult, P.; van Ginneken, B.; Karssemeijer, N.; Litjens, G.; van der Laak, J. Context-Aware Stacked Convolutional Neural Networks for Classification of Breast Carcinomas in Whole-Slide Histopathology Images. J. Med. Imaging 2017, 4, 044504.
- Bejnordi, B.E.; Mullooly, M.; Pfeiffer, R.M.; Fan, S.; Vacek, P.M.; Weaver, D.L.; Herschorn, S.; Brinton, L.A.; van Ginneken, B.; Karssemeijer, N.; et al. Using Deep Convolutional Neural Networks to Identify and Classify Tumor-Associated Stroma in Diagnostic Breast Biopsies. Mod. Pathol. 2018, 31, 1502–1512.
- Oster, N.V.; Carney, P.A.; Allison, K.H.; Weaver, D.L.; Reisch, L.M.; Longton, G.; Onega, T.; Pepe, M.; Geller, B.M.; Nelson, H.D.; et al. Development of a Diagnostic Test Set to Assess Agreement in Breast Pathology: Practical Application of the Guidelines for Reporting Reliability and Agreement Studies (GRRAS). BMC Women’s Health 2013, 13, 3.
- Mercan, C.; Aksoy, S.; Mercan, E.; Shapiro, L.G.; Weaver, D.L.; Elmore, J.G. Multi-Instance Multi-Label Learning for Multi-Class Classification of Whole Slide Breast Histopathology Images. IEEE Trans. Med. Imaging 2018, 37, 316–325.
- Jiang, Y.; Chen, L.; Zhang, H.; Xiao, X. Breast Cancer Histopathological Image Classification Using Convolutional Neural Networks with Small SE-ResNet Module. PLoS ONE 2019, 14, e0214587.
- Wan, T.; Cao, J.; Chen, J.; Qin, Z. Automated Grading of Breast Cancer Histopathology Using Cascaded Ensemble with Combination of Multi-Level Image Features. Neurocomputing 2017, 229, 34–44.
- Cruz-Roa, A.; Gilmore, H.; Basavanhally, A.; Feldman, M.; Ganesan, S.; Shih, N.N.C.; Tomaszewski, J.; González, F.A.; Madabhushi, A. Accurate and Reproducible Invasive Breast Cancer Detection in Whole-Slide Images: A Deep Learning Approach for Quantifying Tumor Extent. Sci. Rep. 2017, 7, 46450.
- Cruz-Roa, A.; Gilmore, H.; Basavanhally, A.; Feldman, M.; Ganesan, S.; Shih, N.; Tomaszewski, J.; Madabhushi, A.; González, F. High-Throughput Adaptive Sampling for Whole-Slide Histopathology Image Analysis (HASHI) via Convolutional Neural Networks: Application to Invasive Breast Cancer Detection. PLoS ONE 2018, 13, e0196828.
- Bejnordi, B.E.; Veta, M.; Van Diest, P.J.; Van Ginneken, B.; Karssemeijer, N.; Litjens, G.; Van Der Laak, J.A.W.M.; Hermsen, M.; Manson, Q.F.; Balkenhol, M.; et al. Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women with Breast Cancer. JAMA J. Am. Med. Assoc. 2017, 318, 2199–2210.
- Liu, Y.; Kohlberger, T.; Norouzi, M.; Dahl, G.E.; Smith, J.L.; Mohtashamian, A.; Olson, N.; Peng, L.H.; Hipp, J.D.; Stumpe, M.C. Artificial Intelligence–Based Breast Cancer Nodal Metastasis Detection: Insights into the Black Box for Pathologists. Arch. Pathol. Lab. Med. 2019, 143, 859–868.
- Steiner, D.F.; Macdonald, R.; Liu, Y.; Truszkowski, P.; Hipp, J.D.; Gammage, C.; Thng, F.; Peng, L.; Stumpe, M.C. Impact of Deep Learning Assistance on the Histopathologic Review of Lymph Nodes for Metastatic Breast Cancer. Am. J. Surg. Pathol. 2018, 42, 1636–1646.
- Veta, M.; van Diest, P.J.; Willems, S.M.; Wang, H.; Madabhushi, A.; Cruz-Roa, A.; Gonzalez, F.; Larsen, A.B.L.; Vestergaard, J.S.; Dahl, A.B.; et al. Assessment of Algorithms for Mitosis Detection in Breast Cancer Histopathology Images. Med. Image Anal. 2015, 20, 237–248.
- Saha, M.; Chakraborty, C.; Arun, I.; Ahmed, R.; Chatterjee, S. An Advanced Deep Learning Approach for Ki-67 Stained Hotspot Detection and Proliferation Rate Scoring for Prognostic Evaluation of Breast Cancer. Sci. Rep. 2017, 7, 3213.
- Veta, M.; Heng, Y.J.; Stathonikos, N.; Bejnordi, B.E.; Beca, F.; Wollmann, T.; Rohr, K.; Shah, M.A.; Wang, D.; Rousson, M.; et al. Predicting Breast Tumor Proliferation from Whole-Slide Images: The TUPAC16 Challenge. Med. Image Anal. 2019, 54, 111–121.
- Turkki, R.; Linder, N.; Kovanen, P.E.; Pellinen, T.; Lundin, J. Antibody-Supervised Deep Learning for Quantification of Tumor-Infiltrating Immune Cells in Hematoxylin and Eosin Stained Breast Cancer Samples. J. Pathol. Inform. 2016, 7, 38.
- Vandenberghe, M.E.; Scott, M.L.J.; Scorer, P.W.; Söderberg, M.; Balcerzak, D.; Barker, C. Relevance of Deep Learning to Facilitate the Diagnosis of HER2 Status in Breast Cancer. Sci. Rep. 2017, 7, 45938.
- Zhang, L.; Lu, L.; Nogues, I.; Summers, R.M.; Liu, S.; Yao, J. DeepPap: Deep Convolutional Networks for Cervical Cell Classification. IEEE J. Biomed. Health Inform. 2017, 21, 1633–1643.
- Wu, M.; Yan, C.; Liu, H.; Liu, Q.; Yin, Y. Automatic Classification of Cervical Cancer from Cytological Images by Using Convolutional Neural Network. Biosci. Rep. 2018, 38, BSR20181769.
- TIA Centre Warwick: GlaS Challenge Contest. Available online: https://warwick.ac.uk/fac/cross_fac/tia/data/glascontest/ (accessed on 1 June 2021).
- Kainz, P.; Pfeiffer, M.; Urschler, M. Segmentation and Classification of Colon Glands with Deep Convolutional Neural Networks and Total Variation Regularization. PeerJ 2017, 2017, 1–28.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Springer: Berlin/Heidelberg, Germany, 2015; Volume 9351, pp. 234–241.
- Awan, R.; Sirinukunwattana, K.; Epstein, D.; Jefferyes, S.; Qidwai, U.; Aftab, Z.; Mujeeb, I.; Snead, D.; Rajpoot, N. Glandular Morphometrics for Objective Grading of Colorectal Adenocarcinoma Histology Images. Sci. Rep. 2017, 7, 16852.
- Korbar, B.; Olofson, A.; Miraflor, A.; Nicka, C.; Suriawinata, M.; Torresani, L.; Suriawinata, A.; Hassanpour, S. Deep Learning for Classification of Colorectal Polyps on Whole-Slide Images. J. Pathol. Inform. 2017, 8, 30.
- Weis, C.A.; Kather, J.N.; Melchers, S.; Al-ahmdi, H.; Pollheimer, M.J.; Langner, C.; Gaiser, T. Automatic Evaluation of Tumor Budding in Immunohistochemically Stained Colorectal Carcinomas and Correlation to Clinical Outcome. Diagn. Pathol. 2018, 13, 64.
- Kather, J.N.; Pearson, A.T.; Halama, N.; Jäger, D.; Krause, J.; Loosen, S.H.; Marx, A.; Boor, P.; Tacke, F.; Neumann, U.P.; et al. Deep Learning Can Predict Microsatellite Instability Directly from Histology in Gastrointestinal Cancer. Nat. Med. 2019, 25, 1054–1056.
- Andrews, S.; Tsochantaridis, I.; Hofmann, T. Support Vector Machines for Multiple-Instance Learning. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 9–14 December 2002; Volume 15, pp. 561–568.
- Ilse, M.; Tomczak, J.M.; Welling, M. Attention-Based Deep Multiple Instance Learning. In Proceedings of the 35th International Conference on Machine Learning, PMLR, Stockholm, Sweden, 10–15 July 2018.
- Wang, S.; Zhu, Y.; Yu, L.; Chen, H.; Lin, H.; Wan, X.; Fan, X.; Heng, P.A. RMDL: Recalibrated Multi-Instance Deep Learning for Whole Slide Gastric Image Classification. Med. Image Anal. 2019, 58, 101549.
- Sharma, H.; Zerbe, N.; Klempert, I.; Hellwich, O.; Hufnagl, P. Deep Convolutional Neural Networks for Automatic Classification of Gastric Carcinoma Using Whole Slide Images in Digital Histopathology. Comput. Med. Imaging Graph. 2017, 61, 2–13.
- Zhuge, Y.; Ning, H.; Mathen, P.; Cheng, J.Y.; Krauze, A.V.; Camphausen, K.; Miller, R.W. Automated Glioma Grading on Conventional MRI Images Using Deep Convolutional Neural Networks. Med. Phys. 2020, 47, 3044–3053.
- Mobadersany, P.; Yousefi, S.; Amgad, M.; Gutman, D.A.; Barnholtz-Sloan, J.S.; Velázquez Vega, J.E.; Brat, D.J.; Cooper, L.A.D. Predicting Cancer Outcomes from Histology and Genomics Using Convolutional Networks. Proc. Natl. Acad. Sci. USA 2018, 115, E2970–E2979.
- Teramoto, A.; Tsukamoto, T.; Kiriyama, Y.; Fujita, H. Automated Classification of Lung Cancer Types from Cytological Images Using Deep Convolutional Neural Networks. BioMed Res. Int. 2017, 2017, 4067832.
- Coudray, N.; Ocampo, P.S.; Sakellaropoulos, T.; Narula, N.; Snuderl, M.; Fenyö, D.; Moreira, A.L.; Razavian, N.; Tsirigos, A. Classification and Mutation Prediction from Non–Small Cell Lung Cancer Histopathology Images Using Deep Learning. Nat. Med. 2018, 24, 1559–1567.
- Gertych, A.; Swiderska-Chadaj, Z.; Ma, Z.; Ing, N.; Markiewicz, T.; Cierniak, S.; Salemi, H.; Guzman, S.; Walts, A.E.; Knudsen, B.S. Convolutional Neural Networks Can Accurately Distinguish Four Histologic Growth Patterns of Lung Adenocarcinoma in Digital Slides. Sci. Rep. 2019, 9, 1483.
- Wei, J.W.; Tafe, L.J.; Linnik, Y.A.; Vaickus, L.J.; Tomita, N.; Hassanpour, S. Pathologist-Level Classification of Histologic Patterns on Resected Lung Adenocarcinoma Slides with Deep Neural Networks. Sci. Rep. 2019, 9, 3358.
- Aprupe, L.; Litjens, G.; Brinker, T.J.; Van Der Laak, J.; Grabe, N. Robust and Accurate Quantification of Biomarkers of Immune Cells in Lung Cancer Micro-Environment Using Deep Convolutional Neural Networks. PeerJ 2019, 2019, 1–16.
- Sha, L.; Osinski, B.; Ho, I.; Tan, T.; Willis, C.; Weiss, H.; Beaubier, N.; Mahon, B.; Taxter, T.; Yip, S. Multi-Field-of-View Deep Learning Model Predicts Nonsmall Cell Lung Cancer Programmed Death-Ligand 1 Status from Whole-Slide Hematoxylin and Eosin Images. J. Pathol. Inform. 2019, 10, 24.
- Wang, S.; Chen, A.; Yang, L.; Cai, L.; Xie, Y.; Fujimoto, J.; Gazdar, A.; Xiao, G. Comprehensive Analysis of Lung Cancer Pathology Images to Discover Tumor Shape and Boundary Features That Predict Survival Outcome. Sci. Rep. 2018, 8, 10393.
- Arvaniti, E.; Fricker, K.S.; Moret, M.; Rupp, N.; Hermanns, T.; Fankhauser, C.; Wey, N.; Wild, P.J.; Rüschoff, J.H.; Claassen, M. Automated Gleason Grading of Prostate Cancer Tissue Microarrays via Deep Learning. Sci. Rep. 2018, 8, 12054.
- Schaumberg, A.; Rubin, M.; Fuchs, T. H&E-Stained Whole Slide Image Deep Learning Predicts SPOP Mutation State in Prostate Cancer. bioRxiv 2016, 064279.
- Guan, Q.; Wang, Y.; Ping, B.; Li, D.; Du, J.; Qin, Y.; Lu, H.; Wan, X.; Xiang, J. Deep Convolutional Neural Network VGG-16 Model for Differential Diagnosing of Papillary Thyroid Carcinomas in Cytological Images: A Pilot Study. J. Cancer 2019, 10, 4876–4882.
- Wang, Y.; Guan, Q.; Lao, I.; Wang, L.; Wu, Y.; Li, D.; Ji, Q.; Wang, Y.; Zhu, Y.; Lu, H.; et al. Using Deep Convolutional Neural Networks for Multi-Classification of Thyroid Tumor by Histopathology: A Large-Scale Pilot Study. Ann. Transl. Med. 2019, 7, 468.
- Tomita, N.; Abdollahi, B.; Wei, J.; Ren, B.; Suriawinata, A.; Hassanpour, S. Attention-Based Deep Neural Networks for Detection of Cancerous and Precancerous Esophagus Tissue on Histopathological Slides. JAMA Netw. Open 2019, 2, e1914645.
- Wang, L.; Ding, L.; Liu, Z.; Sun, L.; Chen, L.; Jia, R.; Dai, X.; Cao, J.; Ye, J. Automated Identification of Malignancy in Whole-Slide Pathological Images: Identification of Eyelid Malignant Melanoma in Gigapixel Pathological Slides Using Deep Learning. Br. J. Ophthalmol. 2020, 104, 318–323.
- Vaickus, L.J.; Suriawinata, A.A.; Wei, J.W.; Liu, X. Automating the Paris System for Urine Cytopathology—A Hybrid Deep-Learning and Morphometric Approach. Cancer Cytopathol. 2019, 127, 98–115.
- Wu, M.; Yan, C.; Liu, H.; Liu, Q. Automatic Classification of Ovarian Cancer Types from Cytological Images Using Deep Convolutional Neural Networks. Biosci. Rep. 2018, 38, BSR20180289.
- Niazi, M.K.K.; Tavolara, T.E.; Arole, V.; Hartman, D.J.; Pantanowitz, L.; Gurcan, M.N. Identifying Tumor in Pancreatic Neuroendocrine Neoplasms from Ki67 Images Using Transfer Learning. PLoS ONE 2018, 13, e0195621.
- Bardou, D.; Zhang, K.; Ahmad, S.M. Classification of Breast Cancer Based on Histology Images Using Convolutional Neural Networks. IEEE Access 2018, 6, 24680–24693.
- LeNail, A. NN-SVG: Publication-Ready Neural Network Architecture Schematics. J. Open Source Softw. 2019, 4, 747.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90.
- Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-Based Learning Applied to Document Recognition. Proc. IEEE 1998, 86, 2278–2324.
- Sanghvi, A.B.; Allen, E.Z.; Callenberg, K.M.; Pantanowitz, L. Performance of an Artificial Intelligence Algorithm for Reporting Urine Cytopathology. Cancer Cytopathol. 2019, 127, 658–666.
- Ertosun, M.G.; Rubin, D.L. Automated Grading of Gliomas Using Deep Learning in Digital Pathology Images: A Modular Approach with Ensemble of Convolutional Neural Networks. AMIA Annu. Symp. Proc. 2015, 2015, 1899–1908.
- Bashashati, A.; Goldenberg, S.L. AI for Prostate Cancer Diagnosis—Hype or Today’s Reality? Nat. Rev. Urol. 2022, 19, 261–262.
- Perincheri, S.; Levi, A.W.; Celli, R.; Gershkovich, P.; Rimm, D.; Morrow, J.S.; Rothrock, B.; Raciti, P.; Klimstra, D.; Sinard, J. An Independent Assessment of an Artificial Intelligence System for Prostate Cancer Detection Shows Strong Diagnostic Accuracy. Mod. Pathol. 2021, 34, 1588–1595.
- Pantanowitz, L.; Quiroga-Garza, G.M.; Bien, L.; Heled, R.; Laifenfeld, D.; Linhart, C.; Sandbank, J.; Albrecht Shach, A.; Shalev, V.; Vecsler, M.; et al. An Artificial Intelligence Algorithm for Prostate Cancer Diagnosis in Whole Slide Images of Core Needle Biopsies: A Blinded Clinical Validation and Deployment Study. Lancet Digit. Health 2020, 2, e407–e416.
- Ström, P.; Kartasalo, K.; Olsson, H.; Solorzano, L.; Delahunt, B.; Berney, D.M.; Bostwick, D.G.; Evans, A.J.; Grignon, D.J.; Humphrey, P.A.; et al. Artificial Intelligence for Diagnosis and Grading of Prostate Cancer in Biopsies: A Population-Based, Diagnostic Study. Lancet Oncol. 2020, 21, 222–232.
- Mishra, R.; Daescu, O.; Leavey, P.; Rakheja, D.; Sengupta, A. Convolutional Neural Network for Histopathological Analysis of Osteosarcoma. J. Comput. Biol. 2018, 25, 313–325.
- Cristofanilli, M. Circulating Tumor Cells, Disease Progression, and Survival in Metastatic Breast Cancer. Semin. Oncol. 2006, 33, 9–14.
- De Bono, J.S.; Scher, H.I.; Montgomery, R.B.; Parker, C.; Miller, M.C.; Tissing, H.; Doyle, G.V.; Terstappen, L.W.W.M.; Pienta, K.J.; Raghavan, D. Circulating Tumor Cells Predict Survival Benefit from Treatment in Metastatic Castration-Resistant Prostate Cancer. Clin. Cancer Res. 2008, 14, 6302–6309.
- Rhim, A.D.; Mirek, E.T.; Aiello, N.M.; Maitra, A.; Bailey, J.M.; McAllister, F.; Reichert, M.; Beatty, G.L.; Rustgi, A.K.; Vonderheide, R.H.; et al. EMT and Dissemination Precede Pancreatic Tumor Formation. Cell 2012, 148, 349–361.
- Chaffer, C.L.; Weinberg, R.A. A Perspective on Cancer Cell Metastasis. Science 2011, 331, 1559–1564.
- Pantel, K.; Alix-Panabières, C. Real-Time Liquid Biopsy in Cancer Patients: Fact or Fiction? Cancer Res. 2013, 73, 6384–6388.
- Strati, A.; Kasimir-Bauer, S.; Markou, A.; Parisi, C.; Lianidou, E.S. Comparison of Three Molecular Assays for the Detection and Molecular Characterization of Circulating Tumor Cells in Breast Cancer. Breast Cancer Res. 2013, 15, R20.
- Zeune, L.L.; de Wit, S.; Berghuis, A.M.S.; IJzerman, M.J.; Terstappen, L.W.M.M.; Brune, C. How to Agree on a CTC: Evaluating the Consensus in Circulating Tumor Cell Scoring. Cytom. Part A 2018, 93, 1202–1206.
- Halama, N.; Michel, S.; Kloor, M.; Zoernig, I.; Benner, A.; Spille, A.; Pommerencke, T.; Von Knebel Doeberitz, M.; Folprecht, G.; Luber, B.; et al. Localization and Density of Immune Cells in the Invasive Margin of Human Colorectal Cancer Liver Metastases Are Prognostic for Response to Chemotherapy. Cancer Res. 2011, 71, 5670–5677.
- Savas, P.; Salgado, R.; Denkert, C.; Sotiriou, C.; Darcy, P.K.; Smyth, M.J.; Loi, S. Clinical Relevance of Host Immunity in Breast Cancer: From TILs to the Clinic. Nat. Rev. Clin. Oncol. 2016, 13, 228–241.
- Khameneh, F.D.; Razavi, S.; Kamasak, M. Automated Segmentation of Cell Membranes to Evaluate HER2 Status in Whole Slide Images Using a Modified Deep Learning Network. Comput. Biol. Med. 2019, 110, 164–174.
- Barbieri, C.E.; Baca, S.C.; Lawrence, M.S.; Demichelis, F.; Blattner, M.; Theurillat, J.P.; White, T.A.; Stojanov, P.; Van Allen, E.; Stransky, N.; et al. Exome Sequencing Identifies Recurrent SPOP, FOXA1 and MED12 Mutations in Prostate Cancer. Nat. Genet. 2012, 44, 685–689.
- Bychkov, D.; Linder, N.; Turkki, R.; Nordling, S.; Kovanen, P.E.; Verrill, C.; Walliander, M.; Lundin, M.; Haglund, C.; Lundin, J. Deep Learning Based Tissue Analysis Predicts Outcome in Colorectal Cancer. Sci. Rep. 2018, 8, 3395.
- Kather, J.N.; Krisam, J.; Charoentong, P.; Luedde, T.; Herpel, E.; Weis, C.A.; Gaiser, T.; Marx, A.; Valous, N.A.; Ferber, D.; et al. Predicting Survival from Colorectal Cancer Histology Slides Using Deep Learning: A Retrospective Multicenter Study. PLoS Med. 2019, 16, e1002730.
- Shaikh, F.J.; Rao, D.S. Prediction of Cancer Disease Using Machine Learning Approach. Mater. Today Proc. 2021, 50, 40–47.
- Ullah, N.; Khan, J.A.; Khan, M.S.; Khan, W.; Hassan, I.; Obayya, M.; Negm, N.; Salama, A.S. An Effective Approach to Detect and Identify Brain Tumors Using Transfer Learning. Appl. Sci. 2022, 12, 5645.
- Zhang, Z.; Li, Y.; Wu, W.; Chen, H.; Cheng, L.; Wang, S. Tumor Detection Using Deep Learning Method in Automated Breast Ultrasound. Biomed. Signal Process. Control 2021, 68, 102677.
- Rela, M.; Suryakari, N.R.; Patil, R.R. A Diagnosis System by U-Net and Deep Neural Network Enabled with Optimal Feature Selection for Liver Tumor Detection Using CT Images. Multimed. Tools Appl. 2022.
- Couture, H.D.; Williams, L.A.; Geradts, J.; Nyante, S.J.; Butler, E.N.; Marron, J.S.; Perou, C.M.; Troester, M.A.; Niethammer, M. Image Analysis with Deep Learning to Predict Breast Cancer Grade, ER Status, Histologic Subtype, and Intrinsic Subtype. npj Breast Cancer 2018, 4, 30.
- Zech, J.R.; Badgeley, M.A.; Liu, M.; Costa, A.B.; Titano, J.J.; Oermann, E.K. Variable Generalization Performance of a Deep Learning Model to Detect Pneumonia in Chest Radiographs: A Cross-Sectional Study. PLoS Med. 2018, 15, e1002683.
- Ching, T.; Himmelstein, D.S.; Beaulieu-Jones, B.K.; Kalinin, A.A.; Do, B.T.; Way, G.P.; Ferrero, E.; Agapow, P.M.; Zietz, M.; Hoffman, M.M.; et al. Opportunities and Obstacles for Deep Learning in Biology and Medicine. J. R. Soc. Interface 2018, 15, 20170387.
- Madabhushi, A.; Lee, G. Image Analysis and Machine Learning in Digital Pathology: Challenges and Opportunities. Med. Image Anal. 2016, 33, 170–175.
- Rudin, C. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nat. Mach. Intell. 2019, 1, 206–215.
- Wang, X.; Janowczyk, A.; Zhou, Y.; Thawani, R.; Fu, P.; Schalper, K.; Velcheti, V.; Madabhushi, A. Prediction of Recurrence in Early Stage Non-Small Cell Lung Cancer Using Computer Extracted Nuclear Features from Digital H&E Images. Sci. Rep. 2017, 7, 1–10.
- US FDA. Developing a Software Precertification Program: A Working Model; U.S. Food and Drug Administration: White Oak, MD, USA, 2019; pp. 1–58.
- Pesapane, F.; Volonté, C.; Codari, M.; Sardanelli, F. Artificial Intelligence as a Medical Device in Radiology: Ethical and Regulatory Issues in Europe and the United States. Insights Imaging 2018, 9, 745–753.
- Philips IntelliSite Pathology Solution (PIPS) Evaluation of Automatic Class III Designation–De Novo Request. 2017. Available online: https://www.accessdata.fda.gov/cdrh_docs/reviews/DEN160056.pdf (accessed on 15 May 2021).
- FDA Grants Breakthrough Designation to Paige.AI|Business Wire. Available online: https://www.businesswire.com/news/home/20190307005205/en/FDA-Grants-Breakthrough-Designation-Paige.AI (accessed on 15 May 2021).
- PAIGE. Available online: https://www.paige.ai/resources/philips-and-paige-team-up-to-bring-artificial-intelligence-ai-to-clinical-pathology-diagnostics/ (accessed on 15 May 2021).
Training Set | AI Determinants | Outcomes | Ref. |
---|---|---|---|
Breast cancer | |||
Diagnosis | |||
H&E-stained images (n = 249; 2040 × 1536 px) | -Model I *: Carcinoma|Non-carcinoma -Model II: Normal|Benign|CIS|IC | -Model I had higher accuracy than Model II (83.3% vs. 77.8%) -Overall sensitivity = 95.6% | [16] |
WSIs of H&E-stained tissue (n = 221; 0.243 μm × 0.243 μm) | -Model I *: Malignant|Non-malignant -Model II: Benign|DCIS|IDC | -Model I AUROC = 0.962 -Model II accuracy = 81.3%; the model is still in development for routine diagnostics | [17] |
H&E-stained tissue (n = 2387; 0.455 µm × 0.455 µm) | Benign|IC | -↑AUROC = 0.962, based only on stromal characteristics -Estimated the amount of tumor-associated stroma and its distance from grade 3 vs. grade 1 tumors | [18] |
H&E-stained biopsies (n = 240; 100,000 × 64,000 px; 40×) [19] | Non-proliferative|Proliferative|Atypical hyperplasia|CIS|IC | Maximum precision = 81% | [20] |
Tumor subtyping | |||
Microscopic images (n = 7909; 700 × 460 px; 40–400×) | -Benign tumors: Adenosis|Fibroadenoma|Tubular adenoma|Phyllodes tumor -Malignant tumors: Ductal carcinoma|Lobular carcinoma|Mucinous carcinoma|Papillary carcinoma | Lower magnification was associated with better accuracy (400× = 90.66%; 200× = 92.22%; 100× = 93.81%; 40× = 93.74%) | [21] |
Tumor grading | |||
H&E-stained breast biopsy tissue (n = 106) | Low, Intermediate, High | -Overall accuracy = 69% -Low vs. high = 92% -Low vs. intermediate = 77% -Intermediate vs. high = 76% | [22] |
Tumor staging | |||
Overall set (n = 600; validation TCGA = 200) | Regional heatmap of IC | -Dice coefficient = 75.86% -PPV = 71.62% -NPV = 96.77% | [23] |
HASHI (n = 500) followed by testing on TCGA studies (n = 195) | | Dice coefficient = 76%, and its analyzing power was ∼2000 in 1 min | [24] |
WSIs (n = 270; with nodal metastases = 110) | Absence vs. presence of breast cancer metastasis in lymph nodes | -AUROC range = 0.556 to 0.994 -The algorithm performed better than pathologists with time constraint (WTC) [AUROC = 0.810 (0.738–0.884) ***; p < 0.001] | [25] |
WSIs of H&E-stained lymph nodes (n = 399 patients) [25] | | -LYNA AUROC = 99% -Sensitivity = 91% at one false-positive per patient | [26] |
Digitized slides from lymph node sections (n = 70) | Metastatic regions in lymph node | -Sensitivity = 83% and avg. processing time per image = 116 s -With algorithm-assisted pathologists, the sensitivity improved to 91% (p = 0.02), and the processing time reduced to 61 s (p = 0.002) | [27] |
Evaluation of pathological features | |||
Mitotic figures (n > 1000) | Mitotic count | -IDSIA was the highest-ranked approach -F1 score = 0.611 | [28] |
Sample images (n = 450; 315 training) | Ki-67 index | -GMM precision = 93% -F-score = 0.91 and recall = 0.88 | [29] |
Breast cancer WSIs (n = 821; 500 training) | -Model I: Predict mitotic scores -Model II: Predict gene expression based on PAM50 proliferation scores | -Model I κ score = 0.567 (95% CI: 0.464, 0.671) -Model II R-value = 0.617 (95% CI: 0.581, 0.651) | [30] |
A set of superpixel images (n = 123,442) | -Model I: Identify immune cell-rich and immune cell-poor regions -Model II: Quantify immune infiltration | -Model I: CNN F-score = 0.94 (0.92–0.94) *** -Model II: on a subset of 200 images, the CNN reached ~90% agreement with pathologists (κ = 0.79 and 0.78) | [31] |
Evaluation of biomarkers | |||
A cohort of breast tumor resection samples (n = 71) | HER2 status: Negative|Equivocal|Positive | -Overall accuracy = 83% (95% CI: 0.74–0.92) -Cohen’s κ coefficient = 0.69 (95% CI: 0.55–0.84) -Kendall’s tau-b correlation coefficient = 0.84 (95% CI: 0.75–0.93) | [32] |
Cervical cancer | |||
Diagnosis | |||
-Herlev Dataset: Abnormal and normal cell images (n = 100 and 280) -HEMLBC Dataset: Abnormal and normal cells (n = 989 and 1381) Image size in both datasets = 256 × 256 × 3 px | Normal|Abnormal | -Accuracy = 98.3% -Specificity = 98.3% -↑AUC = 0.99 -Comparably high results were reproduced on the HEMLBC dataset | [33] |
Tumor subtyping | |||
Original image group (n = 3012 images) and augmented image group (n = 108,432 images), 227 × 227 px | Keratinizing|Non-keratinizing|Basaloid squamous cell carcinoma | The original images yielded significantly higher accuracy (p < 0.05) than the augmented group (93.33% vs. 89.48%) | [34] |
Colorectal cancer | |||
Diagnosis | |||
H&E-stained images (n = 165; 0.62 µm; 20×) [35] | Benign|Malignant | -Accuracy ≥ 95% -↑F1-score > 0.88, with zero false-positive benign cases | [36] |
Pixel-based DNN for gland segmentation [37] trained on digitized H&E-stained images | -Model I (diagnosis) *: Normal|Cancer -Model II (grading): Normal|Low|High | Model I (diagnosis) had higher accuracy than Model II (grading): 97% vs. 91% | [38] |
Tumor subtyping | |||
Reference standard dataset (n = 2074) | Hyperplastic polyp|Sessile serrated adenoma|Traditional serrated adenoma|Tubular adenoma|Tubulovillous|Villous adenoma | The residual network architecture performed best, classifying the six classes with an accuracy of 93.0% (95% CI = 89.0–95.9%) | [39] |
Evaluation of pathological features | |||
Pan-cytokeratin-stained WSI (n = 20) | Number of tumor buds | -Automatically detected the absolute number of tumor buds in each image, R² = 0.86 -Nodal status was associated neither with tumor buds at the invasive front nor with the number of hotspots | [40] |
Evaluation of genetic changes | |||
-Dataset I: Large patient cohorts from TCGA (n = 315) -Dataset II: FFPE samples of stomach adenocarcinoma (n = 360) | MSI|MSS | The AUC of dataset I (0.84, 95% CI = 0.72–0.92) was higher than the AUC of dataset II (0.75, 95% CI = 0.63–0.83) | [41] |
Gastric cancer | |||
Diagnosis | |||
H&E-stained images (n = 606; 0.2517 μm/px; 40×) | Normal|Dysplasia|Cancer | RMDL reached 0.923 with a good accuracy of 86.5%; these outcomes were better than those of MISVM [42] (0.908, 82.5%) and Attention-MIL [43] (0.875, 82%) | [44] |
Evaluation of genetic changes | |||
Original uncropped images (n = 21,000) were used to produce a testing dataset (n = 231,000) and a necrosis-detection set (n = 47,130) | HER2 status: Negative|Equivocal|Positive | The CNN performed better at detecting necrosis than at overall HER2 classification (81.44% vs. 69.90%) | [45] |
Glioma | |||
Tumor grading | |||
Digitized WSIs obtained from TCGA | -Lower-grade glioma: Grade II|Grade III -Glioblastoma multiforme: Grade IV | -CNN distinguished lower-grade glioma from glioblastoma multiforme with accuracy = 96% -Accuracy for classifying Grade II vs. Grade III dropped to 71% | [46] |
Prognosis prediction | |||
Dataset obtained from TCGA (n = 769) | Risk: Low|Intermediate|High | The SCNN achieved a median c-index of 0.754, comparable with that of manual models (median c-index = 0.745, p = 0.307) | [47] |
Lung cancer | |||
Tumor subtyping | |||
Multiple images (n = 298; 2040 × 1536 px; 40×) | -Model I **: Small cell|Non-small cell cancer -Model II: Adenocarcinoma|Squamous cell|Small cell carcinoma | -Model I accuracy (86.6%) was higher than the overall accuracy of Model II (71.1%) -In Model II, accuracy was lowest for squamous cell carcinoma (60%), moderate for small cell carcinoma (70.3%), and highest for adenocarcinoma (89%) | [48] |
WSI dataset obtained from Genomic Data Commons database (n = 1635) | -Model I: Adenocarcinoma|Squamous cell carcinoma -Model II (gene prediction): STK11|TP53|EGFR|SETBP1|KRAS|FAT1 | -Model I performance was high (AUC = 0.97) for classifying the three subtypes -Six of the ten most frequently mutated genes were predicted with AUC = 0.733–0.856 *** | [49] |
Image tiles (n = 19,924) obtained from 78 slides from two institutions: CSMC and MIMW | Solid|Micropapillary|Acinar|Cribriform|Non-tumor | -Slides from CSMC had higher quality, and their accuracy was significantly higher (p < 2.3 × 10⁻⁴) than that of MIMW slides (88.5% vs. 84.2%) -Overall accuracy in differentiating the five classes was 89.24% | [50] |
Digitized WSIs (n = 143) | Lepidic|Solid|Micropapillary|Acinar|Cribriform | -The results were compared with a group of pathologists (n = 3), yielding a κ score of 0.525 and an agreement of 66.6% -This performance was marginally higher than the inter-pathologist κ score of 0.485 and agreement of 62.7% | [51] |
Dataset obtained from NCTD Tissue Bank (n = 39), stained for CD3, CD8, and CD20, which label all T cells, cytotoxic T cells, and B cells, resp. | Immune cell count | -The accuracy at the augmented patch level was 98.6% -Tissues stained for T cells were classified with a sensitivity of 98.8% and specificity of 98.7% -The false-positive and false-negative detection rates were 1.30% and 1.19%, resp. | [52] |
Evaluation of biomarkers | |||
Training set (n = 130 patients; training = 48) | PD-L1 status: Negative|Positive | -AUROC = 0.80 (p < 0.01), and performance held over a range of PD-L1 cutoff thresholds (AUROC = 0.67–0.81, p ≤ 0.01) -AUROC decreased slightly when varying proportions of the labels were randomly shuffled to simulate inter-pathologist disagreement (AUROC = 0.63–0.77, p ≤ 0.03) | [53] |
Prognosis prediction | |||
Independent patient cohort (n = 389) | Risk: Low|High | -The predicted low-risk group had better survival than the high-risk group (p = 0.0029) -The prediction served as an independent prognostic factor (high-risk vs. low-risk, HR = 2.25, 95% CI: 1.34–3.77, p = 0.0022) | [54] |
Prostate cancer | |||
Tumor grading | |||
A discovery cohort (n = 641 patients) and independent test cohort (n = 245 patients) | Gleason scoring | Agreement between the model and each of two pathologists (κ = 0.75 and 0.71, resp.) was comparable with the inter-pathologist agreement (κ = 0.71) | [55] |
Evaluation of genetic changes | |||
H&E-stained slides from TCGA cohort (n = 177) | SPOP mutation|SPOP non-mutant | -AUROC = 0.74 -Fisher’s Exact Test p = 0.007 | [56] |
Thyroid cancer | |||
Diagnosis | |||
Original image dataset (n = 279) | Model I **: PTC|Benign nodules | The accuracy of VGG-16 and Inception-V3 in the test group was 97.66% and 92.75%, resp. | [57] |
Tumor subtyping | |||
Fragmented images (n = 11,715; training = 9763) | Normal tissue|Adenoma|Nodular goiter|PTC|FTC|MTC|ATC | Accuracy was 100% for both MTC and nodular goiter and decreased gradually: 98.89% for FTC, 98.57% for ATC, 97.77% for PTC, 92.44% for adenoma, and 88.33% for normal tissue | [58] |
Miscellaneous Applications | |||
Diagnosis for esophageal lesion | |||
WSIs with high resolution (n = 379) | Barrett esophagus|Dysplasia|Cancer | The DL model accuracy = 0.83 (95% CI = 0.80–0.86) | [59] |
Diagnosis for melanocytic lesion | |||
H&E-stained WSIs (n = 155) were used to extract pathological patches (n = 225,230) | Nevus|Aggressive malignant melanoma | -Performance differed between patch-level and WSI-level evaluation: WSIs showed higher sensitivity, specificity, and accuracy (100%, 96.5%, and 98.2% vs. 94.9%, 94.7%, and 95.3% at patch level, resp.) -WSIs also had a higher AUROC [0.998 (95% CI = 0.994 to 1.000) vs. 0.989 (95% CI = 0.989 to 0.991)] | [60] |
Diagnosis of urinary tract lesion | |||
WSIs of liquid-based urine cytology specimens (n = 217) | Risk: Low|High | Sensitivity of 83% with a false-positive rate of 13% and AUROC of 0.92 | [61] |
Subtyping for ovary cancer | |||
H&E-stained tissue sections of ovarian cancer obtained from FAHXMU (n = 85; 1360 × 1024 px) | Serous|Mucinous|Endometrioid|Clear cell carcinoma | -Two models were trained, one on the original images (n = 1848) and one on augmented images (n = 20,328) -Accuracy increased from 72.76% to 78.20% when the augmented images were used as training data | [62] |
Biomarker for pancreatic neuroendocrine neoplasm | |||
A set of WSIs (n = 33) | Ki-67 index | The DL model employed 30 high-power fields and had a high sensitivity of 97.8% and specificity of 88.8% | [63] |
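Most rows in the table above summarize model performance with the same handful of metrics: AUROC, F1 score, Cohen's κ, Dice coefficient, sensitivity/specificity, and PPV/NPV. As a quick reference, the Python sketch below shows one conventional way to compute them with NumPy and scikit-learn; the labels, probabilities, and masks are made-up toy values for illustration only, not data from any of the cited studies.

```python
# Minimal sketch (toy data, not from the reviewed studies): computing the
# evaluation metrics reported throughout the table for a binary
# tumor/non-tumor classifier. Requires numpy and scikit-learn.
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score, cohen_kappa_score, confusion_matrix

# Hypothetical per-patch ground truth (1 = tumor) and model probabilities.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.92, 0.10, 0.73, 0.64, 0.41, 0.05, 0.88, 0.35])
y_pred = (y_prob >= 0.5).astype(int)  # operating threshold chosen for illustration

# AUROC: threshold-independent ranking quality of the predicted probabilities.
auroc = roc_auc_score(y_true, y_prob)

# F1 score and Cohen's kappa at the chosen operating point.
f1 = f1_score(y_true, y_pred)
kappa = cohen_kappa_score(y_true, y_pred)

# Sensitivity, specificity, PPV, and NPV from the confusion matrix.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)

# Dice coefficient for a hypothetical segmentation mask (overlap measure).
pred_mask = np.array([[1, 1, 0], [0, 1, 0]])
true_mask = np.array([[1, 0, 0], [0, 1, 1]])
dice = 2 * np.logical_and(pred_mask, true_mask).sum() / (pred_mask.sum() + true_mask.sum())

print(f"AUROC={auroc:.3f} F1={f1:.3f} kappa={kappa:.3f}")
print(f"Sens={sensitivity:.3f} Spec={specificity:.3f} PPV={ppv:.3f} NPV={npv:.3f} Dice={dice:.3f}")
```

For binary masks, the Dice coefficient is equivalent to the F1 score computed over pixels, which is why segmentation studies often report the two interchangeably.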