Article

Hyperparameter Optimizer with Deep Learning-Based Decision-Support Systems for Histopathological Breast Cancer Diagnosis

1 Department of Biomedical Engineering, College of Engineering, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
2 Department of Software Engineering, College of Computer and Information Science, King Saud University, Riyadh 11543, Saudi Arabia
3 Department of Information Systems, College of Science & Art at Mahayil, King Khalid University, Abha 62529, Saudi Arabia
4 Department of Computer Science, Faculty of Computers and Information Technology, Future University in Egypt, New Cairo 11835, Egypt
5 Department of Information Systems, College of Business Administration in Hawtat Bani Tamim, Prince Sattam Bin Abdulaziz University, Al-Kharj 16278, Saudi Arabia
6 Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam Bin Abdulaziz University, Al-Kharj 16278, Saudi Arabia
* Author to whom correspondence should be addressed.
Cancers 2023, 15(3), 885; https://doi.org/10.3390/cancers15030885
Submission received: 30 December 2022 / Revised: 20 January 2023 / Accepted: 25 January 2023 / Published: 31 January 2023

Simple Summary

This study develops an arithmetic optimization algorithm with deep-learning-based histopathological breast cancer classification (AOADL-HBCC) technique for healthcare decision making. The AOADL-HBCC technique employs noise removal based on median filtering (MF) and a contrast enhancement process. In addition, the presented AOADL-HBCC technique applies an AOA with a SqueezeNet model to derive feature vectors. Finally, a deep belief network (DBN) classifier with an Adamax hyperparameter optimizer is applied for the breast cancer classification process.

Abstract

Histopathological imaging is a commonly used modality for breast cancer diagnosis. As the manual analysis of histopathological images is difficult, automated tools utilizing artificial intelligence (AI) and deep learning (DL) methods are needed. Recent advancements in DL approaches have enabled high image classification performance in numerous application areas. This study develops an arithmetic optimization algorithm with deep-learning-based histopathological breast cancer classification (AOADL-HBCC) technique for healthcare decision making. The AOADL-HBCC technique employs noise removal based on median filtering (MF) and a contrast enhancement process. In addition, the presented AOADL-HBCC technique applies an AOA with a SqueezeNet model to derive feature vectors. Finally, a deep belief network (DBN) classifier with an Adamax hyperparameter optimizer is applied for the breast cancer classification process. To demonstrate the enhanced breast cancer classification results of the AOADL-HBCC methodology, a comparative study was carried out; it shows that the AOADL-HBCC technique performs better than other recent methodologies, with a maximum accuracy of 96.77%.

1. Introduction

Cancer is one of the most serious health concerns threatening the health and lives of individuals [1]. The mortality rate and incidence of breast cancer have been increasing in recent years. Early, precise diagnosis is considered key to improving the chances of survival. The primary step in initial diagnosis is a mammogram, but it can be difficult to identify tumors in dense breast tissue, and X-ray radiation poses a risk to the radiologist’s and the patient’s health [2]. The precise diagnosis of breast cancer requires skilled histopathologists, as well as large amounts of effort and time. Furthermore, the diagnostic outcomes of different histopathologists are not the same, because they depend mainly on the prior knowledge of each histopathologist [3]. The average diagnostic precision is only about 75%, which leads to low consistency in diagnoses. Histopathology can be defined as the detailed evaluation and microscopic inspection of biopsy samples carried out by a pathologist or expert to study cancer growth in tissues or organs [4]. Typical histopathological specimens contain many structures and cells that can be dispersed and surrounded haphazardly by distinct types of tissue [5]. The manual analysis and visual observation of histopathological images is time-consuming and requires expertise and experience. To raise the predictive and analytical capabilities of histopathological images, computer-based image analysis is an effective approach [6]. Such analysis is particularly useful for histopathological images because it provides a dependable second opinion and consistent evaluation, which increases throughput. This can help curtail the time taken to identify an issue; thus, the burden on pathologists and the death rate can be reduced [7].
Today, machine learning (ML) is successfully applied to text classification, image recognition, and object recognition. With the progression of computer-aided diagnosis (CAD) technology, ML has been effectively applied to breast cancer diagnosis [8]. Histopathological image classification based on conventional ML techniques and handcrafted feature extraction requires a manual design of features; however, it does not require highly powerful hardware and has advantages in computation time [9]. In contrast, histopathological image classification based on deep learning (DL), particularly convolutional neural networks (CNNs), frequently requires a large number of labelled training samples, whereas labelled data are hard to obtain [10]. The labeling of lesions is laborious and time-consuming work, even for professional histopathologists.
This study develops an arithmetic optimization algorithm with deep-learning-based histopathological breast cancer classification (AOADL-HBCC) technique for healthcare decision making. The presented AOADL-HBCC technique mainly aims to recognize the presence of breast cancer in histopathological images (HIs). At the primary level, the AOADL-HBCC technique employs noise removal based on median filtering (MF) and a contrast enhancement process. In addition, the presented AOADL-HBCC technique applies an AOA with a SqueezeNet model to derive feature vectors. Finally, a deep belief network (DBN) classifier with an Adamax hyperparameter optimizer is applied for the breast cancer classification process. To demonstrate the enhanced breast cancer classification results of the AOADL-HBCC approach, a wide range of simulations was performed.

2. Related Works

Shankar et al. [11] established a new chaotic sparrow search algorithm with deep TL-assisted BC classification (CSSADTL-BCC) technique for histopathological images (HPIs). The proposed technique mainly concentrated on the classification and detection of BC. To realize this, the CSSADTL-BCC system initially applied Gaussian filtering (GF) to remove noise. In addition, a MixNet-based feature extractor was utilized to generate a suitable group of feature vectors, and a stacked GRU (SGRU) classifier was used to assign class labels. In [12], TL and deep feature extraction approaches were employed to adapt a pretrained CNN to the problem at hand. The VGG16 and AlexNet models were considered for feature extraction, and AlexNet was employed for additional finetuning. The obtained features were then classified by an SVM.
Khan et al. [13] examined a new DL framework for the classification and recognition of BC from breast cytology images using the concept of TL. Generally, DL architectures have addressed such problems in isolation. In the presented structure, image features were extracted using pretrained CNN architectures such as ResNet, GoogLeNet, and VGGNet and fed, with average pooling, to fully connected (FC) layers to classify benign and malignant cells. In [14], a DL-based TL system was presented for classifying histopathological images automatically. Two well-known pretrained CNN models, DenseNet161 and ResNet50, were trained and tested on grayscale and color images.
Singh et al. [15] examined a TL-based framework to address this problem, concentrating on HPIs and imbalanced image classification. The authors used the common VGG19 network as the base model and complemented it with several recent approaches to improve the overall efficiency of the technique. In [16], conventional softmax- and SVM-classifier-based TL systems were evaluated for classifying histopathological cancer images on a binary BC database and a multiclass lung and colon cancer database. To achieve optimum classification accuracy, a procedure that attaches an SVM to the FC layer of the softmax-based TL models was presented. In [17], the authors addressed BC detection in HPIs obtained from microscopic scans of breast tissue. They proposed two integrated DCNNs for extracting salient image features using TL. The pretrained Xception and Inception models were used in parallel; the feature maps were then integrated and reduced by dropout before being fed to the final FC layer for classification.

3. The Proposed Model

In this work, an automated breast cancer classification method, named the AOADL-HBCC technique, was developed using HIs. The presented AOADL-HBCC technique mainly aims to recognize the presence of breast cancer in HIs. It encompasses a series of processes, namely image preprocessing (MF-based noise removal and contrast enhancement), SqueezeNet feature extraction, AOA hyperparameter tuning, DBN classification, and Adamax-based hyperparameter optimization. Figure 1 shows a block diagram of the AOADL-HBCC mechanism.
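As a hedged illustration of the preprocessing stage listed above (MF-based noise removal followed by contrast enhancement), the following sketch uses OpenCV; reading the image in grayscale, the 3 × 3 kernel, and the use of CLAHE for contrast enhancement are assumptions for illustration, since the paper does not specify these details.

```python
# Minimal preprocessing sketch (not the authors' code), assuming OpenCV.
# CLAHE is one common contrast-enhancement choice and is an assumption here.
import cv2

def preprocess(image_path, kernel_size=3):
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    denoised = cv2.medianBlur(image, kernel_size)                # MF-based noise removal
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(denoised)                                 # contrast enhancement
```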

3.1. Design of AOA with SqueezeNet Model

In this study, the presented AOADL-HBCC technique utilized an AOA with a SqueezeNet model to derive feature vectors. Presently, GoogLeNet, ResNet, VGG, AlexNet, etc., are signature deep neural network (DNN) architectures [18]. Although deeper networks can achieve remarkable performance, they take longer to train and their recognition speed is reduced. Because the residual architecture does not increase the number of model parameters, the difficulties of training degradation and gradient vanishing are effectively mitigated and the convergence of the model is improved. Thus, the SqueezeNet architecture was applied as the backbone network to extract features. Figure 2 shows the framework of the SqueezeNet method.
Compared with AlexNet and VGGNet, the SqueezeNet architecture has far fewer parameters. The fire module is the primary building block of SqueezeNet and consists of squeeze and expand structures. The squeeze layer contains only 1 × 1 convolutional kernels, while the expand layer contains both 1 × 1 and 3 × 3 convolutional kernels. If the number of 3 × 3 expand kernels is $E_{3\times3}$ and the number of 1 × 1 expand kernels is $E_{1\times1}$, the number of 1 × 1 squeeze kernels $S_{1\times1}$ must satisfy $S_{1\times1} < E_{1\times1} + E_{3\times3}$. Thus, the 1 × 1 squeeze convolution added before each expand stage reduces the number of input channels and convolutional kernel parameters, lowering the computational cost, and a subsequent 1 × 1 convolutional layer restores the number of channels for feature extraction. Replacing a 3 × 3 convolution with a 1 × 1 convolution reduces the parameter count of that layer to one-ninth. Image feature extraction relies on shared convolutional layers: low-level features such as edges and corners are extracted by the early layers of the network, whereas higher-level features describing the target shape are obtained at the top of the network. To handle targets at different scales, a feature pyramid network (FPN) was used to extend the backbone network, which is especially effective for detecting smaller targets. The topmost feature of the FPN architecture is combined with lower-level features by up-sampling, and each layer predicts a feature map.
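To make the squeeze/expand layout concrete, the following is a minimal sketch of a SqueezeNet-style fire module in PyTorch; the channel counts are illustrative and are not taken from the paper.

```python
# Sketch of a fire module, assuming the standard squeeze/expand layout described above.
import torch
import torch.nn as nn

class Fire(nn.Module):
    def __init__(self, in_channels, squeeze_1x1, expand_1x1, expand_3x3):
        super().__init__()
        # Squeeze stage: 1x1 convolutions reduce the channel count,
        # with squeeze_1x1 < expand_1x1 + expand_3x3.
        self.squeeze = nn.Conv2d(in_channels, squeeze_1x1, kernel_size=1)
        # Expand stage: parallel 1x1 and 3x3 convolutions.
        self.expand1x1 = nn.Conv2d(squeeze_1x1, expand_1x1, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_1x1, expand_3x3, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        # Concatenate the two expand branches along the channel dimension.
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

# Example: a fire module with illustrative channel counts.
fire = Fire(in_channels=96, squeeze_1x1=16, expand_1x1=64, expand_3x3=64)
```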
To adjust the hyperparameters of the SqueezeNet method, an AOA was implemented in this work. The AOA starts with an arbitrary population of candidate objects (immersed objects) [19]. Each object is initialized at a random location in the fluid. The initial location of each object is obtained as follows:
$x_i = x_{l_i} + rand \times (x_{u_i} - x_{l_i}), \quad i = 1, 2, \ldots, N$ (1)
In this expression, $x_i$ describes the $i$th object in a population of $N$ objects, and $x_{u_i}$ and $x_{l_i}$ indicate the upper and lower boundaries of the solution space, respectively. In addition to the location, the AOA initializes the volume (V), density (D), and acceleration (A) of the $i$th object as follows:
$V_i = rand$ (2)
$D_i = rand$ (3)
$A_i = x_{l_i} + rand \times (x_{u_i} - x_{l_i})$ (4)
Next, the cost value of each candidate is evaluated, and the volume, density, and acceleration of the best object in the population are stored as $V_{best}$, $D_{best}$, and $A_{best}$. Then, each candidate is updated as follows:
$V_{t+1}^i = V_t^i + rand \times (V_{best} - V_t^i)$ (5)
$D_{t+1}^i = D_t^i + rand \times (D_{best} - D_t^i)$ (6)
In this case, $V_{best}$ and $D_{best}$ denote the volume and density, respectively, associated with the best object found so far, and $rand$ indicates a uniformly distributed random number. The AOA applies a transfer operator (TF) to move from exploration to exploitation:
$TF = \exp\left(\frac{t - t_{max}}{t_{max}}\right)$ (7)
In Equation (7), $TF$ increases gradually over the iterations until it reaches 1, where $t$ and $t_{max}$ indicate the current iteration and the maximum iteration count, respectively. Likewise, a decreasing density factor $D$ is used to shift from a global to a local search:
$D_{t+1} = \exp\left(\frac{t_{max} - t}{t_{max}}\right) - \frac{t}{t_{max}}$ (8)
In Equation (8), $D_{t+1}$ decreases with time, which provides the ability to converge and renders a proper trade-off between exploitation and exploration. Exploration is triggered on the basis of collisions among objects. When $TF \le 0.5$, a random material ($mr$) is selected to update the acceleration of the object at iteration $t + 1$:
$A_{t+1}^i = \frac{D_{mr} + V_{mr} \times A_{mr}}{D_{t+1}^i \times V_{t+1}^i}$ (9)
Here, $A_i$, $V_i$, and $D_i$ denote the acceleration, volume, and density of the $i$th object. Exploitation is triggered when there is no collision among objects. When $TF > 0.5$, the object acceleration is updated as follows:
$A_{t+1}^i = \frac{D_{best} + V_{best} \times A_{best}}{D_{t+1}^i \times V_{t+1}^i}$ (10)
where $A_{best}$ indicates the acceleration of the best object. The next step normalizes the acceleration to assess the percentage of change:
$\overline{A_{t+1}^i} = u \times \frac{A_{t+1}^i - \min(A)}{\max(A) - \min(A)} + l$ (11)
Here, $\overline{A_{t+1}^i}$ refers to the step percentage, and $l$ and $u$ denote the normalization limits, which are fixed to 0.1 and 0.9, respectively. When $TF \le 0.5$, the location of the $i$th object in the next round is obtained as follows:
$x_{t+1}^i = x_t^i + c_1 \times rand \times \overline{A_{t+1}^i} \times D \times (x_{rand} - x_t^i)$ (12)
In Equation (12), $c_1$ denotes a constant equal to 2. In addition, when $TF > 0.5$, the location of the object is updated as:
$x_{t+1}^i = x_{best}^t + F \times c_2 \times rand \times \overline{A_{t+1}^i} \times D \times T \times (x_{best} - x_t^i)$ (13)
In this expression, $c_2$ denotes a constant equal to 6. $T$ increases with time in the range $[c_3 \times 0.3, 1]$ and takes a defined percentage of the best location; this percentage increases gradually to diminish the difference between the best and current locations, offering an optimal balance between exploration and exploitation. $F$ is a flag that changes the direction of motion:
$F = \begin{cases} +1, & \text{if } P \le 0.5 \\ -1, & \text{if } P > 0.5 \end{cases}$ (14)
where
$P = 2 \times rand - c_4$ (15)
Finally, the value of each object is assessed through the cost function, and the optimal solution is returned once the termination criterion is satisfied.
The AOA uses a fitness function (FF) to obtain an improved classification outcome. The FF returns a positive value that reflects the quality of a candidate solution. In this work, the minimization of the classifier error rate is taken as the FF, as given in Equation (16).
$fitness(x_i) = ClassifierErrorRate(x_i) = \frac{\text{number of misclassified samples}}{\text{total number of samples}} \times 100$ (16)
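The following is a minimal NumPy sketch of the optimizer loop described by Equations (1)-(16), under the assumption that `classifier_error_rate` is a user-supplied function that evaluates a candidate hyperparameter vector (for example, by training and validating the SqueezeNet/DBN pipeline) and returns the percentage of misclassified samples; the population size, iteration budget, and bound handling are illustrative rather than the authors' settings.

```python
# Hedged sketch of the AOA update loop (Equations (1)-(16)); not the authors' code.
import numpy as np

def aoa_optimize(classifier_error_rate, lower, upper,
                 n_objects=10, t_max=30, c1=2.0, c2=6.0, c3=2.0, c4=0.5, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size

    # Equations (1)-(4): random initial positions, volumes, densities, accelerations.
    x = lower + rng.random((n_objects, dim)) * (upper - lower)
    vol = rng.random((n_objects, dim))
    den = rng.random((n_objects, dim))
    acc = lower + rng.random((n_objects, dim)) * (upper - lower)

    fit = np.array([classifier_error_rate(xi) for xi in x])
    b = int(fit.argmin())
    x_best, fit_best = x[b].copy(), fit[b]
    vol_best, den_best, acc_best = vol[b].copy(), den[b].copy(), acc[b].copy()

    for t in range(1, t_max + 1):
        tf = np.exp((t - t_max) / t_max)                  # Equation (7)
        d = np.exp((t_max - t) / t_max) - t / t_max       # Equation (8)
        vol += rng.random(vol.shape) * (vol_best - vol)   # Equation (5)
        den += rng.random(den.shape) * (den_best - den)   # Equation (6)

        if tf <= 0.5:                                     # exploration, Equation (9)
            mr = rng.integers(n_objects, size=n_objects)
            acc_new = (den[mr] + vol[mr] * acc[mr]) / (den * vol + 1e-12)
        else:                                             # exploitation, Equation (10)
            acc_new = (den_best + vol_best * acc_best) / (den * vol + 1e-12)

        # Equation (11): normalize accelerations to [l, u] = [0.1, 0.9].
        a_min, a_max = acc_new.min(), acc_new.max()
        acc_norm = 0.9 * (acc_new - a_min) / (a_max - a_min + 1e-12) + 0.1

        if tf <= 0.5:                                     # Equation (12)
            x_rand = x[rng.integers(n_objects, size=n_objects)]
            x = x + c1 * rng.random(x.shape) * acc_norm * d * (x_rand - x)
        else:                                             # Equations (13)-(15)
            big_t = min(max(c3 * tf, c3 * 0.3), 1.0)      # T grows with TF (illustrative bound)
            p = 2 * rng.random(n_objects) - c4
            flag = np.where(p <= 0.5, 1.0, -1.0)[:, None]
            x = x_best + flag * c2 * rng.random(x.shape) * acc_norm * d * big_t * (x_best - x)
        x = np.clip(x, lower, upper)
        acc = acc_new

        fit = np.array([classifier_error_rate(xi) for xi in x])   # Equation (16)
        b = int(fit.argmin())
        if fit[b] < fit_best:
            x_best, fit_best = x[b].copy(), fit[b]
            vol_best, den_best, acc_best = vol[b].copy(), den[b].copy(), acc[b].copy()
    return x_best, fit_best
```

In the AOADL-HBCC setting, the candidate vector would encode the SqueezeNet hyperparameters, and the returned error rate would be measured on a held-out validation split.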

3.2. Breast Cancer Classification Using Optimal DBN Model

Finally, an Adamax optimizer with the DBN method was applied for the breast cancer classification process (Algorithm 1). A DBN is a stack of restricted Boltzmann machines (RBMs), in which only the top RBM retains undirected connections [20]. Significantly, this network architecture makes deep learning feasible and reduces training complexity. A simple and effective layer-wise training method for DBNs was developed by Hinton: the layers are trained consecutively and greedily, tying the weights of the layers not yet learned, applying contrastive divergence (CD) to learn the weights of a single layer, and iterating until all layers are trained. The network weights are then finetuned with a two-pass up-down procedure; this phase acts as a regularizer and assists with the supervised optimization problem. The energy of the directed model is computed such that the log probability is upper-bounded by a variational bound that becomes tight when the network weights are tied, as follows:
$E(x^0, h^0) = -\left(\log p(h^0) + \log p(x^0 \mid h^0)\right)$
$\log p(x^0) \ge \sum_{h^0} Q(h^0 \mid x^0)\left(\log p(h^0) + \log p(x^0 \mid h^0)\right) - \sum_{h^0} Q(h^0 \mid x^0) \log Q(h^0 \mid x^0)$
$\log p(x^0) - \xi_{n,m} = \sum_{h^0} Q(h^0 \mid x^0) \log p(h^0)$
After the layer-wise learning of the network weights, the up-down approach is used to finetune them. The wake-sleep approach is an unsupervised algorithm that trains neural networks in two phases: the “wake” phase operates on the feedforward (recognition) path to adjust the weights, and the “sleep” phase operates on the feedback (generative) path. The up-down approach is applied to the network to reduce the underfitting that is commonly observed in greedily trained networks. In the first phase, the weights on the directed connections (the generative weights or parameters) are adjusted by computing the wake-phase probabilities, sampling the states, and updating the weights using CD. Then, the lower layers are stochastically driven through the top-down connections, whose weights are called the inference weights or parameters; the sleep-phase probabilities are computed, the states are sampled, and the result is estimated.
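As a minimal sketch of the CD-based layer-wise training step referred to above, the following NumPy function performs one CD-1 update for a single RBM layer; the learning rate and array shapes are illustrative, and the closing comment indicates how the greedy stacking would proceed.

```python
# One CD-1 update for a single binary RBM layer (illustrative, not the authors' code).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(v0, W, b_vis, b_hid, lr=0.01, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    # Positive phase: sample hidden units given the data.
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step (reconstruct v, then recompute h probabilities).
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    p_h1 = sigmoid(p_v1 @ W + b_hid)
    # CD-1 gradient approximation and parameter updates.
    batch = v0.shape[0]
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / batch
    b_vis += lr * (v0 - p_v1).mean(axis=0)
    b_hid += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_vis, b_hid

# Greedy stacking: train the first RBM on the data, then use its hidden
# activations as the "data" for the next RBM, and so on up the stack.
```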
For optimizing the training efficacy of the DBN, the Adamax optimizer was employed for altering the hyperparameter values [21]:
$w_t^i = w_{t-1}^i - \frac{\eta}{v_t + \epsilon}\, \hat{m}_t$
where
$\hat{m}_t = \frac{m_t}{1 - \beta_1^t}$
$v_t = \max(\beta_2 \cdot v_{t-1}, |G_t|)$
$m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, G_t$
$G_t = \nabla_w C(w_t)$
In this expression, $\eta$ denotes the learning rate, $w_t$ represents the weights at step $t$, $C(\cdot)$ indicates the cost function, and $\nabla_w C(w_t)$ specifies the gradient of the cost with respect to the weights $w_t$. The decay rates $\beta_i \in [0, 1)$ control how much of the previous updates is retained, and $m_t$ and $v_t$ represent the first and second moment estimates, respectively.
Algorithm 1. Pseudocode of Adamax
$\eta$: learning rate
$\beta_1, \beta_2 \in [0, 1)$: exponential decay rates for the moment estimates
$C(w)$: cost function with parameters $w$
$w_0$: initial parameter vector
$m_0 \leftarrow 0$ (first moment vector)
$u_0 \leftarrow 0$ (exponentially weighted infinity norm)
$i \leftarrow 0$ (time step)
while $w_i$ has not converged do
  $i \leftarrow i + 1$
  $m_i \leftarrow \beta_1 \cdot m_{i-1} + (1 - \beta_1) \cdot \nabla_w C(w_i)$
  $u_i \leftarrow \max(\beta_2 \cdot u_{i-1}, |\nabla_w C(w_i)|)$
  $w_{i+1} \leftarrow w_i - \left(\eta / (1 - \beta_1^i)\right) \cdot m_i / u_i$
end while
return $w_i$ (final parameter vector)
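The update in Algorithm 1 can be expressed compactly as follows; this is a generic NumPy sketch on a toy cost function, not the authors' implementation, and the default hyperparameters are the commonly used Adamax defaults rather than values from the paper.

```python
# Generic Adamax update loop mirroring Algorithm 1; epsilon added for numerical safety.
import numpy as np

def adamax(grad_fn, w, eta=0.002, beta1=0.9, beta2=0.999, eps=1e-8, steps=1000):
    m = np.zeros_like(w)   # first moment estimate
    u = np.zeros_like(w)   # exponentially weighted infinity norm
    for i in range(1, steps + 1):
        g = grad_fn(w)                                     # gradient of the cost at w
        m = beta1 * m + (1 - beta1) * g
        u = np.maximum(beta2 * u, np.abs(g))
        w = w - (eta / (1 - beta1 ** i)) * m / (u + eps)   # bias-corrected step
    return w

# Toy usage: minimize C(w) = ||w||^2, whose gradient is 2w.
w_opt = adamax(lambda w: 2 * w, np.array([3.0, -2.0]))
```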

4. Experimental Validation

This section examines the breast cancer classification results of the AOADL-HBCC model on a benchmark dataset [22]. The dataset holds two sub-datasets, namely the 100× dataset and the 200× dataset, as represented in Table 1. Figure 3 illustrates some sample images.
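For reference, the 80:20 and 70:30 training/testing splits used below can be produced with a stratified split; the sketch below assumes scikit-learn and uses placeholder image indices and the class counts of the 100× subset from Table 1, not the authors' data-loading code.

```python
# Hypothetical stratified 80:20 split of the 100x subset (see Table 1 for counts).
import numpy as np
from sklearn.model_selection import train_test_split

image_ids = np.arange(2081)                   # placeholder indices for the 100x images
labels = np.array([0] * 644 + [1] * 1437)     # 644 benign, 1437 malignant
train_ids, test_ids, y_train, y_test = train_test_split(
    image_ids, labels, test_size=0.20, stratify=labels, random_state=42)
# For the 70:30 experiments, set test_size=0.30 instead.
```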
The proposed model was simulated using Python 3.6.5 on a PC with an i5-8600K CPU, a GeForce 1050 Ti GPU (4 GB), 16 GB RAM, a 250 GB SSD, and a 1 TB HDD. The parameter settings were as follows: learning rate, 0.01; dropout, 0.5; batch size, 5; epoch count, 50; and activation, ReLU.
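A hedged PyTorch sketch of the stated configuration (learning rate 0.01, dropout 0.5, batch size 5, 50 epochs, ReLU activation) is shown below; the classifier head and dataset object are placeholders and do not reproduce the authors' code.

```python
# Illustrative training configuration matching the stated settings.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader

model = nn.Sequential(                       # placeholder classifier head
    nn.Linear(512, 128), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(128, 2))
optimizer = optim.Adamax(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

def train(dataset, epochs=50, batch_size=5):
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for features, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(features), labels)
            loss.backward()
            optimizer.step()
```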
The confusion matrices of the AOADL-HBCC model on the 100× dataset are reported in Figure 4. This figure implies the AOADL-HBCC method proficiently recognized and sorted the HIs into malignant and benign classes in all aspects.
Table 2 reports the overall breast cancer classification outcomes of the AOADL-HBCC method on the 100× database. The outcomes indicate that the AOADL-HBCC approach recognized both benign and malignant classes proficiently. For example, on the 80% TR database, the AOADL-HBCC method revealed an average accuracy of 94.59%, sensitivity of 94.36%, specificity of 94.36%, F-score of 93.75%, and MCC of 87.55%. Simultaneously, on the 20% TS database, the AOADL-HBCC method exhibited an average accuracy of 96.40%, sensitivity of 95.93%, specificity of 95.93%, F-score of 95.83%, and MCC of 91.67%. Concurrently, on the 70% TR database, the AOADL-HBCC approach displayed an average accuracy of 95.60%, sensitivity of 93.19%, specificity of 93.19%, F-score of 94.62%, and MCC of 89.56%.
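The class-wise measures in Table 2 and Table 3 follow the standard confusion-matrix definitions; the helper below (not from the paper) shows how accuracy, sensitivity, specificity, F-score, and MCC are computed from the counts of true/false positives and negatives, scaled to percent to match the tables.

```python
# Standard binary classification measures from a confusion matrix, in percent.
import math

def binary_metrics(tp, fp, fn, tn):
    acc = (tp + tn) / (tp + fp + fn + tn)
    sens = tp / (tp + fn)                     # sensitivity (recall)
    spec = tn / (tn + fp)                     # specificity
    prec = tp / (tp + fp)
    f1 = 2 * prec * sens / (prec + sens)      # F-score
    mcc = ((tp * tn - fp * fn) /
           math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return {name: round(100 * value, 2) for name, value in
            {"accuracy": acc, "sensitivity": sens, "specificity": spec,
             "f_score": f1, "mcc": mcc}.items()}

# Example with hypothetical counts:
print(binary_metrics(tp=90, fp=5, fn=6, tn=99))
```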
The TACC and VACC of the AOADL-HBCC technique under the 100× dataset are inspected with respect to BCC performance in Figure 5. The figure indicates that the AOADL-HBCC method displayed enhanced performance with increased values of TACC and VACC. It is noted that the AOADL-HBCC algorithm gained maximum TACC outcomes.
The TLS and VLS of the AOADL-HBCC approach under the 100× dataset are examined with respect to BCC performance in Figure 6. The figure shows that the AOADL-HBCC method exhibited better performance with minimal values of TLS and VLS. It is noted that the AOADL-HBCC approach resulted in reduced VLS outcomes.
A clear precision–recall investigation of the AOADL-HBCC methodology on the test database is given in Figure 7. The figure shows that the AOADL-HBCC system achieved enhanced precision–recall values for every class label.
A brief ROC analysis of the AOADL-HBCC approach on the test database is shown in Figure 8. The results show that the AOADL-HBCC methodology exhibited its capability in classifying different classes in the test database.
The confusion matrices of the AOADL-HBCC approach on the 200× database are given in Figure 9. This figure indicates that the AOADL-HBCC approach proficiently recognized and sorted the HIs into malignant and benign classes in every aspect.
Table 3 shows the overall breast cancer classification results of the AOADL-HBCC approach on the 200× dataset. The results indicate that the AOADL-HBCC model recognized both benign and malignant classes proficiently. For example, on the 80% TR database, the AOADL-HBCC technique exhibited an average accuracy of 96.40%, sensitivity of 96.18%, specificity of 96.18%, F-score of 95.91%, and MCC of 91.83%. Concurrently, on the 20% TS database, the AOADL-HBCC approach displayed an average accuracy of 96.77%, sensitivity of 96.88%, specificity of 96.88%, F-score of 95.85%, and MCC of 91.80%. Simultaneously, on the 70% TR database, the AOADL-HBCC technique displayed an average accuracy of 93.04%, sensitivity of 90.03%, specificity of 90.03%, F-score of 91.51%, and MCC of 83.45%.
The TACC and VACC of the AOADL-HBCC method under the 200× dataset are inspected with respect to BCC performance in Figure 10. The figure shows that the AOADL-HBCC methodology displayed enhanced performance with increased values of TACC and VACC. It is noted that the AOADL-HBCC technique attained maximum TACC outcomes.
The TLS and VLS of the AOADL-HBCC approach under the 200× dataset are examined with respect to BCC performance in Figure 11. The figure indicates that the AOADL-HBCC methodology showed superior performance with minimal values of TLS and VLS. It is noted that the AOADL-HBCC method resulted in reduced VLS outcomes.
A clear precision–recall inspection of the AOADL-HBCC methodology on the test database is shown in Figure 12. The figure indicates that the AOADL-HBCC method achieved enhanced precision–recall values for every class label.
A brief ROC study of the AOADL-HBCC system on the test database is given in Figure 13. The outcomes reveal the ability of the AOADL-HBCC method to classify different classes in the test database.
A detailed comparative study of the AOADL-HBCC model with recent DL models is reported in Table 4 and Figure 14 [23]. The simulation values show that the Incep.V3, VGG16, and ResNet-50 models reported lower accuracies of 81.67%, 80.15%, and 82.18%, respectively. Next, the Incep.V3-LSTM and Incep.V3-BiLSTM models attained reasonable accuracies of 91.46% and 92.05%, respectively.
Although the DTLRO-HCBC model reached near-optimal a c c u y of 93.52%, the AOADL-HBCC model gained maximum a c c u y of 96.77%. These results ensured the enhanced outcomes of the AOADL-HBCC model over other models.

5. Conclusions

In this work, an automated breast cancer classification model, named the AOADL-HBCC technique, was developed for HIs. The presented AOADL-HBCC technique mainly aims to recognize the presence of breast cancer in HIs. At the primary level, the AOADL-HBCC technique exploited MF-based noise removal and a contrast enhancement process. In addition, the presented AOADL-HBCC technique utilized an AOA with a SqueezeNet model to derive feature vectors. Lastly, an Adamax optimizer with a DBN model was applied for the breast cancer classification process. To demonstrate the enhanced breast cancer classification results of the AOADL-HBCC methodology, a wide range of simulations was performed. A comparative study indicated the better performance of the AOADL-HBCC technique over other recent methodologies, with a maximum accuracy of 96.77%. Therefore, the AOADL-HBCC technique can be employed for timely and accurate BC classification. In the future, ensemble-learning-based DL classifiers can be incorporated to boost the overall performance of the AOADL-HBCC technique. In addition, the performance of the proposed model can be tested on large-scale real-time databases.

Author Contributions

M.O.: Methodology, Project administration, Funding acquisition, Formal analysis, Writing—original draft; M.S.M.: Writing—original draft, Writing—review & editing, Validation; N.N.: Review & editing, revising critically for important intellectual content, Validation; Funding acquisition. H.M.: Formal analysis, Writing—original draft; Writing—review & editing; A.M.: Supervision, Conceptualization, data curation, Writing—original draft; A.E.O.: Software, data curation, Writing—original draft; Writing—review & editing; A.A.A.: Supervision, Writing—original draft; Writing—review & editing; M.I.A.: review & editing, revising critically for important intellectual content, Validation. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Large Groups Project under grant number (2/44). This work was also supported by the Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R203), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia, and by the Research Supporting Project number (RSP2023R787), King Saud University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

This article does not contain any studies with human participants performed by any of the authors.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article as no datasets were generated during the current study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Boumaraf, S.; Liu, X.; Zheng, Z.; Ma, X.; Ferkous, C. A new transfer learning based approach to magnification dependent and independent classification of breast cancer in histopathological images. Biomed. Signal Process. Control 2021, 63, 102192. [Google Scholar] [CrossRef]
  2. Bose, S.; Garg, A.; Singh, S.P. Transfer Learning for Classification of Histopathology Images of Invasive Ductal Carcinoma in Breast. In Proceedings of the 2022 3rd International Conference on Electronics and Sustainable Communication Systems (ICESC), Coimbatore, India, 17–19 August 2022; pp. 1039–1044. [Google Scholar]
  3. Ahmad, N.; Asghar, S.; Gillani, S.A. Transfer learning-assisted multi-resolution breast cancer histopathological images classification. Vis. Comput. 2021, 38, 2751–2770. [Google Scholar] [CrossRef]
  4. Thuy, M.B.H.; Hoang, V.T. Fusing of deep learning, transfer learning and gan for breast cancer histopathological image classification. In International Conference on Computer Science, Applied Mathematics and Applications; Springer: Cham, Switzerland, 2019; pp. 255–266. [Google Scholar]
  5. Abbasniya, M.R.; Sheikholeslamzadeh, S.A.; Nasiri, H.; Emami, S. Classification of Breast Tumors Based on Histopathology Images Using Deep Features and Ensemble of Gradient Boosting Methods. Comput. Electr. Eng. 2022, 103, 108382. [Google Scholar] [CrossRef]
  6. Chang, J.; Yu, J.; Han, T.; Chang, H.J.; Park, E. A method for classifying medical images using transfer learning: A pilot study on histopathology of breast cancer. In Proceedings of the 2017 IEEE 19th International Conference on E-Health Networking, Applications and Services (Healthcom), Dalian, China, 12–15 October 2017; pp. 1–4. [Google Scholar]
  7. Ahmad, H.M.; Ghuffar, S.; Khurshid, K. Classification of breast cancer histology images using transfer learning. In Proceedings of the 2019 16th International Bhurban Conference on Applied Sciences and Technology (IBCAST), Islamabad, Pakistan, 8–12 January 2019; pp. 328–332. [Google Scholar]
  8. Alzubaidi, L.; Al-Shamma, O.; Fadhel, M.A.; Farhan, L.; Zhang, J.; Duan, Y. Optimizing the performance of breast cancer classification by employing the same domain transfer learning from hybrid deep convolutional neural network model. Electronics 2020, 9, 445. [Google Scholar] [CrossRef] [Green Version]
  9. Baghdadi, N.A.; Malki, A.; Balaha, H.M.; AbdulAzeem, Y.; Badawy, M.; Elhosseini, M. Classification of breast cancer using a manta-ray foraging optimized transfer learning framework. PeerJ Comput. Sci. 2022, 8, e1054. [Google Scholar] [CrossRef] [PubMed]
  10. Sajjad, U.; Hussain, I.; Hamid, K.; Ali, H.M.; Wang, C.C.; Yan, W.M. Liquid-to-vapor phase change heat transfer evaluation and parameter sensitivity analysis of nanoporous surface coatings. Int. J. Heat Mass Transf. 2022, 194, 123088. [Google Scholar] [CrossRef]
  11. Shankar, K.; Dutta, A.K.; Kumar, S.; Joshi, G.P.; Doo, I.C. Chaotic Sparrow Search Algorithm with Deep Transfer Learning Enabled Breast Cancer Classification on Histopathological Images. Cancers 2022, 14, 2770. [Google Scholar] [CrossRef] [PubMed]
  12. Deniz, E.; Şengür, A.; Kadiroğlu, Z.; Guo, Y.; Bajaj, V.; Budak, Ü. Transfer learning based histopathologic image classification for breast cancer detection. Health Inf. Sci. Syst. 2018, 6, 1–7. [Google Scholar] [CrossRef] [PubMed]
  13. Khan, S.; Islam, N.; Jan, Z.; Din, I.U.; Rodrigues, J.J.C. A novel deep learning based framework for the detection and classification of breast cancer using transfer learning. Pattern Recognit. Lett. 2019, 125, 1–6. [Google Scholar] [CrossRef]
  14. Talo, M. Automated classification of histopathology images using transfer learning. Artif. Intell. Med. 2019, 101, 101743. [Google Scholar] [CrossRef] [PubMed]
  15. Singh, R.; Ahmed, T.; Kumar, A.; Singh, A.K.; Pandey, A.K.; Singh, S.K. Imbalanced breast cancer classification using transfer learning. IEEE/ACM Trans. Comput. Biol. Bioinform. 2020, 18, 83–93. [Google Scholar] [CrossRef] [PubMed]
  16. Fan, J.; Lee, J.; Lee, Y. A transfer learning architecture based on a support vector machine for histopathology image classification. Appl. Sci. 2021, 11, 6380. [Google Scholar] [CrossRef]
  17. Elmannai, H.; Hamdi, M.; AlGarni, A. Deep learning models combining for breast cancer histopathology image classification. Int. J. Comput. Intell. Syst. 2021, 14, 1003. [Google Scholar] [CrossRef]
  18. Escorcia-Gutierrez, J.; Gamarra, M.; Beleño, K.; Soto, C.; Mansour, R.F. Intelligent deep learning-enabled autonomous small ship detection and classification model. Comput. Electr. Eng. 2022, 100, 107871. [Google Scholar] [CrossRef]
  19. Kaveh, A.; Hamedani, K.B. Improved arithmetic optimization algorithm and its application to discrete structural optimization. In Structures; Elsevier: Amsterdam, The Netherlands, 2022; Volume 35, pp. 748–764. [Google Scholar]
  20. Zand, R.; Camsari, K.Y.; Pyle, S.D.; Ahmed, I.; Kim, C.H.; DeMara, R.F. Low-energy deep belief networks using intrinsic sigmoidal spintronic-based probabilistic neurons. In Proceedings of the 2018 on Great Lakes Symposium on VLSI, Chicago, IL, USA, 23–25 May 2018; pp. 15–20. [Google Scholar]
  21. Kandel, I.; Castelli, M.; Popovič, A. Comparative study of first order optimizers for image classification using convolutional neural networks on histopathology images. J. Imaging 2020, 6, 92. [Google Scholar] [CrossRef] [PubMed]
  22. Available online: https://web.inf.ufpr.br/vri/databases/breast-cancer-histopathological-database-breakhis/ (accessed on 5 July 2022).
  23. Ragab, M.; Nahhas, A.F. Optimal Deep Transfer Learning Model for Histopathological Breast Cancer Classification. CMC-Comput. Mater. Contin. 2022, 73, 2849–2864. [Google Scholar] [CrossRef]
Figure 1. Block diagram of AOADL-HBCC system.
Figure 2. Architecture of SqueezeNet model.
Figure 3. Sample images.
Figure 4. Confusion matrices of AOADL-HBCC system under 100× dataset: (a,b) TR and TS databases of 80:20, and (c,d) TR and TS databases of 70:30.
Figure 5. TACC and VACC analysis of AOADL-HBCC approach under 100× dataset.
Figure 6. TLS and VLS analysis of AOADL-HBCC approach under 100× dataset.
Figure 7. Precision–recall analysis of AOADL-HBCC approach under 100× dataset.
Figure 8. ROC analysis of AOADL-HBCC approach under 100× dataset.
Figure 9. Confusion matrices of AOADL-HBCC system under 200× dataset: (a,b) TR and TS databases of 80:20, and (c,d) TR and TS databases of 70:30.
Figure 10. TACC and VACC analysis of AOADL-HBCC approach under 200× dataset.
Figure 11. TLS and VLS analysis of AOADL-HBCC approach under 200× dataset.
Figure 12. Precision–recall analysis of AOADL-HBCC approach under 200× dataset.
Figure 13. ROC analysis of AOADL-HBCC approach under 200× dataset.
Figure 14. Comparative analysis of AOADL-HBCC system with existing approaches.
Table 1. Dataset details.

Class        No. of Images (100×)    No. of Images (200×)
Benign       644                     623
Malignant    1437                    1390
Total        2081                    2013
Table 2. BCC outcomes of AOADL-HBCC approach with various measures under 100× dataset.

Training/Testing (80:20), Training Phase
Class        Accuracy    Sensitivity    Specificity    F-Score    MCC
Benign       94.59       93.76          94.96          91.44      87.55
Malignant    94.59       94.96          93.76          96.05      87.55
Average      94.59       94.36          94.36          93.75      87.55

Training/Testing (80:20), Testing Phase
Class        Accuracy    Sensitivity    Specificity    F-Score    MCC
Benign       96.40       94.66          97.20          94.30      91.67
Malignant    96.40       97.20          94.66          97.37      91.67
Average      96.40       95.93          95.93          95.83      91.67

Training/Testing (70:30), Training Phase
Class        Accuracy    Sensitivity    Specificity    F-Score    MCC
Benign       95.60       87.07          99.31          92.31      89.56
Malignant    95.60       99.31          87.07          96.92      89.56
Average      95.60       93.19          93.19          94.62      89.56

Training/Testing (70:30), Testing Phase
Class        Accuracy    Sensitivity    Specificity    F-Score    MCC
Benign       96.16       90.64          98.82          93.88      91.21
Malignant    96.16       98.82          90.64          97.20      91.21
Average      96.16       94.73          94.73          95.54      91.21
Table 3. BCC outcomes of AOADL-HBCC approach with various measures under 200× dataset.

Training/Testing (80:20), Training Phase
Class        Accuracy    Sensitivity    Specificity    F-Score    MCC
Benign       96.40       95.58          96.79          94.49      91.83
Malignant    96.40       96.79          95.58          97.32      91.83
Average      96.40       96.18          96.18          95.91      91.83

Training/Testing (80:20), Testing Phase
Class        Accuracy    Sensitivity    Specificity    F-Score    MCC
Benign       96.77       97.09          96.67          93.90      91.80
Malignant    96.77       96.67          97.09          97.81      91.80
Average      96.77       96.88          96.88          95.85      91.80

Training/Testing (70:30), Training Phase
Class        Accuracy    Sensitivity    Specificity    F-Score    MCC
Benign       93.04       82.22          97.85          87.90      83.45
Malignant    93.04       97.85          82.22          95.12      83.45
Average      93.04       90.03          90.03          91.51      83.45

Training/Testing (70:30), Testing Phase
Class        Accuracy    Sensitivity    Specificity    F-Score    MCC
Benign       95.03       89.47          97.58          91.89      88.38
Malignant    95.03       97.58          89.47          96.42      88.38
Average      95.03       93.53          93.53          94.16      88.38
Table 4. Comparative analysis of AOADL-HBCC system with current approaches.

Methods              Accuracy (%)
AOADL-HBCC           96.77
DTLRO-HCBC           93.52
Incep.V3             81.67
Incep.V3-LSTM        91.46
Incep.V3-BiLSTM      92.05
VGG16 Model          80.15
ResNet-50 Model      82.18