Article

Breast Cancer Diagnosis Using a Novel Parallel Support Vector Machine with Harris Hawks Optimization

1 Department of Computer Science, College of Computer and Information Sciences, Majmaah University, Al-Majmaah 11952, Saudi Arabia
2 Department of Computer Science, Faculty of Computer and Information Systems, Islamic University of Madinah, Medinah 42351, Saudi Arabia
3 Scientific Computing Department, Faculty of Computers and Artificial Intelligence, Benha University, Benha 13511, Egypt
4 Computer Science Department, Integrated Thebes Institutes, Cairo 11331, Egypt
5 Artificial Intelligence Department, Faculty of Computers and Artificial Intelligence, Benha University, Benha 13511, Egypt
6 Faculty of Computer Studies, Arab Open University, Cairo 11211, Egypt
* Authors to whom correspondence should be addressed.
Mathematics 2023, 11(14), 3251; https://doi.org/10.3390/math11143251
Submission received: 29 June 2023 / Revised: 14 July 2023 / Accepted: 16 July 2023 / Published: 24 July 2023

Abstract:
Three contributions are proposed. First, a novel hybrid classifier (HHO-SVM) is introduced, which combines the Harris hawks optimization (HHO) algorithm with a support vector machine (SVM). Second, the performance of the HHO-SVM is enhanced by comparing three efficient scaling techniques with the conventional normalization method. The final contribution improves the efficiency of the HHO-SVM by adopting a parallel approach based on data distribution. The proposed models are evaluated on the Wisconsin Diagnosis Breast Cancer (WDBC) dataset. The results show that the HHO-SVM achieves a 98.24% accuracy rate with the normalization scaling technique, outperforming other related works, and a 99.47% accuracy rate with the equilibration scaling technique, which is better than previous works. Finally, comparing the three effective scaling strategies on four CPU cores, the parallel version of the proposed model provides a speedup of 3.97.

1. Introduction

Breast cancer is the most commonly diagnosed cancer, affecting both men and women, and accounted for 11.7 percent of all cancer cases in 2020 [1]. It is the most common cancer in women worldwide, accounting for 24.5 percent of all new cases diagnosed in 2020. Breast cancer must be detected early in order to provide appropriate treatment and to reduce the number of fatalities caused by the disease.
Expert systems and artificial intelligence techniques can help breast cancer detection professionals avoid costly mistakes. These expert systems can review medical data in less time and provide assistance to junior physicians. Breast cancer has been detected with excellent accuracy using a variety of artificial intelligence techniques. Marcano-Cedeño et al. [2] proposed the artificial metaplasticity MLP (AMMLP) method with a 99.26 percent accuracy. An RS-SVM classifier for breast cancer diagnosis was used by Chen et al. [3] and achieved 100% and 96.87% for the highest and average accuracy, respectively. Hui-Ling Chen et al. [4] obtained a 99.3% accuracy using a PSO-SVM. For the breast cancer dataset, Liu and Fu [5] presented the CS-PSO-SVM model, which merged a support vector machine (SVM), particle swarm optimization (PSO), and cuckoo search (CS) and obtained an accuracy of 91.3%, versus 90% for both the PSO-SVM and GA-SVM models. Bashir, Qamar, and Khan [5] achieved a 97.4% accuracy with ensemble learning algorithms. Tuba et al. [6] proposed an adjusted bat algorithm to optimize the parameters of a support vector machine and showed that, compared to the grid search, it produced a better classifier, with 96.49% versus 96.31% accuracy on the WDBC dataset. Shokoufeh Aalaei et al. [7] introduced a feature selection strategy based on a GA, which achieved a 96.9% accuracy. In S. Mandal [8], different cancer classification models (naïve Bayes (NB), logistic regression (LR), and decision tree (DT)) were compared to find the smallest subset of features that could warrant a high-accuracy classification of breast cancer; the author concluded that the logistic regression classifier was the best, with the highest accuracy of 97.9%. The particle swarm optimization (PSO) algorithm was used for feature selection and to improve the C4.5 algorithm by Muslim et al. [9]; the accuracy of C4.5 was 95.61% versus 96.49% for the PSO-C4.5 algorithm on the WBC dataset. Liu et al. [10] suggested an improved cost-sensitive support vector machine classifier (ICS-SVM), which took into consideration the unequal misclassification costs of intelligent breast cancer diagnosis, and tested the approach on the WBC and WDBC breast cancer datasets, scoring 98.83% on the WDBC dataset. Agarap [11] performed a comparison of six ML techniques and obtained a 99.04% accuracy rate. The fruit fly optimization algorithm (FOA) enhanced by the Levy flight (LF) strategy (LFOA) was proposed by Huang et al. [12] to optimize the parameters of an SVM and build an LFOA-based SVM for breast cancer diagnosis. Xie et al. [13] introduced a new technique based on an SVM with combined RBF and polynomial kernel functions and the dragonfly algorithm (DA-CKSVM). Harikumar and Chakravarthy [14] proposed a model that applied two machine learning (ML) algorithms, a decision tree (DT) and the K-nearest neighbors (KNN) algorithm, to the WDBC dataset after feature selection using a principal component analysis (PCA); the results of the comparative analysis indicated that the KNN classifier outperformed the DT classifier. Habib [15] used genetic programming and machine learning algorithms and achieved a 98.24% classification accuracy. Hemeida et al. [16] proposed four distinct optimization strategies for the classification of two datasets, the Iris dataset and the Breast Cancer dataset, using an ANN.
Telsang and Hegde [17] presented a prediction of breast cancer using various machine learning algorithms and compared the accuracy of the predictions on the WDBC dataset; after analysis, the SVM model had an accuracy of 96.25 percent. Umme and Doreswamy [18] proposed a hybrid diagnostic model that combined the bat method, the gravitational search algorithm (GSA), and a feed-forward neural network (FNN); the training and testing accuracies on the WDBC dataset were found to be 94.28 percent and 92.10 percent, respectively. Singh et al. [19] proposed the grey wolf–whale optimization algorithm, a hybrid metaheuristic-swarm-intelligence-based SVM classifier (GWWOA-SVM), in which the hyperparameters of the SVM were tuned using the WOA and GWO; tested on the WDBC dataset, the model obtained a classification accuracy of 97.721 percent. Badr et al. [20] proposed three contributions: they used a recent grey wolf optimizer (GWO) to improve the performance of an SVM for diagnosing breast cancer, utilized efficient scaling strategies in contrast to the traditional normalization technique, and made use of a parallel technique based on task allocation to boost the GWO's efficiency. The suggested model was tested on the WDBC dataset and obtained an accuracy rate of 98.60 percent with normalization scaling; using the scaling strategies also resulted in fast convergence and a 99.30 percent accuracy rate. On four CPU cores, the parallel version of their model provided a speedup of 3.9.
Scaling strategies can help classifiers become more accurate. For SVM optimization, Elsayed Badr et al. [21] presented ten efficient scaling approaches; these scaling techniques had already proven effective for linear programming approaches [22,23,24,25,26,27,28,29,30,31]. On the WDBC dataset, they utilized the arithmetic mean and de Buchet scaling techniques for three cases (p = 1, 2, ∞), and the equilibration, geometric mean, IBM MPSX, and Lp-norm scaling techniques for three cases (p = 1, 2, ∞).
The parallel swarm technique was created by the authors of [32] for two-sided assembly line balancing problems. In [33], a parallel approach was applied to test data generation for path coverage of message-passing parallel programs. The authors of [34] introduced and discussed parallel dynamic programming methods. Reference [35] gives a survey of numerous strategies for parallelizing algorithms. Reference [36] introduces a parallel approach to constraint-solving methods. Połap et al. [37] proposed three strategies for improving traditional procedures: the first reduced the solution space by using a neighborhood search; the second reduced the calculation time by limiting the number of possible solutions; and the third combined the two preceding procedures. Metaheuristic algorithms such as ABC, FPA, BA, PSO, and MFO have been used to optimize SVMs and extreme learning machines, allowing them to readily overcome local minima and overfitting difficulties. The reader can refer to [38,39], which present the advantages and disadvantages of traditional machine learning methods such as SVMs and of deep learning methods.
Three achievements are presented in this work. The first is a new hybrid classifier (HHO-SVM) that combines the Harris hawks optimization (HHO) and support vector machine (SVM) techniques. The second contribution compares three efficient scaling techniques with the usual normalization methodology in order to increase the HHO-SVM's performance. The final contribution is to improve the efficiency of the HHO-SVM by adopting a parallel approach that employs data distribution. The proposed models are tested on the Wisconsin Diagnosis Breast Cancer (WDBC) dataset. The results show that the HHO-SVM achieves a 98.24% accuracy rate with the normalization scaling technique, outperforming the results in [6,7,8,15,17,18,19]. On the other hand, the HHO-SVM achieves a 99.47% accuracy rate with the equilibration scaling technique, better than the results in [6,7,8,10,11,15,17,18,19,20]. Finally, the parallel version of the suggested model achieves a speedup of 3.97 on four CPU cores.
The remainder of this paper is organized as follows: Section 2 introduces the SVM and the HHO. Section 3 explains the suggested model. Section 4 provides a study of three scaling methods: the equilibration, arithmetic mean, and geometric mean techniques. Section 5 explains the parallel version of the HHO-SVM. Section 6 presents the experimental design, including the data description, experimental setup, performance evaluation measures, and a comparative study. The experimental results and discussion are found in Section 7. Finally, Section 8 provides the conclusions as well as future work.

2. Preliminaries

Support vector machines (SVM) and the Harris hawks optimization (HHO) are introduced and studied in this section.

2.1. Support Vector Machine (SVM)

The goal of an SVM is to find an N-dimensional hyperplane that classifies the available data vectors with the least amount of error. An SVM employs convex quadratic programming to avoid local minima [40]. Assume a binary classification problem with a labeled training dataset $\{(x_1, y_1), \ldots, (x_n, y_n)\}$, where $x_i \in \mathbb{R}^d$ is the input (feature) vector and $y_i \in \{-1, +1\}$ is the class label. The best separating hyperplane is:
$$ w x^{T} + b = 0 \tag{1} $$
such that w, x, and b indicate the weight, input vector, and bias, respectively. The letters w and b fulfill the following requirements:
$$ w x_i^{T} + b \geq +1 \quad \text{if } y_i = +1 \tag{2} $$
$$ w x_i^{T} + b \leq -1 \quad \text{if } y_i = -1 \tag{3} $$
The goal of the SVM model training is to find the $w$ and $b$ that maximize the margin $2/\|w\|$ (equivalently, that minimize $\frac{1}{2}\|w\|^2$).
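For reference, this margin maximization is commonly written as the soft-margin primal problem below; this is the standard textbook formulation (not a formulation stated explicitly in this paper), where the penalty $C$ on the slack variables $\xi_i$ is the same regularization parameter that is tuned later in this work:

$$ \min_{w,\, b,\, \xi} \; \frac{1}{2}\|w\|^{2} + C \sum_{i=1}^{n} \xi_i \quad \text{subject to} \quad y_i \left( w x_i^{T} + b \right) \geq 1 - \xi_i, \; \xi_i \geq 0, \; i = 1, \ldots, n $$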
Nonlinearly separable problems are common. To transform the nonlinear problem to a linear one, the input space is converted into a higher-dimensional space.
Kernel functions [41] could be used to extend the data’s dimensions and turn the problem into a linear one. The linear and nonlinear SVMs are depicted in Figure 1. Furthermore, kernel functions may be useful in speeding up calculations in high-dimensional space. For example, in the extended feature space, the linear kernel can be used to compute the dot product of two features. The most frequent SVM kernels are RBF and polynomial. They can be expressed as:
$$ K(x_i, x_j) = e^{-\gamma \left\| x_i - x_j \right\|^{2}} \tag{4} $$
$$ K(x_i, x_j) = \left( 1 + x_i^{T} x_j \right)^{p} \tag{5} $$
such that the parameters $\gamma$ and $p$ are the width of the Gaussian kernel and the polynomial order, respectively. Setting proper model parameters has been demonstrated to increase the accuracy of SVM classification [42]. The adjustment of SVM parameters is a very delicate process. These parameters are C, gamma, and the SVM kernel function, which maps the nonlinear problem to a linear one by increasing the dimension.
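As an illustration only (not the authors' MATLAB/LIBSVM implementation), the role of C and gamma in an RBF-kernel SVM can be sketched with scikit-learn; the scaler, parameter values, and pipeline below are assumptions chosen for the example:

```python
# Minimal sketch: an RBF-kernel SVM on the WDBC data, with C and gamma set by hand.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)            # WDBC: 569 samples, 30 features

# Scale features to [-1, 1] (cf. the scaling techniques in Section 4) and fit an
# RBF-kernel SVM; C and gamma are the two parameters the HHO tunes in this paper.
clf = make_pipeline(MinMaxScaler(feature_range=(-1, 1)),
                    SVC(kernel="rbf", C=32.0, gamma=0.05))
print(cross_val_score(clf, X, y, cv=10).mean())        # 10-fold CV accuracy
```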

2.2. Harris Hawks Optimization (HHO)

Heidari et al. [43] developed the Harris hawks optimization (HHO) algorithm. It is derived from the hunting style and cooperation of Harris's hawks: several hawks cooperate when attacking their prey, approaching it from different directions to surprise and disable it, and the choice among different hunting strategies depends on the scenery and on how the prey flees. Exploring the prey, transitioning from exploration to exploitation, and exploitation are the three primary phases of the HHO. All phases of the HHO are depicted in Figure 2, and each phase is described below.

2.2.1. Exploration Phase

This phase mathematically models waiting, searching, and prey detection. The Harris's hawks are the candidate solutions, and the best candidate at each step is considered the intended prey. The hawks' positions $X(i+1)$ can be formulated according to Equation (6):
$$ X(i+1) = \begin{cases} X_{rand}(i) - r_1 \left| X_{rand}(i) - 2 r_2 X(i) \right| & \text{if } q \geq 0.5 \\ \left( X_{rabbit}(i) - X_m(i) \right) - r_3 \left( LB + r_4 (UB - LB) \right) & \text{if } q < 0.5 \end{cases} \tag{6} $$
where $i$ is the current iteration, $X_{rabbit}$ is the rabbit's position, $X_{rand}$ is a randomly chosen hawk from the current population, $r_j$ ($j = 1, 2, 3, 4$) and $q$ are random numbers between 0 and 1, and $X_m$ is the average position of the hawks, which can be calculated by:
$$ X_m(i) = \frac{1}{N} \sum_{j=1}^{N} X_j(i) \tag{7} $$
where the vector X j denotes the position of each hawk j, and N is the number of hawks.

2.2.2. Transition from Exploration to Exploitation

The HHO alternates between exploration and exploitation depending on the rabbit’s escaping energy. Moreover, the rabbit’s energy can be calculated using the formula below:
$$ E = 2 E_0 \left( 1 - \frac{i}{T} \right) \tag{8} $$
where $E$ indicates the rabbit's escaping energy, $T$ denotes the maximum number of iterations, and $E_0 \in (-1, 1)$ denotes the initial energy at each step:
$$ E_0 = 2\, rand - 1 \tag{9} $$
The HHO determines the state of the rabbit based on the value of $E$: the HHO enters the exploration phase in order to locate the prey when $|E| \geq 1$; otherwise, during the exploitation steps, it seeks to exploit the neighborhood of the solutions.

2.2.3. Exploitation Phase

At this phase, the hawks besiege the prey from all directions to hunt it, and this siege is hard or soft according to the prey's remaining energy. During the siege, the prey's escape depends on the chance $r$ (it succeeds in escaping if $r < 0.5$). Moreover, if $|E| \geq 0.5$, the HHO performs a soft siege; otherwise, it performs a hard siege. According to the prey's escape behavior and the hawks' pursuit strategies, the HHO implements four attack strategies: a soft siege, a hard siege, a soft siege with progressive rapid dives, and a hard siege with progressive rapid dives. In particular, the rabbit has enough energy to escape if $|E| \geq 0.5$; however, whether the prey actually escapes or not depends on the values of both $E$ and $r$.

Soft Siege (r ≥ 0.5 and |E| ≥ 0.5)

This procedure can be written as:
$$ X(i+1) = \Delta X(i) - E \left| J X_{rabbit}(i) - X(i) \right| \tag{10} $$
$$ \Delta X(i) = X_{rabbit}(i) - X(i) \tag{11} $$
where $\Delta X(i)$ indicates the difference between the rabbit's position vector and the hawk's current location at iteration $i$, $J = 2(1 - r_5)$ is the intensity of the rabbit's random jumping during the escape process, and $r_5 \in (0, 1)$ is a random number.

Hard Siege (r ≥ 0.5 and |E| < 0.5)

In this strategy, current positions can be updated with the following formula:
$$ X(i+1) = X_{rabbit}(i) - E \left| \Delta X(i) \right| \tag{12} $$

Soft Siege with Progressive Rapid Dives (|E| ≥ 0.5 and r < 0.5)

As for the soft siege, hawks decide their next move with the following equation:
$$ Y = X_{rabbit}(i) - E \left| J X_{rabbit}(i) - X(i) \right| \tag{13} $$
The hawks dive according to the following rules based on the LF-based patterns:
$$ Z = Y + S \times LF(D) \tag{14} $$
in which $D$ indicates the dimension of the problem and $S \in \mathbb{R}^{1 \times D}$ denotes a random vector.
The levy flight ( L F ) can be calculated by Equation (15):
$$ LF(D) = 0.01 \times \frac{\mu \times \sigma}{|v|^{1/\beta}}, \qquad \sigma = \left( \frac{\Gamma(1+\beta)\, \sin\!\left(\frac{\pi \beta}{2}\right)}{\Gamma\!\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{\left(\frac{\beta-1}{2}\right)}} \right)^{1/\beta}, \qquad \beta = 1.5 \tag{15} $$
where $\mu$ and $v$ are random values between 0 and 1. As a result, Equation (16) can be used to describe the final strategy of this phase, which is to update the positions of the hawks:
$$ X(i+1) = \begin{cases} Y & \text{if } F(Y) < F(X(i)) \\ Z & \text{if } F(Z) < F(X(i)) \end{cases} \tag{16} $$

Hard Siege with Progressive Rapid Dives (|E| < 0.5 and r < 0.5)

The hawk is always in close proximity to the prey during this step. The following is a model of the behavior:
$$ X(i+1) = \begin{cases} Y & \text{if } F(Y) < F(X(i)) \\ Z & \text{if } F(Z) < F(X(i)) \end{cases} \tag{17} $$
The following formulas can be used to calculate Y and Z:
$$ Y = X_{rabbit}(i) - E \left| J X_{rabbit}(i) - X_m(i) \right| \tag{18} $$
$$ Z = Y + S \times LF(D) \tag{19} $$
where $X_m(i) = \frac{1}{N} \sum_{j=1}^{N} X_j(i)$, as in Equation (7).
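To make the phase selection concrete, the sketch below implements Equations (6)-(19) for a single hawk in Python. This is an illustrative reading of the update rules, not the authors' MATLAB code; the objective `f`, the bound handling, and the passing of `x_rand` and `x_mean` as arguments are simplifying assumptions.

```python
# Illustrative sketch of one HHO position update (Equations (6)-(19)).
import numpy as np
from math import gamma, sin, pi

def levy(dim, beta=1.5):
    # Levy flight step (Equation (15))
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = np.random.randn(dim) * sigma, np.random.randn(dim)
    return 0.01 * u / np.abs(v) ** (1 / beta)

def update_hawk(x, x_rabbit, x_rand, x_mean, f, i, T, lb, ub):
    """Update one hawk x at iteration i (of T total); f is the fitness function."""
    dim = x.size
    E0 = 2 * np.random.rand() - 1                 # Equation (9)
    E = 2 * E0 * (1 - i / T)                      # Equation (8)
    q, r = np.random.rand(), np.random.rand()
    J = 2 * (1 - np.random.rand())                # jump strength

    if abs(E) >= 1:                               # exploration (Equation (6))
        if q >= 0.5:
            x_new = x_rand - np.random.rand() * np.abs(x_rand - 2 * np.random.rand() * x)
        else:
            x_new = (x_rabbit - x_mean) - np.random.rand() * (lb + np.random.rand() * (ub - lb))
    elif r >= 0.5 and abs(E) >= 0.5:              # soft siege (Equations (10)-(11))
        x_new = (x_rabbit - x) - E * np.abs(J * x_rabbit - x)
    elif r >= 0.5:                                # hard siege (Equation (12))
        x_new = x_rabbit - E * np.abs(x_rabbit - x)
    else:                                         # sieges with progressive rapid dives
        target = x if abs(E) >= 0.5 else x_mean   # Equation (13) vs. Equation (18)
        Y = x_rabbit - E * np.abs(J * x_rabbit - target)
        Z = Y + np.random.rand(dim) * levy(dim)   # Equations (14)/(19)
        candidates = [c for c in (Y, Z) if f(c) < f(x)]
        x_new = candidates[0] if candidates else x  # Equations (16)/(17)
    return np.clip(x_new, lb, ub)
```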
The main purpose of this study was to employ new scaling approaches to scale the breast cancer data, to compute the SVM parameters using the HHO algorithm so as to classify breast tumors efficiently, and to use a parallel approach to reduce the proposed model's execution time.

3. The Proposed HHO-SVM Classification Model

The HHO-SVM system is implemented in two stages. In the first phase, the HHO algorithm determines the SVM parameters automatically. In the second phase, the optimized SVM algorithm diagnoses a breast tumor as benign or malignant. To obtain the most accurate result, a ten-fold cross-validation (CV) is used. To evaluate the SVM parameters, the HHO-SVM model applies the root-mean-square error (RMSE) as the fitness function. The following formula is used to calculate the RMSE:
$$ RMSE = \sqrt{ \frac{ \sum_{i=1}^{N} \left( Predicted_i - Actual_i \right)^{2} }{N} } \tag{21} $$
such that N is the number of entities in the test dataset.
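A minimal sketch of this fitness function is given below; the 0/1 coding of the benign/malignant labels is an assumption for illustration, since the paper does not specify the label encoding.

```python
# Minimal sketch of the RMSE fitness in Equation (21).
import numpy as np

def rmse_fitness(predicted, actual):
    predicted, actual = np.asarray(predicted, float), np.asarray(actual, float)
    return np.sqrt(np.mean((predicted - actual) ** 2))

# e.g., 1 misclassification out of 4 test cases (0/1-coded labels) -> RMSE = 0.5
print(rmse_fitness([1, 0, 1, 1], [1, 0, 0, 1]))
```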
In the HHO-SVM algorithm for breast cancer, the population size is set to $N$, each hawk is represented by $X_i$ ($i = 1, 2, \ldots, N$), the maximum number of iterations is set to $T$, the number of dimensions is set to dim, the upper bound is set to ub, the lower bound is set to lb, and the positions are bounded accordingly. $X_{rabbit}$ is the position of the rabbit, and all hawks update their positions relative to it. After that, random values are used to form the initial population ($N \times dim$). After the data have been loaded, one of the scaling strategies is used to transform them. The model uses a k-fold cross-validation and conducts several procedures for each fold to evaluate its efficiency. As long as the number of iterations does not equal $T$, the model repeats the steps below for each iteration.
To begin, it passes each hawk through two specified functions and sets their output as the SVM parameters ($C$ and $\gamma$), then trains the SVM and classifies the test set. Next, it calculates the fitness function (RMSE) from Equation (21), updates $X_{rabbit}$ according to the smallest fitness value, and updates the initial energy $E_0$, the jump strength $J$, and the position of the current hawk according to the values of $X_{rabbit}$, $E_0$, $J$, $E$, and $r$, where $r$ is a random value and $E$ is the escaping energy. The algorithm then checks whether $|E| \geq 1$; if so, it enters the exploration phase and updates the location vector using Equation (6); if $|E| < 1$, it enters the exploitation phase, which may be a soft siege, a hard siege, a soft siege with progressive rapid dives, or a hard siege with progressive rapid dives.
Specifically, the algorithm checks whether ($|E| \geq 0.5$ and $r \geq 0.5$); if true, it is a soft siege, and the location vector is updated using Equation (10). If ($|E| < 0.5$ and $r \geq 0.5$), it is a hard siege, and the location vector is updated using Equation (12). If ($|E| \geq 0.5$ and $r < 0.5$), it is a soft siege with progressive rapid dives: the location vector is updated using Equation (16), $F(Y)$ and $F(Z)$ are calculated by passing $Y$ or $Z$ through the two particular functions, and the SVM parameters ($C$ and $\gamma$) are set to their output; the algorithm then trains the SVM, classifies the test set, and computes the RMSE from Equation (21) as the value of $F(Y)$ or $F(Z)$. If ($|E| < 0.5$ and $r < 0.5$), it is a hard siege with progressive rapid dives: the location vector is updated using Equation (17), and $F(Y)$ and $F(Z)$ are computed in the same way. Then, if the number of iterations does not surpass $T$, the algorithm goes back to step 4 in the process (Algorithm 1). Once $T$ is reached, we move on to the next fold and return to step 3. When both $T$ and the fold number $k$ are reached, we proceed to step 5. Finally, we compute the averages of the RMSE and the accuracy over the $k$ folds and return them.
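The "two specified functions" that map a hawk's coordinates to ($C$, $\gamma$) are not given explicitly in the paper; the snippet below shows one hypothetical mapping (clipping to [lb, ub] and exponentiating so the parameters stay positive), purely as an assumption for illustration.

```python
# Hypothetical mapping from a hawk's position to (C, gamma); the exponential
# form and the base 2 are assumptions, not taken from the authors' code.
import numpy as np

def hawk_to_svm_params(x, lb=-5.0, ub=5.0):
    x = np.clip(np.asarray(x, float), lb, ub)       # keep coordinates inside [lb, ub]
    C, gamma = 2.0 ** x[0], 2.0 ** x[1]              # positive parameters from bounded coordinates
    return C, gamma

print(hawk_to_svm_params([3.2, -4.1]))               # -> roughly (9.19, 0.058)
```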
Algorithm 1: HHO-SVM Algorithm
Input: N (population size), T (maximum number of iterations), lb (lower bound), ub (upper bound), dim (number of dimensions), k (number of folds)
Output: Average RMSE and average classification accuracy rate
1. Initialize the random population Xi (i = 1, 2, …, N)
2. Apply one of the scaling techniques after loading the data
3. for (each fold j) do
       Divide the data into train and test subsets randomly
4.     while (t < T) do
           for (each hawk Xi) do
               Pass Xi to the particular functions
               Set the functions' output as the SVM parameters (C, γ)
               Train and test the SVM model
               Evaluate the fitness of Xi with Eq. (21)
               Update Xrabbit as the position of the rabbit (best position based on the fitness value)
           end (for)
           for (each hawk Xi) do
               Update E0 and J (initial energy and jump strength)
               Update E by Eq. (8)
               if (|E| ≥ 1) then            ▷ Exploration phase
                   Update the position vector by Eq. (6)
               if (|E| < 1) then            ▷ Exploitation phase
                   if (r ≥ 0.5 and |E| ≥ 0.5) then        ▷ Soft siege
                       Update the position vector by Eq. (10)
                   else if (r ≥ 0.5 and |E| < 0.5) then   ▷ Hard siege
                       Update the position vector by Eq. (12)
                   else if (r < 0.5 and |E| ≥ 0.5) then   ▷ Soft siege with PRD
                       Update the position vector by Eq. (16)
                       F(Y), F(Z), and F(Xi) are calculated using the RMSE
                   else if (r < 0.5 and |E| < 0.5) then   ▷ Hard siege with PRD
                       Update the position vector by Eq. (17)
           end (for)
           t = t + 1
       end (while)
       t = 0
   end (for)
5. Return the averages of the RMSE and classification accuracy for all folds

4. Scaling Techniques

Before introducing the scaling techniques, some of the necessary mathematical symbols should be presented. We treat the breast cancer data as a matrix and present some mathematical symbols, as shown in Table 1. The final scaled matrix is denoted by $RAS$, where $R = \mathrm{diag}(r_1, \ldots, r_m)$ and $S = \mathrm{diag}(s_1, \ldots, s_n)$.
All of the scaling approaches presented in this section scale the rows first, then the columns. Equations (22) and (23) show the steps for scaling the matrix.
$$ A^{R} = R A \tag{22} $$
$$ A^{RS} = A^{R} S \tag{23} $$
(1)
Arithmetic mean:
The variance between nonzero entries in the coefficient matrix A is reduced using the arithmetic mean scaling technique. As shown in Equation (24), the rows are scaled by dividing each row by the mean of the absolute value of the nonzero values:
$$ r_i = \frac{n_i}{\sum_{j \in N_i} \left| a_{ij} \right|} \tag{24} $$
Each column (attribute) is scaled by dividing the modulus value of the nonzero items in that column by the mean of the modulus of the nonzero entries in that column as shown in Equation (25):
$$ s_j = \frac{m_j}{\sum_{i \in M_j} \left| a_{ij}^{R} \right|} \tag{25} $$
(2)
Equilibration scaling technique:
This scaling method's cornerstone is the largest absolute value. The rows are scaled by dividing every row (instance) of matrix A by the largest absolute value in that row. Then, every column of the row-scaled matrix is divided by the largest absolute value in that column. The final scaled matrix lies in the range [−1, 1].
(3)
Geometric mean:
To begin, Equation (26) depicts the scaling of the rows, in which every row is split by the geometric mean of the nonzero elements in that row.
$$ r_i = \left( \max_{j \in N_i} \left| a_{ij} \right| \, \min_{j \in N_i} \left| a_{ij} \right| \right)^{-1/2} \tag{26} $$
Second, Equation (27) represents the column scaling where every column is divided by the geometric mean of the modulus of the nonzero elements in that column.
$$ s_j = \left( \max_{i \in M_j} \left| a_{ij}^{R} \right| \, \min_{i \in M_j} \left| a_{ij}^{R} \right| \right)^{-1/2} \tag{27} $$
(4)
Normalization [−1, 1]:
Equation (28) represents the normalization within the range [−1, 1], where $a$, $a'$, $max_k$, and $min_k$ are the initial value, the scaled value, the maximum value, and the minimum value of feature $k$, respectively.
$$ a' = \frac{a - min_k}{max_k - min_k} \times 2 - 1 \tag{28} $$
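The sketch below illustrates two of these techniques (equilibration and normalization to [−1, 1]) in Python. It is a simplified reading of Section 4, not the authors' code, and it assumes a dense feature matrix in which every row and column has a nonzero range.

```python
# Illustrative sketches of the equilibration and [-1, 1] normalization scalings.
import numpy as np

def equilibration(A):
    # Divide each row by its largest absolute value, then each column of the
    # row-scaled matrix by its largest absolute value; the result lies in [-1, 1].
    A = np.asarray(A, float)
    A = A / np.max(np.abs(A), axis=1, keepdims=True)
    return A / np.max(np.abs(A), axis=0, keepdims=True)

def normalize_minus1_1(A):
    # Per-feature normalization to [-1, 1] (Equation (28)).
    A = np.asarray(A, float)
    mn, mx = A.min(axis=0), A.max(axis=0)
    return (A - mn) / (mx - mn) * 2 - 1

X = np.array([[10.0, 0.5], [20.0, 1.5], [40.0, 1.0]])
print(equilibration(X))
print(normalize_minus1_1(X))
```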

5. The Parallel Metaheuristic Algorithm

We implemented a parallel metaheuristic algorithm based on the population, where the population is divided into different parts that are easy to exchange, that evolve separately, and that are then later combined. In this paper, the parallel approach was implemented by dividing the population into several sets on different cores. The number of cores, N c , was identified. The starting population consisted of n particles randomly initialized. The group size was calculated as follows:
$$ n_g = \frac{n}{N_c} $$
The proposed model steps are shown in Algorithm 2.
Algorithm 2: Parallel Approach
1: Begin
2: Identify Nc (no. of cores);
3: Randomly initialize the population;
4: Compute ng particles with Equation (20);
5: Make Nc sets;
6: Distribute the particles on the cores;
7: Run the HHO-SVM model on each core;
8: Choose the optimal particles from all cores;
9: Update the model's parameters and particle positions;
10: For all folds, return the average accuracy.
11: End
The ceil function was used to obtain an integer number of particles to distribute over the cores. The basic algorithm steps were executed for all sets, each in a standalone thread. When these phases were completed, the $N_c$ best particles (one per core) were chosen as candidate solutions for the optimization problem. These particles were then combined to obtain the overall best particle across all cores, and the positions were updated according to it.
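A conceptual sketch of this population-distribution scheme is shown below using Python's multiprocessing; `run_hho_svm` is a stand-in for the sequential HHO-SVM of Algorithm 1 (here it just scores a toy objective so the sketch runs end to end), and the function names are assumptions, not the authors' implementation.

```python
# Conceptual sketch of Algorithm 2: split the population into Nc groups,
# optimize each group on its own core, and keep the overall best particle.
import math
import random
from multiprocessing import Pool

def run_hho_svm(particles):
    # Stand-in for the sequential HHO-SVM on one subset of the population;
    # returns (best_fitness, best_particle) for a toy sphere objective.
    scored = [(sum(v ** 2 for v in p), p) for p in particles]
    return min(scored)

def parallel_hho_svm(population, n_cores=4):
    n_g = math.ceil(len(population) / n_cores)                 # group size (ceil, as in the text)
    groups = [population[i:i + n_g] for i in range(0, len(population), n_g)]
    with Pool(n_cores) as pool:
        results = pool.map(run_hho_svm, groups)                # one group per core
    return min(results, key=lambda r: r[0])                    # best particle over all cores

if __name__ == "__main__":
    pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
    best_fit, best_particle = parallel_hho_svm(pop, n_cores=4)
    print(best_fit, best_particle)
```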

6. Experimental Design

This section contains a description of the data, the experimental setup, the performance evaluation measures, and a comparative study.

6.1. Data Description

The proposed model was tested on the Wisconsin Diagnostic Breast Cancer (WDBC) dataset, which is available at the University of California, Irvine Machine Learning Repository [44]. There are 569 examples in the dataset, which are separated into two groups (malignant and benign): 212 cases of malignant tumors and 357 cases of benign tumors. Each database record has thirty-two attributes; Table 2 lists these attributes.

6.2. Experimental Setup

MATLAB was used to implement the suggested HHO-SVM detection method. The SVM implementation of Chang and Lin [45] (LIBSVM) was used and improved upon. The computing environment for the experiment is described in Table 3.
The k-fold CV recommended by Salzberg [46] was used to ensure that the results were genuine; k = 10 in this study. The detailed HHO-SVM settings were as follows: the number of iterations, search agents, dimensions, and folds were set to 1000, 19, 25, and 10, respectively, and the lower and upper bounds [lb, ub] were set to [−5, 5].
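For reference, these settings can be collected into a plain configuration dictionary as below; the key names are illustrative, not taken from the authors' code.

```python
# HHO-SVM settings reported above, gathered for reference.
hho_svm_settings = {
    "iterations": 1000,      # maximum number of iterations T
    "search_agents": 19,     # population size N
    "dimensions": 25,        # dim
    "k_folds": 10,           # k-fold cross-validation
    "lb": -5, "ub": 5,       # lower/upper bounds of the search space
}
```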

6.3. Performance Metrics

Six metrics, sensitivity, specificity, accuracy, precision, G-mean, and F-score, were used to assess the efficacy of the suggested HHO-SVM model. These metrics are defined as follows according to the confusion matrix:
$$ Accuracy = \frac{TP + TN}{TP + TN + FP + FN} \times 100 $$
$$ Sensitivity = \frac{TP}{TP + FN} \times 100 $$
$$ Specificity = \frac{TN}{TN + FP} \times 100 $$
$$ Precision = \frac{TP}{TP + FP} \times 100 $$
$$ G\text{-}mean = \sqrt{Sensitivity \times Specificity} $$
$$ F\text{-}measure = \frac{2 \times Precision \times Sensitivity}{Precision + Sensitivity} $$
If the dataset has two classes (“M” for malignant and “B” for benign), then the true positives (TP) are the total number of cases with classification result “M” when they are actually “M” in the dataset; the true negatives (TN) are the total number of cases with classification result “B” when they are actually “B” in the dataset; the false positives (FP) are the total number of cases with classification result “M” when they are “B” in the dataset; and the false negatives (FN) are the total number of cases with classification result “B” when they are “M” in the dataset.
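As a quick check, the formulas above can be evaluated directly from the confusion-matrix counts. The counts in the example below are hypothetical (roughly one 57-sample fold), but they reproduce fold-level values of the kind reported in Tables 8-17.

```python
# Sketch of the evaluation metrics computed from confusion-matrix counts,
# with "M" (malignant) treated as the positive class, as in the text.
from math import sqrt

def metrics(tp, tn, fp, fn):
    accuracy    = (tp + tn) / (tp + tn + fp + fn) * 100
    sensitivity = tp / (tp + fn) * 100
    specificity = tn / (tn + fp) * 100
    precision   = tp / (tp + fp) * 100
    g_mean      = sqrt(sensitivity * specificity)
    f_measure   = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, g_mean, f_measure

# Hypothetical fold of 57 cases -> (96.49, 95.24, 97.22, 95.24, 96.23, 95.24)
print(metrics(tp=20, tn=35, fp=1, fn=1))
```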

6.4. Comparative Study

In this study, the efficiency of the presented HHO-SVM algorithm was compared to that of the SVM algorithm with the grid search technique. Figure 3 shows how the SVM algorithm works with the grid search technique.

7. Empirical Results and Discussion

In this study, the abbreviations S0, S1, S2, S3, and S4 are used to denote no scaling, a normalization in [−1, 1], the arithmetic mean, the geometric mean, and the equilibration scaling techniques, respectively. Experiments on the WBCD dataset were used to assess the efficacy of the proposed HHO-SVM model for breast cancer against the SVM algorithm with a grid search technique. First and foremost, our findings show the value of the grid search methodologies, the usefulness of the HHO-SVM model that was developed sequentially, and the superiority of the most recent scaling strategies over the previous normalizing methodology. Finally, the results show that the parallel version of the proposed model achieves a speedup of 3.97 for four cores.
Table 4, Table 5, Table 6, Table 7 and Figure 4 present a comparison of the SVM classification accuracies obtained with the grid search algorithm using S0, S1, S2, S3, and S4. Table 4 and Table 5 show that the average accuracy rate obtained by the SVM using S3 (98.59%) is higher than that produced by the SVM using S1 (96.66%). Moreover, the S4 technique outperforms all the other scaling techniques, giving the SVM an accuracy of 98.95%.
Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14, Table 15, Table 16 and Table 17 show the importance of the data scaling techniques in improving the classification accuracy of the HHO-SVM: the average classification accuracy rate without scaling the data (89.11%) is lower than the average classification accuracy rate obtained with any of the scaling techniques. When comparing the normalization and the other scaling techniques, we found that the novel scaling techniques outperformed the normalization in terms of both accuracy rate and CPU time. The HHO-SVM with the arithmetic mean scaling approach (98.25%) achieved a slightly higher average accuracy rate than the HHO-SVM with the normalization scaling strategy in the range [−1, 1] (98.24%). With an accuracy of 99.47%, the equilibration scaling technique (S4) outperformed all the other scaling strategies.
The results of all the scaling strategies obtained by the HHO-SVM in terms of accuracy and CPU time are summarized in Table 18 and Figure 5 and Figure 6. In terms of accuracy and CPU time, the equilibration scaling technique clearly outperformed all the other scaling techniques. In terms of precision, however, the equilibration scaling technique was the least accurate. In terms of CPU time, the normalization scaling in the range [−1, 1] required the longest time.
The accuracy rate of the proposed HHO-SVM model was compared to that of the conventional SVM employing a grid search technique in Table 19. For the scaling procedures S4, S2, S3, and S1, the accuracy rates of the proposed HHO-SVM model were 99.47, 98.25, 98.24, and 98.24, respectively. For the scaling approaches S4 and S1, the accuracy rates of the classic SVM with a grid search algorithm were 98.95 and 96.49, respectively.
The parallel version of the HHO-SVM algorithm was provided to reduce its running time. CPU timings for all scaling strategies produced by the HHO-SVM on different cores are shown in Table 20 and Figure 7.
In addition, Table 21 and Figure 8 show the speedup obtained by the HHO-SVM for all scaling strategies.
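For reference, the speedup on $p$ cores is the ratio of the single-core time to the $p$-core time, $S_p = T_1 / T_p$; for example, for S1 the four-core speedup in Table 21 follows from Table 20 as $91{,}600 / 23{,}073.04 \approx 3.97$.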
Table 22 compares the performance of the presented HHO-SVM model against other related models developed in the literature, demonstrating the usefulness of our method. Table 22 shows that the classification accuracy of our HHO-SVM diagnostic system is equivalent to or better than that of the existing classifiers on the WBCD database.

8. Conclusions

Three achievements were proposed. The first was a novel hybrid classifier (HHO-SVM), which combines the Harris hawks optimization (HHO) algorithm with a support vector machine (SVM). The second was to compare three efficient scaling techniques with the conventional normalization methodology in order to increase the HHO-SVM's performance. The final contribution was to improve the efficiency of the HHO-SVM by adopting a parallel approach that employed data distribution. The proposed models were tested on the Wisconsin Diagnosis Breast Cancer (WDBC) dataset. The results showed that the HHO-SVM achieved a 98.24% accuracy rate with the normalization scaling technique, thus outperforming the results in [6,7,8,11,15,17,18,19]. On the other hand, the HHO-SVM achieved a 99.47% accuracy rate with the equilibration scaling technique, outperforming the results in [6,7,8,10,11,15,17,18,19,20]. Finally, on four CPU cores, the parallel HHO-SVM model delivered a speedup of 3.97. The proposed approach will be evaluated on various medical datasets in future research. In addition, we intend to incorporate various measuring techniques to reduce the running time and improve the proposed diagnostic system's efficiency.

Author Contributions

Conceptualization, S.A.; Methodology, S.A.; Software, E.B.; Validation, E.B.; Formal analysis, M.A.S.; Investigation, M.A.S.; Resources, H.A.; Data curation, H.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to thank the Deanship of Scientific Research at Majmaah University for supporting this work under Project Number R-2023-519.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Acknowledgments

The help from Benha University and Thebes Academy, Cornish El Nile, El-Maadi, Egypt for publishing is sincerely and greatly appreciated. We also thank the referees for suggestions to improve the presentation of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sung, H.; Ferlay, J.; Siegel, R.L.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA Cancer J. Clin. 2021, 71, 209–249. [Google Scholar] [CrossRef] [PubMed]
  2. Marcano-Cedeño, A.; Quintanilla-Domínguez, J.; Andina, D. WBCD breast cancer database classification applying artificial metaplasticity neural network. Expert Syst. Appl. 2011, 38, 9573–9579. [Google Scholar] [CrossRef]
  3. Chen, H.-L.; Yang, B.; Liu, J.; Liu, D.-Y. A support vector machine classifier with rough set-based feature selection for breast cancer diagnosis. Expert Syst. Appl. 2011, 38, 9014–9022. [Google Scholar] [CrossRef]
  4. Chen, H.L.; Yang, B.; Wang, G.; Wang, S.J.; Liu, J.; Liu, D.Y. Support vector machine based diagnostic system for breast cancer using swarm intelligence. J. Med. Syst. 2012, 36, 2505–2519. [Google Scholar] [CrossRef] [PubMed]
  5. Bashir, S.; Qamar, U.; Khan, F.H. Heterogeneous classifiers fusion for dynamic breast cancer diagnosis using weighted vote based ensemble. Qual. Quant. 2015, 49, 2061–2076. [Google Scholar] [CrossRef]
  6. Tuba, E.; Tuba, M.; Simian, D. Adjusted bat algorithm for tuning of support vector machine parameters. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 2225–2232. [Google Scholar] [CrossRef]
  7. Aalaei, S.; Shahraki, H.; Rowhanimanesh, A.; Eslami, S. Feature selection using genetic algorithm for breast cancer diagnosis: Experiment on three different datasets. Iran. J. Basic. Med. Sci. 2016, 19, 476–482. [Google Scholar]
  8. Mandal, S.K. Performance Analysis of Data Mining Algorithms for Breast Cancer Cell Detection Using Naïve Bayes, Logistic Regression and Decision Tree. Int. J. Eng. Comput. Sci. 2017, 6, 20388–20391. [Google Scholar]
  9. Muslim, M.A.; Rukmana, S.H.; Sugiharti, E.; Prasetiyo, B.; Alimah, S. Optimization of C4.5 algorithm-based particle swarm optimization for breast cancer diagnosis. J. Phys. Conf. Ser. 2018, 983, 012063. [Google Scholar] [CrossRef]
  10. Liu, N.; Shen, J.; Xu, M.; Gan, D.; Qi, E.-S.; Gao, B. Improved Cost-Sensitive Support Vector Machine Classifier for Breast Cancer Diagnosis. Math. Probl. Eng. 2018, 2018, 3875082. [Google Scholar] [CrossRef]
  11. Agarap, A.F.M. On breast cancer detection: An application of machine learning algorithms on the wisconsin diagnostic dataset. In Proceedings of the 2nd International Conference on Machine Learning and Soft Computing, Phuoc Island, Vietnam, 2–4 February 2018; pp. 5–9. [Google Scholar] [CrossRef] [Green Version]
  12. Huang, H.; Feng, X.; Zhou, S.; Jiang, J.; Chen, H.; Li, Y.; Li, C. A new fruit fly optimization algorithm enhanced support vector machine for diagnosis of breast cancer based on high-level features. BMC Bioinform. 2019, 20, 290. [Google Scholar] [CrossRef] [Green Version]
  13. Xie, T.; Yao, J.; Zhou, Z. DA-Based Parameter Optimization of Combined Kernel Support Vector Machine for Cancer Diagnosis. Processes 2019, 7, 263. [Google Scholar] [CrossRef] [Green Version]
  14. Rajaguru, H.; SR, C.S. Analysis of Decision Tree and K-Nearest Neighbor Algorithm in the Classification of Breast Cancer. Asian Pac. J. Cancer Prev. 2019, 20, 3777–3781. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Dhahri, H.; Al Maghayreh, E.; Mahmood, A.; Elkilani, W.; Nagi, M.F. Automated Breast Cancer Diagnosis Based on Machine Learning Algorithms. J. Health Eng. 2019, 2019, 4253641. [Google Scholar] [CrossRef] [PubMed]
  16. Hemeida, A.; Alkhalaf, S.; Mady, A.; Mahmoud, E.; Hussein, M.; Eldin, A.M.B. Implementation of nature-inspired optimization algorithms in some data mining tasks. Ain Shams Eng. J. 2020, 11, 309–318. [Google Scholar] [CrossRef]
  17. Telsang, V.A.; Hegde, K. Breast Cancer Prediction Analysis using Machine Learning Algorithms. In Proceedings of the 2020 International Conference on Communication, Computing and Industry 4.0 (C2I4), Bangalore, India, 17–18 December 2020; pp. 1–5. [Google Scholar] [CrossRef]
  18. Salma, M.U.; Doreswamy, N. Hybrid BATGSA: A metaheuristic model for classification of breast cancer data. Int. J. Adv. Intell. Paradig. 2020, 15, 207. [Google Scholar] [CrossRef]
  19. Singh, I.; Bansal, R.; Gupta, A.; Singh, A. A Hybrid Grey Wolf-Whale Optimization Algorithm for Optimizing SVM in Breast Cancer Diagnosis. In Proceedings of the 2020 Sixth International Conference on Parallel, Distributed and Grid Computing (PDGC), Waknaghat, India, 6–8 November 2020; pp. 286–290. [Google Scholar] [CrossRef]
  20. Badr, E.; Almotairi, S.; Salam, M.A.; Ahmed, H. New Sequential and Parallel Support Vector Machine with Grey Wolf Optimizer for Breast Cancer Diagnosis. Alex. Eng. J. 2021, 61, 2520–2534. [Google Scholar] [CrossRef]
  21. Badr, E.; Salam, M.A.; Almotairi, S.; Ahmed, H. From Linear Programming Approach to Metaheuristic Approach: Scaling Techniques. Complexity 2021, 2021, 9384318. [Google Scholar] [CrossRef]
  22. Badr, E.S.; Paparrizos, K.; Samaras, N.; Sifaleras, A. On the Basis Inverse of the Exterior Point Simplex Algorithm. In Proceedings of the 17th National Conference of Hellenic Operational Research Society (HELORS), Rio, Greece, 16–18 June 2005; pp. 677–687. [Google Scholar]
  23. Badr, E.S.; Paparrizos, K.; Thanasis, B.; Varkas, G. Some computational results on the efficiency of an exterior point algorithm. In Proceedings of the 18th National conference of Hellenic Operational Research Society (HELORS), Kozani, Greece, 15–17 June 2006; pp. 1103–1115. [Google Scholar]
  24. Badr, E.M.; Moussa, M.I. An upper bound of radio k-coloring problem and its integer linear programming model. Wirel. Netw. 2020, 26, 4955–4964. [Google Scholar] [CrossRef]
  25. Badr, E.; AlMotairi, S. On a Dual Direct Cosine Simplex Type Algorithm and Its Computational Behavior. Math. Probl. Eng. 2020, 2020, 7361092. [Google Scholar] [CrossRef]
  26. Badr, E.S.; Moussa, M.; Paparrizos, K.; Samaras, N.; Sifaleras, A. Some computational results on MPI parallel implementation of dense simplex method. Trans. Eng. Comput. Technol. 2006, 17, 228–231. [Google Scholar]
  27. Elble, J.M.; Sahinidis, N.V. Scaling linear optimization problems prior to application of the simplex method. Comput. Optim. Appl. 2012, 52, 345–371. [Google Scholar] [CrossRef]
  28. Ploskas, N.; Samaras, N. The impact of scaling on simplex type algorithms. In Proceedings of the 6th Balkan Conference in Informatics, Thessaloniki Greece, 19–21 September 2013; pp. 17–22. [Google Scholar] [CrossRef]
  29. Triantafyllidis, C.; Samaras, N. Three nearly scaling-invariant versions of an exterior point algorithm for linear programming. Optimization 2015, 64, 2163–2181. [Google Scholar] [CrossRef]
  30. Ploskas, N.; Samaras, N. A computational comparison of scaling techniques for linear optimization problems on a graphical processing unit. Int. J. Comput. Math. 2015, 92, 319–336. [Google Scholar] [CrossRef]
  31. Badr, E.M.; Elgendy, H. A hybrid water cycle-particle swarm optimization for solving the fuzzy underground water confined steady flow. Indones. J. Electr. Eng. Comput. Sci. 2020, 19, 492–504. [Google Scholar] [CrossRef]
  32. Tapkan, P.; Özbakır, L.; Baykasoglu, A. Bee algorithms for parallel two-sided assembly line balancing problem with walking times. Appl. Soft Comput. 2016, 39, 275–291. [Google Scholar] [CrossRef]
  33. Tian, T.; Gong, D. Test data generation for path coverage of message-passing parallel programs based on co-evolutionary genetic algorithms. Autom. Softw. Eng. 2016, 23, 469–500. [Google Scholar] [CrossRef]
  34. Maleki, S.; Musuvathi, M.; Mytkowicz, T. Efficient parallelization using rank convergence in dynamic programming algorithms. Commun. ACM 2016, 59, 85–92. [Google Scholar] [CrossRef] [Green Version]
  35. Sandes, E.F.D.O.; Boukerche, A.; De Melo, A.C.M.A. Parallel Optimal Pairwise Biological Sequence Comparison. ACM Comput. Surv. 2016, 48, 1–36. [Google Scholar] [CrossRef]
  36. Truchet, C.; Arbelaez, A.; Richoux, F.; Codognet, P. Estimating parallel runtimes for randomized algorithms in constraint solving. J. Heuristics 2016, 22, 613–648. [Google Scholar] [CrossRef]
  37. Połap, D.; Kęsik, K.; Woźniak, M.; Damaševičius, R. Parallel Technique for the Metaheuristic Algorithms Using Devoted Local Search and Manipulating the Solutions Space. Appl. Sci. 2018, 8, 293. [Google Scholar] [CrossRef] [Green Version]
  38. Jiao, S.; Gao, Y.; Feng, J.; Lei, T.; Yuan, X. Does deep learning always outperform simple linear regression in optical imaging? Opt. Express 2020, 28, 3717–3731. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Chauhan, D.; Anyanwu, E.; Goes, J.; Besser, S.A.; Anand, S.; Madduri, R.; Getty, N.; Kelle, S.; Kawaji, K.; Mor-Avi, V.; et al. Comparison of machine learning and deep learning for view identification from cardiac magnetic resonance images. Clin. Imaging 2022, 82, 121–126. [Google Scholar] [CrossRef]
  40. Sain, S.R.; Vapnik, V.N. The Nature of Statistical Learning Theory; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1996; Volume 38. [Google Scholar]
  41. Cristianini, N.; Shawe-Taylor, J. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods; Cambridge University Press: Cambridge, UK, 2000. [Google Scholar]
  42. Boser, B.E.; Guyon, I.M.; Vapnik, V.N. A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, Pittsburgh, PA, USA, 27–29 July 1992; pp. 144–152. [Google Scholar] [CrossRef] [Green Version]
  43. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Futur. Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  44. UCI Machine Learning Repository. Breast Cancer Wisconsin (Diagnostic) Data Set 1995. Available online: https://archive.ics.uci.edu/dataset/17/breast+cancer+wisconsin+diagnostic (accessed on 1 January 2015).
  45. Chang, C.; Lin, C. LIBSVM: A Library for Support Vector Machines. ACM Trans. Intell. Syst. Technol. 2013, 2, 1–27. [Google Scholar] [CrossRef]
  46. Salzberg, S.L. On Comparing Classifiers: Pitfalls to Avoid and a Recommended Approach. Data Min. Knowl. Discov. 1997, 1, 317–328. [Google Scholar] [CrossRef]
Figure 1. (a) Linear support vector machine and (b) nonlinear support vector machine.
Figure 2. All phases of the HHO algorithm.
Figure 3. SVM algorithm with the grid search technique.
Figure 4. The accuracy and CPU time of Grid-SVM with S0, S1, S2, S3, and S4.
Figure 5. Accuracy of the HHO-SVM model with S0, S1, S2, S3, and S4.
Figure 6. CPU time of the HHO-SVM model with S0, S1, S2, S3, and S4.
Figure 7. CPU time of the parallel HHO-SVM model for different cores.
Figure 8. CPU time of the parallel HHO-SVM on different cores for all scaling techniques.
Table 1. Some mathematical terms for scaling techniques.
Term | Meaning
A | The (a_ij) m × n data matrix (with m entities and n attributes)
r_i | The scaling factor of row i
s_j | The scaling factor of column j
R | R = diag(r_1, …, r_m) (diagonal matrix)
S | S = diag(s_1, …, s_n) (diagonal matrix)
N_i | N_i = {j | a_ij ≠ 0}, such that 1 ≤ i ≤ m
M_j | M_j = {i | a_ij ≠ 0}, such that 1 ≤ j ≤ n
n_i | The cardinality of the set N_i
m_j | The cardinality of the set M_j
A^R (a^R_ij) | The matrix scaled by the row scaling factor R
A^RS (a^RS_ij) | The scaled matrix in its final form
Table 2. Description of dataset.
No | Attribute Name | Description
3 | Radius | The range between the center and points on the perimeter
4 | Texture | Gray-scale values' standard deviation
5 | Perimeter | The total distance between the points that make up the nuclear perimeter
6 | Area | The average of the cancer cell areas
7 | Smoothness | The difference between a radial line's length and the mean length of the lines that surround it
8 | Compactness | Perimeter²/Area − 1.0
9 | Concavity | The severity of the contour's concave parts
10 | Concave points | The number of concave contour parts
11 | Fractal dimension | ("coastline approximation" − 1)
12 | Symmetry | In both directions, the length difference between lines perpendicular to the major axis and the cell boundary
Table 3. Computational environment.
Central Processing Unit | Intel (R) Core (TM) i5-7200U CPU @ 2.70 GHz
RAM size | 4 GB RAM
MATLAB ver. | R2015a
Table 4. SVM using S0 and S1.
Fold | C (S0) | γ (S0) | Accuracy % (S0) | C (S1) | γ (S1) | Accuracy % (S1)
1 | 2^3 | 2^−13 | 94.76 | 2^11 | 2^1 | 94.64
2 | 2^7 | 2^−15 | 91.59 | 2^15 | 2^1 | 92.98
3 | 2^15 | 2^−13 | 100 | 2^13 | 2^1 | 100
4 | 2^5 | 2^−13 | 97.18 | 2^13 | 2^1 | 98.25
5 | 2^1 | 2^−11 | 96.23 | 2^15 | 2^1 | 96.49
6 | 2^−1 | 2^−9 | 91.29 | 2^15 | 2^−1 | 96.49
7 | 2^11 | 2^−15 | 97.59 | 2^13 | 2^1 | 100
8 | 2^9 | 2^−15 | 98.60 | 2^15 | 2^1 | 96.49
9 | 2^9 | 2^−15 | 97.59 | 2^13 | 2^1 | 94.74
10 | 2^15 | 2^−9 | 96.23 | 2^13 | 2^−1 | 96.49
Avg. | 6877.9 | 0.00049 | 96.10 | 17408 | 1.7 | 96.66
CPU Time (S0): 52.62167; CPU Time (S1): 19.208797
Table 5. SVM using S2 and S3.
Fold | C (S2) | γ (S2) | Accuracy % (S2) | C (S3) | γ (S3) | Accuracy % (S3)
1 | 2^3 | 2^−7 | 100.00 | 2^1 | 2^−5 | 100
2 | 2^15 | 2^−9 | 98.25 | 2^9 | 2^−5 | 98.25
3 | 2^9 | 2^−5 | 96.49 | 2^9 | 2^−5 | 96.49
4 | 2^−1 | 2^−5 | 96.49 | 2^−1 | 2^−5 | 96.49
5 | 2^9 | 2^−9 | 100.00 | 2^9 | 2^−9 | 100
6 | 2^5 | 2^−5 | 98.25 | 2^7 | 2^−5 | 98.25
7 | 2^7 | 2^−7 | 98.25 | 2^3 | 2^−3 | 100.00
8 | 2^−1 | 2^−3 | 98.25 | 2^15 | 2^−3 | 98.25
9 | 2^9 | 2^−9 | 100.00 | 2^9 | 2^−9 | 100
10 | 2^15 | 2^−9 | 98.25 | 2^5 | 2^−3 | 98.25
Avg. | 6724 | 0.024 | 98.42 | 3498.7 | 0.0535 | 98.59
CPU Time (S2): 7.237509; CPU Time (S3): 6.822561
Table 6. SVM using S4.
Fold | C (S4) | γ (S4) | Accuracy % (S4)
1 | 2^5 | 2^−1 | 100.00
2 | 2^3 | 2^1 | 98.25
3 | 2^5 | 2^−1 | 100.00
4 | 2^15 | 2^1 | 98.25
5 | 2^1 | 2^−1 | 100.00
6 | 2^9 | 2^−1 | 98.25
7 | 2^15 | 2^1 | 100.00
8 | 2^15 | 2^1 | 100.00
9 | 2^3 | 2^1 | 94.74
10 | 2^3 | 2^1 | 100.00
Avg. | 9890.6 | 1.4 | 98.95
CPU Time: 6.066946
Table 7. Grid-SVM accuracy with S0, S1, S2, S3, and S4.
No | Symbol | Accuracy | CPU Time
1 | (S4) | 98.95 | 6.066946
2 | (S3) | 98.59 | 6.822561
3 | (S2) | 98.42 | 7.237509
4 | (S1) | 96.66 | 19.208797
5 | (S0) | 96.10 | 52.62167
Table 8. Different metrics for the HHO-SVM model with S0.
FoldHHO-SVM (S0)
Accuracy
%
Sensitivity
%
Specificity
%
Precision
%
191.0790.4891.4391.07
298.9881.8210098.98
3100100100100
496.4995.2497.2296.49
563.16010063.16
692.9880.9510092.98
796.4995.2497.2296.49
863.16010063.16
996.4995.2497.2296.49
1098.2510097.2298.25
Avg.89.1173.9098.0389.11
CPU Time1.88 × 104
Table 9. Different metrics for the HHO-SVM model with S0.
FoldHHO-SVM (S0)
Recall
%
F-Score
%
G-Mean
%
RMSE
190.4890.950.298890.48
281.8290.450.264981.82
31001000.00100
495.2496.230.187395.24
50.000.000.60700.00
680.9589.970.264980.95
795.2496.230.187395.24
80.000.000.60700.00
995.2496.230.187395.24
1010098.600.1325100
Avg.73.9075.870.273773.90
CPU Time1.88 × 104
Table 10. Different metrics for the HHO-SVM model with S1.
FoldHHO-SVM (S1)
Accuracy
%
Sensitivity
%
Specificity
%
Precision
%
194.6495.2494.2990.91
298.2510097.1495.65
396.4910094.2991.67
4100100100100
598.2595.24100100
6100100100100
7100100100100
894.7485.71100100
9100100100100
10100100100100
Avg.98.2497.6298.5797.82
CPU Time1.13 × 105
Table 11. Different metrics for the HHO-SVM model with S1.
FoldHHO-SVM (S1)
Recall %F-Score %G-Mean %RMSE
195.2493.0294.760.2315
210097.7898.560.1325
310095.6597.10.1873
41001001000
595.2497.5697.590.1325
61001001000
71001001000
885.7192.3192.580.2294
91001001000
101001001000
Avg.97.6297.6398.060.0913
CPU Time1.13 × 105
Table 12. Different metrics for the HHO-SVM model with S2.
FoldHHO-SVM (S2)
Accuracy
%
Sensitivity
%
Specificity
%
Precision
%
1100100100100
2100100100100
394.7490.9197.1495.24
498.2595.24100100
5100100100100
6100100100100
7100100100100
894.7490.4897.2295
998.2595.24100100
1096.4990.48100100
Avg.98.2596.2399.4499.02
CPU Time2.20 × 104
Table 13. Different metrics for the HHO-SVM model with S2.
FoldHHO-SVM (S2)
Recall %F-Score %G-Mean %RSME
11001001000
21001001000
390.9193.0293.970.2294
495.2497.5697.590.1325
51001001000
61001001000
71001001000
890.4892.6893.790.2294
995.2497.5697.590.1325
1090.489595.120.1873
Avg.96.2397.5897.810.0911
CPU Time2.20 × 104
Table 14. Different metrics for the HHO-SVM model with S3.
FoldHHO-SVM (S3)
Accuracy
%
Sensitivity
%
Specificity
%
Precision
%
196.4390.48100100
2100100100100
396.4990.91100100
4100100100100
596.4990.48100100
6100100100100
796.4995.2497.2295.24
898.2510097.2295.45
998.2595.24100100
10100100100100
Avg.98.2496.2399.4499.07
Time2.71 × 104
Table 15. Different metrics for the HHO-SVM model with S3.
FoldHHO-SVM (S3)
Recall %F-Score %G-Mean %RSME
190.489595.120.1890
21001001000
390.9195.2495.350.1873
41001001000
590.489595.120.1873
61001001000
795.2495.2496.230.1873
810097.6798.600.1325
995.2497.5697.590.1325
101001001000
Avg.96.2397.5797.800.1016
CPU Time2.71 × 104
Table 16. Different metrics for the HHO-SVM model with S4.
FoldHHO-SVM (S4)
Accuracy %Sensitivity %Specificity %Precision %
1100100100100
296.4990.91100100
3100100100100
4100100100100
5100100100100
6100100100100
7100100100100
8100100100100
9100100100100
1098.2595.24100100
Avg.99.4798.61100100
CPU Time8.14 × 103
Table 17. Different metrics for the HHO-SVM model with S4.
FoldHHO-SVM (S4)
Recall %F-Score %G-Mean %RMSE
11001001000
290.9195.2495.350.1873
31001001000
41001001000
51001001000
61001001000
71001001000
81001001000
91001001000
1095.2497.5697.590.1325
Avg.98.6199.2899.290.0320
CPU Time8.14 × 103
Table 18. The accuracy of the HHO-SVM model with S0, S1, S2, S3, and S4.
No | Symbol | Accuracy | CPU Time
1 | (S0) | 89.11 | 18,800
2 | (S1) | 98.24 | 113,000
3 | (S2) | 98.25 | 22,000
4 | (S3) | 98.24 | 27,100
5 | (S4) | 99.47 | 8140
Table 19. Accuracy comparison between HHO-SVM and Grid-SVM.
Symbol | Scaling Technique | HHO-SVM Accuracy | Grid-SVM Accuracy
(S1) | Normalization [−1, 1] | 98.24 | 96.49
(S2) | Arithmetic mean | 98.25 | 98.42
(S3) | Geometric mean | 98.24 | 98.59
(S4) | Equilibration | 99.47 | 98.95
Table 20. CPU time of the HHO-SVM on different numbers of cores for all scaling techniques.
Symbol | Scaling Technique | Core 1 | Core 2 | Core 4
(S1) | Normalization [−1, 1] | 91,600 | 47,461.14 | 23,073.04
(S2) | Arithmetic mean | 8560 | 4703.30 | 2338.80
(S3) | Geometric mean | 11,000 | 5820.11 | 2941.18
(S4) | Equilibration | 3500 | 2023.12 | 980.39
Table 21. Speedup on the WBCD database using the HHO-SVM with S1, S2, S3, and S4.
Symbol | Core 1 | Core 2 | Core 4
(S1) | 1 | 1.93 | 3.97
(S2) | 1 | 1.82 | 3.66
(S3) | 1 | 1.89 | 3.74
(S4) | 1 | 1.73 | 3.57
For the scaling techniques S4, S3, S2, and S1, the speedups for four cores were 3.57, 3.74, 3.66, and 3.97, respectively.
Table 22. A comparison of related works against our model.
Study | Year | Method | Accuracy (%)
Tuba et al. [6] | 2016 | ABA-SVM | 96.49
Aalaei et al. [7] | 2016 | GA-ANN | 97.30
S. Mandal [8] | 2017 | Logistic regression | 97.90
Liu et al. [10] | 2018 | ICS-SVM | 98.83
Agarap [11] | 2018 | GRU-SVM | 93.80
Dhahri et al. [15] | 2019 | GA-AB | 98.23
Telsang et al. [17] | 2020 | SVM | 96.25
Umme et al. [18] | 2020 | BATGSA-FNN | 92.10
Singh et al. [19] | 2020 | GWWOA-SVM | 97.72
Badr et al. [20] | 2021 | GWO-SVM | 99.3
Our study | 2023 | HHO-SVM | 99.47
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
