Article

Face Image Segmentation Using Boosted Grey Wolf Optimizer

1	Jilin Agricultural University Library, Jilin Agricultural University, Changchun 130118, China
2	College of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou 325035, China
3	School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran 11366, Iran
4	College of Computer Science and Technology, Changchun Normal University, Changchun 130032, China
5	School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
6	Department of Biological Sciences, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China
7	School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo 454000, China
8	Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
*	Authors to whom correspondence should be addressed.
Biomimetics 2023, 8(6), 484; https://doi.org/10.3390/biomimetics8060484
Submission received: 7 August 2023 / Revised: 3 October 2023 / Accepted: 6 October 2023 / Published: 12 October 2023

Abstract

Image segmentation methods have received widespread attention in face image recognition: they divide the pixels of an image into different regions and effectively distinguish the face region from the background for further recognition. Threshold segmentation, a common image segmentation method, suffers from computational complexity that grows exponentially with the number of threshold levels. Therefore, to improve segmentation quality and obtain segmentation thresholds more efficiently, this study proposes a multi-threshold image segmentation framework that combines a meta-heuristic optimization technique with Kapur’s entropy. An improved grey wolf optimizer variant is proposed to optimize the 2D Kapur’s entropy of the greyscale and nonlocal mean 2D histogram computed from the image. To verify the advancement of the method, experiments comparing it with state-of-the-art methods were conducted on the IEEE CEC2020 benchmark and a public face image segmentation dataset. The proposed method achieved better results than the other methods in various tests at 18 thresholds, with an average feature similarity of 0.8792, an average structural similarity of 0.8532, and an average peak signal-to-noise ratio of 24.9 dB. It can serve as an effective tool for face segmentation.

1. Introduction

Face-based research has received increasing attention because the face is one of the most common and important biometric features and can be collected without contact. One of the hot topics in face image processing technology is object recognition [1,2,3]. Face segmentation and face recognition technologies have been used in various places, such as banks [4], schools [5], and libraries [6]. However, face recognition, person detection, and image processing techniques often depend on the quality of image segmentation. Rangayya et al. [7] used a kernelized total Bregman divergence-based K-means clustering segmentation technique in their face recognition method to reduce the interference of noise on segmentation, effectively improving face recognition. Khan et al. [8] developed an automatic facial image segmentation model based on conditional random fields to improve classification accuracy. Segundo et al. [9] embedded a segmentation technique based on edge detection, region clustering, and shape analysis into a face detection system to improve face recognition performance. Zhang et al. [10] proposed a multistep iterative segmentation algorithm to achieve fine segmentation of occluded faces and improve recognition accuracy. Efficient and accurate image segmentation techniques can help improve the performance of face recognition systems [11,12,13].
At present, there are many image segmentation methods, such as multi-threshold [14], region growing [15], edge detection [16], and deep learning [17,18]. In addition, unsupervised learning-based image segmentation methods are available for unlabeled training data. For example, Xia et al. [19] proposed a new unsupervised segmentation network by combining two fully convolutional networks into one autoencoder, inspired by the idea of semantic segmentation. Kim et al. [20] designed an end-to-end unsupervised image segmentation network consisting of argmax functions for normalization and differentiable clustering. Unsupervised segmentation methods do not need to train a neural network in advance and can segment a single image directly, which greatly saves computational resources and often segments salient targets accurately. However, such methods are not stable enough for the image segmentation task: when the same target contains significant color differences, they cannot effectively extract texture features and segment the image into regions of overall significance. They sometimes confuse foreground and background, rely too heavily on color information, and give little consideration to the spatial features of the target. Unsupervised learning also suffers from poor robustness; it is difficult to prevent the network from speculatively outputting a result containing only one category, a design that is prone to overfitting. Threshold-based image segmentation methods, by contrast, do not require a priori knowledge, are robust, deliver excellent segmentation quality, and are an efficient means of image segmentation. On the other hand, the typical exhaustive method for determining the best thresholds increases computational complexity and decreases efficiency. Therefore, using a meta-heuristic optimization algorithm to search for the optimal thresholds has become an effective alternative. Li et al. [21] proposed a threshold segmentation technique based on particle swarm optimization. Liu et al. [22] used the fireworks algorithm to find the optimal threshold set. Li et al. [23] used the biogeography-based optimization algorithm to enhance multi-threshold image segmentation. Dutta et al. [24] proposed a multi-level image thresholding method based on the quantum genetic algorithm. Threshold segmentation based on meta-heuristic optimization can obtain the optimal set of thresholds more efficiently and is considered a promising approach.
In recent times, optimization methods have experienced a surge in prominence, capturing sustained interest in swarm-based optimization, distributed optimization [25], robust optimization [26], multi-objective optimization [27], many-objective cases [28], fuzzy optimization [29], etc. Optimization methods can be classified into two fundamental classes, deterministic and approximate techniques, which together address a wide spectrum of problem scenarios [30,31]. Meta-heuristic approaches are a pivotal category of optimization techniques grounded in concepts such as mutation, crossover, and various iterative procedures. These methodologies explore solution spaces independently of gradient information, provided that the newly generated solutions satisfy prescribed optimality criteria. One of the most well-known approaches is the genetic algorithm (GA), which is based on natural selection and survival of the fittest [32,33]. However, swarm-based methods are prone to several risks, including weak mathematical models, low robustness, premature convergence, and stagnation [34,35]. Meta-heuristic optimization algorithms can be utilized to find optimal solutions to complex problems, such as image segmentation [36], feature selection [37], real-world optimization problems [38], bankruptcy prediction [39], scheduling optimization [40], multi-objective optimization [41], global optimization [42,43], target tracking [44], economic emission dispatch [45], feed-forward neural networks [46], and numerical optimization [47,48,49]. They have become among the most popular optimization methods due to their excellent optimization ability. Common optimization algorithms include particle swarm optimization (PSO) [50], the sine cosine algorithm (SCA) [51], the whale optimization algorithm (WOA) [52], the slime mould algorithm (SMA) [53,54], hunger games search (HGS) [55], Harris hawks optimization (HHO) [56], the colony predation algorithm (CPA) [57], the rime optimization algorithm (RIME) [58], the weighted mean of vectors (INFO) [59], the Runge Kutta optimizer (RUN) [60], and the grey wolf optimizer (GWO) [61], among others. As an example, the Sine and Cosine Algorithm (SCA) was used to develop a method for evaluating task offloading strategies in Mobile Edge Computing (MEC) [62].
According to the No Free Lunch theorems for optimization [63], no single optimization method can perform well on all problems. Therefore, more and more improved optimization algorithms based on optimization strategies have been proposed to address the shortcomings of different optimization algorithms in searching for the global optimum, and adapting original algorithms to more complex optimization problems has become another research hotspot. For example, Yang et al. [64] utilized roundup search, elite Lévy mutation, and decentralized foraging techniques to enhance the performance of differential evolution for multi-threshold image segmentation. Zhang et al. [65] proposed an adaptive differential evolution with an optional external archive (JADE). Guo et al. [66] proposed a self-optimization approach for L-SHADE (SPS_L_SHADE_EIG). Qu et al. [67] proposed a modified sine cosine algorithm based on a neighborhood search and a greedy Lévy mutation (MSCA). Han and Xiao [68] proposed an improved genetic algorithm based on adaptive crossover and mutation probabilities.
PSO is an excellent optimization algorithm with few parameters and easy implementation, and PSO and its variants have received extensive attention from researchers. Among the many PSO variants, GWO not only inherits the advantages of PSO but also attempts to improve global optimization ability and convergence [69]. Therefore, improved GWOs based on optimization strategies have been proposed recently. Cai et al. [70] optimized a kernel extreme learning machine with an enhanced GWO. Choubey et al. [71] optimized the parameters of a multi-machine power system stabilizer. Li et al. [72] improved the task assignment strategy for multiple robots with an enhanced GWO. Mehmood et al. [73] applied an improved grey wolf optimizer based on chaotic mapping to problems such as autoregressive exogenous structural parameter optimization.
In threshold optimization, locally optimal or suboptimal solutions mistakenly taken as the best set of thresholds can lead to incorrect segmentation; critical information about the target is then lost, degrading the image segmentation quality. To obtain a method with high optimization accuracy and a strong ability to jump out of local optima, this study proposes a GWO improvement based on the cosmic wormhole strategy, denoted WGWO. To distinguish background and object more efficiently and to consider the spatial information of pixels, Kapur’s entropy was used as the objective function of WGWO and combined with a nonlocal mean two-dimensional histogram to achieve high-quality segmentation of face images. Eight face images were selected from the Berkeley dataset [74] and the Flickr-Faces-High-Quality (FFHQ) dataset [75] for comparative experiments, and the segmentation results were verified with three image evaluation metrics. The experimental results show that the WGWO multi-threshold segmentation method achieves satisfactory segmentation results. The main contributions of this paper are as follows:
  • A multi-threshold image segmentation method based on an optimization technique and a 2D histogram is proposed and used to segment face images;
  • An enhanced grey wolf optimizer based on the cosmic wormhole strategy is proposed and used to obtain the optimal segmentation thresholds for the image.
The remainder of the paper is organized as follows. Section 2 describes the proposed WGWO. Section 3 presents the WGWO-based image segmentation method. Section 4 verifies and discusses the segmentation results. Section 5 summarizes the current work and outlines future research directions.

2. The Proposed WGWO

To improve the search efficiency of the optimal threshold set, this section details an improved grey wolf optimizer for image segmentation called WGWO.

2.1. Original GWO (a Variant of PSO)

As a PSO algorithm variant [76], GWO mainly relies on three intermediate solutions $S_1^t$, $S_2^t$, and $S_3^t$ to update the positions of the other particles in the population at the t-th iteration. $S_1^t$, $S_2^t$, and $S_3^t$ are obtained by biasing the position of an individual particle toward the top three optimal individuals in the population, $g_1^t$, $g_2^t$, and $g_3^t$, respectively, as shown in Equations (1)–(3).
$$S_1^t = g_1^t - \varphi^t \left( 2r_1^t - 1 \right) \cdot \left| 2q_1^t \, g_1^t - X_i^t \right| \tag{1}$$
$$S_2^t = g_2^t - \varphi^t \left( 2r_2^t - 1 \right) \cdot \left| 2q_2^t \, g_2^t - X_i^t \right| \tag{2}$$
$$S_3^t = g_3^t - \varphi^t \left( 2r_3^t - 1 \right) \cdot \left| 2q_3^t \, g_3^t - X_i^t \right| \tag{3}$$
where $r_1^t$, $r_2^t$, $r_3^t$, $q_1^t$, $q_2^t$, and $q_3^t$ denote random numbers uniformly distributed between 0 and 1, and $\varphi^t$ denotes an acceleration weighting factor that decreases from 2 to 0, $\varphi^t = 2(1 - t/T)$, where $T$ is the maximum number of iterations. $X_i^t$ denotes the position vector of the i-th particle at the t-th iteration.
The position information of the three intermediate solutions $S_1^t$, $S_2^t$, and $S_3^t$ is then combined to update the position of the i-th particle by averaging, as shown in Equation (4).
$$X_i^{t+1} = \left( S_1^t + S_2^t + S_3^t \right) / 3 \tag{4}$$
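For readers who prefer code, the following is a minimal sketch of the GWO position update in Equations (1)–(4), assuming the population is stored as a NumPy matrix; the function and variable names (gwo_update, pop, g1, g2, g3) are illustrative and not taken from the released implementation.

```python
import numpy as np

def gwo_update(pop, g1, g2, g3, t, T, rng):
    """Move every particle toward the three best solutions g1, g2, g3."""
    phi = 2.0 * (1.0 - t / T)                      # acceleration weight, decreasing 2 -> 0
    new_pop = np.empty_like(pop)
    for i, x in enumerate(pop):
        s = np.zeros_like(x)
        for g in (g1, g2, g3):
            r = rng.random(x.shape)
            q = rng.random(x.shape)
            a = phi * (2.0 * r - 1.0)              # step-size coefficient
            s += g - a * np.abs(2.0 * q * g - x)   # Equations (1)-(3)
        new_pop[i] = s / 3.0                       # Equation (4): average of the three moves
    return new_pop
```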

2.2. Improved GWO (WGWO)

GWO has shown excellent optimization performance in various fields, e.g., electric load forecasting [77] and engineering problems [78]. However, GWO still lacks sufficient accuracy for the image segmentation task considered in this study. GWO is therefore enhanced by introducing the cosmic wormhole strategy into the population update to increase population diversity and improve the ability to avoid falling into local optima. Equations (5)–(8) show the mathematical model of the cosmic wormhole strategy [79].
$$X_i^{t+1} = \begin{cases} X_i^t + \mathrm{weight}, & r_5^t < 0.5 \ \text{and} \ Ada < r_4^t \\ X_i^t - \mathrm{weight}, & r_5^t \ge 0.5 \ \text{and} \ Ada < r_4^t \\ X_i^t, & Ada \ge r_4^t \end{cases} \tag{5}$$
$$\mathrm{weight} = M \cdot \left( \left( UB - LB \right) \cdot r_6^t + LB \right) \tag{6}$$
$$Ada = \frac{T + 4t}{5T} \tag{7}$$
$$M = 1 - \frac{t^{1/C}}{T^{1/C}} \tag{8}$$
In Equations (5)–(8), Ada is a probability parameter with Ada ∈ [0.2, 1]; it decides whether a candidate solution is updated. M is a weight parameter that controls the influence of the random search on the current candidate solution across search epochs, and C is a constant set to 6. $r_4^t$, $r_5^t$, and $r_6^t$ are random numbers between 0 and 1, and UB and LB denote the upper and lower bounds of the search space.
To obtain a better solution, we propose the WGWO method by introducing the cosmic wormhole strategy after the wolf population update; the flowchart of WGWO is shown in Figure 1 (the code is publicly available at https://github.com/Forproject1111/WGWO, accessed on 8 October 2023). To calculate the time complexity of WGWO, we need to consider the maximum number of iterations (T), the population size (N), and the dimensionality of the individuals (D). WGWO consists of population initialization, searching for prey, and the cosmic wormhole strategy. Therefore, the time complexity of WGWO is O(((2N + 1) × D) × T).
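As a complement to the flowchart, the following is a minimal sketch of the cosmic wormhole step in Equations (5)–(8), applied once per iteration after the GWO update; the per-dimension perturbation and the variable names are illustrative assumptions, with C = 6 and Ada, M as defined above.

```python
import numpy as np

def wormhole_step(pop, t, T, lb, ub, C=6, rng=None):
    """Cosmic wormhole perturbation applied to the whole population (Equations (5)-(8))."""
    rng = rng or np.random.default_rng()
    ada = (T + 4.0 * t) / (5.0 * T)                 # Equation (7), grows from 0.2 to 1
    m = 1.0 - (t ** (1.0 / C)) / (T ** (1.0 / C))   # Equation (8), shrinks from 1 to 0
    new_pop = pop.copy()
    for i in range(pop.shape[0]):
        for d in range(pop.shape[1]):
            r4, r5, r6 = rng.random(3)
            if ada < r4:                            # Equation (5): perturb this dimension
                weight = m * ((ub - lb) * r6 + lb)  # Equation (6)
                new_pop[i, d] += weight if r5 < 0.5 else -weight
    return np.clip(new_pop, lb, ub)                 # keep candidates inside [LB, UB]
```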

3. Multi-Threshold Image Segmentation Method

3.1. The Basic Theory of Multi-Threshold Image Segmentation

The multi-threshold image segmentation method based on WGWO, NML 2-D histogram, and Kapur’s entropy reduces noise interference and improves segmentation efficiency.

3.1.1. NML 2-D Histogram

The NML 2-D histogram [13] is composed of the grayscale values and nonlocal means of a digital image; it reflects both the grayscale level of each pixel and the spatial information of its neighborhood. Assume an image I of size M × N, where p and q represent pixel points and X(·) represents the pixel value. The NML mathematical model of I is shown in Equations (9)–(12):
$$O(p) = \frac{\sum_{q \in I} X(q) \cdot \omega(p, q)}{\sum_{q \in I} \omega(p, q)} \tag{9}$$
$$\omega(p, q) = \exp\!\left( -\frac{\left| \mu(q) - \mu(p) \right|^2}{\sigma^2} \right) \tag{10}$$
$$\mu(p) = \frac{1}{n \times n} \sum_{i \in S(p)} I(i) \tag{11}$$
$$\mu(q) = \frac{1}{n \times n} \sum_{i \in S(q)} I(i) \tag{12}$$
where ω(p, q) is a Gaussian weighting function, σ denotes the standard deviation, and μ(p) and μ(q) represent the local means of pixels p and q. S(x) is an n × n filter window centered on pixel x in image I.
For each pixel I(x, y) in I, x ∈ [1, M], y ∈ [1, N], the corresponding grayscale f(x, y) and nonlocal mean g(x, y) can be calculated. Then, i and j are used to denote f(x, y) and g(x, y), respectively, and h(i, j) denotes the number of occurrences of the grey–NML pair (i, j), i.e., the vertical coordinate of the two-dimensional histogram. Finally, h(i, j) is normalized to obtain $P_{ij}$, which forms the greyscale–nonlocal mean two-dimensional histogram shown in Figure 2.
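A minimal sketch of constructing the grey–NML 2-D histogram of Equations (9)–(12) is given below, assuming an 8-bit greyscale image; the 3 × 3 local-mean window, the 21 × 21 search window, and the σ value are illustrative choices rather than values fixed by the paper, and the plain double loop is written for clarity rather than speed.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def nml_image(gray, patch=3, search=10, sigma=10.0):
    """Nonlocal mean image O(p) of Equations (9)-(12), restricted to a local search window."""
    mu = uniform_filter(gray.astype(float), size=patch)      # local means mu(p), Eqs (11)-(12)
    M, N = gray.shape
    out = np.empty_like(mu)
    for x in range(M):
        for y in range(N):
            x0, x1 = max(0, x - search), min(M, x + search + 1)
            y0, y1 = max(0, y - search), min(N, y + search + 1)
            w = np.exp(-((mu[x0:x1, y0:y1] - mu[x, y]) ** 2) / sigma ** 2)   # Eq (10)
            out[x, y] = np.sum(w * gray[x0:x1, y0:y1]) / np.sum(w)           # Eq (9)
    return np.clip(np.round(out), 0, 255).astype(np.uint8)

def nml_histogram(gray):
    """Normalized 256 x 256 grey-NML histogram P_ij."""
    g = nml_image(gray)
    h = np.zeros((256, 256))
    np.add.at(h, (gray.ravel(), g.ravel()), 1)    # count grey-NML pairs h(i, j)
    return h / h.sum()                            # normalize to obtain P_ij
```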

3.1.2. Kapur’s Entropy

To ensure that the segmented image retains the maximum amount of information about the background and the target, the concept of Kapur’s entropy [13,80] was introduced in this study. Kapur’s entropy measures the amount of information distributed over the target and background regions; the greater the Kapur’s entropy, the better the image segmentation quality. The following describes the process of segmenting an image with several thresholds using Kapur’s entropy. The objective function computes the entropies of L − 1 image segments and sums them, as expressed in Equations (13)–(15).
$$\varphi(s, t) = H_1 + H_2 + \cdots + H_{L-1} \tag{13}$$
$$H_1 = -\sum_{i=0}^{s_1} \sum_{j=0}^{t_1} \frac{P_{ij}}{P_1} \ln \frac{P_{ij}}{P_1}, \quad H_2 = -\sum_{i=s_1+1}^{s_2} \sum_{j=t_1+1}^{t_2} \frac{P_{ij}}{P_2} \ln \frac{P_{ij}}{P_2}, \quad \ldots, \quad H_{L-1} = -\sum_{i=s_{L-2}+1}^{s_{L-1}} \sum_{j=t_{L-2}+1}^{t_{L-1}} \frac{P_{ij}}{P_{L-1}} \ln \frac{P_{ij}}{P_{L-1}} \tag{14}$$
$$P_1 = \sum_{i=0}^{s_1} \sum_{j=0}^{t_1} P_{ij}, \quad P_2 = \sum_{i=s_1+1}^{s_2} \sum_{j=t_1+1}^{t_2} P_{ij}, \quad \ldots, \quad P_{L-1} = \sum_{i=s_{L-2}+1}^{s_{L-1}} \sum_{j=t_{L-2}+1}^{t_{L-1}} P_{ij} \tag{15}$$
where $H_i$ represents the entropy of the i-th image segment, $t_1, t_2, \ldots, t_{L-1}$ are the segmentation thresholds of the grayscale image, and $s_1, s_2, \ldots, s_{L-1}$ are the segmentation thresholds of the nonlocal mean image.
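The objective in Equations (13)–(15) can be sketched as follows, assuming P is the normalized 256 × 256 grey–NML histogram and, for simplicity, that the same threshold vector is applied to both axes (the paper optimizes a pair of thresholds per level); a small epsilon guards the logarithm.

```python
import numpy as np

def kapur_entropy_2d(P, thresholds, eps=1e-12):
    """Sum of the entropies H_k of the diagonal histogram blocks defined by the thresholds."""
    cuts = [0] + sorted(int(t) for t in thresholds) + [P.shape[0]]
    total = 0.0
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        block = P[lo:hi, lo:hi]                      # diagonal block of the 2-D histogram
        p_region = block.sum()                       # P_k in Equation (15)
        if p_region > eps:
            q = block / p_region
            total += -np.sum(q * np.log(q + eps))    # H_k in Equation (14)
    return total                                     # objective of Equation (13), to maximize
```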

3.2. Image Segmentation Method

The flowchart of segmentation based on WGWO, Kapur’s entropy, and the NML 2-D histogram is shown in Figure 3. As shown in Figure 3, the input image is converted into a grayscale image and a nonlocal mean filtered image. A 2D histogram is then calculated from the grayscale information of the two images. The Kapur’s entropy of the two-dimensional histogram is used as the objective function, and the proposed WGWO algorithm optimizes the segmentation thresholds, which ultimately segments the image into multiple regions. The pseudo-code for segmentation is shown in Algorithm 1.
Algorithm 1 The flow of image segmentation method
Step 1: Input digital image I of size M × N; the grayscale image F is obtained by converting I to grayscale;
Step 2: The grayscale image F is nonlocal mean filtered to obtain the nonlocal mean image G according to Equations (9)–(12);
Step 3: A two-dimensional image histogram is constructed using the grayscale values and nonlocal means in F and G;
Step 4: Compute the two-dimensional Kapur’s entropy according to Equations (13)–(15);
Step 5: Kapur’s entropy of the two-dimensional histogram is optimized using WGWO;
Step 6: Multi-threshold image segmentation is performed according to the optimal threshold set to obtain pseudo-color and gray images.
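A minimal end-to-end sketch of Algorithm 1 is shown below, wiring together the histogram and entropy sketches above (nml_histogram, kapur_entropy_2d); wgwo_optimize stands in for the released WGWO code, and its name and signature are hypothetical.

```python
import numpy as np
from skimage import io, color

def segment_face(path, n_thresholds=18):
    """Steps 1-6 of Algorithm 1 for an RGB input image."""
    gray = (color.rgb2gray(io.imread(path)) * 255).astype(np.uint8)   # Step 1
    P = nml_histogram(gray)                                           # Steps 2-3
    objective = lambda th: kapur_entropy_2d(P, th)                    # Step 4
    best_thresholds = wgwo_optimize(objective, dim=n_thresholds,      # Step 5 (hypothetical
                                    lb=0, ub=255)                     #   optimizer interface)
    return np.digitize(gray, sorted(best_thresholds))                 # Step 6: region labels
```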

4. Experiment Simulation and Analysis

In this section, the performance of the WGWO-based multi-threshold segmentation method proposed in this paper is verified. In addition, all programs were run using Matlab 2018b on a Windows 10 OS-based computer with an Intel CPU i5-11400H (2.70 GHz) and 16 GB of RAM.

4.1. IEEE CEC2020 Benchmark Dataset Experiment

In this subsection, an ablation experiment and a parameter experiment based on the IEEE CEC2020 [81] test functions were conducted to demonstrate the global optimization capability of WGWO. The details of IEEE CEC2020 are shown in Table 1. In addition, to ensure the fairness of the experiments, the public parameters of all test methods were set uniformly: the maximum number of function evaluations was set to 300,000, the population size to 30, the dimensionality to 30, and the number of independent runs to 30.
First, an ablation experiment was conducted to justify the improved strategy. WGWO1 used only $X_i^{t+1} = X_i^t + \mathrm{weight}$ as the update formula in Equation (5), while WGWO2 used only $X_i^{t+1} = X_i^t - \mathrm{weight}$. Table 2 shows the rankings on the ten benchmark functions and the final ranking in the ablation experiment, with the optimal results in bold. From the table, it can be seen that WGWO and WGWO2 each achieved the best result in four cases. However, the average ranking of 1.90 for WGWO was the best among the four methods. Therefore, WGWO was selected as the threshold optimization method for image segmentation in this study.
Second, parameter values are one of the factors that affect algorithm performance. Ada and C are the two key parameters in WGWO, so this study tested the parameter sensitivity of both. Table 3 and Table 4 show the ranking results and the final ranking for Ada and C, respectively. As can be seen in Table 3, although the version with Ada in [0.3, 1] produced relatively poor results on F1 and F5, its overall ability was significantly better than that of the other versions of WGWO. Furthermore, the final ranking in Table 4 shows that WGWO performed best when C was set to 6. Based on the results of the two sensitivity tests, we fine-tuned Ada and C: a randomized range of [0.3, 1] for Ada and a value of 6 for C were adopted as the parameter values for the final version of WGWO in subsequent experiments.
In conclusion, based on the results of the ablation experiment and the parameter sensitivity experiment described above, it can be concluded that the best version of WGWO was applied to optimize the threshold for multi-threshold image segmentation.

4.2. Multi-Threshold Face Image Segmentation Experiment

To provide quality data for face recognition, this subsection reports tests of the image segmentation performance of the proposed method.

4.2.1. Experimental Settings

Test images of faces from the Berkeley dataset and the Flickr-Faces-High-Quality dataset were selected as validation materials, as shown in Figure 4. The size of the images was 321 × 481, where images A to D were from the Berkeley dataset and images E to H were from the FFHQ dataset. Threshold values were set in the range of 0 to 255. The maximum number of iterations was set to 100. The population size was set to 30, and the number of independent runs was set to 30.
WGWO was compared to GWO [61], PSO [50], WOA [52], BLPSO [82], IGWO [70], HLDDE [64], SCADE [83], and IWOA [84] in multi-threshold image segmentation experiments. The initialization parameters of the nine algorithms are shown in Table 5. To verify the segmentation effect of WGWO, this paper examined the segmentation effect through three evaluation methods: feature similarity (FSIM) [85], structural similarity (SSIM) [86], and peak signal-to-noise ratio (PSNR) [87]. The details of these indicators are shown in Table 6. The segmentation results were analyzed for the significance of differences using the Wilcoxon signed-rank test (WSRT) [88]. In WSRT, if the p-value is less than 0.05 and WGWO is superior to the comparison method, the advantage of WGWO performance is statistically significant and is denoted by ‘+’. If the p-value is less than 0.05 and WGWO is inferior to the comparison method, the advantage of the comparison method performance is statistically significant and is denoted by ‘−’. If the p-value is greater than or equal to 0.05, the performance of WGWO and the comparison method can be approximated as equal and is denoted by ‘=’.
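A minimal sketch of this evaluation protocol is given below: PSNR and SSIM are computed with scikit-image, and the Wilcoxon signed-rank test from SciPy produces the '+', '-', '=' marks from paired per-run scores; FSIM is omitted here because it has no standard scikit-image implementation, so the sketch covers only two of the three metrics.

```python
import numpy as np
from scipy.stats import wilcoxon
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(original, segmented):
    """PSNR and SSIM between the original greyscale image and the segmented image."""
    return {'PSNR': peak_signal_noise_ratio(original, segmented),
            'SSIM': structural_similarity(original, segmented)}

def wsrt_mark(wgwo_scores, other_scores, alpha=0.05):
    """Mark '+', '-', or '=' for WGWO against one comparison method (paired per-run scores)."""
    _, p = wilcoxon(wgwo_scores, other_scores)
    if p >= alpha:
        return '='                                   # performance approximately equal
    return '+' if np.mean(wgwo_scores) > np.mean(other_scores) else '-'
```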

4.2.2. Image Segmentation Experiment

This experiment first demonstrates the nine algorithms segmenting images from the Berkeley and FFHQ datasets at a threshold level of 18. Figure 5, Figure A1, Figure A2 and Figure A3 show the pseudo-color and gray segmented images for image A to image H. It can be seen directly from the figures that SCADE and IWOA (in Figure 5, Figure A1, Figure A2 and Figure A3) produced poor segmentation, and HLDDE (in Figure A2 and Figure A3) also segmented image F and image H poorly. Subsequently, Figure 6 shows that WGWO achieved excellent segmentation of all eight images at different thresholds. Important information was lost in the segmented images at lower thresholds, while high-threshold segmentation retained more image detail. It is important to note that the visual results only show the segmentation effect; the experimental results based on FSIM, PSNR, and SSIM reflect the quality of the segmented images more objectively. Accordingly, the FSIM, PSNR, and SSIM results of WGWO were further analyzed and discussed.
Table 7 presents the FSIM comparison between the proposed method and the other methods at 4 threshold levels. At all 4 threshold levels, WGWO had the best average ranking among the compared algorithms, and even though WGWO was weaker than GWO on 3 images at the 5-level and 8-level thresholds, WGWO was better overall. This indicates that the segmented images obtained with the WGWO-based method better portray the local features of the target and are more in line with the human visual system's perception of low-level features. Table 8 presents the PSNR comparison for each segmentation method. A comprehensive analysis of the data in Table 8 shows that the WGWO-based segmentation method was also excellent and stable, indicating less distortion and higher image quality relative to the original image. Table 9 presents the SSIM results of the compared methods. As can be seen from Table 9, the proposed WGWO-based segmentation method dominated, indicating that it introduces less distortion and better satisfies the requirements of the human visual system. It is worth noting that, as variants of PSO, the GWO-based methods were more suitable than the PSO baseline for solving the threshold optimization problem (Table 7, Table 8 and Table 9). In conclusion, the segmentation performance of WGWO has been validated by three image segmentation quality assessment metrics at multiple threshold levels.
In addition to exploring the image segmentation performance of WGWO in a conventional setting, this study further explored the stability of the proposed method when a larger population handles high-threshold segmentation problems. We set the initial population size to 100 and the threshold level to 18. Table 10 shows the rankings of the three segmentation metrics and the comparison results based on WSRT. The table shows that the WGWO-based multi-threshold segmentation method remains the best performer, with stable segmentation performance even at the larger population size.
In this study, the time cost and convergence of the threshold optimization methods are also used to evaluate algorithm performance; the convergence of Kapur's entropy, in particular, is the key to obtaining the optimal segmentation thresholds. Figure 7 shows the average time cost over 30 runs of the 9 algorithms at different threshold levels for all processed images. The proposed method ranked third in time cost at each threshold level, and the time cost of every algorithm grew as the threshold level increased. Figure 8 depicts the convergence curves of the nine techniques when optimizing Kapur's entropy on the eight images. The optimization of Kapur's entropy is a maximization problem, so a higher entropy value means that more useful information is retained and the segmentation is better. Several points can be seen from the convergence curves. First, although the convergence speed of WGWO was not the fastest throughout the process, its convergence accuracy was superior to that of the other algorithms, and WGWO showed a strong ability to prevent premature convergence. Second, the convergence curve of WOA was above those of the other algorithms in the early and middle iterations, but the slope of the WGWO curve increased around the 90th iteration, giving WGWO the best final fitness among all methods. Third, compared to PSO, GWO was more suitable for the Kapur's entropy-based segmentation problem, with higher convergence accuracy and a better ability to jump out of local optima. Finally, comparing the convergence curves of WGWO and GWO shows that the two curves are very similar, but the convergence accuracy of WGWO is better, which demonstrates that the introduced strategy enhances the optimization ability of the algorithm.
In conclusion, WGWO has the best optimization ability among the compared algorithms and can perform higher-quality multi-threshold image segmentation. Of course, it can also be applied to many other fields, such as machine learning models [89], image denoising [90], medical signals [91], structured sparsity optimization [92], renal pathology image segmentation [93], mental health prediction [94], lung cancer diagnosis [95], computer-aided medical diagnosis [96], MRI reconstruction [97], and power distribution networks [98].

5. Conclusions

This study proposed a grey wolf optimization algorithm based on the cosmic wormhole strategy. The population position update mechanism was enhanced to improve the convergence accuracy of the algorithm and help it jump out of local optima. A multi-threshold image segmentation method based on WGWO was then used to segment the face images. The experimental results show that WGWO makes it easier to obtain a set of thresholds suitable for face image segmentation, and the proposed method is verified by three image quality evaluation criteria to have better segmentation performance than the other methods. In conclusion, the proposed method can support intelligent library face recognition technology more effectively.
Although the WGWO-based image segmentation method proposed in this paper can provide better-quality segmented images for face recognition systems, some shortcomings remain. First, there is still room to improve the optimization performance of WGWO. In addition, this paper does not explore the optimal threshold level. These two points require further investigation by the authors. It would also be interesting to incorporate parallel computing into the multi-threshold image segmentation framework to boost computational efficiency.

Author Contributions

Conceptualization, H.Z., Z.C., L.X. and A.A.H.; methodology, H.Z. and L.X.; data curation, H.Z. and L.X.; writing—original draft, H.Z. and H.C.; investigation, Z.C. and S.W.; visualization, Z.C., L.X. and A.A.H.; resources, A.A.H. and H.C.; software, H.C. and D.Z.; formal analysis, H.C., D.Z. and Y.Z.; writing—review and editing, D.Z. and Y.Z.; supervision, D.Z. and S.W.; validation, S.W. and Y.Z.; funding acquisition, S.W.; project administration, H.C. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This paper is partially supported by MRC, UK (MC_PC_17171); Royal Society, UK (RP202G0230); BHF, UK (AA/18/3/34220); Hope Foundation for Cancer Research, UK (RM60G0680); GCRF, UK (P202PF11); Sino-UK Industrial Fund, UK (RP202G0289); LIAS, UK (P202ED10, P202RE969); Data Science Enhancement Fund, UK (P202RE237); Fight for Sight, UK (24NN201); Sino-UK Education Fund, UK (OP202006); BBSRC, UK (RM32G0178B8).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data is available at https://github.com/Forproject1111/WGWO, accessed on 8 October 2023.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. Segmentation results of images (C,D).
Figure A2. Segmentation results of images (E,F).
Figure A3. Segmentation results of images (G,H).

References

  1. Abdulhussain, S.H.; Mahmmod, B.M.; AlGhadhban, A.; Flusser, J. Face recognition algorithm based on fast computation of orthogonal moments. Mathematics 2022, 10, 2721. [Google Scholar] [CrossRef]
  2. Minaee, S.; Luo, P.; Lin, Z.; Bowyer, K. Going deeper into face detection: A survey. arXiv 2021, arXiv:2103.14983. [Google Scholar]
  3. Mahmmod, B.M.; Abdulhussain, S.H.; Naser, M.A.; Alsabah, M.; Hussain, A.; Al-Jumeily, D. 3D Object Recognition Using Fast Overlapped Block Processing Technique. Sensors 2022, 22, 9209. [Google Scholar] [CrossRef] [PubMed]
  4. Szczuko, P.; Czyżewski, A.; Hoffmann, P.; Bratoszewski, P.; Lech, M. Validating data acquired with experimental multimodal biometric system installed in bank branches. J. Intell. Inf. Syst. 2019, 52, 1–31. [Google Scholar] [CrossRef]
  5. Rice, L.M.; Wall, C.A.; Fogel, A.; Shic, F. Computer-assisted face processing instruction improves emotion recognition, mentalizing, and social skills in students with ASD. J. Autism Dev. Disord. 2015, 45, 2176–2186. [Google Scholar] [CrossRef]
  6. Gul, S.; Bano, S. Smart libraries: An emerging and innovative technological habitat of 21st century. Electron. Libr. 2019, 37, 764–783. [Google Scholar] [CrossRef]
  7. Rangayya; Virupakshappa; Patil, N. Improved face recognition method using SVM-MRF with KTBD based KCM segmentation approach. Int. J. Syst. Assur. Eng. Manag. 2022, 1–12. [Google Scholar] [CrossRef]
  8. Khan, K.; Attique, M.; Syed, I.; Gul, A. Automatic gender classification through face segmentation. Symmetry 2019, 11, 770. [Google Scholar] [CrossRef]
  9. Segundo, M.P.P.; Silva, L.; Bellon, O.R.P.; Queirolo, C.C. Automatic face segmentation and facial landmark detection in range images. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2010, 40, 1319–1330. [Google Scholar] [CrossRef]
  10. Zhang, M.; Xie, K.; Zhang, Y.-H.; Wen, C.; He, J.-B. Fine Segmentation on Faces With Masks Based on a Multistep Iterative Segmentation Algorithm. IEEE Access 2022, 10, 75742–75753. [Google Scholar] [CrossRef]
  11. Wu, H.; Miao, Z.; Wang, Y.; Chen, J. Recognition improvement through optimized spatial support methodology. Multimed. Tools Appl. 2016, 75, 5603–5618. [Google Scholar] [CrossRef]
  12. Lee, Y.W.; Kim, K.W.; Hoang, T.M.; Arsalan, M.; Park, K.R. Deep residual CNN-based ocular recognition based on rough pupil detection in the images by NIR camera sensor. Sensors 2019, 19, 842. [Google Scholar] [CrossRef] [PubMed]
  13. Mittal, H.; Saraswat, M. An optimum multi-level image thresholding segmentation using non-local means 2D histogram and exponential Kbest gravitational search algorithm. Eng. Appl. Artif. Intell. 2018, 71, 226–235. [Google Scholar] [CrossRef]
  14. Zhao, X.; Turk, M.; Li, W.; Lien, K.-c.; Wang, G. A multilevel image thresholding segmentation algorithm based on two-dimensional K–L divergence and modified particle swarm optimization. Appl. Soft Comput. 2016, 48, 151–159. [Google Scholar] [CrossRef]
  15. Huang, Q.; Sun, Y.; Huang, L.; Zhang, P. The liver CT image sequence segmentation based on region growing. In Proceedings of the 2015 International Conference on Advanced Engineering Materials and Technology, Guangzhou, China, 22–23 August 2015; pp. 572–577. [Google Scholar]
  16. Liu, L.; Li, B.; Wang, Y.; Yang, J. Remote sensing image segmentation based on improved Canny edge detection. Calc. Mech. Eng. Appl. 2019, 12, 56–57. [Google Scholar]
  17. Wu, J.; Yuan, T.; Zeng, J.; Gou, F. A Medically Assisted Model for Precise Segmentation of Osteosarcoma Nuclei on Pathological Images. IEEE J. Biomed. Health Inform. 2023, 27, 3982–3993. [Google Scholar] [CrossRef] [PubMed]
  18. He, K.; Qin, Y.; Gou, F.; Wu, J. A Novel Medical Decision-Making System Based on Multi-Scale Feature Enhancement for Small Samples. Mathematics 2023, 11, 2116. [Google Scholar] [CrossRef]
  19. Xia, X.; Kulis, B. W-net: A deep model for fully unsupervised image segmentation. arXiv 2017, arXiv:1711.08506. [Google Scholar]
  20. Kim, W.; Kanezaki, A.; Tanaka, M. Unsupervised learning of image segmentation based on differentiable feature clustering. IEEE Trans. Image Process. 2020, 29, 8055–8068. [Google Scholar] [CrossRef]
  21. Li, Y.; Jiao, L.; Shang, R.; Stolkin, R.J. Dynamic-context cooperative quantum-behaved particle swarm optimization based on multilevel thresholding applied to medical image segmentation. Inf. Sci. 2015, 294, 408–422. [Google Scholar] [CrossRef]
  22. Liu, W.; Shi, H.; He, X.; Pan, S.; Ye, Z.; Wang, Y. An application of optimized Otsu multi-threshold segmentation based on fireworks algorithm in cement SEM image. J. Algorithms Comput. Technol. 2018, 13, 1748301818797025. [Google Scholar] [CrossRef]
  23. Wei, L.; Xiao-hui, H.; Hong-chuang, W. Two-dimensional cross entropy multi-threshold image segmentation based on improved BBO algorithm. J. Meas. Sci. Instrum. 2018, 9, 42–49. [Google Scholar]
  24. Dutta, T.; Dey, S.; Bhattacharyya, S.; Mukhopadhyay, S.; Chakrabarti, P. Hyperspectral multi-level image thresholding using qutrit genetic algorithm. Expert Syst. Appl. 2021, 181, 115107. [Google Scholar] [CrossRef]
  25. Li, B.; Tan, Y.; Wu, A.-G.; Duan, G.-R. A distributionally robust optimization based method for stochastic model predictive control. IEEE Trans. Autom. Control 2021, 67, 5762–5776. [Google Scholar] [CrossRef]
  26. Xu, X.; Lin, Z.; Li, X.; Shang, C.; Shen, Q. Multi-objective robust optimisation model for MDVRPLS in refined oil distribution. Int. J. Prod. Res. 2022, 60, 6772–6792. [Google Scholar] [CrossRef]
  27. Cao, B.; Zhao, J.; Gu, Y.; Fan, S.; Yang, P. Security-aware industrial wireless sensor network deployment optimization. IEEE Trans. Ind. Inform. 2019, 16, 5309–5316. [Google Scholar] [CrossRef]
  28. Cao, B.; Wang, X.; Zhang, W.; Song, H.; Lv, Z. A Many-Objective Optimization Model of Industrial Internet of Things Based on Private Blockchain. IEEE Netw. 2020, 34, 78–83. [Google Scholar] [CrossRef]
  29. Cao, B.; Dong, W.; Lv, Z.; Gu, Y.; Singh, S.; Kumar, P. Hybrid Microgrid Many-Objective Sizing Optimization with Fuzzy Decision. IEEE Trans. Fuzzy Syst. 2020, 28, 2702–2710. [Google Scholar] [CrossRef]
  30. Cao, B.; Zhao, J.; Lv, Z.; Yang, P. Diversified personalized recommendation optimization based on mobile data. IEEE Trans. Intell. Transp. Syst. 2020, 22, 2133–2139. [Google Scholar] [CrossRef]
  31. Cao, B.; Li, M.; Liu, X.; Zhao, J.; Cao, W.; Lv, Z. Many-Objective Deployment Optimization for a Drone-Assisted Camera Network. IEEE Trans. Netw. Sci. Eng. 2021, 8, 2756–2764. [Google Scholar] [CrossRef]
  32. Qian, L.; Zheng, Y.; Li, L.; Ma, Y.; Zhou, C.; Zhang, D. A new method of inland water ship trajectory prediction based on long short-term memory network optimized by genetic algorithm. Appl. Sci. 2022, 12, 4073. [Google Scholar] [CrossRef]
  33. Cao, B.; Zhao, J.; Gu, Y.; Ling, Y.; Ma, X. Applying graph-based differential grouping for multiobjective large-scale optimization. Swarm Evol. Comput. 2020, 53, 100626. [Google Scholar] [CrossRef]
  34. Cao, B.; Gu, Y.; Lv, Z.; Yang, S.; Zhao, J.; Li, Y. RFID reader anticollision based on distributed parallel particle swarm optimization. IEEE Internet Things J. 2020, 8, 3099–3107. [Google Scholar] [CrossRef]
  35. Li, S.; Chen, H.; Chen, Y.; Xiong, Y.; Song, Z. Hybrid Method with Parallel-Factor Theory, a Support Vector Machine, and Particle Filter Optimization for Intelligent Machinery Failure Identification. Machines 2023, 11, 837. [Google Scholar] [CrossRef]
  36. Yu, H.; Song, J.; Chen, C.; Heidari, A.A.; Liu, J.; Chen, H.; Zaguia, A.; Mafarja, M. Image segmentation of Leaf Spot Diseases on Maize using multi-stage Cauchy-enabled grey wolf algorithm. Eng. Appl. Artif. Intell. 2022, 109, 104653. [Google Scholar] [CrossRef]
  37. Yang, X.; Zhao, D.; Yu, F.; Heidari, A.A.; Bano, Y.; Ibrohimov, A.; Liu, Y.; Cai, Z.; Chen, H.; Chen, X. Boosted machine learning model for predicting intradialytic hypotension using serum biomarkers of nutrition. Comput. Biol. Med. 2022, 147, 105752. [Google Scholar] [CrossRef] [PubMed]
  38. Yang, X.; Wang, R.; Zhao, D.; Yu, F.; Huang, C.; Heidari, A.A.; Cai, Z.; Bourouis, S.; Algarni, A.D.; Chen, H. An adaptive quadratic interpolation and rounding mechanism sine cosine algorithm with application to constrained engineering optimization problems. Expert Syst. Appl. 2023, 213, 119041. [Google Scholar] [CrossRef]
  39. Zhang, Y.; Liu, R.; Heidari, A.A.; Wang, X.; Chen, Y.; Wang, M.; Chen, H. Towards augmented kernel extreme learning models for bankruptcy prediction: Algorithmic behavior and comprehensive analysis. Neurocomputing 2021, 430, 185–212. [Google Scholar] [CrossRef]
  40. Wen, X.; Wang, K.; Li, H.; Sun, H.; Wang, H.; Jin, L. A two-stage solution method based on NSGA-II for Green Multi-Objective integrated process planning and scheduling in a battery packaging machinery workshop. Swarm Evol. Comput. 2021, 61, 100820. [Google Scholar] [CrossRef]
  41. Zhao, C.; Zhou, Y.; Lai, X. An integrated framework with evolutionary algorithm for multi-scenario multi-objective optimization problems. Inf. Sci. 2022, 600, 342–361. [Google Scholar] [CrossRef]
  42. Sahoo, S.K.; Saha, A.K. A Hybrid Moth Flame Optimization Algorithm for Global Optimization. J. Bionic Eng. 2022, 19, 1522–1543. [Google Scholar] [CrossRef]
  43. Sharma, S.; Chakraborty, S.; Saha, A.K.; Nama, S.; Sahoo, S.K. mLBOA: A Modified Butterfly Optimization Algorithm with Lagrange Interpolation for Global Optimization. J. Bionic Eng. 2022, 19, 1161–1176. [Google Scholar] [CrossRef]
  44. Zhang, X.; Hu, W.; Xie, N.; Bao, H.; Maybank, S. A robust tracking system for low frame rate video. Int. J. Comput. Vis. 2015, 115, 279–304. [Google Scholar] [CrossRef]
  45. Dong, R.; Chen, H.; Heidari, A.A.; Turabieh, H.; Mafarja, M.; Wang, S. Boosted kernel search: Framework, analysis and case studies on the economic emission dispatch problem. Knowl.-Based Syst. 2021, 233, 107529. [Google Scholar] [CrossRef]
  46. Xue, Y.; Tong, Y.; Neri, F. An ensemble of differential evolution and Adam for training feed-forward neural networks. Inf. Sci. 2022, 608, 453–471. [Google Scholar] [CrossRef]
  47. Sun, G.; Yang, G.; Zhang, G. Two-level parameter cooperation-based population regeneration framework for differential evolution. Swarm Evol. Comput. 2022, 75, 101122. [Google Scholar] [CrossRef]
  48. Li, C.; Sun, G.; Deng, L.; Qiao, L.; Yang, G. A population state evaluation-based improvement framework for differential evolution. Inf. Sci. 2023, 629, 15–38. [Google Scholar] [CrossRef]
  49. Sun, G.; Li, C.; Deng, L. An adaptive regeneration framework based on search space adjustment for differential evolution. Neural Comput. Appl. 2021, 33, 9503–9519. [Google Scholar] [CrossRef]
  50. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  51. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  52. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  53. Chen, H.; Li, C.; Mafarja, M.; Heidari, A.A.; Chen, Y.; Cai, Z. Slime mould algorithm: A comprehensive review of recent variants and applications. Int. J. Syst. Sci. 2022, 54, 204–235. [Google Scholar] [CrossRef]
  54. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  55. Yang, Y.; Chen, H.; Heidari, A.A.; Gandomi, A.H. Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst. Appl. 2021, 177, 114864. [Google Scholar] [CrossRef]
  56. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  57. Tu, J.; Chen, H.; Wang, M.; Gandomi, A.H. The Colony Predation Algorithm. J. Bionic Eng. 2021, 18, 674–710. [Google Scholar] [CrossRef]
  58. Su, H.; Zhao, D.; Asghar Heidari, A.; Liu, L.; Zhang, X.; Mafarja, M.; Chen, H. RIME: A physics-based optimization. Neurocomputing 2023, 532, 183–214. [Google Scholar] [CrossRef]
  59. Ahmadianfar, I.; Asghar Heidari, A.; Noshadian, S.; Chen, H.; Gandomi, A.H. INFO: An Efficient Optimization Algorithm based on Weighted Mean of Vectors. Expert Syst. Appl. 2022, 195, 116516. [Google Scholar] [CrossRef]
  60. Ahmadianfar, I.; Asghar Heidari, A.; Gandomi, A.H.; Chu, X.; Chen, H. RUN Beyond the Metaphor: An Efficient Optimization Algorithm Based on Runge Kutta Method. Expert Syst. Appl. 2021, 181, 115079. [Google Scholar] [CrossRef]
  61. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  62. Wang, Y.; Han, X.; Jin, S. MAP based modeling method and performance study of a task offloading scheme with time-correlated traffic and VM repair in MEC systems. Wirel. Netw. 2023, 29, 47–68. [Google Scholar] [CrossRef]
  63. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  64. Yang, X.; Wang, R.; Zhao, D.; Yu, F.; Heidari, A.A.; Xu, Z.; Chen, H.; Algarni, A.D.; Elmannai, H.; Xu, S. Multi-level threshold segmentation framework for breast cancer images using enhanced differential evolution. Biomed. Signal Process. Control 2023, 80, 104373. [Google Scholar] [CrossRef]
  65. Zhang, J.; Sanderson, A.C. JADE: Adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958. [Google Scholar] [CrossRef]
  66. Guo, S.-M.; Tsai, J.S.-H.; Yang, C.-C.; Hsu, P.-H. A self-optimization approach for L-SHADE incorporated with eigenvector-based crossover and successful-parent-selecting framework on CEC 2015 benchmark set. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 1003–1010. [Google Scholar]
  67. Qu, C.; Zeng, Z.; Dai, J.; Yi, Z.; He, W. A modified sine-cosine algorithm based on neighborhood search and greedy levy mutation. Comput. Intell. Neurosci. 2018, 2018, 4231647. [Google Scholar] [CrossRef]
  68. Han, S.; Xiao, L. An improved adaptive genetic algorithm. SHS Web Conf. 2022, 140, 01044. [Google Scholar] [CrossRef]
  69. Camacho Villalón, C.L.; Stützle, T.; Dorigo, M. Grey Wolf, Firefly and Bat Algorithms: Three Widespread Algorithms that Do Not Contain Any Novelty. In Proceedings of the Swarm Intelligence, Barcelona, Spain, 26–28 October 2020; pp. 121–133. [Google Scholar]
  70. Cai, Z.; Gu, J.; Luo, J.; Zhang, Q.; Chen, H.; Pan, Z.; Li, Y.; Li, C. Evolving an optimal kernel extreme learning machine by using an enhanced grey wolf optimization strategy. Expert Syst. Appl. 2019, 138, 112814. [Google Scholar] [CrossRef]
  71. Choubey, V.; Parkh, K.; Jangid, R. Optimal Design of Power System Stabilizer Using a Gray Wolf Optimization Technique. Int. J. Res. Eng. Sci. Manag. 2019, 2, 2581–5792. [Google Scholar]
  72. Li, J.; Yang, F. Task assignment strategy for multi-robot based on improved Grey Wolf Optimizer. J. Ambient. Intell. Humaniz. Comput. 2020, 11, 6319–6335. [Google Scholar] [CrossRef]
  73. Mehmood, K.; Chaudhary, N.I.; Khan, Z.A.; Cheema, K.M.; Raja, M.A.Z. Variants of chaotic grey wolf heuristic for robust identification of control autoregressive model. Biomimetics 2023, 8, 141. [Google Scholar] [CrossRef] [PubMed]
  74. Martin, D.; Fowlkes, C.; Tal, D.; Malik, J. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of the Eighth IEEE International Conference on Computer Vision, ICCV 2001, Vancouver, BC, Canada, 7–14 July 2001; pp. 416–423. [Google Scholar]
  75. Karras, T.; Laine, S.; Aila, T. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 4401–4410. [Google Scholar]
  76. Camacho-Villalón, C.L.; Dorigo, M.; Stützle, T. Exposing the grey wolf, moth-flame, whale, firefly, bat, and antlion algorithms: Six misleading optimization techniques inspired by bestial metaphors. Int. Trans. Oper. Res. 2023, 30, 2945–2971. [Google Scholar] [CrossRef]
  77. Zhang, Z.; Hong, W.-C. Application of variational mode decomposition and chaotic grey wolf optimizer with support vector regression for forecasting electric loads. Knowl.-Based Syst. 2021, 228, 107297. [Google Scholar] [CrossRef]
  78. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Zamani, H.; Bahreininejad, A. GGWO: Gaze cues learning-based grey wolf optimizer and its applications for solving engineering problems. J. Comput. Sci. 2022, 61, 101636. [Google Scholar] [CrossRef]
  79. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  80. Kapur, J.N.; Kesavan, H.K. Entropy optimization principles and their applications. In Entropy and Energy Dissipation in Water Resources; Springer: Dordrecht, The Netherlands, 1992; pp. 3–20. [Google Scholar]
  81. Kadavy, T.; Pluhacek, M.; Viktorin, A.; Senkerik, R. SOMA-CL for competition on single objective bound constrained numerical optimization benchmark: A competition entry on single objective bound constrained numerical optimization at the genetic and evolutionary computation conference (GECCO) 2020. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion, Cancún, Mexico, 8–12 July 2020; pp. 9–10. [Google Scholar]
  82. Chen, X.; Tianfield, H.; Mei, C.; Du, W.; Liu, G. Biogeography-based learning particle swarm optimization. Soft Comput. 2017, 21, 7519–7541. [Google Scholar] [CrossRef]
  83. Nenavath, H.; Jatoth, R.K. Hybridizing sine cosine algorithm with differential evolution for global optimization and object tracking. Appl. Soft Comput. 2018, 62, 1019–1043. [Google Scholar] [CrossRef]
  84. Tubishat, M.; Abushariah, M.A.; Idris, N.; Aljarah, I. Improved whale optimization algorithm for feature selection in Arabic sentiment analysis. Appl. Intell. 2019, 49, 1688–1707. [Google Scholar] [CrossRef]
  85. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A Feature Similarity Index for Image Quality Assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386. [Google Scholar] [CrossRef] [PubMed]
  86. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  87. Huynh-Thu, Q.; Ghanbari, M. Scope of validity of PSNR in image/video quality assessment. Electron. Lett. 2008, 44, 800–801. [Google Scholar] [CrossRef]
  88. García, S.; Fernández, A.; Luengo, J.; Herrera, F.J. Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Inf. Sci. 2010, 180, 2044–2064. [Google Scholar] [CrossRef]
  89. Zhao, C.; Wang, H.; Chen, H.; Shi, W.; Feng, Y. JAMSNet: A Remote Pulse Extraction Network Based on Joint Attention and Multi-Scale Fusion. IEEE Trans. Circuits Syst. Video Technol. 2022, 33, 2783–2797. [Google Scholar] [CrossRef]
  90. Zhang, X.; Zheng, J.; Wang, D.; Zhao, L. Exemplar-Based Denoising: A Unified Low-Rank Recovery Framework. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 2538–2549. [Google Scholar] [CrossRef]
  91. Dai, Y.; Wu, J.; Fan, Y.; Wang, J.; Niu, J.; Gu, F.; Shen, S. MSEva: A musculoskeletal rehabilitation evaluation system based on EMG signals. ACM Trans. Sens. Netw. 2022, 19, 1–23. [Google Scholar] [CrossRef]
  92. Zhang, X.; Zheng, J.; Wang, D.; Tang, G.; Zhou, Z.; Lin, Z. Structured Sparsity Optimization With Non-Convex Surrogates of ℓ2,0-Norm: A Unified Algorithmic Framework. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 6386–6402. [Google Scholar] [CrossRef]
  93. Chen, J.; Cai, Z.; Chen, H.; Chen, X.; Escorcia-Gutierrez, J.; Mansour, R.F.; Ragab, M. Renal Pathology Images Segmentation Based on Improved Cuckoo Search with Diffusion Mechanism and Adaptive Beta-Hill Climbing. J. Bionic Eng. 2023, 20, 2240–2275. [Google Scholar] [CrossRef] [PubMed]
  94. Kourou, K.; Manikis, G.; Poikonen-Saksela, P.; Mazzocco, K.; Pat-Horenczyk, R.; Sousa, B.; Oliveira-Maia, A.J.; Mattson, J.; Roziner, I.; Pettini, G.; et al. A machine learning-based pipeline for modeling medical, socio-demographic, lifestyle and self-reported psychological traits as predictors of mental health outcomes after breast cancer diagnosis: An initial effort to define resilience effects. Comput. Biol. Med. 2021, 131, 104266. [Google Scholar] [CrossRef] [PubMed]
  95. Faruqui, N.; Yousuf, M.A.; Whaiduzzaman, M.; Azad, A.K.M.; Barros, A.; Moni, M.A. LungNet: A hybrid deep-CNN model for lung cancer diagnosis using CT and wearable sensor-based medical IoT data. Comput. Biol. Med. 2021, 139, 104961. [Google Scholar] [CrossRef] [PubMed]
  96. Hržić, F.; Tschauner, S.; Sorantin, E.; Štajduhar, I. XAOM: A method for automatic alignment and orientation of radiographs for computer-aided medical diagnosis. Comput. Biol. Med. 2021, 132, 104300. [Google Scholar] [CrossRef]
  97. Lv, J.; Li, G.; Tong, X.; Chen, W.; Huang, J.; Wang, C.; Yang, G. Transfer learning enhanced generative adversarial networks for multi-channel MRI reconstruction. Comput. Biol. Med. 2021, 134, 104504. [Google Scholar] [CrossRef]
  98. Cao, X.; Cao, T.; Xu, Z.; Zeng, B.; Gao, F.; Guan, X. Resilience Constrained Scheduling of Mobile Emergency Resources in Electricity-Hydrogen Distribution Network. IEEE Trans. Sustain. Energy 2022, 14, 1269–1284. [Google Scholar] [CrossRef]
Figure 1. Flowchart of WGWO.
Figure 2. NML 2-D histogram.
Figure 3. The flow chart of the multi-threshold image segmentation process.
Figure 4. Face images from the image segmentation dataset.
Figure 5. Segmentation results of images (A,B).
Figure 6. Segmentation results of WGWO.
Figure 7. Time cost of each algorithm.
Figure 8. Convergence curves of each algorithm.
Table 1. Details of IEEE CEC2020 benchmark functions.
Class | No. | Functions | Fmin
Unimodal Function | F1 | Shifted and Rotated Bent Cigar Function | 100
Basic Functions | F2 | Shifted and Rotated Schwefel’s Function | 1100
 | F3 | Shifted and Rotated Lunacek bi-Rastrigin Function | 700
 | F4 | Expanded Rosenbrock’s plus Griewangk’s Function | 1900
Hybrid Functions | F5 | Hybrid Function 1 (N = 3) | 1700
 | F6 | Hybrid Function 2 (N = 4) | 1600
 | F7 | Hybrid Function 3 (N = 5) | 2100
Composition Functions | F8 | Composition Function 1 (N = 3) | 2200
 | F9 | Composition Function 2 (N = 4) | 2400
 | F10 | Composition Function 3 (N = 5) | 2500
Search range: [−100, 100]^D
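For readers who want to reproduce the benchmark setting, the following minimal Python sketch evaluates the base (unshifted, unrotated) form of F1, the Bent Cigar function; the official CEC2020 implementation additionally applies the distributed shift vectors and rotation matrices and adds the bias Fmin = 100, which this sketch omits.

```python
import numpy as np

def bent_cigar(x):
    """Base Bent Cigar function: f(x) = x_1^2 + 1e6 * sum_{i>=2} x_i^2.
    The CEC2020 F1 also shifts and rotates x and adds a bias of 100 (omitted here)."""
    x = np.asarray(x, dtype=float)
    return x[0] ** 2 + 1e6 * np.sum(x[1:] ** 2)

# Evaluate a random candidate drawn from the search range [-100, 100]^D with D = 10.
rng = np.random.default_rng(0)
print(bent_cigar(rng.uniform(-100, 100, size=10)))
```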
Table 2. Comparative results of the ablation experiment.
No. | WGWO | WGWO1 | WGWO2 | GWO
F1 | 2 | 1 | 3 | 4
F2 | 1 | 2 | 3 | 4
F3 | 2 | 3 | 1 | 4
F4 | 2 | 3 | 1 | 4
F5 | 1 | 2 | 3 | 4
F6 | 1 | 2 | 3 | 4
F7 | 1 | 3 | 2 | 4
F8 | 3 | 1 | 2 | 4
F9 | 3 | 2 | 1 | 4
F10 | 3 | 2 | 1 | 4
Result | 1 (1.90) | 3 (2.10) | 2 (2.00) | 4 (4.00)
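The Result row of Table 2 (and of Tables 3 and 4 below) is the average of the per-function ranks, re-ranked across the compared variants. The short sketch below reproduces that computation for the Table 2 data; the rank matrix is transcribed from the table, and tie-aware ranking (needed when averages coincide, as in Tables 3 and 4) would use, e.g., scipy.stats.rankdata with method='min'.

```python
import numpy as np

# Per-function ranks from Table 2 (rows F1..F10; columns WGWO, WGWO1, WGWO2, GWO).
ranks = np.array([
    [2, 1, 3, 4], [1, 2, 3, 4], [2, 3, 1, 4], [2, 3, 1, 4], [1, 2, 3, 4],
    [1, 2, 3, 4], [1, 3, 2, 4], [3, 1, 2, 4], [3, 2, 1, 4], [3, 2, 1, 4],
])
algorithms = ["WGWO", "WGWO1", "WGWO2", "GWO"]

mean_ranks = ranks.mean(axis=0)                # average rank over the 10 functions
overall = mean_ranks.argsort().argsort() + 1   # rank of the averages (no ties here)

for name, m, r in zip(algorithms, mean_ranks, overall):
    print(f"{name}: mean rank {m:.2f}, overall rank {r}")
# Matches the Result row: WGWO 1.90 (1), WGWO1 2.10 (3), WGWO2 2.00 (2), GWO 4.00 (4).
```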
Table 3. Sensitivity experiment of the parameter Ada.
No. | [0, 1] | [0.1, 1] | [0.2, 1] | [0.3, 1] | [0.4, 1] | [0.5, 1] | [0.6, 1] | [0.7, 1] | [0.8, 1] | [0.9, 1]
F1 | 6 | 8 | 4 | 10 | 5 | 3 | 1 | 9 | 2 | 7
F2 | 7 | 8 | 10 | 6 | 4 | 2 | 5 | 1 | 9 | 3
F3 | 9 | 4 | 1 | 2 | 3 | 5 | 6 | 7 | 8 | 10
F4 | 1 | 2 | 4 | 3 | 5 | 6 | 9 | 7 | 10 | 8
F5 | 1 | 2 | 6 | 8 | 5 | 9 | 3 | 4 | 7 | 10
F6 | 2 | 4 | 9 | 6 | 7 | 1 | 5 | 3 | 10 | 8
F7 | 5 | 1 | 2 | 3 | 10 | 7 | 4 | 6 | 9 | 8
F8 | 9 | 5 | 1 | 2 | 4 | 3 | 8 | 6 | 7 | 10
F9 | 10 | 8 | 2 | 1 | 3 | 4 | 6 | 5 | 7 | 9
F10 | 1 | 4 | 6 | 3 | 9 | 5 | 8 | 10 | 2 | 7
Result | 5 (5.1) | 4 (4.6) | 2 (4.5) | 1 (4.4) | 6 (5.5) | 2 (4.5) | 6 (5.5) | 8 (5.8) | 9 (7.1) | 10 (8)
Table 4. Sensitivity experiment of the parameter C.
No. | C (1) | C (2) | C (3) | C (4) | C (5) | C (6) | C (7) | C (8) | C (9)
F1 | 9 | 8 | 7 | 6 | 5 | 3 | 2 | 4 | 1
F2 | 9 | 8 | 7 | 5 | 6 | 3 | 1 | 4 | 2
F3 | 8 | 9 | 7 | 6 | 5 | 2 | 4 | 1 | 3
F4 | 7 | 9 | 8 | 2 | 3 | 1 | 6 | 4 | 5
F5 | 9 | 8 | 7 | 4 | 6 | 1 | 3 | 2 | 5
F6 | 9 | 4 | 3 | 7 | 1 | 2 | 6 | 8 | 5
F7 | 9 | 7 | 8 | 4 | 5 | 6 | 2 | 3 | 1
F8 | 6 | 5 | 3 | 8 | 9 | 4 | 7 | 1 | 2
F9 | 5 | 1 | 2 | 4 | 3 | 7 | 6 | 8 | 9
F10 | 9 | 8 | 7 | 1 | 3 | 2 | 4 | 6 | 5
Result | 9 (8) | 8 (6.7) | 7 (5.9) | 6 (4.7) | 5 (4.6) | 1 (3.1) | 3 (4.1) | 3 (4.1) | 2 (3.8)
Table 5. Parameter settings of the nine algorithms.
Methods | Parameters | Criteria
WGWO | φ ∈ [2, 0] | Original paper [61]
 | Ada ∈ [0.3, 1] | Parameter sensitivity analysis (Section 4.1)
 | C = 6 | Parameter sensitivity analysis (Section 4.1)
GWO | a ∈ [2, 0] | Original paper [61]
PSO | ω = 1, c1 = 2, c2 = 2 | Original paper [50]
WOA | a1 ∈ [2, 0], a2 ∈ [−1, −2], b = 1 | Original paper [52]
BLPSO | c = 1.496, I = 1, E = 1, ω ∈ [0.9, 0.2] | Original paper [82]
IGWO | βnum = 10, Ωnum = 15 | Original paper [70]
HLDDE | J ∈ [0, 2], DR ∈ [0, 0.4] | Original paper [64]
SCADE | a = 2, Pc = 0.8 | Original paper [83]
IWOA | a1 ∈ [2, 0], a2 ∈ [−1, −2] | Original paper [84]
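Several parameters in Table 5 are given as intervals (e.g., a ∈ [2, 0]), which for GWO-type algorithms denotes a value that is decreased linearly from 2 to 0 over the course of the run. The sketch below illustrates this conventional schedule only; it is not the WGWO update itself, and the function name is ours.

```python
def linear_schedule(iteration, max_iterations, start=2.0, end=0.0):
    """Control parameter moving linearly from `start` to `end` across the run,
    e.g. the GWO parameter a in [2, 0] listed in Table 5."""
    return start + (end - start) * iteration / max_iterations

# The parameter at the start, midpoint, and end of a 500-iteration run: 2.0, 1.0, 0.0.
for t in (0, 250, 500):
    print(t, linear_schedule(t, 500))
```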
Table 6. Details of the three image evaluation metrics.
Metrics | Formulas | Remarks
FSIM [85] | $\mathrm{FSIM} = \dfrac{\sum_{x \in \Omega} S_L(x) \cdot \mathrm{PC}_m(x)}{\sum_{x \in \Omega} \mathrm{PC}_m(x)}$ | FSIM is an image quality assessment method based on phase congruency features and gradient features that complement each other.
SSIM [86] | $\mathrm{SSIM} = \dfrac{(2\mu_I \mu_K + C_1)(2\sigma_{IK} + C_2)}{(\mu_I^2 + \mu_K^2 + C_1)(\sigma_I^2 + \sigma_K^2 + C_2)}$ | SSIM assesses similarity based on the luminance, contrast, and structure of the original and segmented images; it is a full-reference image quality evaluation index that is closer to human visual judgments of image quality.
PSNR [87] | $\mathrm{PSNR} = 10 \cdot \log_{10}\left(\mathrm{peak}^2 / \mathrm{MSE}\right)$ | PSNR is the ratio between the maximum possible power of a signal and the power of the noise that corrupts the accuracy of its representation; it is an objective full-reference image quality evaluation index.
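To make the metric definitions in Table 6 concrete, the sketch below computes PSNR exactly as written above and a simplified SSIM that uses global image statistics; standard SSIM implementations evaluate the same expression over local windows and average the result, and FSIM requires phase-congruency and gradient maps that are beyond a short example. The code is an illustration only, and all variable names are ours.

```python
import numpy as np

def psnr(original, segmented, peak=255.0):
    """PSNR = 10 * log10(peak^2 / MSE), following the formula in Table 6."""
    diff = original.astype(np.float64) - segmented.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(original, segmented, peak=255.0):
    """SSIM formula from Table 6 evaluated with global statistics.
    Standard implementations average the same expression over local windows."""
    I = original.astype(np.float64)
    K = segmented.astype(np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mu_i, mu_k = I.mean(), K.mean()
    var_i, var_k = I.var(), K.var()
    cov_ik = ((I - mu_i) * (K - mu_k)).mean()
    return ((2 * mu_i * mu_k + c1) * (2 * cov_ik + c2)) / (
        (mu_i ** 2 + mu_k ** 2 + c1) * (var_i + var_k + c2))

# Example with a synthetic 8-bit image and a coarsely quantized stand-in for a segmented image.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
seg = (img // 32) * 32
print(psnr(img, seg), ssim_global(img, seg))
```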
Table 7. FSIM ranking of each algorithm at four thresholds.
Methods | 5-Level: +/−/= | Mean | Rank | 8-Level: +/−/= | Mean | Rank | 15-Level: +/−/= | Mean | Rank | 18-Level: +/−/= | Mean | Rank
WGWO | ~ | 1.63 | 1 | ~ | 1.38 | 1 | ~ | 1.63 | 1 | ~ | 1.38 | 1
GWO | 2/3/3 | 2.38 | 2 | 2/3/3 | 1.88 | 2 | 2/0/6 | 2.00 | 2 | 3/0/5 | 2.50 | 2
PSO | 6/1/1 | 4.00 | 4 | 6/0/2 | 3.75 | 3 | 6/0/2 | 3.88 | 4 | 5/0/3 | 3.63 | 4
WOA | 5/0/3 | 5.63 | 5 | 6/0/2 | 3.75 | 3 | 4/0/4 | 2.88 | 3 | 2/0/6 | 2.75 | 3
BLPSO | 5/0/3 | 6.00 | 7 | 8/0/0 | 7.25 | 7 | 8/0/0 | 7.38 | 8 | 8/0/0 | 7.38 | 8
IGWO | 5/0/3 | 3.88 | 3 | 8/0/0 | 5.13 | 5 | 8/0/0 | 6.88 | 7 | 8/0/0 | 6.88 | 7
HLDDE | 8/0/0 | 5.63 | 5 | 8/0/0 | 5.38 | 6 | 8/0/0 | 5.00 | 5 | 8/0/0 | 5.13 | 5
SCADE | 8/0/0 | 8.50 | 9 | 8/0/0 | 9.00 | 9 | 8/0/0 | 9.00 | 9 | 8/0/0 | 9.00 | 9
IWOA | 8/0/0 | 7.38 | 8 | 8/0/0 | 7.50 | 8 | 8/0/0 | 6.38 | 6 | 8/0/0 | 6.38 | 6
Table 8. PSNR ranking of each algorithm at the four thresholds.
Methods | 5-Level: +/−/= | Mean | Rank | 8-Level: +/−/= | Mean | Rank | 15-Level: +/−/= | Mean | Rank | 18-Level: +/−/= | Mean | Rank
WGWO | ~ | 2.00 | 1 | ~ | 1.88 | 1 | ~ | 1.38 | 1 | ~ | 1.63 | 1
GWO | 3/2/3 | 2.25 | 2 | 1/2/5 | 2.00 | 2 | 1/1/6 | 2.13 | 2 | 0/0/8 | 2.38 | 2
PSO | 6/1/1 | 3.63 | 3 | 7/0/1 | 4.38 | 4 | 7/0/1 | 4.88 | 5 | 7/0/1 | 5.00 | 5
WOA | 5/0/3 | 4.88 | 5 | 6/0/2 | 4.00 | 3 | 3/0/5 | 2.88 | 3 | 2/0/6 | 2.88 | 3
BLPSO | 4/0/4 | 5.25 | 6 | 7/1/0 | 6.00 | 7 | 7/0/1 | 6.63 | 6 | 7/0/1 | 6.00 | 6
IGWO | 5/0/3 | 4.75 | 4 | 7/0/1 | 5.25 | 6 | 7/0/1 | 6.75 | 7 | 8/0/0 | 6.88 | 7
HLDDE | 5/0/3 | 5.50 | 7 | 7/1/0 | 4.88 | 5 | 8/0/0 | 4.50 | 4 | 5/0/3 | 4.25 | 4
SCADE | 8/0/0 | 9.00 | 9 | 7/0/1 | 9.00 | 9 | 8/0/0 | 9.00 | 9 | 8/0/0 | 8.88 | 9
IWOA | 8/0/0 | 7.75 | 8 | 8/0/0 | 7.63 | 8 | 8/0/0 | 6.88 | 8 | 8/0/0 | 7.13 | 8
Table 9. SSIM ranking of each algorithm at the four thresholds.
Methods | 5-Level: +/−/= | Mean | Rank | 8-Level: +/−/= | Mean | Rank | 15-Level: +/−/= | Mean | Rank | 18-Level: +/−/= | Mean | Rank
WGWO | ~ | 2.50 | 1 | ~ | 1.63 | 1 | ~ | 1.50 | 1 | ~ | 1.25 | 1
GWO | 3/1/4 | 2.63 | 2 | 5/1/2 | 2.63 | 3 | 4/1/3 | 3.00 | 3 | 4/1/3 | 2.88 | 2
PSO | 2/2/4 | 2.75 | 3 | 2/2/4 | 2.00 | 2 | 2/0/6 | 2.75 | 2 | 4/0/4 | 3.00 | 3
WOA | 4/0/4 | 5.88 | 6 | 6/0/2 | 4.38 | 4 | 3/0/5 | 3.63 | 4 | 3/0/5 | 3.50 | 4
BLPSO | 7/0/1 | 5.63 | 5 | 8/0/0 | 7.00 | 8 | 7/0/1 | 6.88 | 7 | 7/0/1 | 7.13 | 8
IGWO | 4/0/4 | 3.63 | 4 | 7/0/1 | 4.75 | 5 | 8/0/0 | 6.88 | 7 | 8/0/0 | 7.00 | 7
HLDDE | 8/0/0 | 6.50 | 7 | 8/0/0 | 6.88 | 6 | 7/0/1 | 5.00 | 5 | 7/0/1 | 5.25 | 5
SCADE | 7/0/1 | 8.00 | 9 | 7/0/1 | 8.88 | 9 | 8/0/0 | 9.00 | 9 | 8/0/0 | 8.75 | 9
IWOA | 6/0/2 | 7.50 | 8 | 7/0/1 | 6.88 | 6 | 8/0/0 | 6.38 | 6 | 8/0/0 | 6.25 | 6
Table 10. Segmentation stability ranking and WSRT comparison results in FSIM, PSNR, and SSIM.
Metrics | Items | GWO | PSO | WOA | BLPSO | IGWO | HLDDE | SCADE | IWOA | WGWO
FSIM | +/−/= | 7/0/1 | 6/1/1 | 3/1/4 | 8/0/0 | 8/0/0 | 8/0/0 | 8/0/0 | 8/0/0 | ~
 | Mean | 3.75 | 3.00 | 1.75 | 7.63 | 5.88 | 5.38 | 8.88 | 7.13 | 1.63
 | Rank | 4 | 3 | 2 | 8 | 6 | 5 | 9 | 7 | 1
PSNR | +/−/= | 6/0/2 | 1/1/6 | 3/1/4 | 7/0/1 | 8/0/0 | 6/0/2 | 8/0/0 | 8/0/0 | ~
 | Mean | 4.75 | 2.50 | 2.13 | 6.63 | 5.88 | 4.63 | 8.88 | 7.75 | 1.88
 | Rank | 5 | 3 | 2 | 7 | 6 | 4 | 9 | 8 | 1
SSIM | +/−/= | 5/1/2 | 4/0/4 | 1/1/6 | 7/0/1 | 7/0/1 | 7/0/1 | 8/0/0 | 8/0/0 | ~
 | Mean | 2.88 | 3.50 | 2.63 | 7.50 | 5.38 | 5.38 | 8.75 | 7.38 | 1.63
 | Rank | 3 | 4 | 2 | 8 | 5 | 5 | 9 | 7 | 1
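The +/−/= entries in Tables 7–10 summarize Wilcoxon signed-rank test (WSRT) outcomes over the test images: the number of images on which WGWO performs significantly better than, significantly worse than, or statistically equivalently to the compared algorithm. The sketch below shows one common way to tally such counts with scipy; the 0.05 significance level, the per-run score layout, and the function name are our assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.stats import wilcoxon

def wsrt_counts(scores_wgwo, scores_other, alpha=0.05):
    """Tally (+, -, =) over images: each score dict maps an image id to the
    per-run metric values (FSIM, PSNR, or SSIM) of one algorithm."""
    wins = losses = ties = 0
    for image_id in scores_wgwo:
        a = np.asarray(scores_wgwo[image_id], dtype=float)
        b = np.asarray(scores_other[image_id], dtype=float)
        if np.allclose(a, b):          # identical runs: wilcoxon is undefined, count as a tie
            ties += 1
            continue
        _, p = wilcoxon(a, b)          # paired test over the runs of one image
        if p >= alpha:
            ties += 1
        elif a.mean() > b.mean():      # higher FSIM/PSNR/SSIM is better
            wins += 1
        else:
            losses += 1
    return wins, losses, ties

# Synthetic example: 8 images, 10 runs each.
rng = np.random.default_rng(1)
wgwo = {i: rng.normal(0.88, 0.01, 10) for i in range(8)}
other = {i: rng.normal(0.85, 0.01, 10) for i in range(8)}
print(wsrt_counts(wgwo, other))        # prints counts in the +/−/= order used above
```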
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
