Article

An Improved Grey Wolf Optimizer with Multi-Strategies Coverage in Wireless Sensor Networks

1 School of Communication and Electronic Engineering, Jishou University, Jishou 416000, China
2 Faculty of Computing, Universiti Teknologi Malaysia, Skudai 81310, Malaysia
3 Academic Affairs Office, Jishou University, Jishou 416000, China
4 College of Computer Science and Engineering, Jishou University, Jishou 416000, China
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(3), 286; https://doi.org/10.3390/sym16030286
Submission received: 2 February 2024 / Revised: 19 February 2024 / Accepted: 22 February 2024 / Published: 1 March 2024

Abstract

For wireless sensor network (WSN) coverage problems, since the sensing range of a sensor node is a circular area with symmetry, taking symmetry into account when deploying nodes helps simplify problem solving. In addition, in view of two specific problems in WSNs, namely high node deployment costs and insufficient effective coverage, this paper proposes a WSN coverage optimization method based on the improved grey wolf optimizer with multi-strategies (IGWO-MS). First of all, IGWO-MS uses Sobol sequences to initialize the population so that the initial individuals are evenly distributed in the search space, ensuring high ergodicity and diversity. Secondly, it introduces a search space strategy to increase the search range of the population, avoid premature convergence, and improve search accuracy. Then, it combines reverse learning and mirror mapping to enrich the population. Finally, it adds Levy flight to increase perturbation and improve the probability of the algorithm jumping out of local optima. To verify the performance of IGWO-MS in WSN coverage optimization, this paper rasterizes the coverage area of the WSN into multiple equally sized, mutually symmetric grids, thereby transforming the node coverage problem into a single-objective optimization problem. In the simulation experiments, four further algorithms were selected for comparison with IGWO-MS: particle swarm optimization (PSO), the grey wolf optimizer (GWO), grey wolf optimization based on drunk walk (DGWO), and grey wolf optimization led by two-headed wolves (GWO-THW). The experimental results demonstrate that when the number of nodes for WSN coverage optimization is 20 or 30, both the optimal coverage rate and the average coverage rate of IGWO-MS improve on those of the four comparison algorithms.
To make this clear, in the case of 20 nodes, the optimal coverage rate of IGWO-MS is higher by 13.19%, 1.68%, 4.92%, and 3.62%, respectively, than that of PSO, GWO, DGWO, and GWO-THW; IGWO-MS performs even better in terms of average coverage rate, which is 16.45%, 3.13%, 11.25%, and 6.19% higher than that of PSO, GWO, DGWO, and GWO-THW, respectively. Similarly, in the case of 30 nodes, compared with PSO, GWO, DGWO, and GWO-THW, the optimal coverage rate of IGWO-MS is higher by 15.23%, 1.36%, 5.55%, and 3.66%, and the average coverage rate by 16.78%, 1.56%, 10.91%, and 8.55%. Therefore, it can be concluded that IGWO-MS has clear advantages in solving WSN coverage problems: it not only effectively improves the coverage quality of network nodes but also exhibits good stability.

1. Introduction

Wireless sensor networks (WSNs) are an emerging computing and network model: a system composed of a large number of tiny, inexpensive, and highly intelligent sensor nodes with wireless communication and computing capabilities. Each sensor node can collect, store, and process environmental information and send the collected information to a management center for statistics and analysis [1]. On the one hand, the coverage of a WSN determines the performance of the network; on the other hand, since wireless sensor nodes can be distributed arbitrarily within the deployment area, the coverage problem is often the first to be solved when configuring a WSN. As WSNs become more widely used, scholars have conducted deeper theoretical research on them. Depending on the research angle, the coverage problem manifests itself in different theoretical models, and solutions related to coverage can even be found within computational geometry [2].
The WSN coverage problem is a combinatorial optimization problem, which is NP-hard [3]. Therefore, traditional deterministic techniques and algorithms have difficulty solving such non-differentiable, discontinuous problems within a reasonable computing time. However, with the rise and development of swarm intelligence algorithms, researchers have found that this type of algorithm shows clear advantages in solving various engineering optimization problems, and remarkable results have been achieved on combinatorial optimization problems in particular [4]. Moreover, different combinatorial optimization problems require the design of appropriate swarm intelligence algorithms to improve the quality and efficiency of problem solving.
To this end, researchers study and improve various swarm intelligence algorithms and apply them to the coverage optimization problem of WSNs. For example, Wang et al. proposed an adaptive multi-strategy artificial bee colony (SaMABC) algorithm to improve the coverage of WSNs. Specifically, this algorithm introduces the simulated annealing method and dynamic search strategy into the artificial bee colony algorithm, which improves the solution accuracy of the algorithm. According to the characteristics of sensor nodes that need to be dynamically adjusted in the WSN coverage optimization problem, this algorithm designs a corresponding strategy library and adaptive selection mechanism. Compared with other comparison algorithms, simulation results in multiple scenarios have verified that SaMABC has good performance in improving coverage [5]. Dao et al. designed an improved honey badger algorithm (IHBA) to solve the problem of low coverage caused by uneven node deployment in WSNs. To be specific, the algorithm uses the elite reverse learning strategy to enhance the global search performance of the algorithm, and uses the multi-directional strategy to improve the individual update formula, thereby improving the standard HBA algorithm’s tendency to fall into local optimality when dealing with node coverage optimization situations. Similarly, compared with other algorithms, the comparison results verify that the IHBA algorithm has high feasible coverage and archive coverage [6]. Likewise, Wang et al. used the improved sparrow search algorithm (ISSA) to optimize the coverage problem of WSN, achieving the purpose of reducing node redundancy and improving coverage in WSN. This algorithm uses good point sets (GP) to initialize the population, and adjusts the individual update formula through adaptive factors to improve the accuracy of the algorithm. At the same time, a refraction reverse learning strategy is designed to avoid falling into local optimality. 
Experimental data show that the ISSA algorithm makes the distribution of nodes more even and improves the coverage of the WSN [7]. It can be seen from the literature [5,6,7] that all of these works use improved swarm intelligence algorithms to enhance the performance of homogeneous WSNs in aspects such as node redundancy and coverage. Similar research includes the simplified slime mould algorithm (SSMA) proposed by Wei et al. [8] and the change-step fruit fly optimization algorithm (CSFOA) proposed by Song et al. [9].
Some researchers aim to optimize the coverage of heterogeneous WSNs and study swarm intelligence algorithms that reduce node redundancy and improve coverage. For example, Zeng et al. proposed an improved wild horse optimizer (IWHO) for the coverage and connectivity problems of heterogeneous WSNs. This algorithm integrates the golden sine algorithm into the wild horse optimizer and uses SPM chaotic mapping to initialize the population, which improves the solution accuracy and speed of the standard wild horse optimizer. To handle obstacles in heterogeneous WSNs, the IWHO algorithm enhances global exploration by introducing opposition-based learning and Cauchy mutation mechanisms. Experimental data from three different simulation environments show that IWHO achieves better connectivity and coverage [10]. Another case in point is that Chen et al. proposed a competitive multi-objective marine predator algorithm (CMOMPA) to address the differences between nodes and the complexity of three-dimensional environments in heterogeneous WSNs. This algorithm models biological groups in the ocean as marine predators; a survival-of-the-fittest mechanism is adopted between groups, while a competition mechanism is adopted within each group. Experimental results show that CMOMPA exhibits excellent performance in multimodal three-dimensional deployment environments [11]. What is more, Cao et al., aiming to maximize the coverage of heterogeneous WSNs, proposed an adaptive collaborative optimization seagull algorithm (PSO-SOA). This algorithm introduces the inertia weight of the PSO algorithm into the seagull optimization algorithm, adaptively adjusts individual displacement through a scale factor, and designs a reverse learning strategy to increase the probability of jumping out of local optima.
The simulation verified that the PSO-SOA algorithm can improve network coverage and effectively avoid coverage blind spots and coverage redundancy in the network [12].
In addition to the WSN coverage problem, the service life of the WSN also needs to be considered. Obviously, reducing node energy loss can extend network life. Therefore, on the premise of ensuring coverage, some researchers take reducing node energy loss as a second goal. In particular, Yarinezhad et al. proposed a collaborative particle swarm optimization algorithm based on fuzzy logic [13] with the goal of maximizing coverage and network lifetime. Li et al. proposed an improved multi-objective ant lion optimizer (NSIMOALO) based on fast non-dominated sorting [14] with the goals of coverage and extended network lifetime. With the same goals, Cheng et al. proposed a WSN coverage optimization method based on the fruit fly optimization algorithm [15]. As research continues to deepen, swarm intelligence algorithms may well be applied to further aspects of WSN coverage optimization, such as the symmetry of the network topology, sensing reliability, and network reliability.
Compared with other swarm intelligence algorithms, the grey wolf optimizer (GWO) algorithm has the advantages of a clear concept, fewer control parameters, simple structure, low computational time complexity, and easy implementation. It also has a good global search and local search switching mechanism [16], so it is widely used in optimization problems such as feature selection, neural network optimization, path planning, etc. Nevertheless, while the GWO algorithm has these advantages, it also has certain limitations compared to other algorithms, such as insufficient global search capabilities and low accuracy when solving high-dimensional optimization problems. In response to these problems existing in the standard GWO algorithm, domestic and foreign scholars have made many improvements to the GWO in terms of parameter configuration and combination with other algorithms. To illustrate, Kohli et al. proposed a grey wolf optimization algorithm based on chaos strategy (CGWO). This algorithm introduces a chaotic strategy and uses different chaotic mappings and different functions to adjust the key parameters of global optimization, thereby searching the search space more dynamically and comprehensively, and improving the convergence speed [17]. Liu et al. proposed a hybrid grey wolf optimization algorithm based on drunkard grey wolf optimizer and reverse learning (DGWO). In the iterative process, the dominant wolves and the worst wolves in each generation of the population are reversely learned, compared, and re-learned. After sorting, the wolves with the top three fitness values are retained, and the drunken walk mechanism is used to update the leading wolves, which enhances the global search capability in high-dimensional space and speeds up the convergence speed of the algorithm [18]. Narinder et al. proposed a new hybrid algorithm (GWO-PSO) based on the combination of grey wolf algorithm and particle swarm algorithm. 
This hybrid algorithm combines the advantages of the two algorithms, further improving search intensity on the basis of the exploration ability of GWO and the exploitation ability of PSO [19]. Ou et al. proposed a grey wolf algorithm led by two-headed wolves (GWO-THW) under a nonlinear double convergence factor strategy. This algorithm uses the average fitness value to divide the wolf population into hunting wolves and scout wolves, each of which hunts under the leadership of its own alpha wolf. The fitness difference per unit Euclidean distance between a wolf and the three leader wolves is used as the weight coefficient for position updates. Experiments verify that this algorithm achieves high optimization accuracy and convergence speed [20].
In view of the shortcomings of GWO and the low coverage rate obtained when dealing with WSN coverage problems, this paper proposes an improved grey wolf optimizer with multi-strategies (IGWO-MS). This algorithm improves the performance of GWO by introducing strategies such as range space search, a nonlinear convergence factor, elite reverse learning, and dimension-by-dimension updating. At the same time, this paper transforms the node coverage problem in WSNs into a single-objective optimization problem, uses grid discretization to transform area coverage into point coverage, and applies IGWO-MS to achieve WSN coverage optimization, thereby addressing the problems of high node deployment cost and low effective coverage. In terms of performance verification, this paper compares the proposed IGWO-MS with PSO [21], GWO [16], DGWO [18], and GWO-THW [20] in simulation experiments with 20 and 30 nodes. The experimental results show that IGWO-MS has clear advantages in solving the WSN coverage problem in both settings. In terms of the optimal coverage rate, IGWO-MS improves to varying degrees on its competitors, namely PSO, GWO, DGWO, and GWO-THW, proving that it can effectively improve the coverage quality of network nodes. In terms of the average coverage rate, IGWO-MS shows an even greater improvement than for the optimal coverage rate, thus exhibiting better stability than its counterparts.

2. Wireless Sensor Network Coverage Problem and Standard GWO

2.1. Sensor Network Node Coverage Model

In WSNs, the sensing range of a sensor node is a circular area with the sensor node as the center and R as the radius; only monitoring points within this area can be sensed by the sensor node. The radius R of this circular area is therefore called the sensing radius. To simplify the problem, this paper treats the WSN coverage area as a two-dimensional plane and makes the following ideal assumptions:
Assumption 1. 
All nodes have the same structure and sensing range;
Assumption 2. 
The sensing range of all nodes is circular and not affected by obstacles;
Assumption 3. 
All nodes can sense and obtain the positions of other nodes within their sensing range in real time.
Based on the above assumptions, the WSN coverage model is as follows.
First, the monitoring area is set as a rectangle with length L and width W, so its area is L \times W. Second, the four vertex coordinates of the monitoring area in the plane coordinate system are (0, 0), (0, W), (L, 0), and (L, W). Third, the monitoring area is discretized into m grids of equal area that are mutually symmetric. The center point of each grid is a monitoring point, and the set of monitoring points is A = \{A_j = (x_j, y_j) \mid j = 1, 2, \ldots, m\}. Owing to the symmetry between grids, the coordinates of these monitoring points are simple to compute, which effectively reduces the computational load of the model. Next, n sensor nodes are randomly deployed in the monitoring area, and their set is B = \{B_i = (x_i, y_i) \mid i = 1, 2, \ldots, n\}. Last, all nodes adopt the Boolean sensing model, and the sensing radius of each node is R.
The Euclidean distance between the sensor node and the monitoring point is as follows.
d(B_i, A_j) = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}    (1)
In Equation (1), d(B_i, A_j) represents the Euclidean distance between sensor node B_i and monitoring point A_j; (x_i, y_i) are the coordinates of node B_i, and (x_j, y_j) are the coordinates of monitoring point A_j. If monitoring point A_j lies within the sensing range of sensor node B_i, i.e., the circle centered at B_i with radius R, the monitoring point is sensed by the sensor node, and the grid where it is located is covered by the WSN. The probability that monitoring point A_j is sensed by sensor node B_i is as follows:
P(B_i, A_j) = \begin{cases} 1, & \text{if } d(B_i, A_j) \le R \\ 0, & \text{otherwise} \end{cases}    (2)
In Equation (2), P(B_i, A_j) represents the probability that monitoring point A_j can be sensed by sensor node B_i, and R is the sensing radius. Meanwhile, each monitoring point A_j may be sensed by multiple sensor nodes simultaneously, so the joint perception probability of all sensor nodes for monitoring point A_j is shown in Equation (3).
P_C(B_{\mathrm{all}}, A_j) = 1 - \prod_{i=1}^{n} \left(1 - P(B_i, A_j)\right)    (3)
In Equation (3), P_C(B_{\mathrm{all}}, A_j) represents the joint probability that monitoring point A_j can be sensed by all sensor nodes; B_{\mathrm{all}} denotes all wireless sensor nodes within the monitoring area, and n is the number of sensor nodes deployed within the monitoring area.
In a WSN, coverage is an important indicator for evaluating its performance. The coverage rate of the model is therefore defined as the ratio of the covered area to the monitoring area, where the covered area is the product of the grid area and the sum of the joint perception probabilities over all monitoring points. The formula for the coverage rate is shown below.
C_r = \frac{K \sum_{j=1}^{m} P_C(B_{\mathrm{all}}, A_j)}{L \times W}    (4)
In Equation (4), C_r represents the coverage rate, and K represents the grid area, with K = \frac{L \times W}{m}. L \times W is the area of the monitoring area, and m is the number of grids.
The relationship between model elements is shown in the following figure.
In Figure 1, the monitoring area is divided into several symmetrical grids of the same size, and the center point of each grid is recorded as the monitoring point. If the Euclidean distance between the monitoring point and a node in WSN is less than the sensing radius R, it indicates that the monitoring point has been sensed, and the grid where the monitoring point is located is recorded as covered. By calculating the ratio of the covered grid area to the monitoring area, the coverage rate of WSN can be obtained. The more grids the monitoring area is divided into, the closer the calculated coverage is to the true value.
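As a concrete illustration, Equations (1)–(4) can be sketched in a few lines of Python. This is a minimal sketch, assuming a square monitoring area split into m_side × m_side grids (so m = m_side²); the helper name `coverage_rate` and the toy node placement are illustrative, not the paper's experimental settings.

```python
import numpy as np

def coverage_rate(nodes, L, W, m_side, R):
    """Coverage rate C_r of Equation (4): the fraction of grid centers
    (monitoring points) lying within distance R of at least one node."""
    xs = (np.arange(m_side) + 0.5) * (L / m_side)   # grid-center x coordinates
    ys = (np.arange(m_side) + 0.5) * (W / m_side)   # grid-center y coordinates
    gx, gy = np.meshgrid(xs, ys)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)        # monitoring points A_j
    # Euclidean distances d(B_i, A_j) of Equation (1), all pairs at once
    d = np.linalg.norm(pts[:, None, :] - nodes[None, :, :], axis=2)
    covered = (d <= R).any(axis=1)   # Boolean sensing, Equations (2)-(3)
    return covered.mean()            # covered grids / total grids

# toy check: one node centered in a 10 x 10 area with a 10 x 10 grid
rate = coverage_rate(np.array([[5.0, 5.0]]), 10, 10, 10, R=8)
```

Because the sensing model is Boolean, the per-point joint probability of Equation (3) collapses to "within R of any node", which is what the `any(axis=1)` computes.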

2.2. Coverage Optimization Model

To optimize the coverage problem of WSN and improve the coverage rate, this paper takes the coverage rate in the WSN coverage model as the objective function, sets the monitoring area as S, and establishes the following optimization model.
\max C_r \quad \text{s.t.} \quad A_j \in S,\; B_i \in S    (5)
To solve the optimization model and obtain optimal coverage, this paper introduces the GWO algorithm. The objective function in the optimization model is used as the fitness function of GWO, the monitoring area S is used as the search range, and the current coordinate values of all n sensor nodes deployed in the monitoring area are used as the dimension values of grey wolf individuals. Therefore, each individual is a 2n-dimensional vector. The first n dimensions store the X-axis coordinates of all deployed nodes, and the last n dimensions store the Y-axis coordinates of all deployed nodes. Each individual in the population represents a node deployment scheme of WSN, and the optimal individual obtained by iterative solution is the best deployment scheme of the WSN.
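The 2n-dimensional encoding described above can be sketched as follows; `decode` is a hypothetical helper name, and clipping coordinates to the area bounds is one simple way to enforce the constraint that each B_i lies in S.

```python
import numpy as np

def decode(individual, L, W):
    """Split a 2n-dimensional grey wolf into n node coordinates:
    the first n entries are X values, the last n are Y values.
    Clipping keeps every node inside the monitoring area S."""
    n = individual.size // 2
    x = np.clip(individual[:n], 0, L)
    y = np.clip(individual[n:], 0, W)
    return np.stack([x, y], axis=1)

# a 6-dimensional individual encodes 3 nodes in a 10 x 8 area
nodes = decode(np.array([3.0, 12.0, -1.0, 4.0, 5.0, 9.0]), L=10, W=8)
```

Each decoded array of node positions can then be scored with the coverage rate of Equation (4), which serves as the fitness function of the optimizer.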

2.3. GWO

The GWO algorithm is an optimization algorithm inspired by the predation behavior of grey wolf packs. It simulates the hierarchical system of wolf packs, dividing the wolves in the population into wolf α, wolf β, wolf δ, and wolf ω. Different wolves play different roles in hunting to achieve the purpose of searching. The mathematical model of the algorithm is as follows.
X(t+1) = X_p(t) - A \cdot D    (6)
In Equation (6), t represents the current iteration number, X(t) is the current position of the wolf, and X(t+1) is its new position. A is a parameter control vector that controls the movement direction of an individual grey wolf, and D represents the distance between the individual and its prey. The calculations of D and A are shown in Equations (7) and (8), respectively.
D = \left| C \cdot X_p(t) - X(t) \right|    (7)
A = 2a \cdot r_1 - a    (8)
C = 2 r_2    (9)
a = 2\left(1 - \frac{t}{T}\right)    (10)
X_p(t) represents the current position of the prey. C is the disturbance parameter used to correct the prey position, determined by Equation (9). r_1 and r_2 are random numbers in (0, 1), and a is the convergence factor, whose value decreases linearly from 2 to 0 as calculated by Equation (10). t represents the current iteration number, and T represents the total number of iterations.
Wolf α, wolf β, and wolf δ correspond to the optimal, suboptimal, and third-best solutions in the GWO algorithm. The ω wolves continuously update their positions during the iteration process based on the information of these three solutions; the movement formulas are shown in Equations (11) and (12).
D_\alpha = |C_1 \cdot X_\alpha - X(t)|, \quad D_\beta = |C_2 \cdot X_\beta - X(t)|, \quad D_\delta = |C_3 \cdot X_\delta - X(t)|    (11)
X_1 = X_\alpha - A_1 \cdot D_\alpha, \quad X_2 = X_\beta - A_2 \cdot D_\beta, \quad X_3 = X_\delta - A_3 \cdot D_\delta    (12)
where D_\alpha, D_\beta, and D_\delta represent the distances between an individual grey wolf and wolf α, wolf β, and wolf δ, respectively. A_1, A_2, and A_3 are parameter control vectors that control the movement direction of the individual, calculated by Equation (8); C_1, C_2, and C_3 are perturbation parameters used to correct the positions of wolf α, wolf β, and wolf δ, calculated by Equation (9). X_\alpha, X_\beta, and X_\delta represent the position vectors of wolf α, wolf β, and wolf δ, and X_1, X_2, and X_3 are temporary positions.
Throughout the algorithm, wolf α, wolf β, and wolf δ determine the position of the prey and move towards it, while guiding the other wolves to follow, collaborating to encircle and hunt the prey and ultimately obtain the optimal solution. The position update of the wolf pack is shown in Equation (13), and the entire process is shown in Figure 2.
X(t+1) = \frac{X_1 + X_2 + X_3}{3}    (13)
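The standard GWO update of Equations (6)–(13) can be condensed into a short sketch. The sphere objective and all parameter values below are illustrative toy choices, not the paper's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def gwo_step(wolves, fit, t, T, lb, ub):
    """One iteration of the standard GWO update, Equations (6)-(13)."""
    order = np.argsort([fit(w) for w in wolves])      # rank the pack by fitness
    x_alpha, x_beta, x_delta = (wolves[i] for i in order[:3])
    a = 2 * (1 - t / T)                               # Equation (10)
    new = np.empty_like(wolves)
    for k, x in enumerate(wolves):
        moves = []
        for x_lead in (x_alpha, x_beta, x_delta):
            A = 2 * a * rng.random(x.size) - a        # Equation (8)
            C = 2 * rng.random(x.size)                # Equation (9)
            D = np.abs(C * x_lead - x)                # Equations (7)/(11)
            moves.append(x_lead - A * D)              # Equation (12)
        new[k] = np.clip(np.mean(moves, axis=0), lb, ub)  # Equation (13)
    return new

# toy run: minimize the 2-D sphere function on [-10, 10]^2
sphere = lambda x: float(np.sum(x * x))
wolves = rng.uniform(-10, 10, size=(20, 2))
for t in range(200):
    wolves = gwo_step(wolves, sphere, t, 200, -10, 10)
best = min(sphere(w) for w in wolves)
```

Because a shrinks to 0, the step vectors A·D vanish late in the run and the pack collapses onto the three leaders, which is the switch from exploration to exploitation described above.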

3. Improved GWO Based on Multiple Improvement Strategies

Analyzing the iterative process of the standard GWO algorithm, the three leader wolves mainly guide the optimization process. During global search, the step size is large, the search accuracy needs improvement, and the algorithm easily converges prematurely and falls into local optima. Moreover, although the standard GWO has clear advantages over other standard swarm intelligence algorithms in solving WSN coverage problems, there is still room for improvement. Therefore, this paper proposes the improved grey wolf optimizer with multi-strategies (IGWO-MS) to effectively remedy these shortcomings of GWO. The specific improvement strategies are as follows.

3.1. Sobol Sequence Initialization

As with most meta-heuristic algorithms, GWO initializes the population randomly, which makes it difficult to guarantee a balanced initial distribution. Evenly distributed random numbers yield a better sample distribution, and the results obtained by the algorithm are correspondingly more stable. Therefore, the initial population should be distributed as evenly as possible in the search space to ensure sample diversity and algorithm stability, thereby improving performance.
A low-discrepancy sequence, the basis of the quasi-Monte Carlo (QMC) method, is distributed more evenly in a given space than a pseudo-random sequence, because QMC fills the multidimensional unit hypercube with points that are as uniform as possible by choosing reasonable sampling directions [22]. Therefore, QMC offers higher efficiency and uniformity when dealing with probabilistic problems. In particular, the Sobol sequence, as a kind of QMC method, is not only uniformly distributed but also efficient to generate [23]. These properties make the Sobol sequence suitable for initializing populations in high-dimensional spaces. For comparison, two initial populations with size 500 and dimension 2 in the range [−100, 100] are generated using the random distribution method and the Sobol sequence method, respectively, as shown in Figure 3a,b. The initialized population obtained by the Sobol sequence method in Figure 3b is visibly more uniformly distributed and covers the solution space more completely than that obtained by the random distribution method in Figure 3a.
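A Sobol-initialized population can be generated with SciPy's `scipy.stats.qmc` module (assuming SciPy ≥ 1.7 is available); a power-of-two sample size is used below because Sobol sequences retain their balance properties at such sizes.

```python
import numpy as np
from scipy.stats import qmc

def sobol_population(pop_size, dim, lb, ub, seed=0):
    """Quasi-random initial population: Sobol points scaled into [lb, ub]."""
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    unit = sampler.random(pop_size)                 # points in [0, 1)^dim
    return qmc.scale(unit, [lb] * dim, [ub] * dim)  # stretch to the search box

# 512 points in 2-D on [-100, 100]^2 (a power of two avoids balance warnings)
pop = sobol_population(512, 2, -100, 100)
```

Plotting `pop` against `np.random.uniform(-100, 100, (512, 2))` reproduces the qualitative contrast of Figure 3: the Sobol points show no large gaps or clusters.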

3.2. Introducing Nonlinear Convergence Factors and Reverse Learning Strategies

The standard GWO algorithm uses a linear function as the convergence factor, whose value decreases linearly from 2 to 0. This mechanism lets the algorithm spend half of the iterations on global exploration and half on local exploitation, achieving good overall performance on simple practical problems. For complex problems that require high solution accuracy, however, this generic split is insufficient: more time must be spent on early exploration to have a greater chance of discovering the global optimum. Therefore, this paper adopts a nonlinear convergence factor to enhance the global exploration ability of the algorithm: in the early iterations, the convergence factor decreases slowly, so more time is spent on global exploration. The calculation method is shown in Equation (14), and the comparison with the original algorithm's convergence factor is shown in Figure 4. A reverse learning strategy is introduced to further enhance global search capability, calculated as shown in Equation (15). The reverse learning strategy computes reverse solutions for the wolves with the best and worst fitness values in the current population. At the same time, the three leader wolves are randomly mirror-mapped using Equation (16), and either the reverse solution or the mirror solution is randomly retained for the update. On the one hand, this enhances the diversity of the next generation and increases the probability of the algorithm jumping out of local optima; on the other hand, the reverse solutions retain the beneficial search information of the elite individuals in the current population, accelerating the convergence of the algorithm.
a = \sqrt{4 - \left(\frac{2t}{T}\right)^2}    (14)
X^*(t) = UB + LB - X(t)    (15)
X'(t) = \begin{cases} UB + LB - X(t), & \text{if } X(t) < (UB + LB)/2 \\ X(t), & \text{otherwise} \end{cases}    (16)
where a represents the convergence factor, t the current iteration number, and T the total number of iterations. X(t) is the current position of the wolf, X^*(t) is the reverse solution, and X'(t) is the mirror solution. UB and LB represent the upper and lower limits of the search space, respectively.
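Under one plausible reading of the reconstructed Equations (14)–(16) (the exact nonlinear form of the convergence factor in the original typesetting is uncertain), the three operators can be sketched as:

```python
import numpy as np

def nonlinear_a(t, T):
    """One plausible reading of Equation (14): a decays from 2 to 0,
    slowly at first and quickly near the end of the run."""
    return np.sqrt(4 - (2 * t / T) ** 2)

def opposite(x, lb, ub):
    """Reverse (opposition-based) solution, Equation (15)."""
    return ub + lb - x

def mirror(x, lb, ub):
    """Mirror mapping, Equation (16): reflect only values below the
    midpoint of the search interval."""
    return np.where(x < (ub + lb) / 2, ub + lb - x, x)
```

Both `opposite` and `mirror` are cheap elementwise transforms, so evaluating them alongside the original positions roughly doubles the sampled candidates without extra search machinery.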

3.3. Introducing Scope Space Search

In GWO, the large convergence step size in the early stage makes the algorithm prone to falling into local optima. To expand the search space and improve population diversity, the improved algorithm, IGWO-MS, incorporates a range space search mechanism. Grey wolf individuals search within a certain range around the positions of the three leader wolves; if the fitness value of a searched position is lower than that of the position given by the original update rule, the individual moves to the searched position instead. During the search process, each grey wolf individual targets wolf α, wolf β, and wolf δ within a certain neighborhood, calculates the corresponding fitness values, and finally moves towards the position with the lowest fitness value. This method increases the search range of the population, avoids getting stuck in local optima too early, and improves search accuracy. The specific calculation formula is shown in Equation (17), and the schematic diagram of the position update is shown in Figure 5.
F(X(t+1)) = \min\left\{ F\!\left(\pm\frac{\sqrt{X_1^2 + X_2^2 + X_3^2}}{G}\right),\; F(\pm b \cdot D_\alpha \cdot r_3 + X_1),\; F(\pm b \cdot D_\beta \cdot r_4 + X_2),\; F(\pm b \cdot D_\delta \cdot r_5 + X_3) \right\}    (17)
where F(x) is the fitness function and b is the search radius, which decreases nonlinearly with the number of iterations as given by Equation (18). G is determined by Equation (19), and r_3, r_4, and r_5 are random numbers in (0, 1).
b = \frac{\sqrt{\varepsilon^2 - \sigma^2 \varepsilon^2}}{\varepsilon}    (18)
G = X_1 + X_2 + X_3 + \theta    (19)
In the formula, ε is the maximum radius of the range search, and the parameter σ increases linearly from 0 to ε. In this paper, ε is set to 1 and σ = εt/T, where t is the current iteration number and T is the total number of iterations. θ takes the value 10^(-10) to avoid G being zero.
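Assuming σ = εt/T as described above, the search radius of Equation (18) (as reconstructed here) reduces, for ε = 1, to b = √(1 − (t/T)²); a minimal sketch:

```python
def search_radius(t, T, eps=1.0):
    """Search radius b of Equation (18) (as reconstructed), with
    sigma = eps * t / T growing linearly from 0 to eps. For eps = 1
    this reduces to sqrt(1 - (t / T) ** 2)."""
    sigma = eps * t / T
    return (eps ** 2 - sigma ** 2 * eps ** 2) ** 0.5 / eps
```

The radius stays close to ε through the early iterations and collapses rapidly near the end, so the neighborhood sampling around the three leaders is wide while exploring and tight while converging.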

3.4. Dimension-by-Dimension Update Strategy

The standard GWO algorithm updates all dimension values of a solution as a whole before comparing fitness during iteration. Although this reduces the complexity of the algorithm, it ignores the impact that a better value in a single dimension can have on the fitness of the solution, thereby limiting the accuracy of the algorithm [16]. Therefore, this paper replaces the overall update strategy of the original algorithm with a dimension-by-dimension update strategy for the position update of the three leader wolves, reducing the mutual influence between dimensions in high-dimensional optimization problems. When the j-th dimension value of a wolf's position is updated, its fitness value is immediately calculated and compared with the original fitness value. If the updated fitness value is better, the new j-th dimension value is retained; otherwise, it is discarded and the original j-th dimension value is kept. The next dimension is then updated in the same way. The dimension-wise update formula is shown in Equation (20).
X'_\alpha[j] = X_\alpha[j] + r_6 \cdot Levy_\lambda[j] \cdot (X_\beta[j] - X_\delta[j])
X'_\beta[j] = X_\beta[j] + r_7 \cdot Levy_\lambda[j] \cdot (X_\alpha[j] - X_\delta[j])
X'_\delta[j] = X_\delta[j] + r_8 \cdot Levy_\lambda[j] \cdot (X_\alpha[j] - X_\beta[j])    (20)
where X_\alpha[j], X_\beta[j], and X_\delta[j] are the j-th dimension values of the current three leader wolves; r_6, r_7, and r_8 are random numbers in (0, 1). Levy_\lambda[j] denotes the j-th dimensional random search step, and Levy_\lambda is a random number following the Levy distribution, calculated as shown in Equation (21).
Levy_λ = φ × μ / |ν|^(1/λ)
In Formula (21), μ and ν obey normal distributions, with μ ~ N(0, φ²) and ν ~ N(0, 1). λ = 1.2 controls the shape of the distribution. The expression for φ is shown in Equation (22).
φ = [Γ(1 + λ) × sin(πλ/2) / (Γ((1 + λ)/2) × λ × 2^((λ−1)/2))]^(1/λ)
where Γ(x) represents the gamma function.
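Equations (21) and (22) together are Mantegna's algorithm for sampling Levy-distributed step lengths. A minimal stdlib-only sketch (function names are our own, not from the paper):

```python
import math
import random

LAMBDA = 1.2  # distribution parameter lambda from Equation (21)

def mantegna_phi(lam=LAMBDA):
    """Scale factor phi of Equation (22) (Mantegna's algorithm)."""
    num = math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
    den = math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2)
    return (num / den) ** (1 / lam)

def levy_step(lam=LAMBDA):
    """One Levy-distributed random step: mu ~ N(0, phi^2), nu ~ N(0, 1)."""
    phi = mantegna_phi(lam)
    mu = random.gauss(0.0, phi)   # std dev phi, i.e. variance phi^2
    nu = random.gauss(0.0, 1.0)
    return mu / abs(nu) ** (1 / lam)
```

The heavy-tailed steps produced by `levy_step` occasionally make long jumps, which is what allows the perturbation to pull the search out of local optima.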

3.5. Algorithm Steps

Based on the above improvements, the algorithm implementation proposed in this article can be divided into the following seven steps:
Step1:
Determine the relevant parameters of the algorithm, including population size N, maximum number of iterations T, and search domain range.
Step2:
Initialize the wolf pack using Sobol sequences within the search domain.
Step3:
Calculate fitness values and update wolf α , wolf β , and wolf δ .
Step4:
Calculate the convergence factor and use Equations (15) and (16) to perform reverse learning and mirror mapping on the wolf pack.
Step5:
Grey wolf individuals update their positions according to Equation (17).
Step6:
Perform the dimension-by-dimension update operation on the three head wolves according to Equation (20).
Step7:
Determine whether the algorithm has found the optimal solution or reached the maximum number of iterations. If so, terminate the algorithm and output the optimal solution; otherwise, jump to Step 3.
The algorithm flow chart is shown in Figure 6.
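The seven steps above can be condensed into a schematic main loop. This is an illustrative sketch, not the authors' implementation: the Sobol initialization, reverse learning, mirror mapping, and the exact update formulas of Equations (15)–(17) and (20) are replaced by simplified placeholders, and all identifiers are our own.

```python
import random

def igwo_ms(fitness, dim, lb, ub, N=30, T=200):
    """Schematic IGWO-MS main loop (Steps 1-7); illustrative only."""
    # Steps 1-2: parameters and initial pack (uniform sampling stands in
    # for the Sobol sequence to keep the sketch dependency-free)
    pack = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(N)]
    fit = [fitness(x) for x in pack]
    for t in range(T):
        # Step 3: rank by fitness and take the three head wolves
        order = sorted(range(N), key=lambda i: fit[i])
        alpha, beta, delta = (pack[i][:] for i in order[:3])
        a = 2 * (1 - t / T)  # convergence factor shrinking over time
        # Steps 4-5 (placeholder): move each wolf around the head wolves' mean
        for i in range(N):
            for j in range(dim):
                target = (alpha[j] + beta[j] + delta[j]) / 3
                step = a * (2 * random.random() - 1) * abs(target - pack[i][j])
                pack[i][j] = min(ub, max(lb, target + step))
            fit[i] = fitness(pack[i])
        # Step 6: greedy dimension-by-dimension refinement of the best wolf,
        # keeping a dimension's new value only if it improves fitness
        b = min(range(N), key=lambda i: fit[i])
        for j in range(dim):
            old = pack[b][j]
            pack[b][j] = min(ub, max(lb, old + random.gauss(0.0, 0.1)))
            new_fit = fitness(pack[b])
            if new_fit < fit[b]:
                fit[b] = new_fit
            else:
                pack[b][j] = old
    # Step 7: return the best solution found
    b = min(range(N), key=lambda i: fit[i])
    return pack[b], fit[b]
```

The per-dimension accept/reject loop in Step 6 is the key structural difference from standard GWO, which only compares fitness after all dimensions have been updated together.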

4. Experimental Design and Analysis

To verify the effectiveness of the algorithm improvements and their application performance in WSNs, ablation experiments and application simulations were conducted. The experiments ran on an Intel(R) Core i5-10500 CPU (3.10 GHz) with 8 GB of memory under the Windows 10 64-bit operating system, with programs implemented in Python 3.7.

4.1. Design of Ablation Experiments

To verify the effectiveness of each improvement strategy, ablation experiments were conducted on the improved algorithm, IGWO-MS. The comparison algorithms are listed in Table 1, and the 15 benchmark functions in Table 2 were selected as test functions [24]. The population size was set to 30 and the number of iterations to 1000. To ensure the stability of the experimental data, each algorithm was run independently 30 times, and the average and standard deviation were taken as performance comparison metrics. The experimental results are shown in Table 3.
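The 30-run protocol behind the MEAN and STD columns of Table 3 amounts to a simple harness like the following sketch; `repeat_trials` and `toy_run` are hypothetical names, and the stand-in optimizer is only for demonstration:

```python
import random
import statistics

def repeat_trials(optimizer, runs=30):
    """Run a zero-argument optimizer `runs` times independently and return
    the mean (MEAN) and sample standard deviation (STD) of its best-fitness
    results, mirroring the 30-run protocol behind Table 3."""
    results = [optimizer() for _ in range(runs)]
    return statistics.mean(results), statistics.stdev(results)

# Hypothetical stand-in for one independent optimization run:
def toy_run():
    return min(random.uniform(0.0, 1.0) for _ in range(50))
```

In the paper's setting, `optimizer` would be one full run of GWO, GWO1–GWO4, or IGWO-MS on a given benchmark function.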
In Table 3, F1–F15 correspond to the 15 benchmark test functions from Table 2; MEAN and STD denote the average and standard deviation of the 30 runs, respectively. The ablation comparison shows that all four strategies contribute to the IGWO-MS algorithm. The strategy in Section 3.1 acts on the population initialization stage, distributing the wolf pack as uniformly as possible so that the algorithm obtains better results on problems whose optimal solutions lie in remote regions of the search space, avoids large variation across repeated runs, and thus remains stable. Although the tabulated difference between GWO1 and IGWO-MS is small, the gap between the maximum and minimum values over the 30 independent runs is larger for GWO1 than for IGWO-MS, owing to the influence of the initial population distribution. The strategy in Section 3.2 acts on the global search stage, increasing the chance of discovering the global optimum and of jumping out of local optima, which improves solution quality on complex problems. The experimental data show that, across the 15 test functions, the results of IGWO-MS are less than or equal to those of GWO2, verifying that this strategy helps improve solution quality. The strategy in Section 3.3 improves the position update formula of individual grey wolves, enlarging the search range and speed of individuals, and yields the greatest overall gain in optimization accuracy and speed.
Overall, GWO3 shows little improvement over the standard GWO algorithm and even performs worse on the F5, F6, F12, and F13 test functions, indicating that the strategy in Section 3.3 makes the most significant contribution among the improvement strategies. The strategy in Section 3.4 updates the positions of the three head wolves dimension by dimension, providing a small-range correction that helps the algorithm on problems whose solutions change only slightly near the optimum. On functions such as F3, F5, F9, and F10, the optimization accuracy of GWO4 is slightly lower than that of GWO1, indicating that this strategy can further refine the results. In general, different improvement strategies affect different test functions to different degrees because of their different internal mechanisms.

4.2. Wireless Sensor Network Coverage Experiment

The coverage rate, C_r (see Equation (4)), is used as the fitness value to evaluate the effectiveness of IGWO-MS on the WSN coverage problem. The experimental parameters are shown in Table 4. Standard PSO [21], standard GWO [16], DGWO [18], GWO-THW [20], and IGWO-MS are each used to optimize WSN coverage, and the superiority of IGWO-MS over the other algorithms is verified by comparing the coverage rate C_r each achieves. The parameter settings of these algorithms are shown in Table 5.
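Using the rasterization of Table 4 (100 m × 100 m area, 100 × 100 grid, sensing radius 12 m), the coverage rate C_r can be computed as the fraction of grid-cell centres covered by at least one node under the Boolean disc model. A sketch under those assumptions (the function name is our own; Equation (4) itself is not reproduced here):

```python
def coverage_rate(nodes, area=100.0, grid=100, r=12.0):
    """Fraction of grid-cell centres covered by at least one sensor node
    (Boolean disc model; defaults follow Table 4). `nodes` is a list of
    (x, y) coordinates in metres."""
    cell = area / grid
    covered = 0
    for gx in range(grid):
        for gy in range(grid):
            px, py = (gx + 0.5) * cell, (gy + 0.5) * cell  # cell centre
            # a cell counts as covered if any node's disc contains its centre
            if any((px - x) ** 2 + (py - y) ** 2 <= r * r for x, y in nodes):
                covered += 1
    return covered / (grid * grid)
```

A single node at the centre covers roughly πr²/A ≈ 4.5% of the area, which is why 20–30 well-placed nodes are needed to approach full coverage.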

4.2.1. WSN Application Simulation Results

To ensure the accuracy and stability of the experimental data, two scenarios with 20 and 30 nodes are compared in the simulation experiment. Each algorithm is run independently 10 times, and the best coverage, average coverage, and standard deviation over those runs are taken as comparison data. The experimental results are shown in Table 6.
As can be seen from Table 6, when the number of nodes is 20, the best coverage rate of IGWO-MS is 13.19, 1.68, 4.92, and 3.62 percentage points higher than that of PSO, GWO, DGWO, and GWO-THW, respectively, and the average coverage rate is 16.45, 3.13, 11.25, and 6.19 percentage points higher. Similarly, when the number of nodes is 30, the best coverage of IGWO-MS is 15.23, 1.36, 5.55, and 3.66 percentage points higher than that of PSO, GWO, DGWO, and GWO-THW, respectively, and the average coverage is 16.78, 1.56, 10.91, and 8.55 percentage points higher. These results indicate that IGWO-MS performs well in optimizing WSN coverage compared with the other algorithms.

4.2.2. Sensor Node Distribution Map

To demonstrate more intuitively the performance of the IGWO-MS algorithm on the WSN coverage problem, Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11 show the node distribution maps produced by each comparison algorithm. In these figures, * marks a sensor node, and each circular area represents that node's sensing range.
From Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11, it can be seen visually that, with both 20 and 30 sensor nodes, the optimization effects of PSO, GWO, DGWO, and GWO-THW are clearly inferior to that of IGWO-MS, leaving redundantly monitored areas of varying degrees and incompletely covered regions. In contrast, the IGWO-MS algorithm achieves a more uniform node distribution, less monitoring redundancy, and wider network coverage.

4.2.3. Coverage Convergence Curve

Figure 12 shows the coverage convergence curves of each algorithm over 200 iterations for the 20-node and 30-node cases. The convergence accuracy and speed of these curves give further insight into each algorithm's performance on the WSN coverage problem.
From Figure 12, it can be seen that in optimizing WSN coverage, IGWO-MS enhances both early-stage optimization ability and late-stage convergence accuracy while retaining the advantages of GWO. The other two improved GWO algorithms, DGWO and GWO-THW, enhance early optimization ability but do not balance convergence speed and accuracy in the later stage, making it difficult for them to escape local optima late in the iteration. Therefore, compared with the comparison algorithms, IGWO-MS is more competitive overall in search accuracy, convergence speed, and stability.

5. Conclusions

To achieve maximum WSN coverage, this paper proposes an improved grey wolf optimizer with multi-strategies (IGWO-MS). First, IGWO-MS uses Sobol sequences to initialize the population, making the population distribution more uniform, covering the solution space more completely, and improving the stability of the algorithm. By introducing a range search space strategy, the search range of the population is enlarged, premature convergence to local optima is avoided, and search accuracy is improved. Second, the algorithm performs reverse solution operations on elite and disadvantaged individuals and randomly selects individuals from the population for mirror mapping, enhancing population diversity, strengthening the algorithm's ability to escape local optima, and better balancing global exploration and local exploitation. Finally, the algorithm uses a dimension-by-dimension update strategy to update the dimensions of the three head wolves independently, accelerating convergence. Furthermore, this paper applies IGWO-MS to the WSN coverage optimization problem to improve the effective coverage of nodes. Simulation experiments verify that, compared with standard PSO, standard GWO, and two GWO variants (DGWO and GWO-THW), IGWO-MS improves WSN coverage more effectively, distributes nodes more uniformly, and reduces coverage redundancy. In summary, applying IGWO-MS can effectively optimize the coverage problem of WSNs and reduce deployment costs.

Author Contributions

Conceptualization, Y.O.; methodology, Y.O. and P.-F.Y.; validation, F.Q. and A.M.Z.; formal analysis, K.-Q.Z.; investigation, L.-P.M.; writing—original draft, Y.O.; writing—review and editing, F.Q. and A.M.Z.; funding acquisition, L.-P.M. and K.-Q.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (62266019 and 62066016); Scientific Research Project of Education Department of Hunan Province (22B0549 and 22C0282).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Rawat, P.; Singh, K.D.; Chaouchi, H.; Bonnin, J.M. Wireless sensor networks: A survey on recent developments and potential synergies. J. Supercomput. 2014, 68, 1–48. [Google Scholar] [CrossRef]
  2. Jin, M.; Gu, X.; He, Y.; Wang, Y. Wireless Sensor Networks. In Conformal Geometry; Springer: Cham, Switzerland, 2018. [Google Scholar] [CrossRef]
  3. Li, R.; Gong, D.; Tan, Y. Research on Optimal Design of Wireless Sensor Network with Particle Swarm Optimization and Improved Firefly Algorithm. In Proceedings of the 2023 IEEE 3rd International Conference on Power, Electronics and Computer Applications (ICPECA), Shenyang, China, 29–31 January 2023; pp. 1099–1104. [Google Scholar]
  4. Wang, C.; Zhu, R.; Jiang, Y.; Liu, W.; Jeon, S.-W.; Sun, L.; Wang, H. A Scheme Library-Based Ant Colony Optimization with 2-Opt Local Search for Dynamic Traveling Salesman Problem. Comput. Model. Eng. Sci. 2023, 135, 1209–1228. [Google Scholar] [CrossRef]
  5. Wang, J.; Liu, Y.; Rao, S.; Zhou, X.; Hu, J. A novel self-adaptive multi-strategy artificial bee colony algorithm for coverage optimization in wireless sensor networks. Ad Hoc Netw. 2023, 150, 103284. [Google Scholar] [CrossRef]
  6. Dao, T.K.; Nguyen, T.D.; Nguyen, V.T. An Improved Honey Badger Algorithm for Coverage Optimization in Wireless Sensor Network. J. Internet Technol. 2023, 24, 363–377. [Google Scholar]
  7. Wang, J.; Zhu, D.; Ding, Z.; Gong, Y. WSN Coverage Optimization based on Improved Sparrow Search Algorithm. In Proceedings of the 2023 15th International Conference on Advanced Computational Intelligence (ICACI), Seoul, Republic of Korea, 6–9 May 2023; pp. 1–8. [Google Scholar]
  8. Wei, Y.; Wei, X.; Huang, H.; Bi, J.; Zhou, Y.; Du, Y. SSMA: Simplified slime mould algorithm for optimization wireless sensor network coverage problem. Syst. Sci. Control Eng. 2022, 10, 662–685. [Google Scholar] [CrossRef]
  9. Song, R.; Xu, Z.; Liu, Y. Wireless Sensor Network Coverage Optimization Based on Fruit Fly Algorithm. Int. J. Online Eng. (iJOE) 2018, 14, 58–70. [Google Scholar] [CrossRef]
  10. Zeng, C.; Qin, T.; Tan, W.; Lin, C.; Zhu, Z.; Yang, J.; Yuan, S. Coverage Optimization of Heterogeneous Wireless Sensor Network Based on Improved Wild Horse Optimizer. Biomimetics 2023, 8, 70. [Google Scholar] [CrossRef] [PubMed]
  11. Chen, L.; Xu, Y.; Xu, F.; Hu, Q.; Tang, Z. Balancing the trade-off between cost and reliability for wireless sensor networks: A multi-objective optimized deployment method. Appl. Intell. 2022, 53, 9148–9173. [Google Scholar] [CrossRef]
  12. Cao, L.; Wang, Z.; Wang, Z.; Wang, X.; Yue, Y. An Energy-Saving and Efficient Deployment Strategy for Heterogeneous Wireless Sensor Networks Based on Improved Seagull Optimization Algorithm. Biomimetics 2023, 8, 231. [Google Scholar] [CrossRef] [PubMed]
  13. Yarinezhad, R.; Hashemi, S.N. A sensor deployment approach for target coverage problem in wireless sensor networks. J. Ambient. Intell. Humaniz. Comput. 2020, 14, 5941–5956. [Google Scholar] [CrossRef]
  14. Li, Y.; Yao, Y.; Hu, S.; Wen, Q.; Zhao, F. Coverage Enhancement Strategy for WSNs Based on Multiobjective Ant Lion Optimizer. IEEE Sens. J. 2023, 23, 13762–13773. [Google Scholar] [CrossRef]
  15. Cheng, J.; Fang, Y.; Jiang, N. Research on Wireless Sensor Networks Coverage Based on Fruit Fly Optimization Algorithm. In Proceedings of the 2023 IEEE 3rd International Conference on Power, Electronics and Computer Applications (ICPECA), Shenyang, China, 29–31 January 2023; pp. 1109–1115. [Google Scholar]
  16. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  17. Kohli, M.; Arora, S. Chaotic grey wolf optimization algorithm for constrained optimization problems. J. Comput. Des. Eng. 2017, 5, 458–472. [Google Scholar] [CrossRef]
  18. Liu, L.; Fu, S.C.; Huang, H.X. A Grey Wolf Optimization Algorithm based on drunkard strolling and reverse learning. Comput. Eng. Sci. 2021, 43, 1558–1566. (In Chinese) [Google Scholar]
  19. Singh, N.; Singh, S.B. Hybrid Algorithm of Particle Swarm Optimization and Grey Wolf Optimizer for Improving Convergence Performance. J. Appl. Math. 2017, 2017, 2030489. [Google Scholar] [CrossRef]
  20. Ou, Y.; Zhou, K.Q.; Yin, P.F.; Liu, X.W. Improved grey wolf algorithm based on dual convergence factor strategy. J. Comput. Appl. 2023, 43, 2679–2685. (In Chinese) [Google Scholar]
  21. Ozcan, E.; Mohan, C. Particle Swarm Optimization: Surfing the Waves. In Proceedings of the Congress on Evolutionary Computation-CEC99, Washington, DC, USA, 6–9 July 1999. [Google Scholar]
  22. Eliáš, J.; Vořechovský, M.; Sadílek, V. Periodic version of the minimax distance criterion for Monte Carlo integration. Adv. Eng. Softw. 2020, 149, 102900. [Google Scholar] [CrossRef]
  23. Liu, X.; Zheng, S.; Wu, X.; Chen, D.; He, J. Research on a seismic connectivity reliability model of power systems based on the quasi-Monte Carlo method. Reliab. Eng. Syst. Saf. 2021, 215, 107888. [Google Scholar] [CrossRef]
  24. Ye, S.; Zhou, K.; Zain, A.M.; Wang, F.; Yusoff, Y. A modified harmony search algorithm and its applications in weighted fuzzy production rule extraction. Front. Inf. Technol. Electron. Eng. 2023, 24, 1574–1590. [Google Scholar] [CrossRef]
Figure 1. The correlation between the monitoring area, monitoring point, sensor node, and sensing radius.
Figure 2. Location update schematic diagram in GWO algorithm.
Figure 3. (a) Schematic diagram of randomly initialized population; (b) schematic diagram of Sobol initialization population.
Figure 4. Convergence factor comparison curve.
Figure 5. Location update schematic diagram in IGWO-MS algorithm.
Figure 6. IGWO-MS algorithm flow chart.
Figure 7. Optimize coverage of sensor networks using PSO.
Figure 8. Optimize coverage of sensor networks using GWO.
Figure 9. Optimize coverage of sensor networks using DGWO.
Figure 10. Optimize coverage of sensor networks using GWO-THW.
Figure 11. Optimize coverage of sensor networks using IGWO-MS.
Figure 12. Convergence curves of each algorithm.
Table 1. Comparison algorithm name for ablation experiment.

Algorithm Name | Function Description
GWO | Standard grey wolf algorithm
GWO1 | Only remove the strategy in Section 3.1 of this article
GWO2 | Only remove the strategy in Section 3.2 of this article
GWO3 | Only remove the strategy in Section 3.3 of this article
GWO4 | Only remove the strategy in Section 3.4 of this article
IGWO-MS | Combining all strategies in this article
Table 2. Detailed description of the test functions F1–F15.

Function | Dimension | Interval | f_min
F1 = Σ_{i=1}^{d} x_i^2 | 30 | [−100, 100] | 0
F2 = Σ_{i=1}^{d} |x_i| + Π_{i=1}^{d} |x_i| | 30 | [−10, 10] | 0
F3 = Σ_{i=1}^{d−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2] | 30 | [−30, 30] | 0
F4 = Σ_{i=1}^{d} (⌊x_i + 0.5⌋)^2 | 30 | [−100, 100] | 0
F5 = Σ_{i=1}^{d} i·x_i^4 + random[0, 1) | 30 | [−1.28, 1.28] | 0
F6 = Σ_{i=1}^{d} [x_i^2 − 10cos(2πx_i) + 10] | 30 | [−5.12, 5.12] | 0
F7 = −20exp(−0.2·√((1/n)Σ_{i=1}^{d} x_i^2)) − exp((1/n)Σ_{i=1}^{d} cos(2πx_i)) + 20 + e | 30 | [−32, 32] | 0
F8 = (1/4000)Σ_{i=1}^{d} x_i^2 − Π_{i=1}^{d} cos(x_i/√i) + 1 | 30 | [−600, 600] | 0
F9 = sin^2(3πx_1) + (x_1 − 1)^2[1 + sin^2(3πx_2)] + (x_2 − 1)^2[1 + sin^2(2πx_2)] | 2 | [−10, 10] | 0
F10 = sin^2(πw_1) + Σ_{i=1}^{d−1} (w_i − 1)^2[1 + 10sin^2(πw_i + 1)] + (w_d − 1)^2[1 + sin^2(2πw_d)], w_i = 1 + (x_i − 1)/4, i = 1, 2, …, d | 30 | [−10, 10] | 0
F11 = x_1^2 + 2x_2^2 − 0.3cos(3πx_1) − 0.4cos(4πx_2) + 0.7 | 2 | [−100, 100] | 0
F12 = Σ_{i=1}^{d} |x_i·sin(x_i) + 0.1x_i| | 30 | [−10, 10] | 0
F13 = Σ_{i=1}^{d/4} [(x_{4i−3} + 10x_{4i−2})^2 + 5(x_{4i−1} − x_{4i})^2 + (x_{4i−2} − 2x_{4i−1})^4 + 10(x_{4i−3} − x_{4i})^4] | 30 | [−4, 5] | 0
F14 = −(x_2 + 47)·sin(√|x_2 + x_1/2 + 47|) − x_1·sin(√|x_1 − (x_2 + 47)|) | 2 | [−512, 512] | −959.6407
F15 = Σ_{i=1}^{d} i·x_i^2 | 30 | [−10, 10] | 0
Table 3. Ablation experimental data.

Function | Index | GWO | GWO1 | GWO2 | GWO3 | GWO4 | IGWO-MS
F1 | MEAN | 1.28196 × 10^−67 | 0 | 0 | 6.50489 × 10^−78 | 0 | 0
F1 | STD | 2.259 × 10^−67 | 0 | 0 | 3.49025 × 10^−77 | 0 | 0
F2 | MEAN | 5.92048 × 10^−38 | 0 | 0 | 4.68638 × 10^−52 | 0 | 0
F2 | STD | 8.5519 × 10^−38 | 0 | 0 | 9.17064 × 10^−52 | 0 | 0
F3 | MEAN | 2.94292 × 10^−7 | 1.2163 × 10^−10 | 1.52901 × 10^−10 | 3.40109 × 10^−9 | 3.80469 × 10^−10 | 4.10762 × 10^−11
F3 | STD | 6.60568 × 10^−7 | 2.79 × 10^−10 | 4.78877 × 10^−10 | 1.09997 × 10^−8 | 6.92608 × 10^−10 | 5.35137 × 10^−11
F4 | MEAN | 0 | 0 | 0 | 0 | 0 | 0
F4 | STD | 0 | 0 | 0 | 0 | 0 | 0
F5 | MEAN | 0.001031783 | 6.11784 × 10^−5 | 5.2724 × 10^−5 | 0.001344597 | 7.59683 × 10^−5 | 4.7024 × 10^−5
F5 | STD | 0.000420752 | 5.38 × 10^−5 | 4.3414 × 10^−5 | 0.000791463 | 6.4438 × 10^−5 | 3.79316 × 10^−5
F6 | MEAN | 4.74515592 | 0 | 0 | 22.14673804 | 0 | 0
F6 | STD | 5.55388896 | 0 | 0 | 17.08912765 | 0 | 0
F7 | MEAN | 1.55135 × 10^−14 | 0 | 0 | 7.34227 × 10^−15 | 0 | 0
F7 | STD | 2.87314 × 10^−15 | 0 | 0 | 1.85036 × 10^−15 | 0 | 0
F8 | MEAN | 0.003840763 | 0 | 0 | 0.001845759 | 0 | 0
F8 | STD | 0.0094247 | 0 | 0 | 0.007323805 | 0 | 0
F9 | MEAN | 1.0595 × 10^−7 | 3.13719 × 10^−13 | 1.48034 × 10^−12 | 7.44414 × 10^−13 | 3.1437 × 10^−11 | 3.11513 × 10^−13
F9 | STD | 9.22767 × 10^−8 | 1.27 × 10^−12 | 5.24444 × 10^−12 | 2.29998 × 10^−12 | 3.07837 × 10^−11 | 1.2374 × 10^−12
F10 | MEAN | 1.82298 × 10^−7 | 6.9659 × 10^−13 | 3.22518 × 10^−13 | 2.4864 × 10^−12 | 3.04255 × 10^−11 | 1.00321 × 10^−13
F10 | STD | 1.50501 × 10^−7 | 3.31 × 10^−12 | 9.47229 × 10^−13 | 1.04973 × 10^−11 | 3.26386 × 10^−11 | 3.7922 × 10^−13
F11 | MEAN | 0 | 0 | 0 | 0 | 0 | 0
F11 | STD | 0 | 0 | 0 | 0 | 0 | 0
F12 | MEAN | 0.000429345 | 0 | 0 | 0.026369829 | 0 | 0
F12 | STD | 0.000581895 | 0 | 0 | 0.122627073 | 0 | 0
F13 | MEAN | 1.62758 × 10^−5 | 0 | 0 | 0.000327951 | 0 | 0
F13 | STD | 1.7224 × 10^−5 | 0 | 0 | 0.000209212 | 0 | 0
F14 | MEAN | −873.0826528 | −959.6406627 | −959.6406627 | −959.6406627 | −959.6406627 | −959.6406627
F14 | STD | 97.30795395 | 5.78 × 10^−13 | 4.43333 × 10^−13 | 5.78152 × 10^−13 | 2.36403 × 10^−8 | 1.28477 × 10^−9
F15 | MEAN | 2.4622 × 10^−64 | 0 | 0 | 4.70441 × 10^−77 | 0 | 0
F15 | STD | 4.03569 × 10^−64 | 0 | 0 | 1.01831 × 10^−76 | 0 | 0
Table 4. Parameter setting of wireless sensor network coverage experiment.

Parameter | Value
Monitoring area | 100 m × 100 m
Number of nodes | 20/30
Node perception radius r | 12 m
Total number of iterations T | 200
Number of grids | 100 × 100
Table 5. Initialization parameters of all algorithms.

Comparison Algorithm | Parameter Settings
IGWO-MS | N = 30, ε = 1, θ = 10^−10
PSO [21] | N = 30, ω = 0.8, c_1 = 2, c_2 = 2
GWO [16] | N = 30
DGWO [18] | N = 30, step = 2
GWO-THW [20] | N = 30
Table 6. Comparison data of WSN application experiments.

Comparison Algorithm | Performance Index | Number of Nodes 20 | Number of Nodes 30
PSO | Best | 69.83% | 82.77%
PSO | Mean | 65.75% | 79.74%
PSO | Std | 0.024175571 | 0.014594554
GWO | Best | 81.34% | 96.64%
GWO | Mean | 79.07% | 94.96%
GWO | Std | 0.042754519 | 0.009961755
DGWO | Best | 78.1% | 92.45%
DGWO | Mean | 70.95% | 85.61%
DGWO | Std | 0.0419417 | 0.050176905
GWO-THW | Best | 79.40% | 94.34%
GWO-THW | Mean | 76.01% | 87.97%
GWO-THW | Std | 0.048042481 | 0.065210228
IGWO-MS | Best | 83.02% | 98.00%
IGWO-MS | Mean | 82.20% | 96.52%
IGWO-MS | Std | 0.006235748 | 0.008023196