1. Introduction
Wireless sensor networks (WSNs) are an emerging computing and networking model: a system composed of a large number of tiny, inexpensive, intelligent sensor nodes with wireless communication and computing capabilities. Each sensor node can collect, store, and process environmental information, and send the collected information to a management center for statistics and analysis [1]. On the one hand, the coverage of a WSN determines the performance of the network. On the other hand, since sensor nodes can be distributed arbitrarily within the deployment area, the coverage problem is often the first to be solved when configuring a WSN. As WSNs become more widely used, scholars have conducted more in-depth theoretical research on them. Depending on the research angle, the coverage problem manifests itself in different theoretical models, and coverage-related solutions can even be found within computational geometry [2].
The WSN coverage problem is a combinatorial optimization problem and is NP-hard [3]. Traditional deterministic techniques and algorithms therefore have difficulty solving such non-differentiable, discontinuous problems within a reasonable computing time. However, with the rise of swarm intelligence algorithms, researchers have found that this class of algorithms shows clear advantages in various engineering optimization problems, achieving particularly strong results in combinatorial optimization [4]. Different combinatorial optimization problems, however, require appropriately designed swarm intelligence algorithms to improve the quality and efficiency of problem solving.
To this end, researchers have studied and improved various swarm intelligence algorithms and applied them to the coverage optimization problem of WSNs. For example, Wang et al. proposed an adaptive multi-strategy artificial bee colony (SaMABC) algorithm to improve the coverage of WSNs. This algorithm introduces the simulated annealing method and a dynamic search strategy into the artificial bee colony algorithm, which improves its solution accuracy. According to the characteristic that sensor nodes need to be dynamically adjusted in the WSN coverage optimization problem, the algorithm designs a corresponding strategy library and adaptive selection mechanism. Simulation results in multiple scenarios verified that SaMABC outperforms the comparison algorithms in improving coverage [5]. Dao et al. designed an improved honey badger algorithm (IHBA) to solve the low coverage caused by uneven node deployment in WSNs. The algorithm uses an elite reverse learning strategy to enhance global search performance and a multi-directional strategy to improve the individual update formula, mitigating the standard HBA's tendency to fall into local optima in node coverage optimization. Comparisons against other algorithms verify that IHBA achieves high feasible coverage and archive coverage [6]. Likewise, Wang et al. used an improved sparrow search algorithm (ISSA) to optimize WSN coverage, reducing node redundancy and improving the coverage rate. The algorithm uses good point sets (GP) to initialize the population and adjusts the individual update formula through adaptive factors to improve accuracy; at the same time, a refraction reverse learning strategy is designed to avoid falling into local optima. Experimental data show that ISSA makes the distribution of nodes more even and improves the coverage of WSNs [7]. The studies in [5,6,7] all use swarm intelligence algorithms to improve the performance of homogeneous WSNs, addressing aspects such as node redundancy and coverage. Similar research includes the simplified slime mould algorithm (SSMA) proposed by Wei et al. [8] and the change-step fruit fly optimization algorithm (CSFOA) proposed by Song et al. [9].
Some researchers aim to optimize the coverage of heterogeneous WSNs, studying swarm intelligence algorithms that reduce node redundancy and improve coverage. For example, Zeng et al. proposed an improved wild horse optimizer (IWHO) for the coverage and connectivity problems of heterogeneous WSNs. The algorithm integrates the golden sine algorithm into the wild horse optimizer and uses SPM chaotic mapping to initialize the population, which improves the solution accuracy and speed of the standard wild horse optimizer. To handle obstacles in heterogeneous WSNs, IWHO enhances global exploration by introducing opposition-based learning and Cauchy mutation mechanisms. Experimental data from three different simulation environments show that IWHO achieves better connectivity and coverage [10]. Another case in point: Chen et al. proposed a competitive multi-objective marine predator algorithm (CMOMPA) to address the differences between nodes and the complexity of three-dimensional environments in heterogeneous WSNs. The algorithm treats each subpopulation as a group of marine predators, adopting survival of the fittest between groups and a competition mechanism within each group. Experimental results show that CMOMPA exhibits excellent performance in multimodal three-dimensional deployment environments [11]. Furthermore, Cao et al., aiming to maximize the coverage of heterogeneous WSNs, proposed an adaptive collaborative particle swarm and seagull optimization algorithm (PSO-SOA). The algorithm introduces the inertia weight of PSO into the seagull optimization algorithm, adaptively adjusts individual displacement through a scale factor, and designs a reverse learning strategy to increase the probability of jumping out of local optima. Simulations verified that PSO-SOA can improve network coverage and effectively avoid coverage blind spots and coverage redundancy in the network [12].
In addition to the coverage problem, the service lifetime of a WSN also needs to be considered. Reducing node energy loss can clearly extend network lifetime. Therefore, on the premise of ensuring coverage, some researchers take reducing node energy consumption as a second objective. In particular, Yarinezhad et al. proposed a collaborative particle swarm optimization algorithm based on fuzzy logic with the goal of maximizing coverage and network lifetime [13]. Li et al. proposed an improved multi-objective ant lion optimizer (NSIMOALO) based on fast non-dominated sorting, targeting coverage and extended network lifetime [14]. With the same goals, Cheng et al. proposed a WSN coverage optimization method based on the fruit fly optimization algorithm [15]. As research continues to deepen, swarm intelligence algorithms may be further integrated into other aspects of WSN coverage optimization, such as the symmetry of network topology, sensing reliability, and network reliability.
Compared with other swarm intelligence algorithms, the grey wolf optimizer (GWO) has the advantages of a clear concept, few control parameters, a simple structure, low computational time complexity, and easy implementation, along with a good mechanism for switching between global and local search [16]; it is therefore widely used in optimization problems such as feature selection, neural network optimization, and path planning. Nevertheless, the GWO also has limitations, such as insufficient global search capability and low accuracy on high-dimensional optimization problems. In response, scholars at home and abroad have made many improvements to GWO in terms of parameter configuration and hybridization with other algorithms. To illustrate, Kohli et al. proposed a chaotic grey wolf optimization algorithm (CGWO). It introduces a chaotic strategy, using different chaotic maps and functions to adjust the key parameters of global optimization, thereby searching the space more dynamically and comprehensively and improving convergence speed [17]. Liu et al. proposed a hybrid grey wolf optimization algorithm based on a drunkard's-walk mechanism and reverse learning (DGWO). In each generation, the dominant wolves and the worst wolves are reversely learned, compared, and re-learned; after sorting, the wolves with the top three fitness values are retained, and the drunkard's-walk mechanism updates the leading wolves, which enhances global search capability in high-dimensional spaces and speeds up convergence [18]. Narinder et al. proposed a new hybrid algorithm (GWO-PSO) combining the grey wolf optimizer and particle swarm optimization; it combines the advantages of both, improving the exploration ability of GWO and the exploitation ability of PSO while further increasing search intensity [19]. Ou et al. proposed a grey wolf algorithm led by two head wolves (GWO-THW) under a nonlinear double convergence factor strategy. The algorithm uses the average fitness value to divide the population into hunting wolves and scout wolves, which hunt under the leadership of their respective head wolves; the difference in fitness values within unit Euclidean distance between a wolf and the leaders is used as the weight coefficient for position updates. Experiments verified that GWO-THW offers high optimization accuracy and fast convergence [20].
In view of the shortcomings of GWO and the low coverage it achieves on WSN coverage problems, this paper proposes an improved grey wolf optimizer with multiple strategies (IGWO-MS). The algorithm improves GWO by introducing strategies such as range space search, a nonlinear convergence factor, elite reverse learning, and dimension-by-dimension updating. At the same time, this paper transforms the node coverage problem in WSNs into a single-objective optimization problem, uses grid discretization to transform area coverage into point coverage, and applies IGWO-MS to optimize WSN coverage, thereby addressing the high node deployment cost and low effective coverage of WSNs. For performance verification, the proposed IGWO-MS is compared with PSO [21], GWO [16], DGWO [18], and GWO-THW [20] in simulation experiments with 20 and 30 nodes. The experimental results show that IGWO-MS has clear advantages in solving the WSN coverage problem at both node counts. In terms of the optimal coverage rate, IGWO-MS improves to varying degrees on its competitors PSO, GWO, DGWO, and GWO-THW, showing that it can effectively improve the coverage quality of network nodes. In terms of the average coverage rate, IGWO-MS shows an even greater improvement than in the optimal coverage rate, exhibiting better stability than its counterparts.
2. Wireless Sensor Network Coverage Problem and Standard GWO
2.1. Sensor Network Node Coverage Model
In WSNs, the sensing range of a sensor node is a circular area centered on the node with radius R; only monitoring points within this area can be sensed by the node, so R is called the sensing radius. To simplify the problem, this paper treats the WSN coverage area as a two-dimensional plane and makes the following ideal assumptions:
Assumption 1. All nodes have the same structure and sensing range;
Assumption 2. The sensing range of all nodes is circular and not affected by obstacles;
Assumption 3. All nodes can sense and obtain the positions of other nodes within their sensing range in real time.
Based on the above assumptions, the WSN coverage model is as follows.
First, the monitoring area is a rectangle with length L and width W, so its area is $S_{area} = L \times W$. Second, the four vertex coordinates of the monitoring area in the plane coordinate system are (0, 0), (0, W), (L, 0), and (L, W). Third, the monitoring area is discretized into m grids of equal area and mutual symmetry. The center point of each grid is a monitoring point, and the set of monitoring points is $\{p_1, p_2, \ldots, p_m\}$. Owing to the symmetry between grids, the coordinate transformation of these monitoring points becomes simple, which effectively reduces the computational load of the model. Next, n sensor nodes are randomly deployed in the monitoring area, and their set is $\{s_1, s_2, \ldots, s_n\}$. Last, all nodes adopt the Boolean sensing model, and the sensing radius of each node is R.
The Euclidean distance between sensor node $s_i$ and monitoring point $p_j$ is as follows:

$$d(s_i, p_j) = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2} \quad (1)$$

In Equation (1), $d(s_i, p_j)$ represents the Euclidean distance between sensor node $s_i$ and monitoring point $p_j$; $(x_i, y_i)$ is the coordinate of node $s_i$, and $(x_j, y_j)$ is the coordinate of monitoring point $p_j$. If monitoring point $p_j$ is located within the sensing range of sensor node $s_i$, i.e., the circle centered at $s_i$ with radius R, the monitoring point is sensed by the sensor node, and the grid where it is located is covered by the WSN. The probability that monitoring point $p_j$ is sensed by sensor node $s_i$ is as follows:

$$P(s_i, p_j) = \begin{cases} 1, & d(s_i, p_j) \le R \\ 0, & \text{otherwise} \end{cases} \quad (2)$$
In Equation (2), $P(s_i, p_j)$ represents the probability that monitoring point $p_j$ can be sensed by sensor node $s_i$, and R is the sensing radius. Meanwhile, each monitoring point $p_j$ may be sensed repeatedly by multiple sensor nodes, so the joint sensing probability of all sensor nodes for monitoring point $p_j$ is shown in Equation (3):

$$P(p_j) = 1 - \prod_{i=1}^{n} \left(1 - P(s_i, p_j)\right) \quad (3)$$
In Equation (3), $P(p_j)$ represents the joint probability that monitoring point $p_j$ can be sensed by all wireless sensor nodes within the monitoring area, and n is the number of sensor nodes deployed within the monitoring area.
In WSNs, coverage is an important indicator for evaluating performance. The coverage rate of the coverage model is therefore defined as the ratio of the covered area to the monitoring area, where the covered area is the sum of the joint sensing probabilities of all monitoring points multiplied by the grid area. The coverage rate is computed as follows:

$$C_{ov} = \frac{K \sum_{j=1}^{m} P(p_j)}{S_{area}} \quad (4)$$

In Equation (4), $C_{ov}$ represents the coverage rate; K represents the grid area, with $K = \frac{L \times W}{m}$; $S_{area}$ represents the area of the monitoring area, and m represents the number of grids.
The relationship between model elements is shown in the following figure.
In Figure 1, the monitoring area is divided into several symmetrical grids of the same size, and the center point of each grid is recorded as a monitoring point. If the Euclidean distance between a monitoring point and a node in the WSN is less than the sensing radius R, the monitoring point is sensed, and the grid where it is located is recorded as covered. The coverage rate of the WSN is obtained by calculating the ratio of the covered grid area to the monitoring area. The more grids the monitoring area is divided into, the closer the calculated coverage is to the true value.
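For the Boolean model, Equations (1)–(4) reduce to counting monitoring points that lie within distance R of at least one node. The following minimal Python sketch illustrates this; the function name and square grid layout are illustrative assumptions, not the paper's code:

```python
import math

def coverage_rate(nodes, L, W, grids_per_side, R):
    """Boolean-sensing coverage rate of an L x W area discretized into
    grids_per_side**2 equal grids; each grid center is a monitoring point."""
    dx, dy = L / grids_per_side, W / grids_per_side
    covered = 0
    for gx in range(grids_per_side):
        for gy in range(grids_per_side):
            px, py = (gx + 0.5) * dx, (gy + 0.5) * dy  # monitoring point
            # Boolean joint sensing: the point is covered iff some node is within R
            if any(math.hypot(px - x, py - y) <= R for x, y in nodes):
                covered += 1
    return covered / grids_per_side ** 2  # grid area * sum of joint probs / area
```

With one node at the center of a 10 × 10 area and R large enough to reach every grid center, the rate is 1.0; with no nodes it is 0.0.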
2.2. Coverage Optimization Model
To optimize the coverage of a WSN and improve the coverage rate, this paper takes the coverage rate in the WSN coverage model as the objective function, sets the monitoring area as S, and establishes the following optimization model:

$$\max \ C_{ov} \quad \text{s.t.} \quad 0 \le x_i \le L, \ 0 \le y_i \le W, \quad i = 1, 2, \ldots, n \quad (5)$$
To solve the optimization model and obtain optimal coverage, this paper introduces the GWO algorithm. The objective function in the optimization model is used as the fitness function of GWO, the monitoring area S is used as the search range, and the current coordinate values of all n sensor nodes deployed in the monitoring area are used as the dimension values of grey wolf individuals. Therefore, each individual is a 2n-dimensional vector. The first n dimensions store the X-axis coordinates of all deployed nodes, and the last n dimensions store the Y-axis coordinates of all deployed nodes. Each individual in the population represents a node deployment scheme of WSN, and the optimal individual obtained by iterative solution is the best deployment scheme of the WSN.
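The 2n-dimensional encoding can be sketched as follows (a hypothetical helper, assuming the coordinate split described above):

```python
def decode_individual(ind):
    """Map a 2n-dimensional grey-wolf individual to n node positions:
    the first n entries are X coordinates, the last n are Y coordinates."""
    n = len(ind) // 2
    if len(ind) != 2 * n or n == 0:
        raise ValueError("individual must have an even, positive length")
    return list(zip(ind[:n], ind[n:]))
```

For example, `decode_individual([1, 2, 3, 10, 20, 30])` yields the deployment `[(1, 10), (2, 20), (3, 30)]`, whose coverage rate then serves as the individual's fitness.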
2.3. GWO
The GWO algorithm is an optimization algorithm inspired by the predation behavior of grey wolf packs. It simulates the hierarchy within a wolf pack, dividing the wolves in the population into wolf $\alpha$, wolf $\beta$, wolf $\delta$, and wolf $\omega$. Different wolves play different roles in hunting to accomplish the search. The mathematical model of the algorithm is as follows:

$$X(t+1) = X_p(t) - A \cdot D \quad (6)$$
In Equation (6), t represents the current number of iterations, $X(t)$ is the current position of the wolf, and $X(t+1)$ is the new position of the wolf. A is a parameter control vector that controls the movement direction of an individual grey wolf, and D represents the distance between an individual wolf and its prey. A and D are calculated by Equations (7) and (8):

$$A = 2a \cdot r_1 - a \quad (7)$$

$$D = \left| C \cdot X_p(t) - X(t) \right| \quad (8)$$

$X_p(t)$ represents the current position of the prey, and C is the disturbance parameter used to perturb the prey position, determined by Equation (9):

$$C = 2 r_2 \quad (9)$$

$r_1$ and $r_2$ are both random numbers in (0, 1), and a is the convergence factor, whose value decreases linearly from 2 to 0, calculated by Equation (10):

$$a = 2\left(1 - \frac{t}{T}\right) \quad (10)$$

where t represents the current number of iterations and T represents the total number of iterations.
Wolf $\alpha$, wolf $\beta$, and wolf $\delta$ are the optimal, suboptimal, and third-best solutions in the GWO algorithm. The $\omega$ wolves continuously update their positions during the iteration process based on the information of these three solutions, with the movement formulas shown in Equations (11) and (12):

$$D_\alpha = \left| C_1 \cdot X_\alpha - X \right|, \quad D_\beta = \left| C_2 \cdot X_\beta - X \right|, \quad D_\delta = \left| C_3 \cdot X_\delta - X \right| \quad (11)$$

$$X_1 = X_\alpha - A_1 \cdot D_\alpha, \quad X_2 = X_\beta - A_2 \cdot D_\beta, \quad X_3 = X_\delta - A_3 \cdot D_\delta \quad (12)$$

where $D_\alpha$, $D_\beta$, and $D_\delta$ represent the distances between an individual grey wolf and wolf $\alpha$, wolf $\beta$, and wolf $\delta$, respectively. $A_1$, $A_2$, and $A_3$ are parameter control vectors that control the movement direction of the individual, calculated by Equation (7); $C_1$, $C_2$, and $C_3$ are perturbation parameters used to perturb the positions of wolf $\alpha$, wolf $\beta$, and wolf $\delta$, calculated by Equation (9). $X_\alpha$, $X_\beta$, and $X_\delta$ represent the position vectors of wolf $\alpha$, wolf $\beta$, and wolf $\delta$, respectively, and $X_1$, $X_2$, and $X_3$ are temporary positions.
Throughout the algorithm, wolf $\alpha$, wolf $\beta$, and wolf $\delta$ determine the position of the prey and move towards it, while guiding the other wolves to follow them, collaborating to encircle and hunt the prey and ultimately obtain the optimal solution. The position update of the wolf pack is shown in Equation (13):

$$X(t+1) = \frac{X_1 + X_2 + X_3}{3} \quad (13)$$

The entire process is shown in Figure 2.
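Equations (6)–(13) can be condensed into a compact reference implementation. The sketch below is an illustrative minimization routine under the standard GWO update rules, not the paper's code; the function name and parameter defaults are assumptions:

```python
import random

def gwo_minimize(f, dim, lb, ub, pop_size=20, T=100):
    """Minimal standard GWO: the three best wolves (alpha, beta, delta)
    guide every position update via Equations (6)-(13)."""
    X = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(pop_size)]
    for t in range(T):
        X.sort(key=f)                        # rank pack by fitness
        alpha, beta, delta = X[0], X[1], X[2]
        a = 2 * (1 - t / T)                  # Eq. (10): linear decay 2 -> 0
        for w in range(pop_size):
            new = []
            for j in range(dim):
                xj = 0.0
                for leader in (alpha, beta, delta):
                    A = 2 * a * random.random() - a          # Eq. (7)
                    C = 2 * random.random()                  # Eq. (9)
                    D = abs(C * leader[j] - X[w][j])         # Eq. (11)
                    xj += leader[j] - A * D                  # Eq. (12)
                new.append(min(max(xj / 3, lb), ub))         # Eq. (13) + clamp
            X[w] = new
    return min(X, key=f)
```

On a simple 2-D sphere function the routine converges close to the origin within a few hundred iterations.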
3. Improved GWO Based on Multiple Improvement Strategies
Analysis of the iterative process of the standard GWO algorithm shows that the three leading wolves mainly guide the optimization. During global search, the step size is large, the search accuracy is limited, and the algorithm easily converges prematurely and falls into local optima. Although the standard GWO compares favorably with other standard swarm intelligence algorithms on WSN coverage problems, there is still room for improvement. Therefore, this paper proposes an improved grey wolf optimizer with multiple strategies (IGWO-MS) to address these shortcomings. The specific improvement strategies are as follows.
3.1. Sobol Sequence Initialization
In the initialization of a meta-heuristic algorithm, GWO generates the individuals of the population randomly, which makes it difficult to ensure a balanced distribution of the initial population. Evenly distributed random numbers mean a better sample distribution and more stable algorithm results. Therefore, the initial population should be distributed as evenly as possible in the search space to ensure sample diversity and algorithm stability, thereby improving performance.
A low-discrepancy sequence, also known as a quasi-Monte Carlo (QMC) method, is a sequence that is more evenly distributed in a given space than a pseudo-random sequence, because QMC fills the multidimensional unit hypercube with points as uniform as possible by choosing reasonable sampling directions [22]. QMC therefore offers higher efficiency and uniformity when dealing with probabilistic problems. In particular, the Sobol sequence, a kind of QMC method, is not only uniformly distributed but also efficient to generate [23]. These properties make the Sobol sequence suitable for initializing populations in high-dimensional spaces. For comparison, two initial populations of size 500 and dimension 2 in the range [−100, 100] are generated using random initialization and the Sobol sequence, respectively, as shown in Figure 3a,b. The initial population obtained by the Sobol sequence in Figure 3b is distributed more uniformly and covers the solution space more completely than that obtained by random initialization in Figure 3a.
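Sobol generation relies on precomputed direction-number tables, so the compact sketch below uses the Halton sequence, a related low-discrepancy (QMC) construction, as a stand-in to illustrate the idea of spreading the initial population evenly; the stand-in and function names are assumptions, not the paper's generator:

```python
def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in `base`."""
    result, f = 0.0, 1.0
    while i > 0:
        f /= base
        result += f * (i % base)
        i //= base
    return result

def low_discrepancy_population(n, lb, ub):
    """n 2-D points in [lb, ub]^2 from a Halton sequence (bases 2 and 3);
    a stand-in illustrating the paper's Sobol-based initialization."""
    pts = []
    for i in range(1, n + 1):
        x = lb + (ub - lb) * halton(i, 2)
        y = lb + (ub - lb) * halton(i, 3)
        pts.append((x, y))
    return pts
```

Plotting 500 such points against 500 uniform random points reproduces the qualitative contrast of Figure 3: the low-discrepancy points leave far fewer gaps and clusters.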
3.2. Introducing Nonlinear Convergence Factors and Reverse Learning Strategies
The standard GWO algorithm uses a linear convergence factor whose value decreases linearly from 2 to 0. This mechanism gives the algorithm half of the iterations for global exploration and half for local exploitation, which performs well on simple practical problems. For complex problems requiring high solution accuracy, however, more time is needed for early exploration to improve the chance of discovering the global optimum. Therefore, this paper adopts a nonlinear convergence factor that decays slowly in the early iterations, devoting more time to global exploration; its calculation is shown in Equation (14), and its curve is compared with the original algorithm's convergence factor in Figure 4. To further enhance the global search capability, a reverse learning strategy is introduced, calculated as shown in Equation (15). The reverse learning strategy computes reverse solutions for the wolves with the best and the worst fitness values in the current population. At the same time, three randomly chosen wolves in the pack are mirror-mapped using Equation (16), and the reverse and mirrored solutions are randomly retained and updated. On the one hand, this enhances the diversity of the next generation and increases the probability of jumping out of local optima; on the other hand, the reverse solutions retain the useful search information of the elite individuals in the current population, accelerating convergence.
where a represents the convergence factor, t the current number of iterations, and T the total number of iterations. In Equations (15) and (16), the reverse solution and the mirror solution are computed from the current position of the wolf, and UB and LB represent the upper and lower limits of the search space, respectively.
3.3. Introducing Scope Space Search
In GWO, due to the large convergence step size in the early stage, GWO is prone to falling into local optima. To expand the search space and improve population diversity, the improved algorithm, IGWO-MS, incorporates a range space search mechanism. Individual grey wolves in a wolf pack search within a certain range based on the positions of the three-headed wolves. If the fitness value of the search position is lower than the fitness value of the original algorithm’s position to move, they move towards the search position. During the search process, the grey wolf individuals target wolf
, wolf
, and wolf
within a certain range of their vicinity, calculate the corresponding fitness value, and finally move towards the position with the lowest fitness value. This method can increase the search range of the population, avoid getting stuck in local optima too early, and improve search accuracy. The specific calculation formula is shown in Equation (17). The schematic diagram of position update is shown in
Figure 5.
where f is the fitness function and b is the search radius, which decreases nonlinearly with the number of iterations as calculated by Equation (18); G is determined by Equation (19), and the random coefficients involved are all random numbers in (0, 1). In Equations (18) and (19), one parameter denotes the maximum radius for the range search, and another increases linearly from 0 to this maximum; in this paper the maximum radius is set to 1. t represents the current number of iterations, T represents the total number of iterations, and a small constant is added to avoid G being zero.
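The mechanism can be sketched as follows. Sampling one candidate uniformly in a disk of radius b around each leading wolf is an assumption made for illustration; the paper's exact sampling rule is its Equation (17):

```python
import math, random

def range_space_search(x, leaders, fitness, b):
    """Sample one candidate within radius b of each leading wolf and move
    to the best candidate only if it beats the current position."""
    best, best_fit = list(x), fitness(x)
    for leader in leaders:
        r = b * math.sqrt(random.random())      # uniform radius within the disk
        theta = 2 * math.pi * random.random()   # uniform angle
        cand = [leader[0] + r * math.cos(theta),
                leader[1] + r * math.sin(theta)]
        fc = fitness(cand)
        if fc < best_fit:                       # move only on improvement
            best, best_fit = cand, fc
    return best
```

Because a move is accepted only on improvement, the returned position is never worse than the starting one, which is how the strategy widens exploration without losing progress.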
3.4. Dimension-by-Dimension Update Strategy
The standard GWO algorithm updates all dimension values of the solution as a whole for fitness comparison during iteration. Although it reduces the complexity of the algorithm, it ignores the impact of a better value in a certain dimension on the fitness of the solution, thereby affecting the accuracy of the algorithm [
16]. Therefore, this paper replaces the overall update strategy of the original algorithm with a dimension-by-dimension update strategy for the position update mechanism of the three-headed wolves, reducing the mutual influence between dimensions in high-dimensional optimization problems. When the
j-th dimension value of a wolf’s position is updated, its fitness value is immediately calculated and compared with the original fitness value. If the updated fitness value is better, the
j-th dimension update value is retained. Otherwise, the current updated value is discarded and the original
j-th dimension value is retained. Afterwards, the next dimension value is updated. The formula of dimension-wise update is shown in Equation (20).
where $x_\alpha^j$, $x_\beta^j$, and $x_\delta^j$ are the j-th dimension values of the current three leading wolves, and the coefficients involved are random numbers in (0, 1). The j-th dimensional random search path is perturbed by a random number Levy that follows the Levy distribution, calculated as shown in Equation (21):

$$\text{Levy} = \frac{\mu \cdot \phi}{|\nu|^{1/\beta}} \quad (21)$$
In Equation (21), μ and ν both obey the standard normal distribution, and β controls the shape of the distribution. The scale factor ϕ is given by Equation (22):

$$\phi = \left[ \frac{\Gamma(1+\beta)\,\sin\left(\pi\beta/2\right)}{\Gamma\left(\frac{1+\beta}{2}\right)\beta\, 2^{(\beta-1)/2}} \right]^{1/\beta} \quad (22)$$

where $\Gamma(\cdot)$ represents the gamma function.
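The greedy per-dimension acceptance rule and a Mantegna-style Levy step in the spirit of Equations (21) and (22) can be sketched as follows; the proposal vector and the choice β = 1.5 are assumptions for illustration:

```python
import math, random

def levy(beta=1.5):
    """Mantegna-style Levy step: mu, nu ~ N(0, 1), scaled by phi (Eq. (22))."""
    phi = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
           / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu, nu = random.gauss(0, 1), random.gauss(0, 1)
    return phi * mu / abs(nu) ** (1 / beta)

def dimension_wise_update(x, proposal, fitness):
    """Try each proposed coordinate one dimension at a time; keep the
    j-th update only if the full-solution fitness improves."""
    cur = list(x)
    cur_fit = fitness(cur)
    for j in range(len(cur)):
        old = cur[j]
        cur[j] = proposal[j]
        new_fit = fitness(cur)
        if new_fit < cur_fit:
            cur_fit = new_fit      # keep the improving j-th dimension
        else:
            cur[j] = old           # revert the j-th dimension
    return cur
```

For a sphere fitness, updating `[3, 4]` with proposal `[0, 10]` keeps the improving first coordinate and rejects the worsening second one, yielding `[0, 4]`: exactly the per-dimension filtering the strategy describes.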
3.5. Algorithm Steps
Based on the above improvements, the algorithm implementation proposed in this article can be divided into the following seven steps:
- Step1:
Determine the relevant parameters of the algorithm, including population size N, maximum number of iterations T, and search domain range.
- Step2:
Initialize the wolf pack and generate a wolf pack using Sobol sequences within the search domain.
- Step3:
Calculate fitness values and update wolf $\alpha$, wolf $\beta$, and wolf $\delta$.
- Step4:
Calculate the convergence factor and use Equations (15) and (16) to perform reverse learning and mirror mapping on the wolf pack.
- Step5:
Grey wolf individuals update their positions according to Equation (17).
- Step6:
Perform the dimension-by-dimension update operation on the three leading wolves according to Equation (20).
- Step7:
Determine whether the current algorithm satisfies the optimal solution or the maximum number of iterations. If it does, terminate the algorithm and output the optimal solution. Otherwise, jump to Step 3.
The algorithm flow chart is shown in
Figure 6.
5. Conclusions
To achieve maximum WSN coverage, this paper proposes an improved grey wolf optimizer with multiple strategies (IGWO-MS). Firstly, IGWO-MS uses Sobol sequences to initialize the population, making the population distribution more uniform, covering the solution space more completely, and improving the stability of the algorithm; by introducing a range space search strategy, the search range of the population is increased, premature convergence to local optima is avoided, and search accuracy is improved. Secondly, the algorithm performs reverse solution operations on elite and disadvantaged individuals and randomly selects individuals from the population for mirror mapping, enhancing population diversity, strengthening the ability to jump out of local optima, and better balancing global exploration and local exploitation. Finally, the algorithm uses a dimension-by-dimension update strategy to update the dimensions of the three leading wolves independently, accelerating convergence. Moreover, this paper applies IGWO-MS to the coverage optimization problem of WSNs to improve the effective coverage of nodes. Simulation experiments verify that, compared with standard PSO, standard GWO, and two GWO variants (DGWO and GWO-THW), IGWO-MS more effectively improves WSN coverage, makes node distribution more uniform, and reduces coverage redundancy. In summary, applying IGWO-MS can effectively optimize the coverage problem of WSNs and reduce deployment costs.