1. Introduction
Wireless communications have recently become widespread since they provide a robust and flexible way to transmit information. Wireless communications combine mobility and connectivity, using the air as the transmission medium [1]. Wireless sensor networks (WSNs) represent the most popular technology among the existing wireless schemes. Operating WSNs requires processes that are efficient from a computational and energy point of view. Under such conditions, communication protocols are the methodological structures that guarantee the efficiency of such processes.
A wireless network of sensors involves a set of electromechanical devices distributed within a defined geographical section. These elements can provide important information about the area in which they are distributed. This information is exchanged and transmitted among the sensors of the network by using a certain transmission channel [
2]. The performance of wireless sensors depends on battery management. This fact represents the main problem since it determines their energy autonomy. Therefore, it is important to use a base station (BS) to improve information processing through centralized transmission. In the context of WSNs, the BS is also referred to as the sink or fusion center due to its aggregation capabilities.
Cluster-based or hierarchical routing schemes are popular due to interesting characteristics such as efficiency and scalability in communications. The generic ideas of hierarchical routing have been integrated with several methodologies to save energy in WSNs. In the structure of a hierarchical method, only nodes with high energy are considered for processing and transmitting information [3]. On the other hand, nodes with low energy are used to collect information in regions near the target. Hierarchical routing corresponds to a powerful approach to diminish the energy consumption inside the cluster structure through the aggregation of information. Hierarchical routing methods integrate processes to reduce the size of the packet transmitted to the designated sink [4].
Under such conditions, the assignment of nodes with special purposes can significantly increase the lifetime of the network and the scalability of the architecture. Cluster Heads (CHs) are special nodes used as references in cluster-based or hierarchical schemes. Scalability represents an important property in WSNs. It remains an open problem that has not been resolved in most routing methods due to the constraints imposed by their initial assumptions. In a generic WSN approach, cluster-based protocols consider a single sink and some cluster heads in order to extend their influence over the covered space [5]. With this architecture, WSNs maintain low scalability, producing expensive processes in terms of energy consumption. Increasing the number of nodes or the influence diameter of the network provokes an energy overload and a bottleneck in the transmission of information.
An efficient protocol should present mechanisms to save energy in each node. The objective is to extend the time between battery recharges. This is important since the recharging process is usually difficult and sometimes not even possible. Under this architecture, software processes inside the nodes distribute the power load among the different nodes [6]. Hierarchical routing incorporates two roles in the behavior of each node. The existence of such operation modes allows considerable energy saving. These different transmission modes, as cluster head and generic sensor, can be important when this architecture is also implemented in all layers [7].
On the other hand, metaheuristic methods refer to optimization strategies that can solve complex systems. These algorithms are extracted from the observation of biological or social processes that, according to a determined perspective, can be interpreted as search strategies. Due to their high application potential, several metaheuristic methods have been introduced in the literature. Some of the most popular metaheuristic schemes involve the Differential Evolution (DE) method [
8], Genetic Algorithms [
9], the Particle Swarm Optimization (PSO) [
10], the Artificial Bee Colony (ABC) algorithm [
11], the Gravitational Search Algorithm (GSA) [
12], and the Grey Wolf Optimizer [
13], to name a few. Metaheuristic methods do not assume continuity, differentiability, convexity, or determined initial conditions. These properties represent their most important advantages in comparison with other optimization approaches. Although these schemes produce interesting results, they maintain different difficulties when used to solve highly multimodal formulations.
The Locust Search (LS-II) [
14] scheme is a metaheuristic technique extracted from the modeling of the biological behavior of desert locusts. A locust behaves according to two opposite processes: solitary and social. Under the solitary process, elements avoid being in contact with other entities with the objective of exploring new promising areas for food sources. On the other hand, the social process considers the high concentration of entities to consume areas with abundant food resources. This concentration mechanism involves the attraction of entities located in areas with low food levels toward those elements in positions with the best food sources. With the combination of both behaviors, the LS-II approach presents very effective global and local search properties. These characteristics have motivated its use in a wide variety of complex optimization formulations such as parameter estimation of chaotic systems [
15], image processing [
16], and pattern recognition [
17], to name a few.
In this paper, a novel clustering routing protocol for Wireless Sensor Networks is presented. The approach is designed based on the Locust Search (LS-II) method. In the proposed protocol, the number of cluster head nodes and their selection are identified by the LS-II approach. Once cluster head nodes are selected, the rest of the sensor nodes are assigned to their nearest cluster head. Under such conditions, the network structure is constantly modified by LS-II until it finds the optimal distribution of the node structure that reduces the transmission distance. Numerical simulations exhibit competitive configurations, which demonstrate that our routing protocol obtains minimal energy consumption, improving the lifetime and extending the stability period of the network in comparison with other popular clustering routing protocols.
The main contributions of this research can be summarized as follows: (1) a new clustering routing protocol for Wireless Sensor Networks is proposed; (2) the scheme translates the clustering routing task into an optimization formulation; (3) the solution of the optimization problem (the number of cluster head nodes and their selection) is identified by the Locust Search (LS-II) method, which has never been used for these purposes.
The next sections of the paper are structured as follows. In
Section 2, the related work is discussed. In
Section 3, the main characteristics of the Locust Search (LS-II) method are analyzed.
Section 4 explains our clustering protocol.
Section 5 exhibits the numerical simulations of our protocol compared with other well-known methods. Finally, the conclusions are discussed in
Section 6.
2. Related Work
Multiple applications of WSNs, such as those represented by the Internet of Things and monitoring dangerous geographical spaces, have motivated the enhancement of different aspects of WSNs, namely, topology optimization [
18], distributed and decentralized detection [
19,
20,
21], multipath routing protocols [
22], and communication failures [
23,
24]. From the wide range of enhancements, routing protocols have attracted the attention of several scientific communities due to their applications in energy savings. From all routing protocols, clustering schemes represent the most popular approaches to manage energy consumption. A clustering scheme allows the efficient administration of the energy consumption for the working cycle of WSNs [
25,
26]. The Low Energy Adaptive Clustering Hierarchy Protocol (LEACH) [
27] corresponds to the most well-known clustering protocol for homogeneous WSNs. In its operation, the LEACH method randomly identifies a set of cluster heads (CHs), and the sensor nodes frequently interchange the roles of CH and generic node. Therefore, the energy consumption is uniformly distributed among the nodes with the objective of extending the network operation. Multiple other protocols that use the LEACH method as a part of their structure have been proposed in the literature, producing distinct levels of performance [
28,
29,
30,
31,
32]. Another popular routing protocol is the Vice-CH-enabled (VCH) scheme [33]. It reduces energy consumption through a two-step process. In the VCH mechanism, the CH nodes are selected considering only the nodes that currently have sufficient remaining power. The numerical results produced by VCH indicate that its operative properties are better than those of the LEACH protocol [34].
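To illustrate the probabilistic CH rotation used by LEACH and its derivatives, the following is a minimal sketch of the classical LEACH threshold rule from the LEACH literature; the function names are ours, and the eligibility bookkeeping (nodes that already served as CH in the current epoch) is omitted for brevity:

```python
import random

def leach_threshold(p: float, r: int) -> float:
    """Classical LEACH threshold T(n) for an eligible node.

    p: desired fraction of cluster heads per round (e.g., 0.05)
    r: current round number
    """
    return p / (1 - p * (r % round(1 / p)))

def elect_cluster_heads(eligible_nodes, p, r, rng=random.random):
    """A node becomes CH for this round when a uniform draw in [0, 1)
    falls below the threshold T(n)."""
    t = leach_threshold(p, r)
    return [n for n in eligible_nodes if rng() < t]
```

Note that the threshold grows within an epoch, so nodes that have not yet served as CH become increasingly likely to be elected.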
The LEACH and VCH schemes are designed to work with homogeneous WSNs. In spite of their interesting characteristics, they present a low performance when used in Heterogeneous Wireless Sensor Networks (HWSNs). For this reason, elements of both approaches have been incorporated to design new routing protocols [
35,
36] that can operate appropriately with HWSNs. From these schemes, the Stable Election Protocol (SEP) [
37] protocol is one of the most popular. Under the operation of SEP, two node types are considered: advanced elements and normal nodes. Advanced elements have a higher probability than normal nodes of acting as CHs. Therefore, the SEP approach considers distinct threshold patterns according to the node type. With this mechanism, SEP manages the energy of advanced elements efficiently and reduces the consumption of normal nodes to extend the lifetime of the network. The mechanisms of the SEP protocol have been used to produce several other approaches. Some interesting methods involve the Modified Stable Election Protocol (M-SEP) [
38]. Under the operation of M-SEP, the consumed energy of each node is evaluated with regard to the complete network to identify the nodes with a high potential to become CHs. Therefore, nodes with a better energy level have a higher probability of being CHs than those with low energy levels. The incorporation of this process in M-SEP considerably enhances the life cycle of the network and its transmission rate. The P-SEP [
39] protocol is another method based on the original SEP approach. P-SEP probabilistically models the uncertainty of the energy contained in each node according to its behavior. With this information, the selection of nodes whose energy is lower than a threshold value is avoided. Therefore, only the elements that surpass this threshold have a possibility of becoming CH nodes. With a similar mechanism to P-SEP, the clustering protocol Distributed Energy-Efficient Clustering Algorithm (DEEC) [
40] has been designed as an effective method to operate HWSNs. DEEC also employs a threshold limit to identify the CHs. The sensor energy level limits the period of time during which a sensor behaves as a CH. The mechanism of DEEC considers that high-energy elements generate stable transmission behaviors.
As an alternative to traditional schemes, the design of clustering protocols has also been conducted considering metaheuristic algorithms [
41,
42,
43]. According to the existing literature, metaheuristic approaches have exhibited better performance levels than schemes based on classical computing strategies in terms of robustness and accuracy. Under the metaheuristic methodology, the clustering protocol problem is turned into an optimization formulation where an objective function is modeled to estimate the quality of a solution. With the information provided by the objective function, the metaheuristic method tries different sensor configurations until it finds the solution that obtains the longest lifetime of the network [
44,
45]. In the literature, different clustering protocols have been designed following metaheuristic concepts. Some representative schemes involve the Energy Centers using Particle Swarm Optimization (EC-PSO) [
46]. Under EC-PSO, the CHs are initially determined considering a geometric approach. Then, the PSO method is applied to select the sensor that will assume the role of the CH node in the network. The EC-PSO scheme also considers a process to avoid the selection of nodes that maintain low energy levels. Another important approach is the Genetic-Algorithm-Based Energy-Efficient Clustering (GAEEC) [47]. Under this protocol, a Genetic Algorithm is considered to select the CH nodes through a similarity metric. Therefore, the GA method employs as an objective function a model that evaluates the transmission cost produced by each node according to its energy level. In recent years, a routing protocol that considers the Grey Wolf Optimizer has also been introduced. Under this protocol, distinct objective functions are considered to evaluate the node properties of each sensor. The values provided by each objective function represent weights that are dynamically adapted according to the distances among the nodes of the network. Under this approach, the node configuration that reduces the total sum of weights is identified. In spite of their interesting results, these metaheuristic methods present a critical disadvantage: premature convergence. It refers to the act of detecting a suboptimal node configuration as the best solution of an optimization problem. Different from other protocols based on metaheuristic principles, the main distinction of our proposed approach is the mechanism to determine the CH nodes. In our LS-II scheme, candidate solutions are encoded to automatically identify the number of CHs. Under this approach, the consideration of a fixed percentage of CHs or a predefined probability to obtain the CHs is avoided. Since the LS-II method prevents premature convergence, our technique promotes the selection of the optimal number of CHs rather than a suboptimal configuration. The proposed scheme models a distinct objective function to guide the search process towards the node configuration that better fits the constraints imposed by energy savings and load balancing.
Due to the large number of acronyms involved, a list of them and their meaning is presented in
Table 1.
3. The Locust Search (LS-II) Method
The Locust Search (LS-II) method refers to a metaheuristic technique obtained from the modeling of the gregarious behavior observed in swarms of locusts. The mechanisms incorporated in LS-II avoid the excessive concentration of individuals in promising areas and improve the exploration of the search space through the redistribution of agents [
48]. With the combination of both behaviors, the LS-II approach presents very effective global and local search properties. These characteristics have motivated its use in the solution of very complex problems [
49]. LS-II proceeds from the original Locust Search (LS) method. Different from LS, the new LS-II incorporates new operators and mechanisms to increase its capacity to avoid agent concentration. Such mechanisms allow a better balance between exploration and exploitation to find the global solution when facing several local minima.
Under LS-II, a population $\mathbf{L}^{k}=\{\mathbf{l}_{1}^{k},\mathbf{l}_{2}^{k},\ldots,\mathbf{l}_{N}^{k}\}$ of locusts symbolizes a set of candidate solutions (where $N$ corresponds to the whole population size). The elements of $\mathbf{L}^{k}$ interact with each other while examining an $n$-dimensional search space. Each solution is identified inside a constrained space $\mathbf{S}=\{\mathbf{x}\in\mathbb{R}^{n}\mid lb_{j}\leq x_{j}\leq ub_{j},\ j=1,\ldots,n\}$ (where $lb_{j}$ and $ub_{j}$ symbolize the lower and upper limits for the $j$-th decision variable, respectively).
As with any other metaheuristic approach, LS-II involves an iterative model in which candidate solutions modify their location at each iteration during its execution. The position of each candidate solution is modified through the application of a set of operators modeled from the two behavioral processes observed in locust insects: solitary process and social process.
3.1. Solitary Process
Candidate solutions in the solitary process move along distinct trajectories examining promising areas of food sources (good solutions). During this process, search agents avoid concentrating around other elements. This mechanism is designed considering a model of attraction and repulsion forces undergone among the solutions within the population $\mathbf{L}^{k}$. Under such conditions, at each iteration $k$, the resultant attraction and repulsion forces (called the social force) experienced by a specific search agent $\mathbf{l}_{i}^{k}$ is modeled by the following formulation:

$$\mathbf{S}_{i}^{k}=\sum_{\substack{j=1 \\ j\neq i}}^{N}\mathbf{s}_{ij}^{k} \tag{1}$$

where $\mathbf{s}_{ij}^{k}$ represents the pairwise attraction-repulsion between the candidate solutions $\mathbf{l}_{i}^{k}$ and $\mathbf{l}_{j}^{k}$, which is modeled by the following expression:

$$\mathbf{s}_{ij}^{k}=\rho(\mathbf{l}_{i}^{k},\mathbf{l}_{j}^{k})\cdot s(r_{ij})\cdot\mathbf{d}_{ij}+\mathbf{rand}(1,-1) \tag{2}$$

where the function $\rho(\mathbf{l}_{i}^{k},\mathbf{l}_{j}^{k})$ denotes the dominance value between $\mathbf{l}_{i}^{k}$ and $\mathbf{l}_{j}^{k}$. The value $s(r_{ij})$ corresponds to the social factor, while $r_{ij}=\lVert\mathbf{l}_{j}^{k}-\mathbf{l}_{i}^{k}\rVert$ symbolizes the Euclidean distance between $\mathbf{l}_{i}^{k}$ and $\mathbf{l}_{j}^{k}$. The vector $\mathbf{d}_{ij}=(\mathbf{l}_{j}^{k}-\mathbf{l}_{i}^{k})/r_{ij}$ is the unit vector from $\mathbf{l}_{i}^{k}$ to $\mathbf{l}_{j}^{k}$. $\mathbf{rand}(1,-1)$ symbolizes a random vector of dimension $n$ whose values are uniformly distributed in $[-1,1]$.

The social factor $s(r)$ is defined by the following relationship:

$$s(r)=F\cdot e^{-r/L}-e^{-r} \tag{3}$$

where the elements $F$ and $L$ represent the attraction-repulsion factor and the influence size, respectively. The function $\rho(\mathbf{l}_{i}^{k},\mathbf{l}_{j}^{k})$ expresses the relative dominance between search agents. To implement $\rho(\cdot,\cdot)$, each search agent $\mathbf{l}_{i}^{k}$ is ranked with a number within the interval $[0,N-1]$. The best candidate solution is associated with the rank $0$, while the worst element receives the rank $N-1$. The concepts of best and worst are considered in terms of the produced fitness value. Once the ranks have been assigned to each candidate solution, the dominance value is modeled as follows:

$$\rho(\mathbf{l}_{i}^{k},\mathbf{l}_{j}^{k})=e^{-\left(5\cdot\min\left(\operatorname{rank}(\mathbf{l}_{i}^{k}),\operatorname{rank}(\mathbf{l}_{j}^{k})\right)/N\right)} \tag{4}$$

Therefore, due to the influence of the total social force $\mathbf{S}_{i}^{k}$, each search agent $\mathbf{l}_{i}^{k}$ presents a determined tendency to be attracted or repelled to or from other elements within the population $\mathbf{L}^{k}$. Under such circumstances, the new location $\mathbf{p}_{i}^{k}$ assumed by the search agent $\mathbf{l}_{i}^{k}$ because of the influence of the total force is computed as follows:

$$\mathbf{p}_{i}^{k}=\mathbf{l}_{i}^{k}+\mathbf{S}_{i}^{k} \tag{5}$$

Therefore, the set of candidate solutions $\mathbf{L}^{k}$ has been modified to the updated population $\mathbf{P}^{k}=\{\mathbf{p}_{1}^{k},\ldots,\mathbf{p}_{N}^{k}\}$.
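The solitary step described above can be sketched in code. This is a minimal illustration, not the authors' implementation: the exponential social factor and rank-based dominance follow the standard Locust Search formulation, and the parameter names `F` (attraction-repulsion factor) and `Lc` (influence size) are ours:

```python
import numpy as np

def solitary_operator(L, fitness, F=0.6, Lc=1.0, rng=None):
    """One solitary step: each agent is displaced by the resultant
    social force, an attraction-repulsion term weighted by rank-based
    dominance plus a small uniform random perturbation."""
    rng = np.random.default_rng() if rng is None else rng
    N, n = L.shape
    ranks = np.argsort(np.argsort(fitness))  # rank 0 = best (minimization)
    P = np.empty_like(L)
    for i in range(N):
        S = np.zeros(n)  # resultant social force on agent i
        for j in range(N):
            if j == i:
                continue
            diff = L[j] - L[i]
            r = np.linalg.norm(diff) + 1e-12        # Euclidean distance
            s = F * np.exp(-r / Lc) - np.exp(-r)    # social factor s(r)
            rho = np.exp(-5.0 * min(ranks[i], ranks[j]) / N)  # dominance
            S += rho * s * (diff / r) + rng.uniform(-1.0, 1.0, n)
        P[i] = L[i] + S  # new position under the total force
    return P
```

Note that the sign of $s(r)$ switches with the distance $r$, so nearby agents repel each other while distant agents attract, which is what spreads the population over the search space.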
3.2. Social Process
The social process is a mechanism applied to improve the accuracy of the best candidate solutions identified from $\mathbf{P}^{k}$ during the solitary process. During the social process, a subset $\mathbf{B}$ of search agents is generated that involves the best $q$ elements of the population $\mathbf{P}^{k}$. Then, for each candidate solution $\mathbf{p}_{i}^{k}\in\mathbf{B}$, a set $\mathbf{M}^{i}$ of $h$ new random solutions is generated inside a limited subspace $\mathbf{C}^{i}$. The subspace $\mathbf{C}^{i}$, in which the $h$ new solutions around $\mathbf{p}_{i}^{k}$ are generated, presents the following limits:

$$uss_{j}^{i}=p_{i,j}^{k}+r,\qquad lss_{j}^{i}=p_{i,j}^{k}-r \tag{6}$$

where $uss_{j}^{i}$ and $lss_{j}^{i}$ correspond to the upper and lower limits of each subspace $\mathbf{C}^{i}$ for the $j$-th decision variable ($j\in\{1,\ldots,n\}$). On the other hand, the perturbation $r$ is computed by the following expression:

$$r=\frac{\sum_{j=1}^{n}(ub_{j}-lb_{j})}{n}\cdot\beta \tag{7}$$

where $lb_{j}$ and $ub_{j}$ symbolize the lower and upper limits for the $j$-th decision variable, respectively. $n$ is the total number of dimensions, while $\beta$ corresponds to a scale element that regulates the size of the limits of $\mathbf{C}^{i}$. The value of $\beta$ is within the interval [0,1].

Finally, the new position $\mathbf{l}_{i}^{k+1}$ of the search agent is obtained as the best element of the set integrated by $\mathbf{p}_{i}^{k}$ and all its $h$ respective random solutions $\mathbf{M}^{i}$. This task can be modeled by the following formulation:

$$\mathbf{l}_{i}^{k+1}=\underset{\mathbf{x}\in\{\mathbf{p}_{i}^{k}\}\cup\mathbf{M}^{i}}{\arg\min}\,f(\mathbf{x}) \tag{8}$$
The complete search procedure of LS-II is an iterative process that starts with the random initialization of the population $\mathbf{L}^{0}$ in the first iteration ($k=0$). Then, the solitary operator is applied over the current population $\mathbf{L}^{k}$. As a result, a temporary population $\mathbf{P}^{k}$ is produced. Finally, the social operator is considered to generate the next population $\mathbf{L}^{k+1}$. This process is repeated until the maximum number of iterations has been reached.
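The social phase described above can also be sketched in code. This is a minimal illustration under stated assumptions: `q`, `h`, and `beta` correspond to the subset size, the number of random solutions, and the scale factor described in the text, while the uniform sampling inside the subspace is our assumption:

```python
import numpy as np

def social_operator(P, fitness_fn, lb, ub, q=10, h=5, beta=0.1, rng=None):
    """Social step: the q best agents are each refined with h random
    candidates drawn inside a subspace of radius r around them; every
    refined agent keeps the best element of its pool."""
    rng = np.random.default_rng() if rng is None else rng
    N, n = P.shape
    r = beta * np.sum(ub - lb) / n              # perturbation radius
    fit = np.array([fitness_fn(x) for x in P])
    L_next = P.copy()
    for i in np.argsort(fit)[:q]:               # indices of the q best agents
        cand = np.clip(P[i] + rng.uniform(-r, r, size=(h, n)), lb, ub)
        pool = np.vstack([P[i][None, :], cand])
        pool_fit = np.array([fitness_fn(x) for x in pool])
        L_next[i] = pool[np.argmin(pool_fit)]   # keep the best of the pool
    return L_next
```

Because each refined agent keeps the best element of its own pool (which includes its current position), this step can never worsen the population's best fitness.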
Figure 1 shows the flowchart of the complete process.
4. The New Clustering Routing Approach
In this part, our proposed clustering routing protocol based on the Locust Search (LS-II) method is explained. The process of operation in our proposed scheme is divided into two main steps: the configuration stage and the operation stage.
4.1. Configuration Step
In its initial stage, the network parameters and architecture need to be defined. This is because several network configurations can be defined depending on the requirements of the application. The network is mainly defined by the characteristics of its sensor nodes.
The WSNs involve a set of nodes that define a topological structure where the information should be collected. Therefore, our protocol considers some network constraints and a determined topology. These assumptions have been taken into account in order to maintain compatibility with other works reported in the literature.
Network assumptions:
- 1.
The network presents one base station (BS), a group of cluster head elements, and a set of sensor nodes.
- 2.
The energy in the base station is supplied through an external source, while the power level of each sensor node is bounded.
- 3.
A particular sensor node is considered dead or useless when its energy is depleted.
- 4.
All sensor elements have homogeneous characteristics.
Network topology:
In its initial stage, all nodes are randomly distributed in the area where the information is collected.
The position of each node does not change during the network operation.
The base station is located in the middle of the sensing section.
The number of clusters in the structure is not fixed.
Every generic node (also known as a leaf node) is assigned to its nearest cluster head element.
Once the network structure is defined, the process for generating the connections of the whole network is executed. This step is the configuration phase. In this phase, an initial set of CHs is selected to produce the initial cluster structure. In the selection process of the optimal set of CHs, our proposed scheme adopts the following energy consumption model.
Model of Energy Consumption
In a generic WSN, the processes that consume most of the energy stored in the system are data reception and transmission. The amount of energy necessary for transmitting or receiving information varies according to the distance $d$ and the packet size in which the information is encoded. Under this perspective, the energy level required for transmitting a packet of $l$ bits is expressed by the following formulation:

$$E_{Tx}(l,d)=\begin{cases}l\cdot E_{elec}+l\cdot\varepsilon_{fs}\cdot d^{2}, & d<d_{0}\\ l\cdot E_{elec}+l\cdot\varepsilon_{mp}\cdot d^{4}, & d\geq d_{0}\end{cases} \tag{9}$$

where $E_{Tx}$ represents the level of energy consumption to transmit information, $l$ symbolizes the encoded data size, $E_{elec}$ corresponds to the dissipation of energy when only one bit of data is transmitted or received, $\varepsilon_{fs}$ denotes the energy dissipation factor in the free-space model, $\varepsilon_{mp}$ refers to the energy dissipation factor considering a multipath attenuation formulation, and $d_{0}$ represents a threshold value that defines the maximal transmission distance. It is computed by the following equation:

$$d_{0}=\sqrt{\varepsilon_{fs}/\varepsilon_{mp}}$$

The energy needed for receiving a packet of $l$ bits is computed by Equation (10):

$$E_{Rx}(l)=l\cdot E_{elec} \tag{10}$$

A generic sensor node $s_{i}$ can only transmit information to its cluster head. Under such conditions, the energy consumption of node $s_{i}$ can be calculated by the following model:

$$E_{s_{i}}=E_{Tx}(l,d_{toCH}) \tag{11}$$

where $d_{toCH}$ is the distance from $s_{i}$ to its cluster head. On the other hand, the energy consumption of a cluster head $ch_{j}$ integrates several elements: the energy consumed when packets are received, the aggregation of the data information, and the transmission of the collected data to the base station. Therefore, the energy consumption of $ch_{j}$ is computed by the following expression:

$$E_{ch_{j}}=n_{j}\cdot E_{Rx}(l)+(n_{j}+1)\cdot l\cdot E_{DA}+E_{Tx}(l,d_{toBS}) \tag{12}$$

where $n_{j}$ denotes the number of nodes that belong to the cluster $G_{j}$, $E_{DA}$ corresponds to the energy consumed to aggregate 1 bit of collected data, and $d_{toBS}$ is the distance from $ch_{j}$ to the base station. Under this scheme, the residual energy of a generic node $s_{i}$ is determined by Equation (13):

$$E_{res}(s_{i})=E_{cur}(s_{i})-E_{s_{i}} \tag{13}$$

where $E_{cur}(s_{i})$ is the current energy of $s_{i}$. On the other hand, the residual energy of a cluster head $ch_{j}$ is computed by the formulation defined in Equation (14):

$$E_{res}(ch_{j})=E_{cur}(ch_{j})-E_{ch_{j}} \tag{14}$$
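The consumption model above follows the classical first-order radio model, which can be sketched directly. The constants below are illustrative values commonly used in the WSN literature, not the paper's own settings (those are given in Table 2):

```python
import math

# First-order radio model constants (illustrative, assumed values)
E_ELEC = 50e-9       # J/bit dissipated by the transceiver electronics
EPS_FS = 10e-12      # J/bit/m^2, free-space amplifier factor
EPS_MP = 0.0013e-12  # J/bit/m^4, multipath amplifier factor
E_DA = 5e-9          # J/bit spent aggregating collected data
D0 = math.sqrt(EPS_FS / EPS_MP)  # distance threshold between the two models

def e_tx(l_bits: int, d: float) -> float:
    """Energy to transmit l_bits over distance d."""
    if d < D0:
        return l_bits * (E_ELEC + EPS_FS * d ** 2)
    return l_bits * (E_ELEC + EPS_MP * d ** 4)

def e_rx(l_bits: int) -> float:
    """Energy to receive l_bits."""
    return l_bits * E_ELEC

def e_cluster_head(l_bits: int, n_members: int, d_to_bs: float) -> float:
    """CH consumption: reception from members, aggregation of the
    collected data, and transmission of the result to the BS."""
    return (n_members * e_rx(l_bits)
            + (n_members + 1) * l_bits * E_DA
            + e_tx(l_bits, d_to_bs))
```

The two transmission branches agree exactly at `d = D0`, so the model is continuous in the distance.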
In the configuration step, LS-II is used to identify the initial set of CHs. Likewise, the initial network structure is generated according to the energy consumption model. Once LS-II has selected the cluster head nodes, the rest of the nodes are assigned to the nearest CH. However, if the distance from a generic node to its nearest CH is longer than its distance to the base station, then the generic node is not considered part of that cluster. Instead, its information is transmitted directly to the base station.
4.2. Operation Step
The operation step is the process where the CH nodes are selected by the LS-II algorithm to produce an optimal network structure. In this stage, once the optimal set of CHs has been selected, the information is transmitted to the BS. The data transmission is verified every round. One round refers to the time invested in transmitting the information from generic nodes to CHs and then to the BS. The network structure is updated in each round by executing the LS-II scheme in order to identify the optimal structure. The mechanism to select the CHs and the optimal network structure is explained in the following subsections.
4.2.1. Determination of Optimal Cluster Heads
In our scheme, the LS-II method identifies the set of CHs. In the identification, two main concepts are adopted: the residual energy of CH and the distance from a CH to the BS.
The proposed scheme identifies a sensor node as a CH when its residual energy is high in comparison with the rest of the sensors. According to the energy consumption model, cluster heads consume more energy than generic nodes. Therefore, sensor nodes that present the highest residual energy levels should be considered as CHs in order to balance their effects.
Determining the optimal node structure allows saving energy. Under such conditions, the identification of CHs is achieved through a reduction of the distance among nodes. For this operation, it is considered that crosslinks are eliminated since they generate interference during data transmission.
Different from other schemes, the number of CH nodes is not represented by a fixed constant. Instead, the number of CHs is constantly modified in order to obtain the best network structure in each round. Likewise, the determination of the set of CHs is not produced randomly or probabilistically, as in the DEEC or LEACH approaches. Instead, in our scheme, the LS-II technique automatically selects the best sensor nodes to become CHs in each round. This mechanism allows the use of the proposed scheme for a wide range of WSN applications without the constraints imposed by other protocols.
Then, each sensor node is assigned to the closest cluster head once the optimal set of clusters has been determined. However, there is an exception: if a sensor node is closer to the BS than to its closest CH, then the sensor node is not assigned to that group. Instead, the information from this node is transmitted directly to the base station. Once this process is finished, the candidate network structure is used in a transmission simulation in order to evaluate its performance. The process for determining the best network structure is discussed in the following section.
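The assignment rule with the BS exception can be sketched as follows. This is a minimal illustration with node and BS positions as 2-D tuples; the function and variable names are ours:

```python
import math

def assign_nodes(nodes, cluster_heads, bs):
    """Assign each generic node to its nearest CH unless the BS is
    closer, in which case the node transmits directly to the BS."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    clusters = {ch: [] for ch in cluster_heads}
    direct_to_bs = []
    for node in nodes:
        nearest = min(cluster_heads, key=lambda ch: dist(node, ch))
        if dist(node, bs) < dist(node, nearest):
            direct_to_bs.append(node)   # BS is closer: skip the cluster
        else:
            clusters[nearest].append(node)
    return clusters, direct_to_bs
```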
4.2.2. Identification of Optimal Network Structure
In our approach, the LS-II scheme selects the optimal cluster network structure by determining the optimal group of cluster heads. Therefore, in each execution, cluster head nodes are identified to integrate a new network topology. Then, the network performance is evaluated with regard to its fitness value. In this paper, an objective function to assess the effectiveness of the network structure is implemented. This cost function integrates four elements: the total intra-cluster distance, the total distance from cluster heads to the base station, the energy consumption, and the residual energy of cluster heads. The objective function is modeled by the following Equation (17):

$$f=f_{1}+f_{2}+f_{3}+f_{4} \tag{17}$$

The first element $f_{1}$ denotes the total intra-cluster distance, which is computed as follows:

$$f_{1}=\frac{D_{intra}-D_{intra}^{min}}{D_{intra}^{max}-D_{intra}^{min}},\qquad D_{intra}=\sum_{j}\sum_{s_{i}\in G_{j}}d(s_{i},ch_{j}) \tag{18}$$

Assuming $G_{j}$ is the subset of sensor elements belonging to the $j$-th group and $ch_{j}$ represents its cluster head, $D_{intra}^{max}$ and $D_{intra}^{min}$ are the maximum and minimum total intra-cluster distance from each sensor node in $G_{j}$ to its respective cluster head, respectively, $D_{intra}$ being the total intra-cluster distance.

The second element $f_{2}$ corresponds to the total distance from the cluster heads to the base station. This term is modeled as follows:

$$f_{2}=\frac{D_{BS}-D_{BS}^{min}}{D_{BS}^{max}-D_{BS}^{min}},\qquad D_{BS}=\sum_{j}d(ch_{j},BS) \tag{19}$$

where $D_{BS}^{max}$ and $D_{BS}^{min}$ are the maximum and minimum total distance from each $ch_{j}$ to the BS, respectively, $D_{BS}$ being the total distance from the cluster heads to the base station.

The third element $f_{3}$ refers to the energy consumption, which is computed as follows:

$$f_{3}=\frac{E_{T}-E_{T}^{min}}{E_{T}^{max}-E_{T}^{min}},\qquad E_{T}=\sum_{j}\left(E_{ch_{j}}+\sum_{s_{i}\in G_{j}}E_{s_{i}}\right) \tag{20}$$

where $E_{s_{i}}$ denotes the energy consumption of a generic sensor node $s_{i}$ assigned to cluster $G_{j}$, $n_{j}$ is the number of nodes in the group $G_{j}$, and $E_{T}^{max}$ and $E_{T}^{min}$ represent the maximum and minimum total energy consumption of the network, respectively.

The fourth element $f_{4}$ represents the relationship between the sum of the residual energy of the sensor node members and that of their cluster head:

$$f_{4}=\frac{E_{R}-E_{R}^{min}}{E_{R}^{max}-E_{R}^{min}},\qquad E_{R}=\sum_{j}\frac{\sum_{s_{i}\in G_{j}}E_{res}(s_{i})}{E_{res}(ch_{j})} \tag{21}$$

where $E_{res}(ch_{j})$ represents the residual energy of the cluster head, $E_{res}(s_{i})$ corresponds to the residual energy of the sensor node, and $E_{R}^{max}$ and $E_{R}^{min}$ denote the maximum and minimum total residual energy ratios, respectively.
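Putting the four normalized terms together, the cost evaluation can be sketched as follows. This is a hedged illustration: the min-max normalization bounds are passed in as known values for the scenario, and the equal weighting of the four terms is our assumption:

```python
import math

def network_cost(clusters, bs, residual, consumed, bounds):
    """Four-term cost of a candidate structure. `clusters` maps each CH
    position to its member positions, `residual` maps positions to
    residual energy, `consumed` is the total energy spent by the
    structure, and `bounds` holds (min, max) normalization pairs."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def norm(value, lo_hi):
        lo, hi = lo_hi
        return (value - lo) / (hi - lo) if hi > lo else 0.0

    # total intra-cluster distance
    d_intra = sum(dist(s, ch) for ch, members in clusters.items() for s in members)
    # total CH-to-BS distance
    d_bs = sum(dist(ch, bs) for ch in clusters)
    # residual-energy ratio of the members with regard to their CH
    r_ratio = sum(sum(residual[s] for s in members) / residual[ch]
                  for ch, members in clusters.items() if residual[ch] > 0)
    return (norm(d_intra, bounds["intra"]) + norm(d_bs, bounds["bs"])
            + norm(consumed, bounds["energy"]) + norm(r_ratio, bounds["residual"]))
```

Lower values indicate tighter clusters, shorter routes to the BS, less energy spent, and cluster heads with more residual energy than their members.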
A lower value of $f$ expresses that the identified set of clusters represents a better cluster network structure. Therefore, the LS-II method selects a set of cluster heads to produce distinct cluster structures in each execution with the objective of determining the optimal network structure. Our proposed protocol is exhibited in Algorithm 1.
Algorithm 1. LS-II routing protocol |
Input: Number of alive nodes
Output: The number of clusters and the cluster network structure
while alive nodes remain do
 | Identify cluster heads considering the search strategy of LS-II
 | for each generic node do
 | | if the distance from the node to the closest CH is smaller than its distance to the BS then
 | | | Allocate the node to the closest CH
 | | else
 | | | The node transmits the information directly to the BS
 | | end if
 | end for
 | Compute the cost function f of the produced cluster network structure
 | if f is better than the best cost found so far then
 | | Update the identified cluster network structure as the best structure
 | end if
end while
5. Experiments and Simulation
5.1. Metrics
A simulation was carried out to measure the performance of the proposed method. The metrics used to validate the technique contemplate the network lifetime, the total residual energy, the network stability period, the number of received data packets by the sink node, and the residual energy deviation. In the following, the description of these metrics is presented in detail.
The network lifetime considers the period between the moment the network starts transmitting and the moment the first node dies. This measure allows the analysis of the network stability period, in which all the sensors are alive and working. On the other hand, the network instability period spans from the first dead node to the end of the network life, which occurs when all nodes are dead.
The residual energy is determined by the sum of the residual energy of the alive nodes in the network divided by the sum of the initial energy of all nodes. This metric indicates the percentage of available energy in the network in every round and can be formulated as:

$$RE(r)=\frac{\sum_{i=1}^{N}E_{i}(r)}{\sum_{i=1}^{N}E_{i}(0)}\times 100$$

where $r$ is the current round, while $E_{i}(r)$ is the residual energy of the node $s_{i}$ in the round $r$. The initial energy of node $s_{i}$ is defined by $E_{i}(0)$. Therefore, the percentage of the total residual energy is given by $RE(r)$.
The residual energy deviation measures the difference between the node with the highest residual energy and the node with the lowest residual energy, divided by the total initial energy. This metric is expressed as a percentage and is calculated as follows:

$$RED(r)=\frac{E_{max}(r)-E_{min}(r)}{\sum_{i=1}^{N}E_{i}(0)}\times 100$$

In the round $r$, the residual energy of the node with the maximum residual energy in the network and of the node with the minimum residual energy are given by $E_{max}(r)$ and $E_{min}(r)$, respectively.
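Both energy metrics can be computed directly from per-node energy lists. A minimal sketch (the function names are ours):

```python
def residual_energy_pct(energies_now, energies_init):
    """Percentage of total residual energy in the network at a round."""
    return 100.0 * sum(energies_now) / sum(energies_init)

def residual_energy_deviation_pct(energies_now, energies_init):
    """Spread between the richest and poorest node, as a percentage
    of the total initial energy."""
    return 100.0 * (max(energies_now) - min(energies_now)) / sum(energies_init)
```

A small deviation indicates that the protocol balances the load well, since no node drains much faster than the others.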
The last metric considered in the experiments is the throughput, which measures the total number of packets sent to the BS by every cluster head.
5.2. Methods of Comparison and Parameter Settings
To assess the efficiency of the proposed method, it has been compared against one of the most popular clustering routing protocols: the Low Energy Adaptive Clustering Hierarchy Protocol (LEACH). Furthermore, the proposed technique has also been compared against other recent metaheuristic schemes used in clustering routing protocols, namely the Yellow Saddle Goatfish Algorithm (YSGA) and the Gray Wolf Optimization (GWO) algorithm.
The simulation requires the parameter settings listed in
Table 2. To ensure a fair comparison, these parameters have been configured with the same values for all the routing protocols considered in the experiments. The simulation parameters include the size of the sensing area, the number of sensor nodes in the network, the size of the packets, the initial energy of the nodes, and other values necessary to simulate the behavior of the network during the transmission-reception process.
In addition to the simulation parameter settings, the proposed method and the protocols in comparison need particular parameter configurations to ensure their best performance. These parameters are reported in
Table 3.
The LEACH protocol only requires the percentage of cluster heads assigned in every round, whose value is usually 5% of the alive nodes. Therefore, we have configured this value to 0.05. The YSGA method needs the number of search-agent groups as an initial parameter. Following the authors' recommendation, this value has been set to 4, since extensive experiments have demonstrated that the method reaches its best performance when four groups are configured for the search process. On the other hand, the GWO employs one parameter to regulate the exploration-exploitation balance in the search process. According to the authors' recommendation, this parameter varies linearly from 2 to 0 throughout the optimization stage. Finally, the LS-II method only requires the number of best individuals needed for the social phase. This value has been set to 10, since the method's best performance is achieved with this value. As additional parameters, the YSGA, GWO, and LS-II algorithms require the population size and the maximum number of iterations, which have been set to 20 and 50, respectively.
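The per-algorithm settings described above can be collected in a plain configuration structure, for instance as follows; the values are taken from the text, while the field names are our own.

```python
# parameter settings reported in the text (field names are illustrative)
PARAMS = {
    "LEACH": {"ch_percentage": 0.05},
    "YSGA":  {"num_groups": 4, "population": 20, "iterations": 50},
    "GWO":   {"a_range": (2, 0), "population": 20, "iterations": 50},
    "LS-II": {"best_individuals": 10, "population": 20, "iterations": 50},
}
```

Keeping the shared values (population size 20, 50 iterations) identical across the metaheuristic methods is what makes the comparison fair.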
All the experiments have been carried out using the MATLAB R2019a software on a PC with an Intel® Core™ i7-8550U processor at 1.80 GHz.
5.3. Network Lifetime
The network lifetime analyzes the evolution of the alive and dead nodes in every round. This evolution is illustrated in
Figure 2 and
Figure 3, respectively, where the graphics represent the obtained simulation results for all comparison methods.
Figure 2 shows how many alive sensor nodes remain in the network in every round, while
Figure 3 reveals the number of accumulated dead nodes in each round. From the graphics, it is clear that the proposed scheme outperforms the protocols in comparison since it manages to maintain more alive nodes in every round than the other methods. In other words, the proposed scheme manages to further reduce the number of nodes out of energy compared to the other protocols in every round.
Contemplating the attained outcomes, the proposed method manages to extend the network’s lifetime more than the protocols in comparison. The LS-II scheme can reach such results because it can automatically build the optimal number of clusters to stabilize the power load and extend the network lifetime. Moreover, the proposed protocol can change the network configuration in each round to increase the sensors’ lifetime and reduce the power expenditure.
The simulation of the network lifetime is also illustrated in
Figure 4, where the cluster configuration selected by the proposed scheme for different periods is shown.
Figure 4a–d show the optimal cluster configuration in rounds 1, 5, 11, and 15, respectively. The figure exemplifies the clusters’ formation given the set of cluster heads chosen by the proposed method in distinct rounds. In the figure, it can be appreciated that the number of cluster heads can change from one round to another. Similarly, the number of clusters and the size of every cluster can be different. Furthermore, the figure shows the nodes that have died over time and how they are excluded from the clustering-transmission-reception process.
In
Figure 4, the base station is placed at the center of the sensing area and symbolized by a gray diamond marker. Normal sensor nodes are represented as gray circles, while cluster heads are indicated with gray square markers. In contrast, dead sensor nodes are illustrated with black circles. Regarding the communication links, every normal sensor node that belongs to a specific cluster is connected to its cluster head by a dotted line. Similarly, the communication between every cluster head and the base station is represented with a dashed line. Finally, a solid line represents the communication link of every normal sensor node that is not part of a cluster and transmits directly to the base station.
5.4. Total Residual Energy
The total residual energy of the network is calculated by summing every sensor node’s remaining energy in every round. The evolution over time of this calculation for each protocol is reported in
Figure 5. A closer inspection of this statistic reveals that the proposed protocol maintains the highest residual energy throughout the network lifetime. From the figure, it can be observed that the level of energy in all protocols is 100% at the beginning of the simulation. However, the power consumption gradually reduces the total residual energy in every round until the residual energy percentage reaches zero. In contrast to the other protocols, the proposed approach uncovers the cluster structure that consumes the minimum energy in every single round. Consequently, it achieves the lowest energy expenditure and maximizes the network's residual energy, outperforming the other protocols in comparison.
5.5. Network Instability Period
The statistics of the network instability period are shown in
Figure 6. The figure reveals the round in which the first sensor node dies for every protocol in comparison. Likewise, the figure shows the round where half of the sensors are out of power and the time where all sensor nodes are dead.
From the achieved outcomes, it can be observed that the proposed protocol accomplishes the maximum values. In the proposed method, the first dead node appears at round 8, fifty percent of the network dies in round 12, and the complete set of sensor nodes dies in round 22. The GWO and LEACH protocols reached lower indicators concerning the first, half, and last node out of power in the wireless sensor network, proving that the proposed method provides a longer lifetime to the network.
5.6. Throughput
The throughput measures the number of packets that have been sent to the base station. It is desirable to send the largest number of packets throughout the life of the network. This measure is illustrated in
Figure 7. The throughput depends directly on the number of alive sensor nodes transmitting data packets in every round: the longer the nodes remain alive, the more information is collected.
Figure 7 clearly shows that the proposed protocol has the highest throughput. The number of packets sent to the base station reaches 1200 for the LS-II, while the LEACH protocol sends less than 1000. These results are as expected since the proposed method extends the life of the network, and therefore, the amount of gathered data is improved.
5.7. Energy Deviation
The difference between the two sensors with the maximum and minimum residual energy determines the energy deviation. This statistic measures the load balancing, which indicates how the energy expenditure is distributed among all the sensor nodes. When the load is not balanced, the network presents a longer instability period, a shorter lifetime, and the premature death of sensor nodes. The energy consumption is not balanced if the energy deviation is high. Therefore, it is desirable to reduce this metric for better load balancing and, at the same time, to delay the round in which the highest peak is reached.
The energy deviation obtained from the experiments is illustrated in
Figure 8. The statistic in the figure reveals that the LEACH protocol generates the maximum peak, reaching its highest value in round three. Therefore, this protocol has the worst load balance. In contrast, the lowest energy deviation is attained by the proposed LS-II method, which reaches its highest peak in round nine. Therefore, the proposed scheme shows the best performance when the energy deviation is analyzed.
6. Conclusions
A novel clustering routing protocol based on the LS-II scheme has been proposed in this article. The approach achieves the optimal cluster structure to decrease the energy expenditure in each transmission-reception process, extending the network lifetime. The LS-II protocol regulates the number of cluster heads and automatically decides the nodes' role as regular sensors or cluster heads in each round. The proposed approach has been compared against the LEACH protocol, which is one of the most popular clustering routing protocols. Additionally, a similar protocol scheme based on the GWO has been used in the performance analysis. Simulation results have demonstrated that the proposed technique performs better than the comparison methods. Several metrics were implemented to evaluate the proposed protocol, such as the network lifetime, the energy consumption, the energy deviation, the number of packets delivered, and the network instability period. The statistical analysis has demonstrated that the proposed strategy performs very well, improving the network's lifetime while providing a better energy load balance. As further research directions that deserve attention, we can establish (a) the use of additional metaheuristic mechanisms to improve the accuracy of the solutions, (b) the use of the opposition-based phenomenon to increase the exploration capacities of the LS-II scheme, and (c) the dimensionality reduction of the optimization problem for the scalability of the network.