In recent years, Wireless Sensor Networks (WSNs) have attracted much attention from industry and researchers owing to their contribution to numerous applications, including military and environmental monitoring. However, reducing network delay and improving network lifetime remain major challenges in the WSN domain. To address these issues, we propose an Energy-Efficient Scheduling using Deep Reinforcement Learning (E2S-DRL) algorithm for WSNs. E2S-DRL comprises three phases that together prolong network lifetime and reduce network delay: a clustering phase, a duty-cycling phase, and a routing phase. E2S-DRL starts with the clustering phase, which reduces the energy consumption incurred during data aggregation by means of a Zone-based Clustering (ZbC) scheme; in the ZbC scheme, hybrid Particle Swarm Optimization (PSO) and Affinity Propagation (AP) algorithms are utilized.
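The abstract names the building blocks of ZbC but not their exact interplay. The minimal Python sketch below shows one plausible reading, in which nodes are partitioned into zones, Affinity Propagation groups the nodes within each zone, and the highest-energy member of each group is elected cluster head; the PSO refinement is omitted, and the node count, field size, zone split, and energy values are illustrative assumptions, not the paper's design:

```python
# Hypothetical sketch of zone-based clustering (ZbC): split the field into
# zones, cluster each zone with Affinity Propagation, then elect the member
# with the highest residual energy as cluster head. PSO refinement omitted.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
positions = rng.uniform(0, 100, size=(60, 2))   # 60 nodes in a 100 m x 100 m field
energy = rng.uniform(0.5, 2.0, size=60)         # residual energy per node (J)

n_zones = 2                                     # assumed: split the field along x
zone_of = (positions[:, 0] // (100 / n_zones)).astype(int)

cluster_heads = []
for z in range(n_zones):
    idx = np.where(zone_of == z)[0]             # global indices of nodes in zone z
    labels = AffinityPropagation(damping=0.9, random_state=0).fit_predict(positions[idx])
    for c in np.unique(labels):
        members = idx[labels == c]
        # energy-aware head election: pick the highest-energy member
        cluster_heads.append(members[np.argmax(energy[members])])

print("elected cluster heads:", sorted(cluster_heads))
```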
Duty cycling is adopted in the second phase by executing the DRL algorithm, through which E2S-DRL effectively reduces the energy consumption of individual sensor nodes.
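The abstract does not spell out the DRL state, action, or reward design. As a hedged illustration only, the tabular Q-learning sketch below learns a sleep/wake duty cycle from an assumed buffer-occupancy state, with a reward that trades the energy cost of waking against the delay of leaving packets queued; all constants and the traffic model are assumptions for the sketch:

```python
# Hypothetical duty-cycling sketch: tabular Q-learning over an assumed
# state (buffer occupancy 0..4) and two actions (sleep / wake). The reward
# trades energy spent while awake against queued-packet delay.
import random

N_STATES, ACTIONS = 5, ("sleep", "wake")
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(buf, action):
    arrivals = random.randint(0, 1)             # assumed light random traffic
    if action == "wake":
        sent = min(buf, 2)                      # drain up to 2 packets per slot
        nxt = min(buf - sent + arrivals, N_STATES - 1)
        reward = sent - 1.0                     # -1.0 = energy cost of waking
    else:
        nxt = min(buf + arrivals, N_STATES - 1)
        reward = -0.2 * nxt                     # delay penalty for sleeping on a backlog
    return nxt, reward

buf = 0
for _ in range(20000):                          # online learning loop
    a = random.randrange(2) if random.random() < EPS else Q[buf].index(max(Q[buf]))
    nxt, r = step(buf, ACTIONS[a])
    Q[buf][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[buf][a])
    buf = nxt

policy = [ACTIONS[q.index(max(q))] for q in Q]
print("learned duty-cycle policy per buffer level:", policy)
```

Under this reward shaping, the learned policy typically sleeps at low buffer levels and wakes once the backlog grows, which is the qualitative behavior a duty-cycling scheduler aims for.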
The transmission delay is mitigated in the third (routing) phase using Ant Colony Optimization (ACO) and the Firefly Algorithm (FFA). Our work is modeled in Network Simulator 3.26 (NS3), and the results are evaluated in terms of network lifetime, energy consumption, throughput, and delay. This evaluation shows that E2S-DRL reduces energy consumption, lowers delay by up to 40%, and improves throughput and network lifetime by up to 35% compared with the existing cTDMA, DRA, LDC, and iABC methods.
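For the routing phase described above, the following is a minimal sketch of the ACO ingredient only (the FFA refinement is not shown); the toy topology, pheromone constants, and simplified per-link update are illustrative assumptions rather than the paper's formulation:

```python
# Hypothetical ACO routing sketch: ants pick next hops with probability
# proportional to pheromone * 1/distance, and links on short paths are
# reinforced. Evaporation is folded into the deposit step for brevity.
import random

# assumed toy topology: node -> {neighbor: link distance in meters}
links = {"A": {"B": 4, "C": 2}, "B": {"D": 5}, "C": {"D": 3}, "D": {}}
tau = {(u, v): 1.0 for u in links for v in links[u]}   # pheromone per link
RHO, DEPOSIT = 0.1, 10.0                               # evaporation, deposit scale

def pick_next(u):
    nbrs = list(links[u])
    w = [tau[(u, v)] * (1.0 / links[u][v]) for v in nbrs]
    return random.choices(nbrs, weights=w)[0]

for _ in range(200):                                   # release 200 ants from A to D
    node, path, cost = "A", ["A"], 0.0
    while node != "D":
        nxt = pick_next(node)
        cost += links[node][nxt]
        path.append(nxt)
        node = nxt
    for u, v in zip(path, path[1:]):                   # reinforce the traversed links
        tau[(u, v)] = (1 - RHO) * tau[(u, v)] + DEPOSIT / cost

best = max(tau, key=tau.get)
print("strongest link after learning:", best, round(tau[best], 2))
```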