1. Introduction
In the modern-day digital era, the number of wireless devices becoming part of the connected world is ever increasing. Most of the wireless devices are small in size, with limited computational capacity, and with a limited energy source [
1]. When these devices must process computation-intensive jobs, their limited computing capacity leads to long processing times, which in turn consume a large amount of energy. The problem can be addressed by means of computation offloading, i.e., by transferring computation-intensive jobs to a more powerful remote device [
2]. Currently, cloud computing is widely used for these aims [
3]. Despite the superior services that cloud computing provides to end-users, it has a very serious limitation: the connectivity between the end devices and the cloud is over the Internet, and the distance between them can be thousands of kilometers. This cripples the communication for delay-sensitive cloud-based applications, such as connected vehicles, fire detection and firefighting, smart grid, and content delivery applications.
Fog computing is a term coined by Cisco [
4], which refers to bringing the resources and services originally provided by cloud computing closer to the user-end devices, including computation, networking services, and storage [
5]. The ever-increasing number of small devices, with reduced computational speed, small memory size, and power constraints, demands a networking scheme that provides these devices with all the services they need. These devices can be positioned anywhere, e.g., in factories, in water bodies, alongside a railway track, in a vehicle, or in an oil refinery plant, with the only requirement being a network connection. As its name implies,
fog is a cloud that is closer to the ground. Likewise, fog computing brings all the functionalities of cloud computing to the edge of the network. As noted in the literature [
3,
6], cloud computing is not a good choice for IoT applications, and hence fog could be used as an alternative.
IoT can be seen as a framework enabling the interconnection of any object, provided that each object can be uniquely identified and that no human intervention is required, so as to enable effective object-to-object interaction. The IoT is the interconnection of various devices that communicate with each other over the web, all collecting and sharing data. The advent of powerful integrated circuits (ICs) and the ubiquity of wireless networks have enabled the creation of things ranging from the smallest ant-sized objects up to objects as big as an airplane. Adding sensors to these devices makes them intelligent and able to share real-time data without human intervention. The word “thing” is used because virtually anything in the physical world can be connected to the Internet, exploiting connected sensors to collect data and communication protocols to transmit and receive data, so that the thing becomes an IoT device.
IoT has diverse applications, from the manufacturing industry to automotive, from transportation and logistics to retail, and from healthcare to safety across all industries. Thanks to the availability of low-cost and low-power sensors, data collection becomes an easy task. Moreover, for devices that are deployed massively or in hard-to-reach and dangerous places, battery replacement becomes complex whenever they run out of energy. Therefore, in addition to computation offloading, which reduces the energy consumption and the processing delay, energy harvesting (EH) allows for their lifetime to be increased [
7].
The upcoming 6G communication systems foresee the introduction of several innovations aimed at advancing beyond the 5G standard [
8]. Among others, there is an increasing interest towards the concept of sustainability [
9]. Indeed, recent studies have made it clear that telecommunication equipment has an ever-increasing impact on global energy consumption. The ultimate idea is that of moving toward a net-zero-emission approach, with the possibility of introducing zero-energy devices, i.e., devices able to operate with no impact in terms of energy consumption. This is possible thanks to an efficient exploitation of EH solutions, making it possible for devices to recharge their batteries. Indeed, powering mobile devices using wires is not viable. The rechargeable battery is the conventional source of power for most portable devices. The bigger the battery, the higher the capacity; however, this comes at the cost of an increased physical size, which significantly increases the overall weight of the device. Since these batteries have limited capacity, they have to be recharged frequently. Nevertheless, it is very inconvenient to connect these devices to cables, and hence a mechanism must be sought to recharge them through wireless connections. As a result, two wireless power supply methods have been developed: EH [
10] and wireless power transfer (WPT) [
11].
EH, also known as power harvesting or energy scavenging, is the process of harvesting energy from external power sources, such as solar power, heat, wind, electromagnetic wave radiation, and kinetic energy. The harvested energy is stored and mostly used for small mobile devices, such as wearable electronics and wireless sensors. Even though the harvested energy is small, and there were concerns about whether it is sufficient for computation-intensive applications, thanks to the rapid development of silicon technology even a tiny amount of energy can support a large number of operations. With EH, the need for replacing batteries is reduced, especially for devices used in hazardous environments. Some papers have indeed considered the possibility of integrating EH solutions, exploiting solar panels, with fog computing nodes. As an example, in study [
12], the authors proposed a forecasting procedure able to increase the node lifetime. Despite the increase enabled by the proposed solution, the node lifetime still remains limited.
However, EH is unreliable because of fluctuations, its high dependency on climate conditions, and human motion. Besides this, since the harvested energy is small, it cannot be used for power-hungry devices, such as smartphones. Unlike the passive EH mechanism, WPT provides a stable and manageable wireless power supply by using a permanent power source. In WPT, the wireless links themselves are used as a source of energy, and hence they can be exploited for recharging batteries and activating devices at a distance. WPT is at the basis of a self-sustainable implementation of the upcoming 6G networks [
13]. Simultaneous wireless information and power transfer (SWIPT) is a particular type of WPT, where the transmission of information (data), as well as power, is performed simultaneously based on either time switching or power splitting [
14]. To achieve this simultaneity, a receiver is designed that is able to separately process information and harvest RF energy [
15]. This is particularly useful for cooperative networks that depend on energy and information relaying. Nodes that are more energy-constrained harvest energy from nearby powerful nodes that have ample energy.
Figure 1 shows a typical cooperative communication network employing SWIPT. Here, the source
S is connected to a direct source of power and hence does not have any energy constraints. The intermediate relay node (R) can be either the same device as the end node
D or different. It can be assumed that the relay is relatively close to the source
S compared to the end node
D. Therefore, the relay can harvest energy from the source, and at the same time, the end node can harvest energy from the relay. Information can also be transmitted from the source to the relay and then to the end node, or vice versa.
It is worth noting that several different SWIPT implementations exist, where the energy and the information are managed in different ways. For instance, in time-switching SWIPT, the time frame is divided into slots, one assigned to information sharing and the other to power transfer. In the separate-receivers architecture, it is instead assumed that each device is equipped with two receivers, one for EH and one for information decoding. There are also two other schemes: in power splitting, the received power is split so that one portion is used for information decoding and the remaining portion for EH, while in antenna switching, it is assumed that each device has one antenna for EH and one for information decoding. In the following study, we will implicitly refer to the time-switching architecture.
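To make the time-switching operation concrete, the following sketch accounts for the energy harvested and the data delivered in one frame under a time-switching split. It is a minimal illustration: the harvesting efficiency, received power, frame duration, switching fraction, and data rate are illustrative placeholders, not parameters of the system studied in this paper.

```python
def time_switching_frame(p_rx_w, rate_bps, frame_s=1e-3, alpha=0.3, eta=0.5):
    """Energy/data budget of one time-switching SWIPT frame (illustrative).

    A fraction `alpha` of the frame is dedicated to energy harvesting,
    while the remaining fraction carries information.
    """
    harvested_j = eta * p_rx_w * alpha * frame_s          # linear EH model
    delivered_bits = rate_bps * (1.0 - alpha) * frame_s   # information phase
    return harvested_j, delivered_bits

# Example: -30 dBm received power, 1 Mbit/s link, 1 ms frame
e_h, bits = time_switching_frame(p_rx_w=1e-6, rate_bps=1e6)
print(f"harvested {e_h:.2e} J, delivered {bits:.0f} bits per frame")
```

Increasing the switching fraction trades information throughput for harvested energy, which is the tension the rest of the paper manages through the offloading thresholds.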
1.1. Related Works
In this sub-section, we list the most relevant activities in the areas of mobile edge computing (MEC)/fog computing and WPT. Recently, there have been some review papers in these areas. The authors in reference [
16] present a comprehensive review on fog computing technology, and provide a comparison with other technologies, such as cloud computing, MEC, and cloudlet computing. A comprehensive survey in reference [
17] studies the methodologies in WPT-enabled MEC offloading.
There have also been some technological designs to improve the performance of the WPT technology. The authors in reference [
18] present the non-maximally coefficient symmetry multi-rate filter bank in order to reduce the delay and hardware complexity for a wideband channelizer. The authors in reference [
19] design a quadratic sandwich rectenna circuit that increases the output power, making it suitable for wireless devices in wireless telecommunication systems.
A consideration of WPT-enabled MEC systems has also been observed in several works. The authors in reference [
20] propose an energy-efficient approach for NOMA grouping selection. In particular, based on the exploitation of WPT, they demonstrate a user grouping method in a NOMA-based system for implementing a cooperative scheme, while also maximizing communication efficiency. There have also been some works focusing on optimizing the transmitted power for energy minimization. The authors in reference [
21] jointly optimize the time assignments for EH and offloading, and the transmit power at the device for offloading. Similar works are observed in references [
22,
23]. A game-theoretical approach is proposed in reference [
24] for the resource allocation problem in a wireless-powered MEC system for IoT applications. They try to obtain the optimal power transfer for both the access point (AP) and the harvesting devices. The authors in reference [
25] propose an optimization framework, aiming to maximize a utility function based on data transfer while minimizing energy consumption; the optimization is performed by jointly considering wireless power transfer at the access-point side and offloading power consumption at the end-device side, while taking into account delay constraints. The authors in reference [
26] optimize the computing mode selection for mobile devices along with the system transmission time allocation in an MEC system allowing a WPT. Similar work is studied in reference [
27], whereby the authors use perfect and imperfect channel state information for the transmission mode selection in a wireless-powered sensor network. In reference [
28], the authors focus on maximizing the harvested energy and minimizing the consumed energy of mobile devices in an MEC-WPT system by formulating a problem to optimize the EH time, task offloading time, task offloading size, and the devices’ CPU frequencies.
The authors in reference [
29] analyzed the performance of an amplify-and-forward system with RF EH and information processing. The authors proposed a time-switching-based relaying protocol, where the receiver allocates a portion of its time to EH and the rest to information processing, and a power-splitting-based relaying protocol, where a portion of the received power is used for EH and the rest for information processing. The work in study [
29] was extended in reference [
15] to systems with decode-and-forward relaying. The authors analyzed the throughput of each cooperative network that employs the time-switching protocol and the power-splitting protocol. Assuming a relay node with an EH constraint, the exact analytical expressions of the achievable throughput and ergodic capacity are derived for the decode-and-forward relaying networks for both the time-switching and power-splitting schemes. Chen et al. [
30] studied a time-switching cooperative network with downlink energy transfer and uplink information transmission. The researchers suggested a harvest-then-cooperate protocol, where the intermediate relay and the source harvest energy from the same hybrid AP. Unlike conventional cooperative networks, the relay and the source have no batteries, and hence their only source of energy is the power harvested from the AP. A delay-limited transmission mode is considered, and the results are supported by both theoretical analysis and numerical simulations.
1.2. Paper Contribution
The possibility of integrating energy transfer and communication leads to a novel concept, where the joint management of communication, energy, and computing could bring several advantages and is promising for the implementation of the upcoming 6G networks [
31]. The idea behind our work falls within this realm. The aim of this paper is to formulate the computation offloading problem for a SWIPT-based fog network. By doing so, computation-intensive processes are offloaded whenever a more energy-efficient option is available. This has a two-fold advantage: first, energy consumption is reduced, thereby increasing the lifetime of the nodes; second, the packet processing delay is reduced. The objective of the work is to develop an algorithm that outlines the thresholds for computation offloading. While in reference [
32] we defined the main elements of the fog architecture for joint offloading and SWIPT, limited to a two-node scenario, in this work a more general scenario is considered, where an intermediate node is also used for both energy transfer and computation offloading. In addition, two policies are proposed, aiming at optimizing the performance of the system by increasing the node lifetime under different conditions. The theoretical analysis has also been extended to the more general three-node case, as have the numerical results.
The rest of the paper is structured as follows: In
Section 2, the system model is presented, and both energy- and offloading-related parameters are introduced. In
Section 3, the feasibility analysis is carried out, allowing an understanding of the extent to which the proposed solution is viable with respect to the energy constraints. In
Section 4, the proposed algorithm for optimizing the computation offloading is described, while in
Section 5, the numerical results obtained through computer simulation are shown. In
Section 6, a final discussion of the proposed idea is given together with some proposals for further developments, while in
Section 7, the conclusions are drawn.
3. Feasibility Analysis
In an energy-constrained network, the primary goal is to improve the lifetime of the whole network. For fog networks with no SWIPT capability, the network is able to operate only as long as the batteries at the nodes are not depleted. As soon as a battery depletes, the network starts to fail. This becomes even worse when the intermediate relay node battery is depleted faster. A failed end node does not affect the system behavior as much; however, a failed relay node affects all the end nodes that are connected to it.
In order to understand the feasibility of the SWIPT-enabled fog network, we assume that, for any network with a given bandwidth, packet generation rate, and packet size, if the harvested energy up to time instant t is greater than or equal to the consumed energy, the network is alive at least up to time instant t. Hence, we define the stability states of the end node and of the relay node as the conditions under which their respective harvested energy up to time t is greater than or equal to their consumed energy.
3.1. Local and Offloading Thresholds
We assume that packet generation follows a Poisson process with a mean rate λ. This means that the higher the value of λ, the more packets are generated in a given interval of time, which makes the packet inter-arrival time very short. Conversely, the smaller the value of λ, the fewer packets are generated, which makes the packet inter-arrival time longer. Therefore, whether a given packet should be processed locally or offloaded depends on the packet inter-arrival time, once the local processing and offloading thresholds have been calculated from the other parameters.
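As a minimal illustration of this arrival model, the snippet below draws the inter-arrival times of a Poisson packet-generation process (exponential gaps with mean 1/λ) and shows how the mean gap shrinks as λ grows; the rate values used here are illustrative only.

```python
import random

def packet_inter_arrivals(lam, n_packets=1000, seed=1):
    """Inter-arrival times of a Poisson packet-generation process with
    mean rate `lam` (packets per unit time): gaps are exponential with
    mean 1/lam, so a larger lam yields shorter inter-arrival times."""
    rng = random.Random(seed)
    return [rng.expovariate(lam) for _ in range(n_packets)]

for lam in (1.0, 5.0, 50.0):
    gaps = packet_inter_arrivals(lam)
    print(f"lam={lam:5.1f}  mean inter-arrival = {sum(gaps) / len(gaps):.4f}")
```

These inter-arrival times are the quantity that is later compared against the local and offloading thresholds derived in the following subsections.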
3.2. Threshold Values for the End Node
Assuming an arbitrary packet inter-arrival time, the stable state for the end node is:
Exploiting (1) and (4), we can rewrite (13) as:
The above equation is the generalized formula for a stable state for both the local processing and the offloading. In the case of local processing (i.e., when the offloading indicator is equal to 0), (14) becomes:
Rearranging (15) and solving for the inter-arrival time, we obtain:
In order to respect the stability state under the condition of local processing, the inter-arrival time should be:
For offloaded processing (i.e., when the offloading indicator is equal to 1), (14) becomes:
Rearranging (17) and solving for the inter-arrival time, we obtain:
For offloaded processing, the minimum inter-arrival time that fulfills the stability condition is:
The associated processing delays for both the local processing and the offloaded operations are given by:
3.3. Threshold Values for the Relay Node
As already discussed above, the stability condition is respected only if:
It is possible to write the energy stability equation, similar to (14), for the relay node as:
For local processing (i.e., when the offloading indicator is equal to 0), after algebraic simplification and solving for the inter-arrival time, we get:
For a stable state condition, the minimum inter-arrival time that must be respected for local processing is:
while for offloaded processing (i.e., when the offloading indicator is equal to 1), simplifying the expression and solving for the inter-arrival time, we obtain:
For offloaded processing, the minimum packet inter-arrival that must be fulfilled for the stable state condition is:
The processing delays associated with both the local and offloaded processing are:
5. Numerical Results
The numerical results are obtained through computer simulations in MATLAB. The values for the different parameters are given in
Table 2. These parameters are fixed throughout the simulations, while the packet size, packet arrival rate, and bandwidth are varied. By varying these three parameters, different simulation results are obtained.
Each simulation runs for a maximum of 500,000 iterations. Each time slot has a 6 ms duration; in total, each simulation therefore covers 3000 s. The environmental propagation parameter in (2) has been set to 2.7. The optimization algorithms, with both policies, as well as the four benchmark procedures, have been considered in each of the following scenarios. In each scenario, different value ranges for the parameters are considered, and three comparisons are plotted: the lifetime of the network, the harvested energy, and the delay. For the algorithms optimizing the behavior, the percentage of locally processed and offloaded packets is also plotted.
Lifetime of the network: Both the end node and the relay node are battery operated, and each is assumed to have an initial energy of 20 J. The lifetime of the network is calculated based on the remaining energy at the nodes: whichever of the two, either the relay node or the end node, first sees its remaining energy fall below the threshold value brings the network down, and that instant determines the lifetime of the network. A node is assumed to be off if its remaining energy falls below 10% of its battery capacity (a minimal sketch of this check is given after these definitions).
Harvested energy: The system architecture is based on time-switching SWIPT. The antenna is used for EH whenever there is no packet transmission or reception. The relay node can harvest energy from the AP, while the end node harvests energy from the relay node. The harvesting efficiency of the relay node is assumed to be twice that of the end node.
Locally processed/offloaded percentage: Generated packets can either be processed locally or offloaded for remote processing. The decision is made by comparing the packet inter-arrival time with the local and offloading thresholds. The end node decides whether to process its packets locally or offload them to the relay node. The relay node, in turn, makes its own decision either to process the packets locally or offload them to the AP.
Packet generation: The packets are generated randomly following a Poisson distribution. The average packet arrival rate is provided to the simulation for each time slot.
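To make the lifetime bookkeeping explicit, the following minimal sketch computes the network lifetime from per-slot residual-energy traces of the two nodes, assuming the 20 J initial energy, 6 ms slot, and 10% cut-off described above; the traces, function name, and drain rates are illustrative placeholders rather than outputs of our simulator.

```python
def network_lifetime(e_end, e_relay, slot_s=6e-3, capacity_j=20.0, cutoff=0.10):
    """Return the network lifetime in seconds given per-slot residual
    energies of the end node and the relay node. The network is down as
    soon as either node drops below `cutoff` of its battery capacity."""
    threshold = cutoff * capacity_j
    for slot, (e_e, e_r) in enumerate(zip(e_end, e_relay)):
        if e_e < threshold or e_r < threshold:
            return slot * slot_s
    return len(e_end) * slot_s   # both nodes survived the whole run

# Toy traces: the relay drains faster than the end node
end_trace = [20.0 - 0.004 * k for k in range(500_000)]
relay_trace = [20.0 - 0.010 * k for k in range(500_000)]
print(f"lifetime = {network_lifetime(end_trace, relay_trace):.1f} s")
```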
As already described in the previous section, our proposed algorithm, with its two policies, and the four benchmark procedures are considered. The Local with-SWIPT and without-SWIPT algorithms process the packets locally. The Local with-SWIPT algorithm can harvest energy from a remote node; for example, the end node harvests energy from the relay node, while the relay node harvests energy from the AP.
The two other benchmarks are the offloading with SWIPT and without SWIPT. Both of them offload their packets, i.e., the end node offloads the packets to the relay node and the relay node offloads its own packets and the packets coming from the end node to the AP. The offloading with SWIPT can harvest energy whenever its antennas are not transmitting or receiving packets.
The last two algorithms are SWIPT with Optimization P1 (Policy 1) and SWIPT with Optimization P2 (Policy 2). Both make the offloading decision based on the local and offloading threshold values derived in the previous section: by comparing these thresholds with the packet inter-arrival time, they decide whether to offload a packet or process it locally. The difference is that Policy 1 uses this comparison only to decide whether to offload or not. In this case, the end node can harvest at any time, but this consumes a lot of energy from the relay node; since the end node has no information about the energy status of the relay node, the relay node can run out of energy, and hence the lifetime of the network is impacted. Policy 2 is introduced to alleviate this problem: the end node does not harvest energy from the relay node all the time, but only if its remaining energy drops below 50% of its initial energy and if the packet inter-arrival time at the relay node allows the relay to harvest more energy than it consumes.
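The two policies differ only in when the end node is allowed to harvest from the relay. The sketch below captures that gating logic; the function name, arguments, and the way the relay-side condition is expressed (as a minimum inter-arrival time) are illustrative assumptions, while the 50% figure follows the description above.

```python
def end_node_may_harvest(policy, e_end_j, e_end_init_j,
                         relay_inter_arrival_s, relay_recovery_s):
    """Decide whether the end node harvests from the relay in the current slot.

    Policy 1: the end node may harvest whenever its antenna is idle,
              regardless of the relay's energy situation.
    Policy 2: harvest only if (a) the end node's residual energy is below
              50% of its initial energy, and (b) the packet inter-arrival
              time at the relay is long enough for the relay to harvest
              more from the AP than it spends on the transfer.
    """
    if policy == 1:
        return True
    below_half = e_end_j < 0.5 * e_end_init_j
    relay_can_recover = relay_inter_arrival_s >= relay_recovery_s
    return below_half and relay_can_recover

# Example: end node at 8 J out of 20 J, relay sees 40 ms gaps, needs 25 ms to recover
print(end_node_may_harvest(2, 8.0, 20.0, 0.040, 0.025))   # True
print(end_node_may_harvest(2, 15.0, 20.0, 0.040, 0.025))  # False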
5.1. Threshold Differences between the Local and Offloading Computations
As described previously, in the optimization algorithms, the offloading decision is based on the comparison between the packet inter-arrival time and the local and offloading thresholds. Based on this comparison, three regions can be identified: (a) the inter-arrival time is below both thresholds, (b) the inter-arrival time is between the two thresholds, and (c) the inter-arrival time is above both thresholds. In the first two regions, the decision depends on the threshold with the lower value, and hence the choice is based on the minimum of the two. In the third region, the decision depends on the operation with the minimum packet processing time. The regions are shown in
Figure 4.
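A compact sketch of this three-region rule is given below; the delay arguments used to break the tie in region (c) stand for the local and offloaded processing delays of Section 3, and all names and values are illustrative.

```python
def offload_decision(inter_arrival, t_local, t_offload, d_local, d_offload):
    """Three-region offloading rule.

    (a) inter-arrival below both thresholds and
    (b) inter-arrival between the thresholds: follow the threshold with
        the lower value, i.e., the minimum of the two;
    (c) inter-arrival above both thresholds: both modes are feasible,
        so pick the one with the lower processing delay.
    """
    if inter_arrival >= max(t_local, t_offload):          # region (c)
        return "local" if d_local <= d_offload else "offload"
    # regions (a) and (b): follow the minimum of the two thresholds
    return "local" if t_local <= t_offload else "offload"

print(offload_decision(0.05, t_local=0.02, t_offload=0.04,
                       d_local=0.010, d_offload=0.004))   # -> 'offload'
```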
The goal of the optimization is to properly manage the offloading decision based on the threshold values. As shown in the tables below, the difference between the two thresholds is evaluated for different bandwidth and packet size values; evaluating these differences allows a better understanding of the choice between option A and option B in
Figure 4.
Table 3 shows the different values for the end node, while
Table 4 shows these values for the relay node.
The same results reported in
Table 3 and
Table 4 are also reported in
Figure 5 and
Figure 6, allowing a better understanding of the impact of the parameters over the offloading thresholds.
As can be observed from
Table 3 and
Table 4, and from
Figure 5 and
Figure 6, the threshold difference reduces, in both cases, for small values of the bandwidth and small values of the packet size. As the packet size increases, keeping the bandwidth constant, the offloading threshold increases. It is also possible to notice that, for a fixed packet size, when the bandwidth is increased, the threshold difference becomes negative; this corresponds to the local and offloading thresholds being swapped, i.e., moving from the scheme labeled (A) to the scheme labeled (B) in
Figure 4. Moreover, the trade-off between bandwidth and packet size occurs at a bandwidth of around 500 kHz, while the impact of the packet size is linear. It can be seen that the magnitude of the values for the end node is much higher than that for the relay node. This is expected, since the impact of the optimization is much higher for the end node, which is able to harvest only a limited portion of the energy transmitted by the relay. It is worth recalling that, when the packet inter-arrival time becomes large enough, i.e., greater than both thresholds, the decision is made based on the computation that introduces the lowest packet processing delay; this is because, in this case, the inter-arrival time is sufficient for EH.
5.2. Variable Bandwidth, Fixed Packet Arrival Rate, and Fixed Packet Size
An average packet arrival rate λ = 0.03 is assumed for both the end node and the relay node for each time slot. This corresponds to generating five packets per second. The packet size is fixed to 2 kB. The simulation is performed for the following values of bandwidth: 200 kHz, 500 kHz, 1 MHz, 2 MHz, and 5 MHz. The bandwidth has a positive correlation with the lifetime. As shown in
Figure 7, except for the local computation, which is independent of the bandwidth, all the algorithms show an increase in lifetime as the bandwidth increases. This is because, as the bandwidth increases, the data rate is higher, and hence the time the antenna spends transmitting and receiving packets is lower. The lower the antenna activity, the lower the energy consumption. All the algorithms incorporating SWIPT indeed show better performance than those without SWIPT.
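The bandwidth dependence can be made explicit with a simple per-packet budget: the antenna time scales inversely with the achievable rate, and so does the radio energy. The spectral-efficiency figure, radio power, and packet size below are illustrative placeholders and not the parameters of Table 2.

```python
def radio_time_and_energy(packet_bytes, bandwidth_hz,
                          spectral_eff_bps_per_hz=2.0, p_radio_w=0.1):
    """Per-packet antenna time and radio energy (illustrative budget).

    rate = bandwidth * spectral efficiency; a larger bandwidth shortens
    the transmission/reception time and therefore the energy spent,
    leaving more idle time for energy harvesting.
    """
    rate_bps = bandwidth_hz * spectral_eff_bps_per_hz
    t_air_s = 8 * packet_bytes / rate_bps
    return t_air_s, p_radio_w * t_air_s

for bw in (200e3, 500e3, 1e6, 2e6, 5e6):
    t, e = radio_time_and_energy(2048, bw)
    print(f"B={bw / 1e3:6.0f} kHz  t_air={t * 1e3:6.2f} ms  E_radio={e * 1e3:6.3f} mJ")
```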
The harvested energy is directly related to the bandwidth. Since a single antenna, employing the time-switching technique, is used for both packet transmission/reception and EH, the amount of time it remains idle from packet transmission/reception is crucial for the amount of harvested energy. Therefore, if the bandwidth is higher, the antenna spends less time transmitting and receiving packets, and hence more energy can be harvested. As shown in
Figure 8, SWIPT & Local represents the asymptotic behaviour. In this procedure, since the antenna is idle all the time, it can always be used to harvest energy; this is the theoretical maximum energy that can be harvested. Whichever algorithm we employ, the harvested energy always approaches that of SWIPT & Local but never exceeds it. The average harvested energy increases in all scenarios as the bandwidth increases. Since Local and Offloading without SWIPT do not harvest energy, they are shown with a zero value at the bottom of the plot.
Task processing delay is the amount of time it takes for a packet to be processed, whether locally or remotely. If the packet is processed locally, the task delay equals the computation time at the node. However, if the packet is offloaded, the task processing delay equals the sum of the transmission time and the packet computation time at the remote device. Therefore, the variability of the bandwidth has no effect if the packet is processed locally; however, if the packet is offloaded, the higher the bandwidth, the lower the packet processing time. As depicted in
Figure 9, both the offloading procedures (i.e., with and without SWIPT) show a decrease in average task delay as bandwidth increases.
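As a sketch of this delay accounting, the snippet below compares the local delay (computation only) with the offloaded delay (transmission plus remote computation); the CPU frequencies, cycles-per-bit figure, and spectral efficiency are illustrative placeholders rather than the model of Section 2.

```python
def task_delays(packet_bytes, bandwidth_hz, cycles_per_bit=1000,
                f_local_hz=100e6, f_remote_hz=2e9,
                spectral_eff_bps_per_hz=2.0):
    """Local vs offloaded task processing delay (illustrative model)."""
    bits = 8 * packet_bytes
    d_local = bits * cycles_per_bit / f_local_hz             # computation only
    t_tx = bits / (bandwidth_hz * spectral_eff_bps_per_hz)   # uplink transfer
    d_offload = t_tx + bits * cycles_per_bit / f_remote_hz   # transfer + remote CPU
    return d_local, d_offload

for bw in (200e3, 1e6, 5e6):
    d_l, d_o = task_delays(2048, bw)
    print(f"B={bw / 1e3:6.0f} kHz  local={d_l * 1e3:6.1f} ms  offload={d_o * 1e3:6.1f} ms")
```

Under these illustrative numbers the local delay is unaffected by bandwidth, while the offloaded delay shrinks as the bandwidth grows, mirroring the trend in Figure 9.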
Figure 10 shows the percentage of packets that are locally processed at the end node and offloaded to the relay node. For small values of bandwidth, the end node realizes it is energy-consuming to offload the packet. Hence, it processes the packets locally. For a bandwidth equal to 500 kHz, around 15% of all the generated packets are processed locally, while the rest are offloaded to the relay node. However, for larger values of bandwidth, it is economical in terms of energy consumption to offload the packets, and therefore the end node offloads all of its packets to the relay node.
5.3. Variable Packet Size, Fixed Bandwidth, and Fixed Packet Arrival Rate
An average packet arrival rate λ = 0.3 is assumed for both the end node and the relay node for each time slot. This corresponds to generating 50 packets per second. The bandwidth is assumed to be 600 kHz. The simulations are made for the following packet sizes: 500 B, 1 kB, 2 kB, 5 kB, and 10 kB.
The packet size and the lifetime of the network have a negative correlation. A larger packet size means the nodes need more energy to process it. For local computation, the processor takes a longer time to perform the operation and hence consumes more energy. For the offloaded operation, the antennas spend a longer time transmitting and receiving packets, and increased antenna activity means more consumed energy. As shown in
Figure 11, the lifetime decreases in all the scenarios for increasing packet sizes. The local procedures consume energy due to the processing activity, while the procedures employing offloading consume energy due to the increased antenna activity. As expected, the two optimization algorithms show better performance than the others. In particular, Policy 2 shows a better network lifetime than Policy 1 because of the interactive nature of the energy transfer between the end node and the relay node.
The average task delay also has a positive correlation with the packet size. The larger the packet size, the longer it takes for the packet to be processed. For local computation, the delay is only due to the computation time. For the offloading, the larger the packet size, the longer it takes to transmit it through a limited-bandwidth channel. As shown in
Figure 12, the average task delay increases linearly with the packet size.
The energy harvested in the local computation is still unaffected, because the antennas are not employed for data transfer. However, for all the scenarios that employ offloading, the increase in packet size impacts the harvested energy negatively. As the packet size increases, whenever the node offloads packets to the remote device, the antenna spends a longer time in transmission and reception. Since a single antenna is used in a time-switching manner, this reduces the harvested energy. Therefore, as can be seen in
Figure 13, the plot shows a decrease in the average harvested energy as packet sizes increase.
Based on the packet inter-arrival time, and comparing this value with the local and offloading threshold values derived in the previous section, packets are either offloaded or processed locally. The end node offloads its packets to the relay node (if it has to). However, the relay node has two roles: it has to process its own generated packets and also packets coming from the end node. As depicted in
Figure 14, for small packet sizes, the end node can process a fraction of the total generated packets locally.
However, as the packet size becomes larger, it offloads the packets to the relay node for remote processing. Offloading packets to the relay node, however, places an extra burden on the relay node. As shown in
Figure 15, for small packet sizes, the relay node processes a fraction of the total packets (both generated locally and received from the end node) locally. However, as the packet size increases, the energy consumption increases due to the increase in transmission and reception time. Therefore, the relay node processes the majority of the packets locally, while a fraction of them are offloaded.
5.4. Variable Packet Arrival Rate, Fixed Bandwidth, and Fixed Packet Size
In this simulation, different values of packet arrival rates are used. The bandwidth is 600 kHz and the packet size is 2 kB. The following
packet arrival rates are assumed for all simulations: 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, and 0.6. The average packet arrival rate indicates the average number of packets generated in a single time slot; therefore, the bigger this number, the more packets are generated in a given time slot. A higher packet arrival rate shortens the packet inter-arrival time, and more packets mean more consumed energy. For local processing, more packets mean the local processor is always busy; however, since the antenna is idle, it can harvest energy throughout this time. For the scenarios employing offloading, the generation of more packets keeps the antenna busy transmitting and receiving packets, which consumes energy on the one hand and decreases the energy that can be harvested on the other. As shown in
Figure 16, in all the algorithms, the lifetime decreases as the average packet arrival rate increases.
The average task delay is unaffected by the change in the average packet arrival rate. Even though more packets are generated and the total task delay changes accordingly, the average does not change. This is shown in
Figure 17.
The more packets are generated, the busier the antennas are in transmitting and receiving packets. This reduces the time during which the antennas could be used to harvest energy from the remote node. Hence, the harvested energy decreases as the average packet arrival rate increases, as shown in
Figure 18.
6. Discussion
Due to the increasing number of fog and IoT devices, energy consumption is an issue. Since most of these wireless devices are battery operated, their lifetime is very limited, and, whether standalone or cooperating with other devices, the failure of one device might affect the whole network. Different approaches have been proposed to power these devices; this paper has focused on simultaneous wireless information and power transfer. By doing so, the energy requirements of the devices are fulfilled, and computation offloading is performed based on suitable thresholds. Three parameters, the bandwidth, the packet size, and the packet arrival rate, are considered to see how the different scenarios behave as these parameters change. In each simulation, one parameter is varied while the other two are fixed. The network lifetime, the average task delay, and the harvested energy are plotted as a function of the varied parameter to compare and contrast the different scenarios. The optimization algorithm calculates the local threshold and the offloading threshold and compares these values with the packet inter-arrival interval. Based on this comparison, the packets are either processed locally or offloaded to the remote device. Both the end node and the relay node can harvest energy. However, the end node harvests energy from the relay node only if its remaining energy is below a threshold and the relay node can harvest enough energy from the AP within the packet inter-arrival interval. This work was done for a two-hop fog network. It can be extended by considering multiple end nodes, multiple relay nodes, and multiple APs; for example, the end node would select the node with the best computational capacity and/or ample energy. Moreover, the approach would be applicable to mesh-type networks where there are multiple relay points.