Article

Age of Information-Aware Networks for Low-Power IoT Sensor Applications

Frederick M. Chache, Sean Maxon, Ram M. Narayanan and Ramesh Bharadwaj

1 Arcfield, Herndon, VA 20151, USA
2 Department of Electrical Engineering, The Pennsylvania State University, University Park, PA 16802, USA
3 The U.S. Naval Research Laboratory, Washington, DC 20375, USA
* Author to whom correspondence should be addressed.
IoT 2024, 5(4), 816-834; https://doi.org/10.3390/iot5040037
Submission received: 2 October 2024 / Revised: 7 November 2024 / Accepted: 13 November 2024 / Published: 19 November 2024

Abstract

The Internet of Things (IoT) is a fast-growing field that has found a variety of applications, such as smart agriculture and industrial processing. In these applications, it is important for nodes to maximize the amount of useful information transmitted over a limited channel. This work seeks to improve the performance of low-powered sensor networks by developing an architecture that leverages existing techniques, such as lossy compression and different queuing strategies, in order to minimize their drawbacks and meet the performance needs of backend applications. The Age of Information (AoI) provides a useful metric for quantifying Quality of Service (QoS) in low-powered sensor networks by measuring the freshness of data in the network. In this paper, we investigate QoS requirements and the effects of lossy compression and queue strategies on AoI. Furthermore, two important use cases for low-powered IoT sensor networks are studied, namely, real-time feedback control and image classification. The results highlight the relative importance of QoS metrics for applications with different needs. To this end, we introduce a QoS-aware architecture to optimize network performance for the QoS requirements of the studied applications. The proposed network architecture was tested with a mixture of application traffic settings and was shown to greatly improve network QoS compared to commonly used transmission architectures such as Slotted ALOHA.

1. Introduction and Background

This section provides a summary of the current problem statement. Additionally, it provides background information on two concepts that are critical to this work, namely, AoI and lossy compression.

1.1. Challenges in Low-Power Sensor Networks

IoT devices are increasingly finding use in sensor networks used for smart agriculture [1,2]. The IoT devices in these networks often consist of low-cost battery-operated sensors that communicate over channels with low bandwidth. Due to their use in application areas such as smart agriculture, the processes that are monitored and controlled are often located in remote locations without access to reliable internet infrastructure. Other studies [3,4] have shown that the LoRa (Long-Range) waveform is an effective radio frequency (RF) modulation technique for these applications. Often, these nodes must transmit data to a server that runs a backend application, which can range from classifying sensor data using a neural network to controlling processes in real-time with feedback from a sensor. Due to the limited bandwidth of the shared channels, it is vital for nodes to maximize the usefulness of the transmitted data.

1.2. Significance of QoS in Low-Power Networks

Due to these limitations, it is necessary to take into account the QoS (Quality of Service) of the applications being serviced. If outdated or inaccurate data are transmitted, they provide no benefit to the underlying applications and valuable channel bandwidth is wasted. In safety-critical applications, missed deadlines and low-quality data can have serious consequences. Thus, it is crucial to have a scheduling algorithm that is able to manage these requirements while also maximizing the amount of information that can be transmitted. This work proposes an architecture for a low-power IoT sensor network capable of transmitting data with mixed QoS requirements over a low-bandwidth channel. In addition, we introduce a novel method for modeling the effect of data compression on data freshness, as well as scheduling algorithms that seek to meet the QoS requirements of IoT sensor data.

1.3. Age of Information

Age of Information (AoI) is a useful metric for this work, as it provides a readily available performance metric that measures the freshness, and consequently the usefulness, of data from a given sensor. It was first introduced in [5] and has found a wide range of uses in the Wireless Sensor Network (WSN) field. The concept has been well studied by researchers [6,7,8]. In previous work [9], the effects of queue mechanisms on the AoI in a low-power network were analyzed. The current work seeks to extend these concepts to other mechanisms and apply the findings. To measure the AoI of a message, a timestamp is included in each packet at the time of transmission from the source. When the packet reaches its destination, this timestamp is compared against previous packet timestamps received from the same source node. If the received timestamp is newer than the most recent timestamp from that node, then it is saved; otherwise, the previously obtained timestamp is kept. At any given time, the difference between the current time and the latest recorded timestamp is defined as the age of information Δ(t), that is,
$$\Delta(t) = t - u(t), \tag{1}$$
where t is the current time and u(t) is the most recent timestamp from the node being measured. When used as a means to evaluate the performance of a network over a range of time T, the time-average age of information (AAoI) is often used, which is calculated as follows [10]:
$$\bar{\Delta}_T = \frac{1}{T} \int_{0}^{T} \Delta(t)\, dt. \tag{2}$$
In this work, we use these metrics extensively as a method for quantifying the freshness of data in a network over a range of congestion levels. The simplest network for demonstrating the concept of AoI is a one-hop single-sender single-receiver arrangement, as shown in Figure 1. The AoI of the packet at transmission is measured as Δ1(t), and the AoI of the message when it is received is measured as Δ2(t).
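To make the receiver-side bookkeeping concrete, the following Python sketch (not taken from the implementation used in this work) computes the time-average AoI of (2) from a stream of (receive time, source timestamp) pairs, assuming the age starts at zero and the sender and receiver share a common clock.

```python
def average_aoi(events, t_end):
    """Time-average AoI from (receive_time, source_timestamp) pairs.

    `events` must be sorted by receive time; u(t) is updated only when a
    packet carries a newer source timestamp than any seen so far, as
    described above. Assumes Delta(0) = 0 and a shared clock.
    """
    t_prev, u, area = 0.0, 0.0, 0.0
    for t_rx, ts in events:
        # Age grows linearly between receptions: Delta(t) = t - u
        area += (t_rx - u) ** 2 / 2 - (t_prev - u) ** 2 / 2
        if ts > u:                 # keep only the freshest timestamp
            u = ts
        t_prev = t_rx
    area += (t_end - u) ** 2 / 2 - (t_prev - u) ** 2 / 2
    return area / t_end

# Toy usage: packets generated at 0.0, 1.0, and 2.5 s each arrive 0.4 s later.
print(average_aoi([(0.4, 0.0), (1.4, 1.0), (2.9, 2.5)], t_end=4.0))
```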
When the arrival rate of packets λ exceeds the service rate of the network μ and the queue is not managed properly, the network will have poor performance. In these cases, the AoI becomes unstable and grows over time without bound. In a well-managed network, such as the one in Figure 2, the average AoI remains bounded over time even at very high packet arrival rates.
This can be formalized using the concept of rate stability [11,12]. According to Little’s Law,
$$\bar{W} = \frac{\bar{Q}}{\lambda}, \tag{3}$$
where W ¯ is the average latency, Q ¯ is the average queue length, and λ is the arrival rate. Thus, a queue that grows without bound will lead to a network with latency that grows without bound. Therefore, it is crucial to design a network such that the queue length is stable. This can be stated as follows:
$$\lim_{t \to \infty} \frac{Q(t)}{t} = 0 \quad \text{with probability } 1, \tag{4}$$
where t is the time and Q(t) is the queue length as a function of time [13].
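As a concrete illustration of both relationships, the short simulation below (a sketch, not the test bench described later in this paper) runs an M/M/1 FCFS queue with λ < μ and checks that the measured average system time agrees with Q̄/λ, which is what Little's Law predicts when the queue is rate stable.

```python
import random

def mm1_little_check(lam, mu, horizon=200_000, seed=1):
    """Simulate an M/M/1 FCFS queue and compare W_bar against Q_bar / lambda."""
    random.seed(seed)
    t, next_arrival, next_departure = 0.0, random.expovariate(lam), float("inf")
    queue = []                                   # arrival times of packets in the system
    waits, q_area = [], 0.0
    while t < horizon:
        t_next = min(next_arrival, next_departure)
        q_area += len(queue) * (t_next - t)      # time-weighted number in the system
        t = t_next
        if t == next_arrival:
            queue.append(t)
            next_arrival = t + random.expovariate(lam)
            if len(queue) == 1:                  # server was idle, start service
                next_departure = t + random.expovariate(mu)
        else:
            waits.append(t - queue.pop(0))       # system time of the departing packet
            next_departure = t + random.expovariate(mu) if queue else float("inf")
    return sum(waits) / len(waits), (q_area / t) / lam   # W_bar and Q_bar / lambda

print(mm1_little_check(lam=0.8, mu=1.0))         # the two values should nearly match
```

With λ ≥ μ, the same simulation shows the queue, and hence the latency, growing without bound, which is exactly the instability that the rate-stability condition rules out.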

1.4. Lossy Compression

Lossy compression is a commonly used method for reducing the amount of transmitted information while retaining enough information to reconstruct the signal at the receiving node. It has found widespread use in many applications, particularly video streaming [14]. Its connection with AoI is an emerging field, and researchers have studied the effects with both lossless [15] and lossy [16,17,18] algorithms. In general, lossy compression can provide much higher compression ratios; however, the recovered signal x̂ is not identical to the transmitted signal x, instead being an approximation with an acceptable amount of distortion. This induced error must be properly managed to allow for effective signal reconstruction. Many application-specific compression algorithms, called codecs, have been developed to highly compress signals for specific applications, such as spoken audio for telecommunications [19]. This work seeks a general-purpose compressor for a wide range of applications in order for the architecture to be signal-agnostic. For low-powered sensors, compression algorithms must also be computationally inexpensive and lightweight. To this end, we selected the FPZIP lossy floating point compression algorithm, which is described in [20]. This algorithm was initially developed to efficiently compress large scientific datasets of floating point data. We selected it because of its good blend of speed and compression performance and its ready availability on a wide range of platforms. FPZIP allows for a range of effective compression rates by adjusting the required level of precision. Predictably, Figure 3 shows that the average compression ratio increases as the precision decreases, and the signal error increases as well.
It can be seen clearly that different compression algorithms distort the original data in different ways. In Figure 4, this can be clearly noted in the distribution of the errors δ, defined in (5), from the two lossy compression algorithms. FPZIP induces an error that has stepwise regions of uniform error, whereas ZFP, another floating point compression algorithm described in [21], induces a smooth Gaussian distribution [22]:
$$\delta[n] = x[n] - \hat{x}[n]. \tag{5}$$
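The profile in Figure 3 can be reproduced with a few lines of code; the sketch below assumes the open-source Python bindings for FPZIP (the fpzip package) are installed, and the precision values and toy signal are illustrative rather than the exact settings used in this work.

```python
import numpy as np
import fpzip  # assumes the open-source fpzip Python bindings are installed

def compression_profile(x, precision):
    """Compress a float signal with FPZIP at a given bit precision and report
    the compression ratio and the induced error delta[n] = x[n] - x_hat[n]."""
    x = np.ascontiguousarray(x, dtype=np.float32)
    blob = fpzip.compress(x, precision=precision)      # lossy when precision < 32
    x_hat = fpzip.decompress(blob).reshape(x.shape)    # recovered signal
    delta = x - x_hat
    ratio = x.nbytes / len(blob)                       # compression ratio r
    distortion = float(np.mean(delta ** 2))            # mean square error, as in Section 2.2
    return ratio, distortion, delta

# Toy usage on a noisy sine wave; lower precision -> higher ratio, larger error.
signal = np.sin(np.linspace(0, 8 * np.pi, 4096)) + 0.01 * np.random.randn(4096)
for p in (24, 16, 12):
    r, d, _ = compression_profile(signal, p)
    print(f"precision={p:2d}  ratio={r:5.2f}  distortion={d:.2e}")
```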

1.5. Paper Layout

The rest of this work is laid out as follows: first, an in-depth study of different applications and their QoS requirements is provided; next, the proposed architecture is described and the major components and algorithms are examined; finally, the testing setup is presented and the results are discussed.

2. Quality of Service

In QoS-aware networks, several metrics are measured and taken into consideration in network design and operation. QoS has found widespread use in modern communications applications, as different types of data streams have different usability requirements. While many QoS metrics exist, in this work we focus on timeliness and reliability [23].

2.1. Timeliness

Timeliness is of particular importance for applications that run in real time. Data must arrive often enough that the application can form an accurate model of the real world and react accordingly. One such example is the control of a system with a feedback controller. As the information in these systems ages, it becomes less and less useful, and may eventually become dangerous to use. Studies have shown that large delays can cause otherwise stable systems to become unstable. In traditional systems, timeliness has been measured in a variety of ways, such as by measuring the latency of messages or the jitter. It has been shown [24] that AoI can be a useful tool for measuring the timeliness of information in networks. In this approach, the average AoI is used to quantify the timeliness of a message and its requirements.

2.2. Reliability

Reliability is another key requirement for many applications. If the received packets have a high Bit Error Rate (BER) or packet drop rate, then the information may be useless. One example of this is the transmission of images, which can quickly become unrecognizable if large portions are missing or corrupted. Several methods are useful for measuring this corruption. In this work, we use the metric of distortion, which is closely linked to AoI. The distortion is calculated as the mean square error between the original signal x and the recovered signal x̂, as provided in (6):
$$P = \frac{1}{n} \sum_{i=1}^{n} \left( x[i] - \hat{x}[i] \right)^2 \tag{6}$$
Distortion provides a good metric for encapsulating the difference between the original signal and the recovered one.

2.3. Application Types

QoS-aware transmission policies have been used effectively in applications where packets are highly sensitive to latency, such as Voice over Internet Protocol (VoIP). Different types of applications have different QoS requirements. The names and numbers of application classifications vary greatly between specific network protocols, but largely fall into two broad categories, which we use for the purposes of this paper [23].

2.3.1. Elastic Applications

This category refers to applications where data are not made obsolete by newer messages, such as images for classifiers. Such information is not time-critical. Thus, although it may still be desirable to reduce the latency of each message, it is more critical that messages be delivered reliably regardless of latency or order. Such data have minimum tolerable precision requirements, which correspond to a maximum allowable amount of distortion. Beyond this distortion level, the classification performance becomes unacceptable, as the corrupted data can cause incorrect classifications. The level of lossy compression can have a major effect on performance. Above a specific bit precision, the image distortion is no longer reduced, while the number of packets needed to transmit the information increases. In this way, the distortion has strict requirements that must be met, while the AoI is reduced in a best-effort manner.
To represent an elastic application, we used an image classifier based on a pretrained neural network originally trained to classify grayscale images of clothing from the Fashion MNIST dataset using one of ten labels [25]. This classifier was chosen because it was readily available from [26] and because the pretrained classifier was highly accurate at labeling the uncorrupted images. In order for the data to be useful, the distortion of the images must be low enough that the classification accuracy is not degraded. In Figure 5, it is clear that an increase in distortion from lossy compression causes a loss of image detail. This highlights an important tradeoff: using fewer bits to represent an image reduces the required data rate; however, the data are less useful for the end application.
This is illustrated in Figure 6. As expected, the image distortion increases and the classification accuracy decreases as the bit precision decreases. If the distortion is too high, the transmitted data will be worthless to the end application, which wastes precious bandwidth.
In this network, the goal is to use compression to transmit as many useful images as possible as quickly as possible. This application uses a First-Come First-Served (FCFS) queuing strategy; because older packets are not rendered obsolete by newer packets, it is important that all packets arrive and that they do so in the correct order.
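The tradeoff shown in Figures 5 and 6 can be approximated with the sketch below, which trains a small Keras classifier on Fashion-MNIST as a stand-in for the pretrained model of [25,26] (the exact network is not reproduced here) and evaluates it on test images that have been round-tripped through FPZIP at decreasing precision; the precision values and test subset size are arbitrary choices.

```python
import numpy as np
import tensorflow as tf
import fpzip  # same lossy compressor as in Section 1.4 (Python bindings assumed installed)

# Fashion-MNIST and a small dense classifier, following the TensorFlow tutorial [25].
(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.fashion_mnist.load_data()
x_tr, x_te = x_tr / 255.0, x_te / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(x_tr, y_tr, epochs=5, verbose=0)   # stand-in for the pretrained classifier

def lossy_roundtrip(images, precision):
    """Compress and decompress each image with FPZIP at the given bit precision."""
    out = np.empty_like(images, dtype=np.float32)
    for i, img in enumerate(images):
        blob = fpzip.compress(np.ascontiguousarray(img, dtype=np.float32),
                              precision=precision)
        out[i] = fpzip.decompress(blob).reshape(img.shape)
    return out

for p in (24, 16, 8):                         # accuracy degrades as precision falls
    _, acc = model.evaluate(lossy_roundtrip(x_te[:1000], p), y_te[:1000], verbose=0)
    print(f"precision={p:2d}  accuracy={acc:.3f}")
```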

2.3.2. Real-Time Applications

These applications need data to be delivered in a timely manner in order for the data to be useful. Examples of this type of application include VoIP, event-generated messages such as error messages, and time-sensitive messages such as system state information for real-time control. For these data, the freshness of the data must meet strict latency requirements in order to be useful to the underlying application. It is often the case that more recently generated data render older values obsolete; in these instances, the distortion of the signal caused by dropped packets is irrelevant.
To represent a real-time application, we selected an inverted pendulum controlled using a feedback control loop. This is a commonly used model in the field of control theory, as the pendulum is inherently unstable without constant intervention. In this system, the application can choose to apply force to the left or right of the cart depending on information from position sensors. The chosen model is from OpenAI Gym, which is a useful tool for testing and training algorithms in a simulation environment [27]. The goal is to balance the pendulum and prevent the pendulum arm from exceeding a certain angle threshold. The sensor data are generated at the sensor node and transmitted without compression using an LCFS queue with packet removal. Compression is not used for these data, as older data are rendered obsolete and consequently can be quietly discarded before transmission. Instead, a periodic sampler is used to poll the sensor in order to minimize the AoI of the data. Distortion induced by downsampling is not a concern for the controller in this case, as only the most recent sample is used. As can be seen in the contour graph in Figure 7, the quality of the control is dependent on both the freshness and accuracy of the data. The figure shows a score defined as the average number of time-steps in which the pendulum angle is less than 10 degrees.
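The effect of stale feedback can be illustrated with the sketch below, which uses the Gymnasium fork of OpenAI Gym (assumed installed) and a crude proportional feedback law rather than the controller used in this work; the observation buffer simply delays the state by a fixed number of time-steps, mimicking a growing AoI, and the score counts how long the pole stays up.

```python
from collections import deque
import gymnasium as gym   # successor of OpenAI Gym [27]; assumed installed

def delayed_cartpole_score(delay_steps, episodes=20, seed=0):
    """Balance CartPole using observations that are `delay_steps` time-steps old."""
    env = gym.make("CartPole-v1")
    total = 0
    for ep in range(episodes):
        obs, _ = env.reset(seed=seed + ep)
        buffer = deque([obs] * (delay_steps + 1), maxlen=delay_steps + 1)
        for t in range(500):
            stale = buffer[0]                               # oldest (delayed) observation
            angle, ang_vel = stale[2], stale[3]
            action = 1 if angle + 0.5 * ang_vel > 0 else 0  # crude stand-in feedback law
            obs, _, terminated, truncated, _ = env.step(action)
            buffer.append(obs)                              # newest sample replaces the oldest
            if terminated or truncated:
                break
        total += t
    env.close()
    return total / episodes

for d in (0, 2, 5, 10):   # larger AoI -> shorter balancing time, as in Figure 7
    print(d, delayed_cartpole_score(d))
```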

3. Architecture

3.1. Overview

In order to explore this problem and develop an architecture capable of meeting our needs, a toy environment was developed. Figure 8 shows a diagram of this setup and its major components.
A sensor node generates data that must be transmitted over a shared channel to a gateway node which runs one of two applications, each representing one of the two main application types. The management of the data is then dictated by several major components. If the sensor handles elastic data such as the data required by the image classifier, then an FCFS queue is used in conjunction with lossy compression. The exact compression settings are determined by an adaptive algorithm on a packet-by-packet basis. Conversely, in the event of a real-time application, an LCFS queue is used. A distributed scheduler is then used, with which the node determines the times at which it should transmit data. The key components studied in this regard are as follows: (1) Queue Policy, (2) Adaptive Compression Algorithm, and (3) Scheduling Policy.

3.2. Queue Policy

Optimizing the queue policy for the data is an established method to improve the average AoI of a network. In this method, policies are decided for determining the order in which packets are transmitted and whether they are quietly dropped. Simple changes in policy, for instance, from First-Come First-Served (FCFS) to Last-Come First-Served (LCFS), can have huge consequences on network performance. The former can more easily lead to unbounded growth in the average AoI, but maintains packet order; on the other hand, the latter is more susceptible to packet loss but has stable average AoI under high traffic loads [24].

3.2.1. First-Come First-Served

First-Come First-Served (FCFS) is a common queue policy due to its simplicity. In FCFS, the packets are retrieved from the queue in the same order that they were added. This has the benefit of keeping the packets in order. A drawback is that the average AoI of the network grows without bound in situations where the network is saturated, as the nodes are unable to process old packets quickly enough. This policy is well suited for data that do not become obsolete with subsequent transmissions and are not AoI-sensitive. FCFS queues have been well studied, and researchers have shown that the expected AoI for a network with a single sender and single receiver with an M/M/1 queue is provided by [10]
$$\Delta_{M/M/1} = \frac{1}{\mu} \left( 1 + \frac{1}{\rho} + \frac{\rho^2}{1 - \rho} \right), \tag{7}$$
where μ is the service rate and ρ is the utilization provided by
$$\rho = \frac{\lambda}{\mu}, \tag{8}$$
where λ is the arrival rate.
A D/M/1 queue is modeled as [10]
$$\Delta_{D/M/1} = \frac{1}{2\mu} \left( 1 + \frac{1}{1 - \gamma(\rho)} \right), \tag{9}$$
where γ(ρ) is related to the Lambert W function W and is defined as
$$\gamma(\rho) = -\rho\, W\!\left( -\rho^{-1} e^{-1/\rho} \right). \tag{10}$$
To verify the models, testing was conducted in simulations and hardware. In these tests, a node transmitted to a gateway over a shared channel, as shown in Figure 9. The simulation and hardware results from the setup with FCFS queuing, one hop, a single sender, and a single server follow the theoretical results for the D/M/1 queue fairly closely, as shown in (9).

3.2.2. Last-Come First-Served

Last-Come First-Served (LCFS) is another commonly used queuing policy. In LCFS, the most recent packets are transmitted first. This has the benefit of addressing the main drawback of the FCFS queue, as the average AoI remains low in saturated networks; however, this can lead to other issues, as packets may become stuck at the back of the queue and never be transmitted. This makes LCFS more suitable for data that are highly time-sensitive and quickly become obsolete, such as data used in real-time applications. Similar to FCFS, LCFS has been well studied and modeled for both M/M/1 and D/M/1 queues, as shown in (11) and (12), respectively [10].
$$\Delta_{M/M/1} = \frac{1}{\mu} \left( 1 + \frac{1}{\rho} \right) \tag{11}$$
$$\Delta_{D/M/1} = \frac{1}{\mu} \left( 1 + \frac{1}{2\rho} \right) \tag{12}$$
Additionally, Figure 10 shows that the results for the setup with LCFS queuing, one hop, a single sender, and a single server also closely match the theoretical D/M/1 results of (12).
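For reference, the closed-form expressions (7)-(12) can be evaluated directly. The sketch below transcribes them in Python, using SciPy's Lambert W function for γ(ρ) in (10); the grouping of terms follows the equations as written above.

```python
import numpy as np
from scipy.special import lambertw

def gamma_rho(rho):
    """gamma(rho) for the D/M/1 queue via the Lambert W function, Equation (10)."""
    return float(np.real(-rho * lambertw(-np.exp(-1.0 / rho) / rho)))

def aoi_fcfs_mm1(lam, mu):
    rho = lam / mu                                                     # Equation (8)
    return (1.0 / mu) * (1.0 + 1.0 / rho + rho**2 / (1.0 - rho))       # Equation (7)

def aoi_fcfs_dm1(lam, mu):
    rho = lam / mu
    return (1.0 / (2.0 * mu)) * (1.0 + 1.0 / (1.0 - gamma_rho(rho)))   # Equation (9)

def aoi_lcfs_mm1(lam, mu):
    rho = lam / mu
    return (1.0 / mu) * (1.0 + 1.0 / rho)                              # Equation (11)

def aoi_lcfs_dm1(lam, mu):
    rho = lam / mu
    return (1.0 / mu) * (1.0 + 1.0 / (2.0 * rho))                      # Equation (12)

# FCFS blows up as rho approaches 1, while LCFS stays bounded at high arrival rates.
for lam in (0.2, 0.5, 0.9):
    print(lam, aoi_fcfs_mm1(lam, 1.0), aoi_lcfs_mm1(lam, 1.0))
```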

3.3. Adaptive Compression

To optimize network performance, it is critical that the amount of information in the channel be maximized given the limited channel bandwidth. One technique for accomplishing this is lossy compression. In this work, we seek to reduce the number of packets required to send data while minimizing the induced error. This can be achieved through careful selection of the compression settings. One area of major concern when transmitting with a compression algorithm is determining the optimal compression rate for a given data source. If a high compression level is used and too few bytes of data are generated, the sending node must wait an excessively long time for the packet to be filled before it can transmit. Conversely, if the compression ratio is too low, then the data arrival rate can exceed the service rate, leading to an AoI that grows without bound. In this work, two adaptive algorithms are developed: a model-based algorithm and a greedy algorithm. These were tested to compare their performance while ensuring that the induced error was sufficiently small to avoid any impact on the downstream application.

3.3.1. Model-Based Algorithm

A model-based scheme was developed to allow nodes to select the best compression rate for the given situation. In an attempt to select the optimal setting, a model was developed to estimate the AAoI for each compression setting available. To achieve this, we used a model that estimated the AoI following the well-studied FCFS model with some adaptations. The arrival rate λ was estimated based on the amount of data generated at the node during each time slot:
$$\lambda \approx \frac{d}{r}, \tag{13}$$
where d is the number of data samples added to each slot, r is the compression ratio for a given compression algorithm setting and is found experimentally, and μ is the service rate, which is assumed to be constant. This yields
$$\tilde{\Delta}(r) = 1 + \frac{r}{d} + \frac{(d/r)^2}{1 - d/r}. \tag{14}$$
Based on this, the algorithm seeks to minimize Δ̃(r) with the set of compression ratios K = {r1, r2, …, rk} [28]:
$$\underset{r \in K}{\operatorname{argmin}} \; \tilde{\Delta}(r) \tag{15}$$
Often, a fixed number of compression levels are available for a given compression algorithm, each introducing a different level of distortion which can be measured at the node. With the AAoI model, it is possible to then calculate the expected AAoI for each available setting and select the one that reduces the AAoI the most while still meeting the distortion criteria. This setting is used to compress the payload, after which the information on the compression scheme is encoded in the header so the receiving node can decompress the packet.
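A minimal sketch of this selection step is shown below, assuming a unit service rate (so that ρ = d/r, matching (14)) and hypothetical per-setting compression ratios and distortion measurements; the helper names are illustrative only.

```python
def model_based_setting(d, ratios, distortions, max_distortion):
    """Pick the compression setting that minimizes the modeled AAoI of Equation (14)
    over the available ratios K, subject to the distortion requirement."""
    def aaoi(r):
        rho = d / r                       # estimated utilization, Equation (13) with mu = 1
        if rho >= 1.0:                    # the queue would grow without bound
            return float("inf")
        return 1.0 + r / d + rho**2 / (1.0 - rho)          # Equation (14)

    feasible = [r for r, dist in zip(ratios, distortions) if dist <= max_distortion]
    return min(feasible, key=aaoi)        # Equation (15): argmin over the feasible ratios

# Hypothetical settings: higher ratios induce more distortion.
K = [1.5, 3.0, 6.0, 12.0]
D = [1e-6, 1e-4, 1e-3, 1e-2]
print(model_based_setting(d=2.5, ratios=K, distortions=D, max_distortion=5e-3))
```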

3.3.2. Greedy Algorithm

Another approach utilizes a feedback mechanism to determine the best compression ratio on a packet-by-packet basis. Taking a feedback approach can allow for a more robust control scheme that can adapt to changes in network link quality. Other researchers have shown how greedy algorithms that seek to minimize the queue length can ensure a bounded queue length within a feasible region [18]. It is well known from Little's Law that the average queue length is a good estimator of the average latency in a network. Because of this, the greedy algorithm seeks to minimize the queue length and greedily selects whatever transmission option reduces it the most. Each time a node is able to transmit, it checks the size of its queue and the compression settings that it can use to transmit the data. The nodes are initialized using an estimate of how many samples can be contained in a packet for a given compression setting. Then, the compression setting that fits the most samples into a packet is selected. The algorithm compresses the samples and measures the size of the bit string. If it exceeds the size of the packet, then a higher compression setting is used. If the highest compression setting is already being used, then fewer samples are used and compression is performed again. If the result is smaller than the packet size, then a lower compression setting is used. While not optimal, greedy algorithms have been shown to be effective in optimizing networks for a set of constraints [29]. When a node decides to transmit a packet, it looks at the length of the queue and uses the r values of each compression ratio to select the compression ratio that reduces the queue length the most while still meeting the distortion requirements.
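The sketch below captures the main loop of this scheme under some simplifying assumptions: compress, settings, and est_fit are hypothetical stand-ins for the node's compressor interface, its ordered list of compression settings (least to most compression), and its running estimates of how many samples fit at each setting, and settings that violate the distortion requirement are assumed to have been pruned already.

```python
PACKET_BYTES = 222      # largest LoRa payload used in this work (SF 7, 500 kHz)

def greedy_select(samples, compress, settings, est_fit):
    """One transmit opportunity of the greedy scheme (simplified sketch)."""
    s = max(settings, key=lambda k: est_fit[k])          # setting estimated to fit the most samples
    n = min(len(samples), est_fit[s])
    best, tried = None, set()
    while n > 0:
        blob = compress(samples[:n], s)                  # measure the actual bit string
        tried.add(s)
        i = settings.index(s)
        if len(blob) <= PACKET_BYTES:
            best = (blob, s, n)                          # this choice fits; remember it
            if i > 0 and settings[i - 1] not in tried:
                s = settings[i - 1]                      # slack left: try less compression
            else:
                break
        elif best is not None:
            break                                        # lower setting overflowed; keep the last fit
        elif i + 1 < len(settings):
            s = settings[i + 1]                          # too big: use more compression
        else:
            n -= 1                                       # already at the maximum setting: send fewer samples
    if best is not None:
        est_fit[best[1]] = best[2]                       # refine the estimate for the next packet
    return best                                          # (bytes, setting, n_samples) or None
```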

3.4. Scheduler

Inherent tradeoffs naturally arise in a low-cost network where messages for mixed applications must be transmitted over a shared constrained channel. Transmitting elastic application data directly leads to increased AAoI in real-time application data, as only one type can be transmitted at a time. Therefore, it is important for the network to actively manage these needs. QoS schedulers have been developed for the IEEE802.11 standard [30], and have increasingly been developed for IoT WSNs as well [31,32]. These schedulers are designed so as to be aware of the constraints and tradeoffs of the backend applications and to intelligently select which data packets to transmit in order to optimize the usefulness of the network.
Researchers have shown that a lightweight clock synchronization algorithm can be used to enable slotted ALOHA [33]. In past work, researchers have demonstrated this approach for low-powered networks [34]. In this work, we study three schedulers: (1) Slotted ALOHA, (2) Round-Robin, and (3) Round-Robin with Priority.

3.4.1. Slotted ALOHA

Slotted ALOHA is an evolution of the more rudimentary ALOHA algorithm, and is typically used in low-power networks [35]. It is well known that Slotted ALOHA can provide improved data rates compared to standard ALOHA [36]. In Slotted ALOHA, nodes synchronize with one another to keep track of the transmission slots in which nodes can transmit. The nodes are then free to transmit at the beginning of any slot. In networks where multiple nodes are transmitting, researchers have shown that optimal data rates can be achieved when nodes transmit randomly with a probability of 1/n, where n is the number of nodes in the network.
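A minimal sketch of the per-slot transmit decision is shown below, together with a rough throughput check; the only parameter is the 1/n transmit probability mentioned above.

```python
import random

def slotted_aloha_transmit(n_nodes, has_data, rng=random):
    """Under Slotted ALOHA, a backlogged node transmits in the current slot
    with probability 1/n, where n is the number of nodes sharing the channel."""
    return has_data and rng.random() < 1.0 / n_nodes

# Rough check: the fraction of slots with exactly one transmitter (i.e., no collision).
n, slots, success = 10, 100_000, 0
for _ in range(slots):
    success += sum(slotted_aloha_transmit(n, True) for _ in range(n)) == 1
print(success / slots)   # approaches (1 - 1/n)**(n - 1), about 1/e for large n
```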

3.4.2. Round-Robin

In this scheme, while nodes still track the transmission slots in a synchronized manner, they are assigned a specific slot in which to transmit. In networks where the nodes remain relatively stable, this can be achieved by having the gateway assign ID values to the nodes, which are then used to generate a schedule.

3.4.3. Round-Robin with Priority Scheduler

In this modification of the round-robin scheduler, nodes transmitting real-time data are provided with slots at periodic intervals correlating to their AoI needs. All other nodes are provided with slots around the reserved slots in a round-robin manner.
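A possible slot-assignment sketch is given below; the reservation period and node identifiers are illustrative, and the gateway-assigned IDs are assumed to be known to every node.

```python
def build_schedule(elastic_nodes, realtime_nodes, period, n_slots):
    """Round-robin with priority (sketch): real-time nodes get a reserved slot every
    `period` slots, and the remaining slots go to elastic nodes in round-robin order."""
    schedule, rr = [], 0
    for slot in range(n_slots):
        if realtime_nodes and slot % period == 0:
            # Reserved slot at a periodic interval matching the real-time AoI needs.
            schedule.append(realtime_nodes[(slot // period) % len(realtime_nodes)])
        else:
            schedule.append(elastic_nodes[rr % len(elastic_nodes)])
            rr += 1                                   # plain round-robin for everything else
    return schedule

# e.g., one pendulum node served every 4 slots, three image nodes in between.
print(build_schedule(["img1", "img2", "img3"], ["pend"], period=4, n_slots=12))
```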

4. Testing

A test bench was required in order to quantify the performance of the developed algorithms. To allow for more rapid iterations in design, subcomponents such as the compression and queue strategies were tested using low-cost off-the-shelf hardware. In addition, a simulation environment was developed for more complex components such as the scheduling algorithm which required multiple nodes to transmit traffic in real time.

4.1. Hardware

To test the performance of the adaptive compression algorithms, a hardware test bench was developed. In this architecture, LoRa was chosen due to its good suitability for low-cost and low-power sensor networks. LoRa has a long range for its low power consumption, which is largely due to its Chirp Spread Spectrum (CSS) modulation technique. The data rates and range are greatly affected by two settings, namely, the bandwidth and the Spreading Factor (SF). Together, these determine the amount of time on air that each symbol takes; the longer the time on air, the lower the data rate and the higher the range, as the signal becomes more tolerant to noise. For this work, we used the configuration with the highest data rate in North America, which is an SF of 7 and a bandwidth of 500 kHz. Using tools made available by Semtech, we determined that the largest allowable payload with these settings was 222 bytes, which has a packet airtime of 92 ms [37]. This packet size was selected because compression is more effective with larger packets. Adafruit Feather 32u4 RFM95 LoRa Radio transceivers were used in conjunction with two computers running Python scripts, as shown in Figure 11. This device was chosen due to its low cost, reliability, and availability of parts.

4.2. Adaptive Compression Results

To verify the AoI models and quantify the performance of the compression algorithms, audio files were transmitted from a sensor node to a gateway node. During each time slot, a fixed number of bits from the file were added to an FCFS queue along with the current time. If the node had enough data to transmit, it would then compress enough data to adequately fill the 222 byte packet with the compression settings specified by the tested algorithm. The information on the compression settings and the timestamp specifying when the data were generated was then stored in the packet header to allow for decompression at the gateway node and calculation of the AoI. After the packet was successfully decompressed, the gateway then calculated the AoI of the data.
Figure 12 shows the average AoI performance of each compression precision setting as a function of the amount of data generated by the sensor node, measured in samples added per slot. In general, when the number of samples added per slot is small, the average AoI increases, since the time between new samples grows. Conversely, when the number of samples added per slot becomes large, the average AoI also increases, since the FCFS queue size increases. The modeled results closely follow the results measured with the experimentally collected data. As predicted, when attempting to fill a packet, if too little compression is used for a given amount of traffic, then the AoI increases. This is caused by a steady increase in the queue because the node is unable to transmit samples as quickly as they arrive. Conversely, if too much compression is used, the sensor node wastes time waiting for data to efficiently fill the packet, leading to a suboptimal AoI.
After the model was verified, the test was repeated using each of the adaptive compression algorithms. In Figure 13, note that as the data arrival rate changes, the node is able to adjust its compression ratio and use the appropriate compression settings to minimize the AoI. A peak in the average AoI occurs in the model-based method due to a gap in the available compression settings; this peak could be removed if more options were available. The greedy algorithm is able to react more quickly, producing networks with lower average AoI by changing the compression ratio more often while requiring only a limited set of options.

4.3. Simulation Testing Setup

Whenever possible, the component level results were verified with hardware. However, due to the complexity of this model, performing all testing with hardware was not feasible, especially for the tests involving control of the inverted pendulum and multiple nodes. In these situations, a simulation environment was used, which had the added benefit of faster development cycles. To this end, a custom simulation environment was developed using Lingua Franca [38]. This provided a framework in which different programs could communicate with one another in an event-based and time-accurate simulation environment [33]. In this way, the same code used in the component-level hardware tests could be seamlessly brought into the simulation environment efficiently and with representative real-time performance.

4.4. Simulation Results

To test the scheduling algorithms, a network of multiple sensor nodes and one gateway node was generated. The network consisted of multiple sensor nodes transmitting an image for classification along with one node transmitting its pendulum angle and receiving a control command. At the start of each slot, new data were generated at each sensor node and each sensor node had the choice of whether to send its data or wait. This choice was determined by the selected scheduling algorithm. The goal of the network was to minimize the average AoI of the classification data packets while meeting the strict AoI deadlines required by the inverted pendulum test. The different schedulers were tested to compare the results.
In the results shown in Figure 14 and Figure 15, it can be clearly seen that the Slotted ALOHA scheduler performs poorly compared to the two round-robin-based schedulers, particularly at higher data arrival rates. The round-robin scheduler with priority is able to modestly improve the AoI of the real-time application data with little to no discernible performance decrease for the elastic data. Overall, this shows that when data rates are low, the schedulers have very similar performance. In this case, Slotted ALOHA can be used, as it does not require any synchronization between the different nodes. As the data rate increases, using round-robin scheduling and dedicated slots for transmitting real-time data ensures that the AAoI remains low for AoI-critical data even as the traffic increases. Furthermore, the round-robin-based schedulers both have fewer collisions at higher data arrival rates compared to Slotted ALOHA, which can help to reduce the AAoI for elastic data. As mentioned earlier, synchronization between nodes is necessary for round-robin scheduling, which increases complexity and can add additional overhead.

5. Conclusions and Future Work

This work makes several contributions. It highlights the types of applications that low-power sensor networks must service along with the QoS requirements needed to properly service these applications. AoI proves to be an effective metric for quantifying the QoS requirements of different applications. Different queuing and compression strategies are used to minimize the amount of excess information in order to achieve these requirements while reducing latency and wasted energy. Lossy compression is shown to be very effective for reducing the average AoI of the network, and our results show that its effects can be modeled by scaling the arrival rate with the compression ratio. We developed two adaptive compression algorithms, namely, a greedy algorithm and a model-based algorithm, with the former proving to be more effective at selecting the optimal compression settings.
Furthermore, this work demonstrates the different drawbacks of these methods. By using a packet scheduler that is aware of QoS needs, these disadvantages can be mitigated. To this end, we integrated a scheduling algorithm into the proposed architecture to ensure that the AoI requirements are met when servicing multiple applications with differing QoS requirements. These schedulers were tested in a simulation environment, verifying that they all meet the QoS requirements of the backend applications. Finally, we present an architecture that meets the strict QoS requirements of different types of packets for different backend applications. This architecture is capable of balancing the drawbacks of the different techniques to improve performance and meet the needs of backend applications. Overall, a QoS-aware architecture is able to ensure that data requirements can be met. Our results show that using adaptive compression can greatly reduce the AoI of data while minimizing the amount of distortion added to the data. Furthermore, round-robin scheduling methods, although more complex and requiring synchronization between nodes, enable real-time applications to receive data in a timely manner even in high-traffic channels.
In future work, the architecture presented in this work will be further refined and geared specifically towards high-speed IoT applications where data freshness and data quality are important. One specific topic worthy of further effort involves quantifying QoS requirements in terms of AoI and distortion, for instance through Value of Information (VoI), a fast-growing and successful offshoot of AoI [39] which has shown promise for use in safety-critical applications [40]. The use of VoI would allow for a more generalized system model, enabling networks to service a wider range of applications.

Author Contributions

Conceptualization, R.B. and F.M.C.; methodology, F.M.C.; software, F.M.C.; validation, F.M.C.; formal analysis, F.M.C.; investigation, F.M.C.; resources, S.M. and R.M.N.; writing—original draft preparation, F.M.C.; writing—review and editing, R.M.N. and S.M.; visualization, S.M.; supervision, R.B.; project administration, R.B. and R.M.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets presented in this article are not readily available because of sponsor restrictions.

Conflicts of Interest

Author Frederick M. Chache was employed by the company Arcfield. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Heble, S.; Kumar, A.; Prasad, K.; Samirana, S.; Rajalakshmi, P.; Desai, U. A low power IoT network for smart agriculture. In Proceedings of the 2018 IEEE 4th World Forum on Internet of Things, Singapore, 5–8 February 2018; pp. 609–614. [Google Scholar]
  2. Kjellby, R.; Cenkeramaddi, L.; Frøytlog, A.; Lozano, B.; Soumya, J.; Bhange, M. Long-range and Self-powered IoT Devices for Agriculture and Aquaponics Based on Multi-hop Topology. In Proceedings of the 2019 IEEE 5th World Forum on Internet of Things, Limerick, Ireland, 15–18 April 2019; pp. 545–549. [Google Scholar]
  3. Macaraeg, K.; Hilario, C.; Amabatali, C. LoRa-based mesh network for off-grid emergency communications. In Proceedings of the IEEE Global Humanitarian Technology Conference, Online, 29 October–1 November 2020. [Google Scholar]
  4. Saraereh, O.; Alsaraira, A.; Khan, I.; Uthansakul, P. Performance evaluation of UAV-enabled LoRa networks for disaster management applications. Sensors 2020, 20, 2396. [Google Scholar] [CrossRef] [PubMed]
  5. Kaul, S.; Gruteser, M.; Rai, V.; Kenney, J. Minimizing the age of information in vehicular networks. In Proceedings of the 2011 IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks, Salt Lake City, UT, USA, 27–30 June 2011; pp. 350–358. [Google Scholar]
  6. Chen, X.; Gatsis, K.; Hassani, H.; Bidokhti, S. Age of information in random access channels. IEEE Trans. Inf. Theory 2022, 68, 6548–6568. [Google Scholar] [CrossRef]
  7. Kam, C.; Kompella, S.; Nguyen, G.D.; Wieselthier, J.E.; Ephremides, A. Towards an effective age of information: Remote estimation of a Markov source. In Proceedings of the IEEE INFOCOM 2018—IEEE Conference on Computer Communications Workshops, Honolulu, HI, USA, 15–19 April 2018; pp. 367–372. [Google Scholar]
  8. Wang, M.; Chen, W.; Ephremides, A. Reconstruction of counting process in real-time: The freshness of information through queues. In Proceedings of the 2019 IEEE International Conference on Communications, Shanghai, China, 20–24 May 2019. [Google Scholar] [CrossRef]
  9. Chache, F.; Maxon, S.; Narayanan, R.; Bharadwaj, R. Improving quality of service in a mesh network using age of information. In Proceedings of the 2022 IEEE Military Communications Conference, Rockville, MD, USA, 28 November–2 December 2022; pp. 649–654. [Google Scholar]
  10. Yates, R.D.; Sun, Y.; Brown, D.; Kaul, S.; Modiano, E.; Ulukus, S. Age of information: An introduction and survey. IEEE J. Sel. Areas Commun. 2021, 39, 1183–1210. [Google Scholar] [CrossRef]
  11. Little, J.D.C. A proof for the queuing formula: L = λW. Oper. Res. 1961, 9, 383–387. [Google Scholar] [CrossRef]
  12. Morimura, H. On the relation between the distributions of the queue size and the waiting time. Kodai Math. Semin. Rep. 1962, 14, 6–19. [Google Scholar] [CrossRef]
  13. Neely, M.J. Stochastic Network Optimization with Application to Communication and Queueing Systems; Springer Nature: Geneva, Switzerland, 2010. [Google Scholar]
  14. Ramanujan, R.; Newhouse, J.; Ahamad, M. Adaptive streaming of MPEG video over IP networks. In Proceedings of the 22nd Annual Conference on Local Computer Networks, Minneapolis, MN, USA, 2–5 November 1997; pp. 398–409. [Google Scholar]
  15. Yazdani, N.; Lucani, D. Online compression of multiple IoT sources reduces the age of information. IEEE Internet Things J. 2021, 8, 14514–14530. [Google Scholar] [CrossRef]
  16. Hu, S.; Chen, W. Balancing data freshness and distortion in real-time status updating with lossy compression. In Proceedings of the IEEE Conference on Computer Communications Workshops, Online, 6–9 July 2020; pp. 13–18. [Google Scholar]
  17. Zhong, J.; Yates, R.; Soljanin, E. Backlog-adaptive compression: Age of information. In Proceedings of the 2017 IEEE International Symposium on Information Theory, Aachen, Germany, 25–30 June 2017; pp. 566–570. [Google Scholar]
  18. Hu, S.; Chen, W. Joint lossy compression and power allocation in low latency wireless communications for IIoT: A cross-layer approach. IEEE Trans. Commun. 2021, 69, 5106–5120. [Google Scholar] [CrossRef]
  19. Yang, M. Low bit rate speech coding. IEEE Potentials 2004, 23, 32–36. [Google Scholar] [CrossRef]
  20. Lindstrom, P.; Isenburg, M. Fast and efficient compression of floating-point data. IEEE Trans. Vis. Comput. Graph. 2006, 12, 1245–1250. [Google Scholar] [CrossRef] [PubMed]
  21. Lindstrom, P. Fixed-rate compressed floating-point arrays. IEEE Trans. Vis. Comput. Graph. 2014, 20, 2674–2683. [Google Scholar] [CrossRef] [PubMed]
  22. Lindstrom, P. Error Distributions of Lossy Floating-Point Compressors. United States. Available online: https://www.osti.gov/servlets/purl/1526183 (accessed on 16 February 2023).
  23. Jha, S.; Hassan, M. Engineering Internet QoS; Artech House: London, UK, 2002. [Google Scholar]
  24. Kosta, A.; Pappas, N.; Angelakis, V. Age of information: A new concept, metric, and tool. Found. Trends® Netw. 2017, 12, 162–259. [Google Scholar] [CrossRef]
  25. TensorFlow. Basic Classification: Classify Images of Clothing. Available online: https://www.tensorflow.org/tutorials/keras/classification (accessed on 16 February 2023).
  26. Xiao, H.; Rasul, K.; Vollgraf, R. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms. arXiv 2017, arXiv:1708.07747. [Google Scholar]
  27. Brockman, G.; Cheung, V.; Pettersson, L.; Schneider, J.; Schulman, J.; Tang, J.; Zaremba, W. OpenAI Gym. arXiv 2016, arXiv:1606.01540. [Google Scholar]
  28. Kam, C.; Kompella, S.; Nguyen, G.D.; Wieselthier, J.E.; Ephremides, A. Modeling the age of information in emulated ad hoc networks. In Proceedings of the 2017 IEEE Military Communications Conference, Baltimore, MD, USA, 23–25 October 2017; pp. 436–441. [Google Scholar]
  29. Bracciale, L.; Loreti, P. Lyapunov drift-plus-penalty optimization for queues with finite capacity. IEEE Commun. Lett. 2020, 24, 2555–2558. [Google Scholar] [CrossRef]
  30. Grilo, A.; Macedo, M.; Nunes, M. A scheduling algorithm for QoS support in IEEE802.11 networks. IEEE Wirel. Commun. 2003, 10, 36–43. [Google Scholar] [CrossRef]
  31. Li, L.; Li, S.; Zhao, S. QoS-aware scheduling of services-oriented internet of things. IEEE Trans. Ind. Inform. 2014, 10, 1497–1505. [Google Scholar]
  32. Li, C.; Xiao, Y.; Tu, Z.; Chu, D.; Wang, C.; Wang, L. A fast real-time QoS-aware service selection algorithm. In Proceedings of the 2021 IEEE World Congress on Services, Chicago, IL, USA, 5–10 September 2021; pp. 72–77. [Google Scholar]
  33. Chache, F.; Maxon, S.; Narayanan, R.; Bharadwaj, R. Distributed network communications using B.A.T.M.A.N. algorithm over LoRa. In Proceedings of the SPIE Conference in Radar Sensor Technology XXV, Online, 12 April 2021; p. 11742. [Google Scholar] [CrossRef]
  34. Tessaro, L.; Raffaldi, C.; Rossi, M.; Brunelli, D. Lightweight synchronization algorithm with self-calibration for Industrial LoRa sensor networks. In Proceedings of the Workshop on Meteorology for Industry 4.0 and IoT, Brescia, Italy, 16–18 April 2018; pp. 259–263. [Google Scholar]
  35. Roberts, L.G. ALOHA packet system with and without slots and capture. SIGCOMM Comput. Commun. Rev. 1975, 5, 28–42. [Google Scholar]
  36. Abramson, N. The throughput of packet broadcasting channels. IEEE Trans. Commun. 1977, 25, 117–128. [Google Scholar] [CrossRef]
  37. Semtech Corporation. LoRa Modulation Basics; Application Note AN 1200.22; Semtech Corporation: Camarillo, CA, USA, 2015. [Google Scholar]
  38. Lingua Franca Handbook. Available online: https://www.lf-lang.org/ (accessed on 16 February 2023).
  39. Alawad, F.; Kraemer, F. Value of information in wireless sensor network applications and the IoT: A review. IEEE Sens. J. 2022, 22, 9228–9245. [Google Scholar] [CrossRef]
  40. Ayan, O.; Vilgelm, M.; Klügel, M.; Hirche, S.; Kellerer, W. Age-of-information vs. value-of-information scheduling for cellular networked control systems. In Proceedings of the 10th ACM/IEEE International Conference on Cyber-Physical Systems, Montreal, QC, USA, 16–18 April 2019; pp. 109–117. [Google Scholar]
Figure 1. Network with AoI measured at two locations: Δ1(t) is measured after the packet leaves the queue of the sender (Red) and Δ2(t) is measured when the packet reaches the gateway (Yellow).
Figure 2. Age of information vs. time as measured by the source and destination for a well-managed network. This network is managed properly, and the AoI remains bounded with high arrival rates.
Figure 3. Compression ratio (black dashed line) and noise power of the recovered signal (blue) at different levels of bit precision when using FPZIP on a payload of 222 bytes. As expected, as the compression ratio increases, so does the noise power; conversely, the induced distortion decreases as the compression ratio decreases.
Figure 4. Histogram of the errors from FPZIP (top) and ZFP (bottom). It can be clearly seen that the two algorithms introduce errors with very different profiles. FPZIP produces a stair-step error distribution, whereas ZFP produces an error distribution with a Gaussian shape.
Figure 5. Recovered images from the MNIST fashion training set after different levels of compression were applied. The original image is shown in (a), the recovered image after low compression in (b), and the recovered image after high compression in (c).
Figure 6. Distortion vs. classification accuracy for the image classifier. The lower the compression precision, the higher the image distortion and the lower the accuracy of the classifier. If too much compression is used, the transmitted data will become useless to the backend application.
Figure 7. Contour map of the inverted pendulum, showing the performance with different levels of network delay and sensor error. The system is able to maintain control and keep the pendulum upright when the sensor signal distortion and AoI are both sufficiently low.
Figure 8. Simulation environment used to test the various components and optimization scheduling techniques. Nodes with sensors for each of the different application types generate data, then add these data to the appropriate queue type. A distributed scheduling algorithm is used to determine which node should transmit over the shared channel.
Figure 9. Average AoI vs. arrival rate for FCFS queue, comparing the measured values with the theoretical ones. When the arrival rate approaches zero, the average AoI is large due to long periods of time between updates. The average AoI increases with the increase in arrival rate, because the queue size increases without bound. It can be seen that the simulated, theoretical, and measured D/M/1 performance results all agree.
Figure 10. Average AoI vs. arrival rate for LCFS queuing, comparing the measured values with the theoretical ones. The theoretical, simulated, and measured performance results of the D/M/1 queue agree. At low arrival rates, the average AoI is high because there are long periods of time between sensor updates. As the arrival rate increases, the average AoI asymptotically approaches the minimum value.
Figure 11. The Adafruit Feather 32u4 RFM95 LoRa Radio transceivers used during testing to obtain experimental results. Each transceiver contains an ARM core processor running C++ code and a SX1276 LoRa transceiver. The receiver and transmitter used identical hardware and differed only in their code.
Figure 12. Comparison of average AoI measured between two LoRa transceivers for three different bit precision levels and when varying the number of samples added per slot. For each precision level, there exists an optimal number of added samples which minimizes the average AoI of the network.
Figure 13. Average AoI vs. number of packets added per slot with two LoRa transceivers for the two adaptive compression rate algorithms. The adaptive algorithms change the compression settings to minimize the average AoI as the number of added samples varies, with the greedy algorithm outperforming the model-based method.
Figure 14. AoI of the elastic data as a function of the data arrival rate for the different scheduling algorithms. At low arrival rates, all algorithms have similar performance; however, at high arrival rates Slotted ALOHA performs poorly compared to round-robin scheduling.
Figure 15. AoI of the real-time data as a function of the data arrival rate for the different scheduling algorithms. Round-robin with priority slightly outperforms round-robin, though both have a constant AAoI over the range of arrival rates. Slotted ALOHA performs the worst, with the AAoI increasing as the arrival rate increases.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
