1. Introduction
The Internet of things (IoT) is being applied in various fields, from small sensors to industrial control systems, by supporting the interconnection between devices, and it aims to improve the quality of life [1]. Recently, new challenges have emerged for machine-type communication (MTC), such as public safety and smart-grid applications. In addition, the IoT market is predicted to rapidly grow to more than ten times that of cellular communications [2]. Furthermore, 5G technology, expected to be commercially available by 2020, includes the development of two IoT-specific features, namely, massive IoT, suitable for high-density networks, and mission-critical IoT, aimed at delay-sensitive services [3].
3]. To support various services, IoT networks will coexist with different technologies according to the type of service, rather than being unified by a single solution.
The connectivity technology for IoT services is classified into two categories according to the coverage distance: short-range technology, for applications including smart home and smart health, and long-range technology, for applications including tracking and monitoring for connected cars and smart watches. Short-range connectivity relies on technologies such as wireless local area network (WLAN), Bluetooth, and ZigBee, and the resulting network connects devices based on multiple technologies and forms an infrastructure with a gateway to connect to external networks. In this architecture, however, network maintenance, operation cost, and network complexity rise exponentially as the number of devices grows. In contrast, long-range connectivity relies on cellular network technologies such as global system for mobile communications (GSM), wideband code-division multiple access (WCDMA), long-term evolution (LTE), and LTE-MTC that cover several kilometers and support both device mobility without connection loss and high-speed transmission. However, these technologies present drawbacks including protocol overhead, reduced battery life, and high cost.
Some IoT connectivity scenarios involve only infrequent, delay-tolerant transmissions, but demand continuous operation for several years over a wide area [4,5]. However, conventional connectivity technologies cannot suitably satisfy these requirements. Given that the cellular network is optimized for voice, multimedia, and high-speed services, it is difficult to achieve low cost and low energy consumption for IoT modules. Likewise, the requirement of a gateway to provide connection with external networks in short-range connectivity technologies causes problems similar to those of a wired or cellular network. To solve these problems, the low-power wide area network (LPWAN) technology was introduced. This technology extends the coverage area to several kilometers and improves the device cost and energy consumption by reducing the modem complexity.
Figure 1 illustrates the LPWAN technology and IoT services according to coverage and throughput.
An LPWAN is designed with the following requirements: communication distance up to 40 km, thousands of devices supported by one base station, availability for over 10 years without battery replacement, and module price below US$5. Examples of LPWAN technologies that use the unlicensed industrial, scientific, and medical bands include SIGFOX and long range (LoRa), whereas the narrowband IoT (NB-IoT), recently introduced by the 3rd Generation Partnership Project (3GPP), uses licensed bands [6]. The NB-IoT introduced the evolved packet system (EPS) optimization, which reduces the protocol overhead of state transitions to achieve low energy consumption, in addition to reducing the modem complexity. In existing LTE networks, control signals and application data are transmitted separately via the control and user planes, respectively, and a radio bearer setup is necessary to send application data. In contrast, the NB-IoT enables piggybacking a small application packet onto a non-access stratum (NAS) signaling message. This scheme improves the transmission efficiency by skipping the bearer setup. Despite these features, there has been no progress in the scheduling request procedure to reduce the energy consumption. Given that the NB-IoT was developed based on LTE, auxiliary procedures are necessary for uplink radio resource allocation, which is performed only through a random-access procedure.
In this article, we propose a novel mechanism to improve the energy consumption based on predictive resource allocation. Prediction-based algorithms have been studied to guarantee bandwidth and minimize latency in poor channel conditions. In [7], the authors investigated the predictability of user behavior to minimize the bandwidth needed to achieve a certain quality of service. However, this work does not exploit the non-continuous traffic characteristics or address the power consumption problem. In [8,9], algorithms that predict uplink radio resources by monitoring bandwidth measurements for video streams were proposed. These papers analyzed real-time services, which continuously generate packets at short, deterministic intervals. In contrast, our work addresses IoT data traffic, which consists of only a few handshakes with small packets. In [10], the authors built a traffic model for machine-to-machine communication and proposed an uplink packet prediction algorithm based on the idea that machines in the same group are highly likely to generate data traffic at similar times. However, this work does not exploit the correlation between uplink and downlink packets.
The remainder of this article is organized as follows. We briefly introduce the NB-IoT technology in Section 2 and identify the energy consumption problem caused by the scheduling request procedure in Section 3. Then, we propose algorithms to alleviate this problem in Section 4. Section 5 presents the simulation results and evaluation, and we provide a conclusion in Section 6.
3. Problem Statement for Scheduling Request
Even after the radio bearer is established and the RRC is in the connected state (RRC_Connected), the uplink packets cannot be transmitted without a scheduling request procedure. In this section, we present the scheduling request procedure and the energy consumption problem in NB-IoT networks.
Similar to LTE networks, the NB-IoT radio resources for packet transmission are shared among UEs, and an evolved node B (eNB) scheduler dynamically assigns these resources to each UE based on the scheduling policy. Scheduling commands, which contain assigned time, resource, and decoding information, represent DCI transmitted via the NPDCCH. The UE decodes the NPDCCH information by using a radio network temporary identifier (RNTI) at specific times with the decoding period configured when the connection is established. This RNTI-based decoding identifies whether the radio resource for the NPUSCH is assigned to the UE.
Figure 8 illustrates the uplink packet transmission procedure, including the physical control channel, in the case that the UE does not have any assigned radio resources for uplink. In contrast to LTE networks, the NB-IoT does not have dedicated radio resources for scheduling requests. Hence, uplink radio resources can be requested only through a random-access procedure. Since this procedure competes with those of other devices, contention resolution can fail, and the UE then retries random access after a back-off time. As shown in step (1) of Figure 8, the preamble is transmitted via a narrowband random-access channel (NPRACH). Then, the eNB sends the random-access response through the NPDSCH and NPDCCH. The NPDCCH includes the DCI to decode the NPDSCH, and the NPDSCH carries an identifier for contention resolution and a scheduling command for the next NPUSCH, as shown in steps (2) and (3) of Figure 8. The eNB scheduler has no information on the amount of data in the UE buffer. Hence, it first assigns a small radio resource to receive the UE buffer status information. Next, when the UE receives the uplink scheduling command, it sends the buffer status report through the scheduled NPUSCH, as shown in step (4) of Figure 8. Finally, the eNB acknowledges the UE buffer status and continuously assigns uplink radio resources until the data transfer corresponding to the reported size is completed, as illustrated in steps (5) through (8) of Figure 8.
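The exchange above can be summarized as a simple message trace. The sketch below merely tallies how many radio operations the UE performs for one small uplink packet; the mapping of steps (5) through (8) to specific channels is an illustrative assumption, since the text only states that grants continue until the reported data is sent.

```python
# Sketch of the legacy NB-IoT scheduling-request exchange (steps (1)-(8) of
# Figure 8). Channel names follow the text; the step-to-channel mapping for
# steps (5)-(8) is illustrative.
LEGACY_SR_EXCHANGE = [
    # (step, channel, direction)
    (1, "NPRACH", "UL"),  # random-access preamble
    (2, "NPDCCH", "DL"),  # DCI to decode the random-access response
    (3, "NPDSCH", "DL"),  # contention-resolution ID + UL scheduling command
    (4, "NPUSCH", "UL"),  # MSG3: buffer status report
    (5, "NPDCCH", "DL"),  # DCI for the next uplink grant (assumed)
    (6, "NPUSCH", "UL"),  # application data (assumed)
    (7, "NPDCCH", "DL"),  # DCI for the acknowledgement (assumed)
    (8, "NPDSCH", "DL"),  # acknowledgement (assumed)
]

def count_ops(exchange):
    """Count the UE's uplink transmissions and downlink receptions."""
    ul = sum(1 for _, _, d in exchange if d == "UL")
    dl = sum(1 for _, _, d in exchange if d == "DL")
    return ul, dl

ul, dl = count_ops(LEGACY_SR_EXCHANGE)
print(f"UE TX operations: {ul}, UE RX operations: {dl}")
```

Even under these assumptions, a single small packet costs the UE three transmissions and five receptions before the data is acknowledged, which is the overhead the proposed mechanism targets.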
Thus, although the radio bearer is established, additional interactions are required due to the scheduling request procedure. Given that the random-access procedure can be used to send an RRC connection request besides the scheduling request, the response message is typically composed of an 88-bit scheduling command, a size appropriate for that type of message [15]. Therefore, even packets of a few bytes are difficult to transmit in MSG3 (step (4) of Figure 8), and another uplink transmission is required to complete the packet transmission, as in step (6) of Figure 8. Given the repeated transmissions of the NB-IoT for coverage enhancement, as the number of uplink application packets increases, so does the energy consumption.
Clearly, the scheduling request has no effect when a session completes after the transmission of one small packet, such as that shown in Figure 7b. However, if a handshake between the UE and the network is required, as shown in Figure 9, the battery consumption caused by the scheduling request increases. Figure 9a illustrates the scenario where the network server responds to the application report, and a radio link control (RLC) ACK packet corresponding to the downlink data must be transmitted with the scheduling request procedure. Figure 9b illustrates the scenario of a session triggered by a network command. In the worst case, two scheduling request procedures can be triggered by the RLC ACK and the application response due to their different processing times. As the active time increases with the random-access procedure, both the battery life and the cell capacity can decrease due to failures in contention resolution.
Besides the handshakes triggered by application packets, the energy consumption problem also arises from the transport protocol stack used in the IoT network, such as the constrained application protocol (CoAP), datagram transport layer security (DTLS), and multicast domain name system (mDNS). The DTLS protocol, which is being intensively investigated for the IoT [16], needs additional handshakes for security-context creation and resumption [17]. Given the memory cost for the server of maintaining several security contexts from IoT devices, a refresh operation at the server is inevitable. In this case, an inappropriate timeout value can cause more scheduling request procedures and increase the energy consumption. Figure 10 illustrates the initial 6-way handshake to create the security context in the CoAP/DTLS protocol. The uplink DTLS and RLC ACK packets each need a scheduling request procedure per transmission.
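As a rough tally, assuming the UE acts as the DTLS client and every downlink flight is acknowledged with an uplink RLC ACK (both assumptions, since Figure 10 is not reproduced here), the 6-way handshake alone can cost six uplink transmissions, each needing a legacy scheduling request:

```python
# Illustrative count of uplink transmissions, and thus legacy scheduling
# requests, for a 6-way DTLS handshake. Flight names and the UL/DL mapping
# are assumptions based on DTLS with cookie exchange, with the UE as client.
DTLS_FLIGHTS = [
    ("ClientHello",                  "UL"),
    ("HelloVerifyRequest",           "DL"),
    ("ClientHello+cookie",           "UL"),
    ("ServerHello..ServerHelloDone", "DL"),
    ("ClientKeyExchange..Finished",  "UL"),
    ("ServerFinished",               "DL"),
]

ul_flights = sum(1 for _, d in DTLS_FLIGHTS if d == "UL")  # uplink DTLS packets
rlc_acks = sum(1 for _, d in DTLS_FLIGHTS if d == "DL")    # one RLC ACK per downlink flight
scheduling_requests = ul_flights + rlc_acks                # one per uplink transmission
print(scheduling_requests)
```

Under these assumptions, every single flight of the handshake translates into one uplink transmission, which is why the security-context refresh interval matters for battery life.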
4. Prediction-Based Energy Saving Mechanism
In this section, we propose a prediction-based energy saving mechanism (PBESM) that uses predictive resource allocation to reduce the energy consumption caused by the scheduling request procedure in an NB-IoT network. The PBESM predicts the uplink occurrence and processing delay from packet inspection, and the eNB preassigns radio resources for the uplink packet transmission. Thus, the PBESM allows uplink packets to be sent without a scheduling request procedure.
4.1. Network Architecture with PBESM
Figure 11 shows the NB-IoT network architecture that includes the proposed mechanism. It is basically the same as the conventional network architecture and interface structure, but it adds two new entities, namely, the packet inspection entity (PIE) and the packet prediction entity (PPE).
The PIE, which is logically located on the MME, determines the session type from packet header inspection, e.g., protocol type, port number, and IP address. Then, the PIE predicts the occurrence of the uplink response message according to the strategies listed in Table 1. Moreover, it measures the response time for each session and sends this information to the PPE, which operates in the base station. The PIE is located on the MME because the paging delay in the PSM may pull the predictive processing delay in the wrong direction, and the MME can prevent this case. The estimated information is transmitted through the existing S1-MME interface in the downlink packets. Although the response messages of the NAS and transport protocols (e.g., DTLS, CoAP, and the transmission control protocol, TCP) can be predicted by the PIE, it is difficult to predict the application response due to message encryption and customized protocols. Therefore, the application server should support the prediction of uplink occurrence to profit from the PBESM. For predictions from the application server, the information about packet occurrence is reliable only when the paging delay in the PSM is taken into account.
The PPE organizes the received uplink occurrence information and response time for each session. Using these data and the proposed algorithm, it predicts the processing delay, which is the time difference between the downlink transmission and uplink packet generation. Then, the eNB generates a prescheduling command that contains the uplink transmission time and modulation scheme, and the command is sent to the UE in a downlink packet via the NPDSCH. When the UE receives the command, it holds the uplink packet for the specified time without a scheduling request procedure.
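As a data-structure sketch, the prescheduling command described above might carry fields such as the following; the class, field names, and types are illustrative assumptions, not the on-air encoding:

```python
from dataclasses import dataclass

@dataclass
class PreschedulingCommand:
    """Illustrative contents of the prescheduling command sent via the NPDSCH."""
    ue_cid: int      # UE cell ID the command is addressed to
    tx_time: int     # predicted uplink transmission time (e.g., subframe index)
    modulation: str  # modulation scheme for the preassigned NPUSCH

# Example: the eNB preschedules an uplink transmission at time 1200.
cmd = PreschedulingCommand(ue_cid=7, tx_time=1200, modulation="QPSK")
print(cmd.tx_time)
```

On reception, the UE would store such a command and hold its uplink packet until `tx_time`, as described in Section 4.3.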
Predictable Packet Type
There are five types of predictable packets, namely, RLC ACK, RRC and NAS signaling messages, transport layer response (e.g., TCP ACK and DTLS response), and application response.
Table 1 lists the prediction entities and strategies for each packet type.
4.2. Prediction of Processing Delay
The processing delay for downlink packets varies according to the protocol category, application type, and device performance. The PPE manages the prediction metrics for each session and UE. We introduce the predictive processing delay according to the UE cell ID (CID) and the packet type ID (PID). Hence, we define the k-th predictive processing delay as D_k(CID, PID) and the modified predictive processing delay D'_k(CID, PID), which applies the carried information with weight w, by

D'_k = w·D_MME + (1 − w)·D_k,   (1)

where D_MME is the information received from the MME.
For a given session and UE, the processing delay can be very similar to its previous value under the same UE load. Therefore, the proposed algorithm uses the previous value as a prediction reference. The prediction result and statistics of past predictions are also considered. Whether the previous prediction was successful can be confirmed through the decoding result of the preassigned radio resource. The variation of the predictive processing delay follows these criteria: if the previous prediction failed and the current probability P_c of successful delay prediction is smaller than the target probability P_t of successful delay prediction, then the PPE increases the predictive processing delay; if the previous prediction succeeded and P_c is greater than P_t, then the PPE reduces the predictive processing delay; otherwise, the PPE holds the previous value. The corresponding expression is:

D_{k+1} = min{D'_k·(1 + Δ+·Γ), D_max},  if the previous prediction failed and P_c < P_t,
D_{k+1} = max{D'_k·(1 − Δ−·Γ), D_min},  if the previous prediction succeeded and P_c > P_t,   (2)
D_{k+1} = D'_k,  otherwise,

where Δ+ and Δ− are compensation steps, Γ = |P_t − P_c|, and D_max and D_min are the upper and lower boundaries, respectively.
The variation step depends on the target probability in (2). Hence, the larger the target probability, the more slowly the predictive processing delay decreases compared with its rate of increase. Γ reduces the variation step when the current probability of successful delay prediction is close to the target probability. Algorithm 1 shows the corresponding procedure to predict the processing delay. The processing delay is estimated in lines 2 to 12, and the probability of successful delay prediction is updated in lines 13 to 23. Furthermore, a window calculation is applied to reduce the complexity and to consider the most recent results, as described in lines 19 to 23. The initial values are chosen based on the simulation results.
Algorithm 1. Proposed prediction algorithm for processing delay in an NB-IoT network
1: Initialization: P_t = 0.9, P_c = 0.0, Δ+ = Δ− = 0.5, D_1 = D_init, k = 0, w = 0.8 for NAS/APP/IP packets, w = 1.0 for RLC/RRC packets, CRC = 0, Γ = 1, D_max = 30,000, D_min = 10, W = 256

2: Prediction:
3: k = k + 1
4: Γ = |P_t − P_c|
5: D'_k = w·D_MME + (1 − w)·D_k
6: if P_c < P_t and CRC = 0 then
7:  D_{k+1} = min{D'_k·(1 + Δ+·Γ), D_max}
8: else if P_c > P_t and CRC = 1 then
9:  D_{k+1} = max{D'_k·(1 − Δ−·Γ), D_min}
10: else
11:  D_{k+1} = D'_k
12: end if

13: NPUSCH result update:
14: if NPUSCH_CRC = OK then
15:  CRC = 1
16: else
17:  CRC = 0
18: end if
19: if k < W then
20:  P_c = (P_c·(k − 1) + CRC)/k
21: else
22:  P_c = (P_c·(W − 1) + CRC)/W
23: end if
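A minimal executable sketch of this predictor follows, assuming the variable roles reconstructed above (target probability, compensation step, MME weight, delay bounds, and probability window); all names, and the initial delay of 1000 ms, are introduced for illustration and use a single compensation step for both directions.

```python
# Sketch of the processing-delay predictor of Algorithm 1. Parameter names
# and the initial delay are illustrative assumptions.
class DelayPredictor:
    def __init__(self, d_init=1000.0, p_target=0.9, step=0.5,
                 d_min=10.0, d_max=30000.0, window=256, weight=0.8):
        self.d = d_init            # predictive processing delay (ms)
        self.p_cur = 0.0           # current probability of successful prediction
        self.p_target = p_target   # target success probability
        self.step = step           # compensation step (single step assumed)
        self.d_min, self.d_max = d_min, d_max
        self.window = window       # window size for the probability estimate
        self.weight = weight       # weight for the MME-carried information
        self.k = 0                 # number of predictions made

    def predict(self, d_mme, last_crc):
        """Return the next predictive processing delay.

        d_mme: response-time information received from the MME.
        last_crc: 1 if the previously preassigned NPUSCH was decoded, else 0.
        """
        self.k += 1
        gamma = abs(self.p_target - self.p_cur)  # shrinks near the target
        d_mod = self.weight * d_mme + (1 - self.weight) * self.d  # eq. (1)
        if last_crc == 0 and self.p_cur < self.p_target:
            self.d = min(d_mod * (1 + self.step * gamma), self.d_max)
        elif last_crc == 1 and self.p_cur > self.p_target:
            self.d = max(d_mod * (1 - self.step * gamma), self.d_min)
        else:
            self.d = d_mod
        return self.d

    def update_result(self, crc):
        """Windowed update of the success probability (lines 19-23)."""
        n = max(1, min(self.k, self.window))
        self.p_cur = (self.p_cur * (n - 1) + crc) / n
```

For example, starting from a 1000 ms estimate, a failed prediction with an MME report of 2000 ms yields a modified delay of 1800 ms, stretched by the compensation step toward the upper boundary.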
4.3. UE Procedure
Algorithm 2 describes the procedure to apply the PBESM on the UE side. The UE stores a prescheduling command, which is transmitted via the NPDSCH, and checks the stored prescheduling command when an uplink packet occurs. If there is no prescheduling command, the UE follows the legacy scheduling request procedure. However, if there is stored information, the UE defers the uplink packet transmission until the prescheduled time without triggering the scheduling request procedure.
Algorithm 2. Proposed UE procedure in an NB-IoT network
1: Scheduling request procedure:
2: if scheduling request is triggered then
3:  if a scheduling command is already stored then
4:   store the uplink packet in the buffer
5:   delay the uplink transmission until the prescheduled time
6:  else
7:   process the scheduling request procedure with random access
8:  end if
9: end if

10: Uplink scheduling procedure:
11: if TX time equals the scheduling command time then
12:  if the buffer is not empty then
13:   process the NPUSCH transmission
14:  else
15:   ignore the NPUSCH
16:  end if
17: end if
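The UE-side procedure can be sketched as a set of event handlers; the class, its method names, and the action log are illustrative assumptions, not part of any specification.

```python
# Sketch of the UE-side PBESM procedure of Algorithm 2, written as event
# handlers. Names and the logging mechanism are illustrative.
class UePbesm:
    def __init__(self):
        self.presched = None  # stored prescheduling command (TX time), or None
        self.buffer = []      # pending uplink packets
        self.log = []         # record of actions, for inspection

    def on_prescheduling_command(self, tx_time):
        """Store a prescheduling command received via the NPDSCH."""
        self.presched = tx_time

    def on_uplink_packet(self, packet):
        """Lines 2-9: defer the transmission if a command is stored."""
        if self.presched is not None:
            self.buffer.append(packet)        # hold until the prescheduled time
            self.log.append("deferred")
        else:
            self.log.append("random_access")  # legacy scheduling request

    def on_tx_time(self, now):
        """Lines 11-17: transmit only if data is actually pending."""
        if self.presched is not None and now == self.presched:
            if self.buffer:
                self.log.append("npusch_tx")
                self.buffer.clear()
            else:
                # Failed prediction: stay silent to save energy.
                self.log.append("npusch_ignored")
            self.presched = None
```

For instance, a UE that receives a command for time 100 and then generates an RLC ACK defers the packet and transmits it at time 100 without any random access, while a UE with no stored command falls back to the legacy procedure.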
On the other hand, if the eNB fails to predict the processing delay and the UE does not have uplink packets to send at the prescheduled time, the UE does not send empty packets to save energy. In this case, the UE does nothing at the prescheduled time and follows the legacy scheduling request procedure when uplink packets are generated.
4.4. Uplink Procedure without Scheduling Request
Figure 12 compares the legacy NB-IoT and PBESM procedures for the scenario in which the UE receives a network command and sends the response message. In the legacy NB-IoT, the random-access procedure is performed to request uplink radio resources. In contrast, the PBESM delays the uplink packet until the prescheduled time, and the random-access procedure is not executed. From the viewpoint of the UE TX operation, the MSG1 and MSG3 procedures can be bypassed using the proposed mechanism, and the battery consumption for message transmission can be substantially reduced. Furthermore, if the predictive processing delay is overestimated, the PBESM latency can be longer than that of the legacy NB-IoT procedure. However, this can be mitigated by setting the maximum predictive delay to a value smaller than the random-access procedure delay. Clearly, a large predictive processing delay improves the energy consumption for more sessions by reducing prediction failures, but it causes more transmission latency. Hence, there is a tradeoff between latency and energy consumption.
6. Conclusions
In this paper, we introduced the energy consumption problem caused by the scheduling request procedure in the NB-IoT network. NB-IoT characteristics, such as the repeated transmission of the NPUSCH and NPRACH, increase the energy consumption of an IoT device, thus notably decreasing its battery life. As a solution, we proposed the PBESM to reduce the energy consumption by decreasing the number of scheduling request procedures. The PBESM predicts the uplink occurrence through inspection of the packet header and classification of the predictable messages. Furthermore, it measures the response time for each session and predicts the processing delay following the proposed algorithm. A network-level simulation showed that, compared with the legacy NB-IoT, the PBESM can achieve from 10% to 34% battery savings in different scenarios and improve the total active time per session by up to 16%. Even in the worst case, which corresponds to a very short session with good channel quality, the active time increased to some extent, but the energy consumption was dramatically reduced.
Future work will include a software-defined network architecture to enhance packet inspection. Moreover, we will investigate the effect of contention resolution in a multiuser scenario. We expect that contention-resolution failures will decrease and that the PBESM will raise the cell capacity.
Overall, the proposed mechanism does not require any special hardware on the IoT device, but only a simple modification of the network entities, including packet inspection and delay prediction. We believe that our mechanism can contribute to the successful deployment of NB-IoT networks.