1. Introduction
Radio spectrum is a vital resource that must be managed effectively. Like water, gas, land, and minerals, spectrum is a natural resource; unlike them, however, it is reusable over the spatial and temporal dimensions. It is usually divided into discrete bands between 3 kHz and 300 GHz, comprising multiple licensed and unlicensed bands. Within each band, frequencies are organized into small sets of non-overlapping frequencies (or channels) to prevent interference. Radio frequencies are traditionally assigned to groups of similar services for long-term periods by static spectrum allocation [
1]. In this method, each channel is exclusively assigned to a single service provider, which often leads to a huge waste of spectrum [
2,
3,
4].
Currently, the Internet-of-Things (IoT) is growing rapidly, driven by the evolution and integration of tiny objects like Radio Frequency ID (RFID) tags, sensors, and actuators with the Internet. The IoT is taking hold in almost every discipline of life, replacing old, dumb devices with new low-power, low-cost smart objects that can operate independently. The development of these smart objects has enabled IoT technology to offer a wide range of useful applications such as smart homes, smart agriculture, smart grids, smart cars, connected health, and so on [
5]. These applications require a lot of battery-powered smart objects with Internet connectivity anywhere, anytime, for anything. Unfortunately, most IoT-enabling technologies, such as ZigBee, WiFi, 6LoWPAN, and Bluetooth Low Energy (LE), rely on the license-free Industrial, Scientific, and Medical (ISM) bands for spectrum. The growing demand for spectrum calls for changes to the traditional static spectrum allocation policy to keep the ISM bands from becoming congested.
Under static allocations and the spectrum demands of technologies like 3G and 4G telecom services, spectrum scarcity is considered one of the major issues for both industry and academia. One recent report anticipates that 2.7% of physical objects will have smart devices attached to them by 2020, up from only 0.6% in 2012 [
6]. Furthermore, based on the United Nations world population prospects [
7] and information from another recent report [
8], we calculate that, in 2025, each person will have at least nine things with Internet-connected smart devices on average, as shown in
Figure 1. Parts of the spectrum must accommodate new inventions and the emerging demand from the Internet, so that technological innovation can proceed alongside better spectrum utilization, maximizing net gains and social benefits. Thus, the rapid proliferation of the IoT necessitates new access paradigms, enabling protocols, and more spectrum.
In much of the research, we find proposals to use the Cognitive Radio (CR) system [
9,
10,
11,
12], to resolve the spectrum shortage issue for IoT-enabled networks. In CR technology, Secondary Users (SUs) can exploit licensed channels whenever the legitimate Primary Users (PUs) are inactive. However, SUs are obligated to vacate a channel immediately once PUs become active; otherwise, PUs can suffer harmful interference from the SUs’ transmissions. Moreover, the temporal and spatial appearance of PUs and imperfect sensing results make it difficult to avoid interfering with PUs. Furthermore, CR networks, especially with a decentralized architecture, are prone to the hidden primary terminal problem [
13,
14,
15], which can also interfere with PUs.
In Chen et al. [
16], we find a Medium Access Control (MAC) protocol called Cognitive Radio Carrier Sense Multiple Access with Collision Avoidance (CR-CSMA/CA), which resolves the hidden primary terminal problem. In this protocol, an SU transmitter first conducts carrier sensing to avoid transmission overlap, and then executes a mutual spectrum sensing operation to protect the hidden PU receiver. In mutual spectrum sensing, the SU transmitter synchronizes with the corresponding SU receiver via a new control packet, called Prepare-To-Sense (PTS), so that both can perform their sensing operations at the same time and simultaneously determine the channel’s status in their respective sensing zones, confirming the silence of PUs. Afterwards, both SUs exchange the Request-To-Send (RTS) and Clear-To-Send (CTS) packets to set up DATA packet transmission as in classic CSMA/CA [
17]. However, this protocol is not very attractive for the IoT environment due to the additional overhead of the PTS packets.
For CR-based IoT networks, IoT technology also brings intrinsic key issues, which include low power consumption, high throughput, more mobility, and extensive scalability [
18]. Unfortunately, most existing technologies and protocols do not fully support IoT applications. For example, Low Power Wireless Personal Area Network (LP-WPAN) technologies (e.g., ZigBee, Bluetooth LE, and 6LoWPAN) for short-range communications support only a few hundred kilobits per second. On the other hand, Low Power Wide Area Network (LP-WAN) technologies (e.g., SigFox and LoRa) for long-range communications support only a few kilobits per second. The short range of LP-WPAN and the low throughput of LP-WAN technologies limit their application in the IoT.
Moreover, although the prevalent IEEE 802.11 standard performs well for Wireless Local Area Networks (WLANs), it lacks support for super-dense IoT networks. In a super-dense environment, WLAN stations suffer frequent collisions that not only cause high power consumption but also degrade throughput. Rango et al. [
19] estimated that wireless stations may consume 50% of their total energy due to collisions at high load by super-dense WLANs. Akella et al. [
20] obtained measurements in several cities showing that more than ten closely deployed Service Access Points (SAPs) in IEEE 802.11 networks cause severe collisions with each other. This situation worsens as more SAPs are deployed as contenders.
Given the issues mentioned above, the IEEE 802.11 Task Group initiated the IEEE 802.11ah project to enact a standard that can operate on sub-1 GHz license-exempt bands [
21]. The major objectives of this project are to support a large number of stations over a wide network range, with a high data rate and low power consumption. The MAC protocol in IEEE 802.11ah places the participating stations into multiple groups to reduce the number of collisions. The stations in each group are alternately allowed access to the channel in a Restricted Access Window (RAW) that lasts for a limited period of time. The designated stations in the RAW use the traditional Enhanced Distributed Channel Access (EDCA) to access the channel [
22]. Meanwhile, stations outside the RAW do not participate; instead, they go into sleep mode and save energy. The participating stations determine their RAW through coordination with the SAP for uplink or downlink transmissions.
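As a concrete illustration of the grouping described above, the sketch below maps stations to RAW slots with a round-robin modulo rule commonly described for IEEE 802.11ah; the AID range, offset, and slot count are example values, not taken from the standard:

```python
def raw_slot(aid: int, n_slots: int, offset: int = 0) -> int:
    """Map a station's Association ID (AID) to a RAW slot index
    using a round-robin modulo rule: slot = (AID + offset) mod n_slots."""
    return (aid + offset) % n_slots

# Example: 20 stations shared over 4 RAW slots. Stations outside their
# slot sleep, which bounds contention to about 5 stations per slot.
groups = {}
for aid in range(1, 21):
    groups.setdefault(raw_slot(aid, 4), []).append(aid)
```

In this toy setting, each of the four slots serves five stations, so at any moment only a quarter of the network contends for the channel.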
IEEE 802.11ah assumes that every station can directly connect with the SAP. However, this assumption is not always true, since SAP services are not always available to the stations. Moreover, SAPs require signaling overhead to control the groups of stations. For the SAP, more regroupings are required due to frequent variations in the number of participating stations over time, and they thus incur more overhead [
22]. This phenomenon is likely to be common in IoT networks with mobile devices like vehicles and mobile phones, and such networks are usually hampered by the side effects of mobility [
23]. Similarly, if the SAP somehow fails, the entire network stops serving clients and all transmissions are disrupted. Stations should, therefore, be capable of forming an ad hoc network in an IEEE 802.11ah system.
The wide applicability of ad hoc networks motivates IoT realization through ease of deployment, self-organization, and cost-effectiveness. Therefore, room for research exists in CR-based decentralized IoT networks. To this end, a grouping strategy like that in IEEE 802.11ah can sufficiently resolve the contention problem in a super-dense network. However, grouping stations in decentralized networks can raise the rendezvous issue. That is, suppose that the stations are grouped and that each group is assigned to a different RAW. Then, a transmitter and its receiver might be placed in different groups and be unable to communicate with each other. This problem does not occur with the SAP under IEEE 802.11ah, since the SAP is the only receiver for all transmitters. Furthermore, the number of stations in the network is not known in the decentralized setup. Under such a scenario, defining the groups and their sizes becomes crucial in order to fix the number of RAWs required at any moment. With centralized networks like IEEE 802.11ah, such problems do not exist, since the number of connected stations is always known to the SAP.
In CR networks, channel access can either be controlled by spectrum sensing [
24] or by a geolocation database method [
25]. In spectrum sensing, the SU can autonomously detect the activity of PUs with energy detection or cyclostationary feature detection. Conversely, with a geolocation database, a centralized architecture is required to communicate white space information to SUs. It is debatable which of the two methods is better suited to ensure channel access and measure interference levels. However, in a decentralized setup, the former is preferable to the latter due to its low cost and ease of implementation [
26]. For example, in IEEE 802.11af-based licensed TV bands, plenty of white space exists but with a limited number of broadcast stations [
27]. Therefore, those broadcast stations cannot fully deliver local information on white space to the SUs at any given time and location. Otherwise, more broadcast stations must be installed to protect the PUs. If stations of unlicensed networks, e.g., IEEE 802.11ah, can enable spectrum sensing in primary networks, e.g., IEEE 802.11af, then SUs can simply identify the local white space without coordination with broadcast stations.
In this paper, we propose a new decentralized MAC protocol for CR-based IEEE 802.11ah networks, in which users of the IEEE 802.11ah standard can dynamically access the TV bands of IEEE 802.11af networks as SUs. Our protocol is called carrier sense Restricted Access with Collision and Interference Resolution (RACIR), since it combines carrier sensing with spectrum sensing for scalability and interference resolution. Unlike in wireline networks, channel sensing is usually not feasible during data transmission in wireless networks due to the deafness problem: the receiver of the transmitting node is overwhelmed by its own transmission power. The design of the proposed protocol was therefore inspired by Wireless-CSMA/CD [
28] and CSMA/CR [
29], since they characterize collision detection and collision resolution in wireless networks, respectively. We adopt a hybrid of Wireless-CSMA/CD and CSMA/CR with a novel application of interference resolution in CR-based decentralized IoT networks.
The key contributions of this paper are summarized as follows:
We propose a novel MAC protocol for CR-based IEEE 802.11ah networks that resolves both the scalability issue and the hidden primary terminal problem.
We develop a new decentralized algorithm that estimates the number of participating stations in the network in order to judiciously organize them into different groups.
We analyze the normalized throughput of our proposed protocol with a Markov chain model and compare the results with those of CR-CSMA/CA. We also derive analytic expressions to evaluate the performance of the proposed MAC in terms of average packet delay and average energy consumption per delivered bit.
The remainder of the paper is organized as follows.
Section 2 provides an overview of the IEEE 802.11ah standard.
Section 3 provides a similar overview of the IEEE 802.11af standard.
Section 4 summarizes related work.
Section 5 presents the system model.
Section 6 describes the proposed MAC protocol, and
Section 7 analyzes it through a mathematical model.
Section 8 validates our mathematical model and discusses the results. Finally, in the last section, we summarize the paper and draw conclusions.
4. Related Work
Under IEEE 802.11ah, the SAP allocates groups of stations to RAW-periods based on time division. Several researchers proposed analytic models to evaluate the system performance of the RAW method in terms of throughput, delay, and energy efficiency [
38,
39]. Yoon et al. [
34] evaluated the impact of the hidden terminal problem in the IEEE 802.11ah system and proposed a regrouping algorithm to alleviate its effects on system performance. Hzmi et al. [
40] presented some holding schemes for RAW handover between the groups in order to improve throughput and energy efficiency. Under the holding schemes, stations essentially hold themselves to continue transmission before crossing the boundary of their allocated RAW-period to avoid collisions. Kim et al. [
41] determined that grouping of stations based on time division is likely to degrade system performance, because an inevitable holding period is required during the handovers between the groups. They therefore suggested basing the grouping strategy on transmission attempts in order to avoid wasting the channel. That is, a group of stations within a RAW-period is allowed to utilize slots based on the number of successful attempts, instead of the fixed length of the RAW-period, so that stations in the last slot do not need to hold their transmissions before the expiry of allocated RAW-periods.
Further research focused specifically on the RAW mechanism with respect to various performance measures [
42,
43,
44]. Zhao et al. [
42] optimized the performance of the RAW-mechanism in terms of power consumption, and showed that energy efficiency in the sensor nodes improves with an increasing number of RAW-groups. Tian et al. [
43] evaluated the performance of RAW-grouping parameters under a variety of network configurations and highlighted the need for adaptation of grouping parameters to improve network efficiency. Tian and colleagues [
44] suggested an optimization algorithm that judiciously defines the grouping parameters based on real-time traffic, improving network efficiency.
We can classify grouping strategies into centralized and decentralized schemes. For example, Chang et al. [
45] proposed a load-balanced grouping algorithm to form efficient groups of sensors that are connected to the SAP server. Other researchers proposed analytic models to track throughput performance of a centralized grouping strategy under a super-dense network [
23,
46]. In addition, we found some other so-called decentralized grouping schemes [
23,
41], but they did not consider a decentralized topology in the network. However, all these schemes assumed that there always exists a centralized receiver (or SAP), which coordinates all the stations in terms of channel access, power control, sleep schedule, and other RAW parameters. We found relay-based schemes for IEEE 802.11ah [
47], in which the relay stations, connected to the root-SAP, can act as relay-SAPs. However, those authors did not consider the implementation issues of such voluntary relay-SAP services, like the selection process for relay stations and their energy consumption. In particular, relay stations can drain energy very quickly due to the overhead of coordinating with the root-SAP and a large number of stations.
Another approach exists to reduce collisions among stations, which can be called a contention alleviation scheme [
48,
49]. In this scheme, stations purposefully enlarge the contention window (
W) in order to reduce the collision probability. Typically, stations double the size of the window (up to a maximum value) whenever a collision is observed in an IEEE 802.11 system. If the window is too small, stations suffer collisions more frequently. Conversely, if the window is too large, collisions are reduced but system throughput suffers from larger access delay. To this end, Bianchi [
49] determined that the optimal value of the window is a function of the total active stations in the network, approximately W_opt ≈ n√(2T), where
n is the number of stations attempting to access the channel, and
T is the transmission time of a data frame. However, this scheme may not be practical in super-dense networks due to the larger access delay for the stations. For instance, 8000 stations yield an optimal window of more than 16,000 backoff slots, which greatly increases access delay and eventually compromises the efficiency of the system.
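To see the scale of the problem, the snippet below evaluates the optimal window numerically. It assumes the commonly cited form of Bianchi's result, W_opt ≈ n·sqrt(2T) with T in backoff slots; the exact constants in [49] may differ:

```python
import math

def optimal_window(n_stations: int, frame_slots: float) -> int:
    """Bianchi-style approximation of the optimal contention window:
    W_opt ~ n * sqrt(2T), with frame time T expressed in backoff slots
    (assumed form of the result, for illustration)."""
    return math.ceil(n_stations * math.sqrt(2 * frame_slots))

# With ~8000 contenders and a frame lasting only a couple of slots,
# the optimal window already reaches 16,000 backoff slots, matching
# the access-delay concern raised above.
w = optimal_window(8000, 2.0)
```

The window thus grows linearly in the number of contenders, which is why a contention-alleviation scheme alone cannot serve a super-dense network.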
In typical ad hoc networks, clustering [
50,
51] is another idea in order to implement grouping. In clustering schemes, stations are organized into multiple clusters based on their geographic locations, and a cluster head often coordinates communications between stations through an exchange of location information messages. A detailed study on clustering schemes for ad hoc networks was done by Yu and Chong [
52]. The major difference between grouping (as in IEEE 802.11ah) and clustering is that grouping utilizes the time dimension, while clustering exploits the space dimension of the spectrum [
23]. Therefore, the benefits of clustering are limited in a super-dense network where stations are most likely close to each other. In such a scenario, more location information messages will be required to maintain clusters under a deep hierarchy, which can not only drive stations to drain more energy but can also consume bandwidth at a large scale.
5. System Model
We consider
N SUs in a secondary network, which are co-located with multiple PUs in a primary network. SUs and PUs, respectively, belong to IEEE 802.11ah and IEEE 802.11af networks, as shown in
Figure 4. There is no collaboration between PUs and SUs because the primary and secondary networks operate non-cooperatively. However, SUs can communicate with each other in a self-organized decentralized network without the SAP server; they rely on the SAP only for Internet services when acting as IoT devices. SUs can occupy the licensed channel whenever all neighboring PUs are inactive. Meanwhile, if a neighboring PU is active, SUs are obligated to vacate the channel immediately. We assume an error-free single-channel model, in which packet loss takes place only due to collisions between SUs and/or interference with PUs. Each SU conducts spectrum sensing to detect the activity of PUs in its neighborhood. SU
j can detect the neighboring PUs as active with a probability of
, and as inactive with a probability of
.
In a real environment, perfect sensing is a big challenge. Thus, there always exists a certain probability of misdetection and false alarm. For SU
j, we denote the misdetection probability as
and the false alarm probability as
. Misdetection indicates that an SU mistakenly recognizes active PUs as idle, which leads to significant interference due to subsequent transmissions by SUs. When there are
N SUs, the overall misdetection probability increases, as seen on the left side of Equation (
1), assuming that misdetections between users are independent:
We require that the misdetection probability be bounded by a predefined value
. On the other hand, a false alarm means that an SU mistakenly detects an idle PU as active. Then, SUs can miss a transmission opportunity. The false alarm probability depends on sensing time
T [
53] as follows:
where
is the Signal-to-Interference-and-Noise-Ratio (SINR) of the PUs’ signals measured at SU
j,
is the sampling rate of the channel, and
represents the complementary distribution function of a standard Gaussian variable.
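The trade-off between sensing time and false alarms can be sketched numerically. The snippet below uses a standard energy-detection relation from the sensing-throughput tradeoff literature; since the paper's Equation (2) is not reproduced here, the exact form and constants are assumptions:

```python
from statistics import NormalDist
import math

_norm = NormalDist()

def Q(x: float) -> float:
    """Complementary CDF of a standard Gaussian variable."""
    return _norm.cdf(-x)

def false_alarm_prob(sense_time: float, f_s: float, sinr: float,
                     p_md_max: float) -> float:
    """False alarm probability of energy detection vs sensing time.

    Assumed standard form (the paper's exact equation may differ):
      P_f = Q( sqrt(2*sinr + 1) * Q^{-1}(1 - p_md_max)
               + sqrt(sense_time * f_s) * sinr )
    where f_s is the sampling rate and p_md_max bounds misdetection.
    """
    q_inv = -_norm.inv_cdf(1 - p_md_max)  # Q^{-1}(1 - p_md_max)
    return Q(math.sqrt(2 * sinr + 1) * q_inv
             + math.sqrt(sense_time * f_s) * sinr)

# Longer sensing drives the false alarm probability down for a fixed
# misdetection bound, SINR, and sampling rate (example values).
pf_short = false_alarm_prob(1e-3, 6e6, 0.05, 0.1)
pf_long = false_alarm_prob(5e-3, 6e6, 0.05, 0.1)
```

The monotone decrease in P_f with sensing time T is the property the protocol exploits when sizing the ID slot.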
6. Proposed MAC Protocol
The proposed MAC protocol, called carrier sense Restricted Access with Collision and Interference Resolution (RACIR), is based on the random access model. In RACIR, system time is divided into non-overlapping and equal time slots. The SUs in the designated group can only access the shared channel at the beginning of each time slot, which is denoted as
. The SUs are assigned to the designated group with our group split algorithm, which is explained in
Section 6.4.
6.1. RTS/CTS Access Mechanism of RACIR
We here discuss how SUs in a designated group access a channel within the allocated RAW-period using control frames. According to RACIR, the intended SU first senses the channel for an Extended Interframe Space (EIFS) interval. If the channel is sensed as idle, it then exchanges RTS/CTS with the corresponding SU receiver, as part of CSMA/CA. The SU transmitter can start data transmission after a successful exchange of RTS and CTS packets. Right after data transmission begins, both the SU transmitter and the SU receiver start spectrum sensing for a short interval, called the Interference Detection (ID) slot. Each of the two SUs randomly chooses an ID slot from the ID period in order to detect the activity of neighboring PUs in its respective spectrum-sensing zone.
If a PU is found to be active by either the SU transmitter or the SU receiver, that SU first broadcasts a JAM signal to stop the ongoing transmission and then goes into a blocking state. After receiving the JAM signal, the other SU also blocks itself for a predefined period to protect the ongoing transmission of the PU. The length of the blocking period is twice the data transmission period. Note that detection of a JAM signal depends on the PHY layer and on the delay required to switch from transmission (Tx) to reception (Rx) or from Rx to Tx. Whenever no PU is active and no JAM signal is received, the SU transmitter continues transmitting data packets and then receives an ACK from the corresponding receiver to complete the transmission. The length of the ID slot must be shorter than the EIFS interval so that other SUs do not interfere. Furthermore, it should be greater than the JAM signal’s detection time plus the Tx-to-Rx (or Rx-to-Tx) switch time, to let the SU invoke the blocking state. Note that PUs have transmission priority over SUs due to the long EIFS interval in carrier sensing and the JAM signal applied after the mutual spectrum sensing operation.
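The per-SU decision after the ID slot can be summarized as a small routine. The names and the two-data-period blocking constant follow the description above; everything else is a simplified sketch:

```python
from dataclasses import dataclass

@dataclass
class IdSlotOutcome:
    send_jam: bool      # this SU detected a PU and broadcasts a JAM
    blocked_for: int    # blocking period in time slots (0 = not blocked)
    continue_tx: bool   # the DATA transmission may proceed

def id_slot_decision(pu_detected: bool, jam_heard: bool,
                     data_period_slots: int) -> IdSlotOutcome:
    """Per-SU decision after the mutual spectrum-sensing (ID) slot.

    - PU detected locally -> broadcast a JAM signal, then block.
    - JAM heard from peer -> block without jamming.
    - Otherwise           -> continue the DATA transmission.
    The blocking period is twice the data transmission period, as
    specified for RACIR's blocking state.
    """
    if pu_detected:
        return IdSlotOutcome(True, 2 * data_period_slots, False)
    if jam_heard:
        return IdSlotOutcome(False, 2 * data_period_slots, False)
    return IdSlotOutcome(False, 0, True)
```

Either endpoint can trigger the JAM, so a PU hidden from the transmitter but visible to the receiver is still protected.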
6.2. Basic Access Mechanism under RACIR
For a fully connected network within one-hop distance, our RACIR protocol also supports the basic access mechanism standardized by IEEE 802.11. In the basic access mechanism of RACIR, competing SUs must first sense the channel as idle for the EIFS interval. Then, they individually choose a backoff counter to avoid collision. The SU transmitter whose backoff counter expires first gains the right to transmit data packets directly. Right after data transmission begins, it conducts mutual spectrum sensing to enable the interference resolution process, as in the RTS/CTS access mechanism of RACIR. Meanwhile, the other SUs decrease their backoff counters by one whenever the channel is found idle for the EIFS interval again.
Remark: In our proposed MAC, if a PU becomes active during the exchange of control packets, it suffers interference until the RTS/CTS transmission completes. This interference is acceptable in CR networks for the following reasons:
IEEE 802.22, a representative standard in CR systems, requires the SUs to vacate the channel within 100 ms when PUs become active [
54]. This is enough time to complete an RTS/CTS operation.
The activity rate of PUs is very low in the cognitive radio environment [
2,
3,
4]. Hence, PUs are unlikely to become active frequently.
However, in the ID slot, we have considered a spectrum sensing technique [
29], such as energy detection, cyclostationary feature detection, etc., with a JAM signal to resolve interference with the PU during data transmission.
6.3. PU Detection and Protection via JAM Signal
The length of the ID slot, the ID period, and detection of a JAM signal must still be designed. In this connection, let us draw an analogy with the Clear Channel Assessment (CCA) method, as defined by the IEEE 802.11 standard. The CCA method in the standard evaluates the channel as busy either by carrier sensing or spectrum sensing [
17]. Therein,
is the fixed time required by the PHY layer to detect a carrier signal in its various modes, e.g., Direct Sequence Spread Spectrum (DSSS), Frequency Hopping Spread Spectrum (FHSS), etc., in order to access the channel, and is given as:
where
is the required clear channel assessment time by the PHY layer,
is the maximum time required to switch from receive to transmit states,
is the maximum time required for a signal to travel to its destination, and
is the time required by the MAC to issue a request to the PHY layer.
For our RACIR protocol, we can develop a distinguishable pattern for the JAM signal similar to that of the preamble in the IEEE 802.11 standard. Therefore, the detection of JAM signal within
is possible. Our RACIR protocol, however, requires a Tx-Rx-Tx transition to complete a spectrum sensing operation during data packet transmission. To ensure this, a Tx/Rx turnaround time should be included in
, as defined in CSMA/CR [
29]. The length of the ID slot, therefore, includes
. In addition, an ID slot time should be shorter than the EIFS interval, which is defined by the IEEE 802.11 standard as follows:
where
is the length of an ACK frame in bytes and
is the PHY header length expressed in microseconds. Hence, an ID slot (
) must satisfy
Lastly, the slot period length can readily be set as a multiple of the ID slot length.
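Putting the two constraints together, an ID-slot length can be chosen as in the sketch below; all timing values are hypothetical placeholders, not values taken from the standard:

```python
def id_slot_length(jam_detect_us: float, turnaround_us: float,
                   eifs_us: float, margin_us: float = 1.0) -> float:
    """Pick an ID-slot length satisfying RACIR's two constraints:
    jam detection + Tx/Rx turnaround < T_ID < EIFS."""
    lower = jam_detect_us + turnaround_us
    t_id = lower + margin_us
    if not (lower < t_id < eifs_us):
        raise ValueError("no feasible ID-slot length for these timings")
    return t_id

# Hypothetical PHY timings (microseconds), for illustration only.
t_id = id_slot_length(jam_detect_us=25.0, turnaround_us=5.0, eifs_us=364.0)
id_period = 8 * t_id   # slot period chosen as a multiple of ID slots
```

If the PHY's jam-detection plus turnaround time approaches the EIFS, no feasible ID-slot length exists and the timing parameters must be revisited.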
6.4. Group Split Algorithm
Estimating the number of stations during normal channel access operation is not easy, since the access behavior is complicated. Whenever we want to estimate the number of stations, we therefore let each station behave in a predefined simple way that enables probabilistic analysis. To this end, we develop a purpose-built group split algorithm that estimates the number of active stations in the network. The key idea is to split the stations into multiple groups based on the current size of the network. If the number of SUs in a designated group is too large for the slots in an allocated RAW-period, the efficiency of the network may drop due to a large number of collisions. Conversely, if the number of SUs in a designated group is smaller than the number of slots in an allocated RAW-period, the efficiency of the network could also be compromised due to empty slots. We, therefore, design the following group split algorithm.
Any arbitrary station broadcasts a packet announcing the start of the estimation phase. The packet carries a probability parameter
a, and a target group size
. After receiving the packet, each station can access the following
estimation slot with a probability of
independently for each estimation slot. Thus, no station accesses an estimation slot with probability
, where
N denotes the number of stations that we want to estimate. Suppose that we set the number of estimation slots at
K, and, among them, no station has accessed
k estimation slots. The number of empty estimation slots then follows a binomial distribution in the per-slot empty probability, and the probability of observing exactly k empty slots out of K is maximized when the per-slot empty probability equals k/K. Solving this condition for N gives a Maximum Likelihood (ML) estimate of the number of stations, where a is our design parameter. If the estimated number of stations is larger than the target group size, we split the stations into two separate groups. To this end, stations are individually responsible for estimating the number of active stations at any moment. Therefore, if the estimated number of stations is larger than the target group size, each station chooses a random number between 0 and 1 to select a new group. Otherwise, stations do not split into groups.
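The estimation step can be exercised in simulation. Because the exact per-slot access probability used above is not reproduced here, the sketch assumes each station accesses each estimation slot independently with a small probability p, which makes a slot empty with probability (1 − p)^N and yields the ML estimate N̂ = ln(k/K)/ln(1 − p) from the count k of empty slots among K:

```python
import math
import random

def estimate_stations(n_true: int, n_slots: int, p_access: float,
                      rng: random.Random) -> float:
    """ML estimate of the station count from empty estimation slots.

    Assumed model: each of n_true stations accesses each of n_slots
    estimation slots independently with probability p_access.  A slot
    is then empty with probability (1 - p_access)^N, so with k of K
    slots empty the ML estimate is N_hat = ln(k/K) / ln(1 - p_access).
    """
    empty = 0
    for _ in range(n_slots):
        if all(rng.random() >= p_access for _ in range(n_true)):
            empty += 1
    if empty == 0:          # estimator undefined; report a lower bound
        empty = 1
    return math.log(empty / n_slots) / math.log(1.0 - p_access)

# Averaged over a few runs, the estimate lands close to the true size.
rng = random.Random(7)
est = sum(estimate_stations(100, 200, 0.01, rng) for _ in range(20)) / 20
```

As in any ML scheme of this kind, the access probability must be small enough that some slots stay empty; otherwise the estimator saturates.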
8. Results and Discussion
We have analyzed the performance of our proposed protocol with
and
access mechanisms and compared it to that of CR-CSMA/CA [
16] under the system of interest. In our analytic model, we set the number of SUs
N at 12,000 and the number of RAW-periods at 140. In each RAW-period, we set the number of RAW slots at 10 and the target group size
at 100, unless otherwise stated. The optimum value of probability parameter
a is chosen from the range 0 to 1 by trial and error. Initially, each SU estimates the network size according to the group split algorithm in order to choose a group index. Thereafter, the SUs of each group access the channel RAW-period-wise in round-robin fashion, and the RAW slots within each RAW-period are accessed according to the proposed protocol. We developed Monte Carlo simulations in Matlab to verify the analytic results, averaging over 1000 runs. The default system configurations are summarized in
Table 3.
We here refer to the variants of the proposed protocol as RACIR-basic and RACIR-rts/cts for the Basic and RTS/CTS based access mechanisms, respectively.
Figure 8 shows the normalized throughput
of the proposed RACIR-rts/cts protocol without grouping of the stations in a super-dense environment. We can see that the values of
sharply decrease with an increase in the number of SUs. This is because the maximum backoff counter quickly becomes small compared to the number of SUs, which increases the number of collisions; channel time is wasted and system performance decreases. We estimate the size of the network in order to implement the group-based, contention-free access mechanism to reduce collisions. In
Figure 9, we demonstrate the number of estimated stations (or SUs) compared to the actual number of stations in the network with the help of an error graph. We can see that the error bars of the estimated number of stations do not deviate much from the true number of stations in the network. This verifies the accuracy of the probabilistic estimation method used in the group split algorithm. Moreover, we observe only nominal estimation noise (error) when probability parameter
a is set to 0.85.
In
Figure 10, we describe the normalized throughput with grouping of stations in the RACIR-rts/cts system for various sizes of the target group
, and the number of RAW-periods. We observe that the throughput of the system initially remains low, then monotonically increases with the number of RAW-periods up to a certain limit, and thereafter becomes steady. This is because, in early RAW-periods, the groups remain large due to limited splitting of the stations, so the maximum backoff counter is small relative to the many SUs in each group, which results in a large number of collisions. However, the collision-resolution power of the system gradually increases as the RAW-period index grows, because the groups shrink toward the target group size over time, improving system performance. When the group size equals (or decreases to) the target group size, the system does not split the stations further, so throughput becomes steady. The gap between the throughput curves is attributed to the different target group sizes; a smaller target group size performs better due to fewer collisions.
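Since each split lets every station pick one of two groups, group sizes roughly halve per split round. An idealized sketch (ignoring estimation noise) shows how few rounds are needed before throughput stabilizes:

```python
import math

def rounds_until_target(n_stations: int, target: int) -> int:
    """Number of idealized binary splits until every group is no
    larger than the target size (each split halves the group size)."""
    rounds, size = 0, n_stations
    while size > target:
        size = math.ceil(size / 2)
        rounds += 1
    return rounds

# With the defaults used here (12,000 SUs, target group size 100),
# only a handful of split rounds are needed.
r = rounds_until_target(12000, 100)
```

This logarithmic shrinkage is consistent with the throughput curves reaching their steady region after a modest number of RAW-periods.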
Figure 11 shows the effect of the PU activity rate on the normalized throughput of the RACIR protocols. We observe that the throughput curves decrease monotonically as the PU activity rate increases. This is because SUs block their transmissions more often as the PU detection rate in the ID slot rises, in order to avoid interference, thereby ceding more and more spectrum access to the primary network. In return, the channel access of the secondary network is reduced, which ultimately decreases system throughput. We also observe that RACIR-basic achieves better performance than RACIR-rts/cts because of its direct-access mechanism.
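The blocking behaviour above amounts to a simple proportionality: if SUs defer whenever the PU is detected in the ID slot, the channel time left to the secondary network shrinks with the PU activity rate. A minimal sketch, where the peak throughput value and the linear scaling are assumptions for illustration only:

```python
def secondary_throughput(s_max, pu_activity):
    """Sketch: SUs block transmissions whenever a PU is detected in the
    ID slot, so secondary throughput scales with the PU idle fraction."""
    assert 0.0 <= pu_activity <= 1.0
    return s_max * (1.0 - pu_activity)

# Assumed peak normalized throughput of 0.8 at zero PU activity.
for rate in (0.0, 0.2, 0.4, 0.6):
    print(rate, round(secondary_throughput(0.8, rate), 2))
```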
Figure 12 shows the effect of the number of SUs N on the average packet delay for different sizes of the initial contention window. We observe that the delay remains higher at large values of the initial contention window and N. This is due to the fact that a large N increases the packet load and, with it, the probability of packet collisions, as calculated with Equation (9). Thus, a large initial contention window yields a larger delay than a small one, in order to reduce the number of collisions. Eventually, the delay curves (after a certain limit) reach their maximum values and thereafter grow no further, since the backoff stages and buffer sizes are limited. Another factor that also contributes, in part, is the grouping phenomenon, which keeps the scalability of the network under control and thus makes the delay curves insensitive to the number of SUs.
Figure 13 exhibits the effect of N on the average packet delay at different values of the maximum backoff stage M. We see that the delay is an increasing function of N. The reason is that, at a large N, an SU's HOL packet has to wait longer because the backoff delay grows with the large number of collisions. We also see that the delay remains higher for a large M, because a large M keeps packets queued longer in order to increase the probability of successful transmission, but at the cost of increased delay.
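Both delay trends, the effect of the initial contention window and that of the maximum backoff stage M, can be illustrated with a standard truncated binary exponential backoff model. This is a generic model assumed here for illustration; it does not reproduce the paper's Equation (9):

```python
def mean_backoff_slots(w0, m_max, p_coll):
    """Expected backoff slots before a success under truncated binary
    exponential backoff: the window doubles per stage up to stage m_max,
    and stage i is reached with probability p_coll**i."""
    total, reach = 0.0, 1.0
    for i in range(64):                  # 64 stages: p_coll**64 ~ 0
        w = w0 * (2 ** min(i, m_max))    # window frozen after stage m_max
        total += reach * (w - 1) / 2.0   # mean of uniform [0, w-1]
        reach *= p_coll                  # another collision, next stage
    return total

# A larger initial window w0 and a larger maximum stage m_max
# both raise the expected backoff delay.
for w0 in (16, 32):
    for m_max in (2, 6):
        print(w0, m_max, round(mean_backoff_slots(w0, m_max, 0.3), 1))
```

A higher collision probability (driven in the paper by a larger N) pushes SUs into deeper backoff stages, which is why the delay grows with N, with the initial window size, and with M.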
Figure 14 describes performance comparisons among the CR-CSMA/CA, RACIR-basic, and RACIR-rts/cts protocols. In Figure 14a, we see that the throughput decreases monotonically with an increase in N, due in part to the following reasons. First, the range from which the backoff counter is chosen becomes small relative to the number of contending SUs as N increases; the SUs therefore observe frequent collisions, which decreases system throughput. Second, the SUs are spread over more groups as N increases; hence, the time to rendezvous between the SUs also increases, which can lead to a decrease in system throughput. However, RACIR-basic outperforms the RACIR-rts/cts and CR-CSMA/CA protocols due to its direct-access method. In turn, RACIR-rts/cts outperforms the CR-CSMA/CA protocol because it uses the lowest possible control overhead to protect the PU receiver; the bandwidth saved can carry more data packets, which ultimately maximizes system performance. We also observe in Figure 14b that our proposed RACIR protocols have lower packet delay than CR-CSMA/CA. This, too, is attributed to the lower overhead required for each successful operation of the RACIR protocols, which saves channel-negotiation costs and thus decreases the average packet delay.
We demonstrate the average energy consumption per delivered bit (in Joules) for various numbers of RAW slots in Figure 14c. We can see that energy consumption increases with the number of slots S in a RAW-period. This is expected, because the length of the RAW-period is a function of S; a longer RAW-period increases the wakeup time of the stations and thus increases energy consumption. The gap between the energy curves is attributed to the distinct overhead of each MAC protocol. We observe that both variants of RACIR outperform the CR-CSMA/CA protocol due to their efficient operation, which saves a fraction of the system's energy, a property that is critical for IoT applications. Hence, the proposed RACIR can be a good candidate MAC for CR-based IoT networks.
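A back-of-the-envelope version of this scaling, with assumed slot time, listening power, and payload values (none of which are the paper's parameters):

```python
def energy_per_bit(num_slots, slot_time=0.01, p_listen=0.05,
                   bits_delivered=12000):
    """Sketch: the RAW-period length grows linearly with the number of
    slots S, so stations stay awake longer and the idle-listening energy
    per delivered bit rises accordingly.

    Assumed values: 10 ms slots, 50 mW listening power, 12 kbit payload.
    """
    awake_time = num_slots * slot_time               # seconds awake per RAW
    return (awake_time * p_listen) / bits_delivered  # joules per bit

for s in (4, 8, 16):
    print(s, energy_per_bit(s))
```

Under this linear model, doubling S doubles the energy cost per bit, matching the upward trend of the curves in Figure 14c.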