Article

A Novel Interference Avoidance Based on a Distributed Deep Learning Model for 5G-Enabled IoT

by Radwa Ahmed Osman 1,*, Sherine Nagy Saleh 2,* and Yasmine N. M. Saleh 3

1 Basic and Applied Science Department, College of Engineering and Technology, Arab Academy for Science and Technology (AAST), Alexandria 1029, Egypt
2 Computer Engineering Department, College of Engineering and Technology, Arab Academy for Science and Technology (AAST), Alexandria 1029, Egypt
3 Computer Science Department, College of Computing and Information Technology, Arab Academy for Science and Technology (AAST), Alexandria 1029, Egypt
* Authors to whom correspondence should be addressed.
Sensors 2021, 21(19), 6555; https://doi.org/10.3390/s21196555
Submission received: 17 August 2021 / Revised: 28 September 2021 / Accepted: 28 September 2021 / Published: 30 September 2021
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Cellular Networks)

Abstract: The co-existence of fifth-generation (5G) and Internet-of-Things (IoT) has become inevitable in many applications, since 5G networks create steadier connections and operate more reliably, which is extremely important for IoT communication. During transmission, IoT devices (IoTDs) communicate with an IoT gateway (IoTG), whereas in 5G networks, cellular user equipment (CUE) may communicate with any destination (D), whether a base station (BS) or another CUE, the latter being known as device-to-device (D2D) communication. One of the challenges facing 5G and IoT is interference. Interference may exist at BSs, CUE receivers, and IoTGs due to the sharing of the same spectrum. This paper proposes an interference avoidance distributed deep learning model for IoT and device-to-any-destination communication that learns from data generated by the Lagrange optimization technique to predict the optimum IoTD-D, CUE-IoTG, BS-IoTD and IoTG-CUE distances for uplink and downlink data communication, thus achieving higher overall system throughput and energy efficiency. The proposed model was compared to state-of-the-art regression benchmarks and provided a marked improvement in terms of mean absolute error and root mean squared error. Both the analytical and deep learning models reached the optimal throughput and energy efficiency while suppressing interference at any destination and IoTG.

1. Introduction

The fifth generation (5G) is considered a foundational and emerging technology for the Internet-of-Things (IoT). IoT is a communication environment in which a massive number of devices communicate with each other. IoT devices can be found in smart homes, healthcare, industry, and autonomous vehicles, improving people's daily lives [1]. 5G networks address the major challenges that exist in cellular networks. They enable devices to communicate with each other without the need for a base station (BS), which is known as device-to-device (D2D) communication. Furthermore, they enable machine-to-machine (M2M) and device-to-everything (D2E) communication. In addition, 5G provides secure, low-latency, reliable and efficient connectivity, and also supports mobility [2]. One of the most important systems deploying D2D communication nowadays is 5G-enabled IoT, which is considered a promising future technology. 5G-enabled IoT communication supports a large number of applications such as self-driving cars, drones, virtual reality, security surveillance, and many more [3]. All the devices used in these applications communicate with each other or with an access point or infrastructure using wireless or wired links [4]. Compared to wired links, wireless links are more suitable and efficient for IoT devices; additionally, they provide high rates and reliability with low latency.
5G-enabled IoT is expected to face many challenges such as security, privacy, data control, latency, interference, resource allocation and power consumption. In the literature, diverse solutions have been proposed to overcome these potential problems, yet some challenges still need further investigation [5]. IoT generates an enormous amount of data that has to be processed using machine learning and data analytics [6]. Machine learning, a branch of Artificial Intelligence (AI), focuses on extracting hidden information from the provided data to help make decisions. One machine learning technique that has gained a lot of interest lately is deep learning. Over the past decade, deep learning has attracted considerable attention, as its tremendous capacity has produced models that can learn difficult problems and achieve high performance. Various recent 5G-enabled IoT applications have adopted deep learning since it provides a higher level of analytical processing, thus dramatically enhancing results [7,8].
IoT devices (IoTDs) and cellular user equipment (CUEs) transmit and receive data while sharing the same spectrum in 5G-enabled IoT. Usually, an IoTD communicates with an IoT gateway (IoTG), whereas a CUE communicates with a base station (BS) or another CUE, which is known as D2D communication. All devices send data using the same spectrum, causing interference at the BS or any CUE receiver and at the IoTG. This interference affects the system's reliability and efficiency. In this work, an interference avoidance scheme using a deep learning model is proposed. The main goal of the proposed model is to increase the overall system throughput and energy efficiency. The contributions of this article are summarized as follows:
  • The proposed approach developed an efficient method to enhance the overall system performance in terms of system throughput and energy efficiency.
  • An optimization problem was formulated using analytical and deep learning models to ascertain the reliability and efficiency of communication between 5G networks and IoT systems.
  • The proposed approach aims to decrease or eliminate the interference in 5G networks and IoT systems. This was achieved through determining the optimum distance between CUE-IoTG and IoTD-D for the uplink (UL) data communication and between BS-IoTD and IoTG-CUE for the downlink (DL) data communication. This can be achieved based on different parameters, which affect the system performance such as transmission power, distance between CUE-D and IoTD-IoTG, path loss and signal-to-interference-plus-noise ratio (SINRth).
  • The proposed approach allows CUEs and IoTDs to transmit while using a deep learning model to predict a suitable distance between CUE-IoTG and IoTD-D (uplink) and between BS-IoTD and IoTG-CUE (downlink), thus avoiding severe interference.
  • The proposed deep learning model was compared to state-of-the-art benchmark methods and it provided a marked improvement in the results.
  • The proposed model can be used in the design phase for interference prediction and circumvention.
  • The proposed approach was investigated in terms of overall system throughput and energy efficiency under different conditions, such as the path loss exponent, transmission power, different SINRth values, and different transmission ranges. These findings allow the whole network to be optimized in a dynamic environment.
The upcoming sections are organized as follows: First the related work of IoT and 5G networks will be presented in Section 2. The proposed analytical and deep learning models will be thoroughly discussed in Section 3. Section 4 will show the details of all the experimental work and analytics of the results. Finally, the presented work will be summarized and concluded in Section 5.

2. Related Work

Decreasing the interference in IoT communication and 5G is an important issue that should be tackled, as it directly affects the system performance. In the literature, considerable research effort has been dedicated to the minimization of interference among devices in IoT-based networks, including optimization [9,10,11,12,13,14], deep learning [15,16,17,18,19,20,21,22], modulation [23,24] and the Nash noncooperative power game [25].
In [9], a stochastic optimization problem for a network of IoT devices and cellular users sharing the same frequency spectrum with drones was formulated to obtain the optimized transmission power and maximize energy efficiency given lower interference constraints. In addition, the proposed idea in [10] was based on optimizing the random-access procedure, which allowed users to send messages to the base station using one of the target receiver powers according to which the base station could differentiate between different users using successive interference cancellation. Furthermore, the authors of [11] proposed an approach for the resource allocation and placement of multiple unmanned aerial vehicle base stations in an uplink IoT network. Their approach was divided into three main steps: k-means clustering algorithm to group devices served by the same base station, subchannel assignment allocation for devices to diminish interference, and optimization of the transmission power of the IoT devices and the altitudes of the unmanned aerial vehicles. Moreover, an analysis of a hybrid transceiver design problem was presented in [12] for the maximization of the energy efficiency of multiple-input multiple-output interference channels for IoT power constraint devices. The authors of [13] proposed a utilization of Piece-Wise and Forward Non-Orthogonal Multiple Access (PF-NOMA) in a cooperative communication based on an optimization problem. Their proposed model acquired the optimal power and time splitting factors to achieve the maximum rates. Additionally, [14] proposed a new framework called the interference control model. This proposed model aimed to control the interference among IoT and 5G networks based on an optimization technique to maximize the system efficiency and reliability.
The deployment of deep learning for the minimization of interference was presented in [15], where spectral efficiency was enhanced using a scheme based on deep reinforcement learning, which reused the spectrum resources in the communication of D2D and cellular user equipment (CUE). Similarly, the authors of [16] aimed to enhance spectral efficiency. They proposed the deployment of multiple concurrent frequency bands instead of one channel, where deep learning was used to dynamically select the most suitable channel based on quality requirements such as the signal-to-interference-plus-noise ratio. In addition, in [17], an interference control scheme based on reinforcement learning was proposed to allow the base station to optimize its downlink transmission power while being oblivious of the distribution of the inter-cell interference. The application of Deep Reinforcement Learning (DRL), in which each subcarrier's power allocation could be adjusted among D2D pairs, was proposed in [18] to reduce latency and increase reliability in D2D communication while satisfying rigorous interference constraints. Furthermore, the authors in [19] considered deep learning to address the interference problem among D2D communications in cellular-enabled IoT networks; their model reduced the latency and the burden on enhanced node base stations. Moreover, [20] proposed a scheme to protect energy-efficient video transmission in an IoT system against interference using reinforcement learning. In the proposed scheme, a base station managed the IoT transmission actions, such as transmission power, encoding rate, and modulation and coding scheme, with no knowledge of the transmission channel model. An investigation of different scenarios for channel access management of indoor IoT communication was presented in [21], in which a distributed coordination scheme based on reinforcement learning allowed devices to learn to control their activity patterns. The effects of imperfect interference cancellation and constrained transmission power requirements were studied in [22], where NOMA and packet diversity were jointly adopted.
Other approaches have been adopted, such as in [23], where a modulation scheme was used for 4G and 5G to obtain the required frequency offset to ensure their coexistence with minimal interference among adjacent channels. In addition, in [24], a multiplexing technique was presented for D2D communication to enhance resource utilization and minimize interference of the D2D users with the cellular users. The proposed technique divided each cell into two regions and spectrum resources were allocated to each region to reduce interference among adjacent cells. Finally, in [25], the authors proposed a novel scheme based on the Nash noncooperative power game for a multiuser multiple-input multiple-output power downlink for handling the interference among IoT devices.
Much research has been proposed for the performance enhancement of 5G-enabled IoT systems, yet there is still room for further investigation of how IoTDs, CUEs, the BS and the IoTG interact to decrease the interference occurring at D and the IoTG for the uplink and at the CUE and IoTD for the downlink. The main goal of this work is to determine the conditions required to decrease or eliminate the interference in 5G networks and IoT systems. This can be achieved by establishing the minimum appropriate predicted distance between CUE-IoTG and IoTD-D for uplink data communication and between BS-IoTD and IoTG-CUE for downlink data communication in order to decrease interference at any D and IoTG. The proposed model combines deep learning and analytical optimization: a distributed deep learning model for IoT and 5G networks learns how the IoT and CUE devices, BS and IoTG can avoid interference by adapting the distance between CUE-IoTG and IoTD-D for the uplink and the distance between IoTG-CUE and BS-IoTD for the downlink. This enhances the system's reliability and efficiency. The overall system performance is assessed in terms of system throughput and energy efficiency.

3. Proposed Model

In this section, the proposed model for controlling the interference affecting each destination is described by a numerical optimization technique. Next, the dataset generation based on the proposed analytical model is demonstrated followed by a proposed deep neural network architecture that would be applied to sending devices, base stations and IoT gateways in real life.

3.1. System Model and Problem Formulation

The proposed network assumes that there are N CUEs, K IoTDs, a BS for cellular communication and an IoTG for IoT communication, all sharing the same spectrum, as shown in Figure 1. Figure 1a shows the uplink (UL) data communication for the cellular network and IoT communication. There are two means of communication for the cellular network: (i) the CUE communicates with other CUEs, which is known as D2D communication, or (ii) the CUE communicates with the base station, which is standard cellular communication. Additionally, a number of IoTDs communicate directly with the IoTG. Assuming that at least one CUE and one IoTD sharing the same spectrum have information to transmit to their destinations, the BS and any destination node suffer from interference caused by all transmitting sources. During the downlink (DL), as shown in Figure 1b, the BS and IoTG transmit data to the CUEs and IoTDs, respectively; in this case, the interference occurs at any CUE and IoTD. The aim of the proposed model is to control the interference at all destinations to enhance the overall network performance by optimizing the system throughput $S$ and energy efficiency $EE$ for both the uplink and downlink, as shown in the following equations:
$$\max \sum_{i=1,\,j=1}^{i=N,\,j=K} S_{ij}^{UL}, \qquad S_{ij}^{UL} = f_1\!\left(d_{CG}, d_{ID}, P_C, P_I\right) \tag{1}$$

$$\max \sum_{i=1,\,j=1}^{i=N,\,j=K} EE_{ij}^{UL}, \qquad EE_{ij}^{UL} = f_2\!\left(d_{CG}, d_{ID}, P_C, P_I\right) \tag{2}$$

$$\max \sum_{i=1,\,j=1}^{i=N,\,j=K} S_{ij}^{DL}, \qquad S_{ij}^{DL} = f_1\!\left(d_{BI}, d_{GC}, P_B, P_G\right) \tag{3}$$

$$\max \sum_{i=1,\,j=1}^{i=N,\,j=K} EE_{ij}^{DL}, \qquad EE_{ij}^{DL} = f_2\!\left(d_{BI}, d_{GC}, P_B, P_G\right) \tag{4}$$

where $S_{ij}^{UL}$, $EE_{ij}^{UL}$, $S_{ij}^{DL}$ and $EE_{ij}^{DL}$ are the overall system throughput and the total system energy efficiency for the uplink and downlink, respectively, over the i-th path between a CUE and D and the j-th path between an IoTD and the IoTG. Symbols $d_{CG}$ and $d_{ID}$ are the uplink interference distances between CUE-IoTG and IoTD-D, and $d_{BI}$ and $d_{GC}$ are the downlink interference distances between BS-IoTD and IoTG-CUE, respectively. Symbols $P_C$ and $P_I$ are the uplink CUE and IoTD transmission powers, respectively. Symbols $P_B$ and $P_G$ are the downlink BS and IoTG transmission powers, respectively.
For the proposed model, non-orthogonal multiple access (NOMA) was considered; NOMA can serve a large number of devices and allow them to access the channel at the same frequency/time and with the same transmission power [5,26]. Moreover, a Rayleigh fading channel with additive white Gaussian noise (AWGN) was considered [27]. Furthermore, the channel fading coefficients of different links were assumed to be statistically mutually independent.

3.1.1. Uplink Data Communication

During the uplink data communication, a CUE transmits data to any destination, whether the BS or other CUEs, while an IoTD transmits its data to the IoTG. Thus, the received signals of the CUE-D ($r_{CD}$) and IoTD-IoTG ($r_{IG}$) links can be expressed as follows [28]:

$$r_{CD} = P_C H_{CD} X_1 + \sum_{j=1}^{K} P_{I_j} H_{I_j D} Y_1 + n_1 \tag{5}$$

$$r_{IG} = P_I H_{IG} X_2 + \sum_{i=1}^{N} P_{C_i} H_{C_i G} Y_2 + n_2 \tag{6}$$

where $H_{CD}$ and $X_1$ are the channel gain coefficient and the transmitted symbol of the CUE-D link, respectively. Symbol $P_{I_j}$ is the transmission power of the j-th IoTD, and $H_{I_j D}$ is the channel gain coefficient between IoTD and D. Symbol $Y_1$ represents the noise symbol received by D. Symbols $H_{IG}$ and $X_2$ are the channel gain coefficient and the transmitted symbol of the IoTD-IoTG link, respectively. Symbol $P_{C_i}$ is the transmission power of the i-th CUE, and $H_{C_i G}$ is the channel gain coefficient between CUE and IoTG. Symbol $Y_2$ represents the noise symbol received by the IoTG. Symbols $n_1$ and $n_2$ are the independent and identically distributed (i.i.d.) additive white Gaussian noise (AWGN) of the CUE-D and IoTD-IoTG links, respectively. The signal-to-interference-plus-noise ratio for CUE-D ($SINR_{th}^{CD}$) and IoTD-IoTG ($SINR_{th}^{IG}$) can be represented as follows:
$$SINR_{th}^{CD} = \frac{P_C H_{CD}}{\sum_{j=1}^{K} P_{I_j} H_{I_j D} + N_o B} \tag{7}$$

$$SINR_{th}^{IG} = \frac{P_I H_{IG}}{\sum_{i=1}^{N} P_{C_i} H_{C_i G} + N_o B} \tag{8}$$

where $N_o$ is the thermal noise power spectral density per Hertz and $B$ is the channel system bandwidth. Symbols $H_{CD}$, $H_{I_j D}$, $H_{IG}$ and $H_{C_i G}$ can be represented as:

$$H_{CD} = |h_{CD}|^2 \gamma_{CD} \tag{9}$$

$$H_{I_j D} = |h_{I_j D}|^2 \gamma_{I_j D} \tag{10}$$

$$H_{IG} = |h_{IG}|^2 \gamma_{IG} \tag{11}$$

$$H_{C_i G} = |h_{C_i G}|^2 \gamma_{C_i G} \tag{12}$$

where $h_{CD}$, $h_{IG}$, $h_{C_i G}$ and $h_{I_j D}$ follow a complex normal distribution $\mathcal{CN}(0, 1)$. Symbols $\gamma_{CD}$, $\gamma_{IG}$, $\gamma_{C_i G}$ and $\gamma_{I_j D}$ represent the path loss models of the CUE-D, IoTD-IoTG, i-th CUE-IoTG and j-th IoTD-D links, respectively, which can be expressed as [29,30]:
$$\gamma_{CD} = \gamma_o d_{CD}^{-\alpha} \tag{13}$$

$$\gamma_{IG} = \gamma_o d_{IG}^{-\alpha} \tag{14}$$

$$\gamma_{C_i G} = \gamma_o d_{C_i G}^{-\alpha} \tag{15}$$

$$\gamma_{I_j D} = \gamma_o d_{I_j D}^{-\alpha} \tag{16}$$

where $\gamma_o$ is the path loss constant of any transmission link, $d_{CD}$ and $d_{IG}$ are the transmission distances between CUE-D and IoTD-IoTG, respectively, and $\alpha$ is the path loss exponent. It is worth mentioning that the path loss changes depending on whether the CUE communicates with another CUE or with the BS, due to the difference between the CUE-CUE and CUE-BS links. Accordingly, Equations (7) and (8) can be written as:

$$SINR_{th}^{CD} = \frac{P_C \gamma_o d_{CD}^{-\alpha}}{\sum_{j=1}^{K} P_{I_j} \gamma_o d_{I_j D}^{-\alpha} + N_o B} \tag{17}$$

$$SINR_{th}^{IG} = \frac{P_I \gamma_o d_{IG}^{-\alpha}}{\sum_{i=1}^{N} P_{C_i} \gamma_o d_{C_i G}^{-\alpha} + N_o B} \tag{18}$$
Therefore, the overall system throughput ($S$) and the energy efficiency ($EE$) for the uplink data communication can be expressed as:

$$S^{UL} = \sum_{i=1}^{N} \log_2\!\left(1 + SINR_{th}^{C_i D}\right) + \sum_{j=1}^{K} \log_2\!\left(1 + SINR_{th}^{I_j G}\right) \tag{19}$$

$$EE^{UL} = \frac{\sum_{i=1}^{N} \log_2\!\left(1 + SINR_{th}^{C_i D}\right)}{\sum_{i=1}^{N} P_{C_i} + P_o} + \frac{\sum_{j=1}^{K} \log_2\!\left(1 + SINR_{th}^{I_j G}\right)}{\sum_{j=1}^{K} P_{I_j} + P_o} \tag{20}$$

where $P_o$ is the internal circuitry power.
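To make the uplink formulation concrete, the following minimal Python sketch evaluates Equations (17)-(20) for a single CUE-D link and a single IoTD-IoTG link (K = N = 1). All numeric constants (path loss constant, bandwidth, noise density, circuitry power) are illustrative assumptions, not the Table 1 simulation parameters.

```python
import numpy as np

def channel_gain(d, gamma_o=1e-3, alpha=3.0):
    """Average path loss gain gamma_o * d^(-alpha), as in Equations (13)-(16)."""
    return gamma_o * d ** (-alpha)

def uplink_metrics(P_C, P_I, d_CD, d_IG, d_ID, d_CG,
                   No=3.98e-21,   # assumed thermal noise density, W/Hz (-174 dBm/Hz)
                   B=180e3,       # assumed channel bandwidth, Hz
                   Po=0.1):       # assumed internal circuitry power, W
    # Equations (17) and (18): SINR at the destination D and at the IoTG
    sinr_CD = P_C * channel_gain(d_CD) / (P_I * channel_gain(d_ID) + No * B)
    sinr_IG = P_I * channel_gain(d_IG) / (P_C * channel_gain(d_CG) + No * B)
    # Equation (19): overall uplink throughput (bits/s/Hz)
    S = np.log2(1 + sinr_CD) + np.log2(1 + sinr_IG)
    # Equation (20): uplink energy efficiency
    EE = np.log2(1 + sinr_CD) / (P_C + Po) + np.log2(1 + sinr_IG) / (P_I + Po)
    return S, EE

P = 10 ** (23 / 10) * 1e-3  # 23 dBm expressed in watts
print(uplink_metrics(P, P, d_CD=100.0, d_IG=50.0, d_ID=300.0, d_CG=300.0))
```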
Consequently, the objective function and constraints can be derived as:
$$\max \sum_{i=1,\,j=1}^{i=N,\,j=K} S_{ij}^{UL} \quad \text{subject to} \quad d_{CG} \ge d_{CG}^{min},\; d_{ID} \ge d_{ID}^{min},\; P_C \le P_C^{max},\; P_I \le P_I^{max} \tag{21}$$

and

$$\max \sum_{i=1,\,j=1}^{i=N,\,j=K} EE_{ij}^{UL} \quad \text{subject to} \quad d_{CG} \ge d_{CG}^{min},\; d_{ID} \ge d_{ID}^{min},\; P_C \le P_C^{max},\; P_I \le P_I^{max} \tag{22}$$

where $d_{CG}^{min}$ and $d_{ID}^{min}$ are the minimum required distances between CUE-IoTG and IoTD-D, respectively, for avoiding interference and enhancing the system performance. Symbols $P_C^{max}$ and $P_I^{max}$ are the maximum transmission powers of the CUE and IoTD, which help improve the system performance.
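Rearranging Equation (17) for a single dominant interferer gives a closed form for the smallest interference distance that still meets a target SINRth, which is exactly the lower bound the constraints above act on. A hedged sketch, using the same illustrative constants as the previous snippet:

```python
import numpy as np

def min_interference_distance(P_tx, P_int, d_link, sinr_th_db,
                              gamma_o=1e-3, alpha=3.0,
                              No=3.98e-21, B=180e3):
    """Smallest interferer distance satisfying SINR >= SINRth in Equation (17),
    assuming a single dominant interferer (an illustrative simplification)."""
    sinr_th = 10 ** (sinr_th_db / 10)
    # Interference power budget left once noise and the target SINRth are met
    budget = P_tx * gamma_o * d_link ** (-alpha) / sinr_th - No * B
    if budget <= 0:
        return np.inf  # the link itself cannot reach SINRth at this distance
    return (P_int * gamma_o / budget) ** (1.0 / alpha)

P = 10 ** (23 / 10) * 1e-3  # 23 dBm in watts
# e.g., minimum d_ID protecting a 100 m CUE-D link at SINRth = 10 dB
print(min_interference_distance(P, P, d_link=100.0, sinr_th_db=10))
```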

3.1.2. Downlink Data Communication

During the downlink data communication, the BS and IoTG transmit their data to the i-th receiving CUE and the j-th IoTD, respectively. Thus, the received signals of the BS to any receiving CUE ($r_{BC}$) and IoTG to any receiving IoTD ($r_{GI}$) links can be expressed as follows [28]:

$$r_{BC} = P_B H_{BC} X_3 + P_G H_{GC} Y_3 + n_3 \tag{23}$$

$$r_{GI} = P_G H_{GI} X_4 + P_B H_{BI} Y_4 + n_4 \tag{24}$$

where $H_{BC}$ and $X_3$ are the channel gain coefficient and the transmitted symbol of the BS-CUE link, respectively. $H_{GC}$ is the channel gain coefficient between the IoTG and any receiving CUE. Symbol $Y_3$ represents the noise symbol received by any receiving CUE. Symbols $H_{GI}$ and $X_4$ are the channel gain coefficient and the transmitted symbol of the IoTG-IoTD link, respectively. Symbol $H_{BI}$ is the channel gain coefficient between BS and IoTD. Symbol $Y_4$ represents the noise symbol received by the IoTD. Symbols $n_3$ and $n_4$ are the i.i.d. AWGN of the BS-CUE and IoTG-IoTD links, respectively. The signal-to-interference-plus-noise ratio for BS-CUE ($SINR_{th}^{BC}$) and IoTG-IoTD ($SINR_{th}^{GI}$) can be represented as follows:

$$SINR_{th}^{BC} = \frac{P_B H_{BC}}{P_G H_{GC} + N_o B} \tag{25}$$

$$SINR_{th}^{GI} = \frac{P_G H_{GI}}{P_B H_{BI} + N_o B} \tag{26}$$
where $H_{BC}$, $H_{GC}$, $H_{GI}$ and $H_{BI}$ can be represented as:

$$H_{BC} = |h_{BC}|^2 \gamma_{BC} \tag{27}$$

$$H_{GC} = |h_{GC}|^2 \gamma_{GC} \tag{28}$$

$$H_{GI} = |h_{GI}|^2 \gamma_{GI} \tag{29}$$

$$H_{BI} = |h_{BI}|^2 \gamma_{BI} \tag{30}$$

where $h_{BC}$, $h_{GC}$, $h_{GI}$ and $h_{BI}$ follow a complex normal distribution $\mathcal{CN}(0, 1)$. Symbols $\gamma_{BC}$, $\gamma_{GC}$, $\gamma_{GI}$ and $\gamma_{BI}$ represent the path loss models of the BS-CUE, IoTG-CUE, IoTG-IoTD and BS-IoTD links, respectively, which can be expressed as [29,30]:
$$\gamma_{BC} = \gamma_o d_{BC}^{-\alpha} \tag{31}$$

$$\gamma_{GC} = \gamma_o d_{GC}^{-\alpha} \tag{32}$$

$$\gamma_{GI} = \gamma_o d_{GI}^{-\alpha} \tag{33}$$

$$\gamma_{BI} = \gamma_o d_{BI}^{-\alpha} \tag{34}$$

where $d_{BC}$ and $d_{GI}$ are the transmission distances between BS-CUE and IoTG-IoTD, respectively. Therefore, Equations (25) and (26) can be written as:

$$SINR_{th}^{BC} = \frac{P_B \gamma_o d_{BC}^{-\alpha}}{P_G \gamma_o d_{GC}^{-\alpha} + N_o B} \tag{35}$$

$$SINR_{th}^{GI} = \frac{P_G \gamma_o d_{GI}^{-\alpha}}{P_B \gamma_o d_{BI}^{-\alpha} + N_o B} \tag{36}$$
Therefore, the overall system throughput ($S$) and the energy efficiency ($EE$) for the downlink data communication can be expressed as:

$$S^{DL} = \sum_{i=1}^{N} \log_2\!\left(1 + SINR_{th}^{BC_i}\right) + \sum_{j=1}^{K} \log_2\!\left(1 + SINR_{th}^{GI_j}\right) \tag{37}$$

$$EE^{DL} = \frac{\sum_{i=1}^{N} \log_2\!\left(1 + SINR_{th}^{BC_i}\right)}{P_B + P_o} + \frac{\sum_{j=1}^{K} \log_2\!\left(1 + SINR_{th}^{GI_j}\right)}{P_G + P_o} \tag{38}$$
Consequently, the objective function and constraints can be derived as:
$$\max \sum_{i=1,\,j=1}^{i=N,\,j=K} S_{ij}^{DL} \quad \text{subject to} \quad d_{BI} \ge d_{BI}^{min},\; d_{GC} \ge d_{GC}^{min},\; P_B \le P_B^{max},\; P_G \le P_G^{max} \tag{39}$$

and

$$\max \sum_{i=1,\,j=1}^{i=N,\,j=K} EE_{ij}^{DL} \quad \text{subject to} \quad d_{BI} \ge d_{BI}^{min},\; d_{GC} \ge d_{GC}^{min},\; P_B \le P_B^{max},\; P_G \le P_G^{max} \tag{40}$$

where $d_{BI}^{min}$ and $d_{GC}^{min}$ are the minimum required distances between BS-IoTD and IoTG-CUE, respectively, for avoiding interference and enhancing the system performance. Symbols $P_B^{max}$ and $P_G^{max}$ are the maximum transmission powers of the BS and IoTG, which help improve the system performance.
For the proposed model, for fairness, it is assumed that the required signal-to-interference-plus-noise ratio for the uplink and downlink of the 5G network ($SINR_{th}^{C_i B}$, $SINR_{th}^{BC_i}$) and the IoT system ($SINR_{th}^{I_j G}$, $SINR_{th}^{GI_j}$) has the same value, denoted SINRth.

3.2. Dataset Generation

The datasets used in this work were generated using MATLAB simulations based on the equations explained in Section 3.1.1 for the uplink and Section 3.1.2 for the downlink communication. The parameters in the equations were substituted with the values declared in Table 1. Two datasets were generated, one for the uplink and the other for the downlink communication. The datasets were used to train one model to be placed on all sending devices for the uplink and another to be placed on the BS and IoTG for the downlink communication.
For the uplink communication, the outputs were generated for different combinations of SINRth and distances of CUE-D and IoTD-IoTG using the Lagrange optimization technique, which produced the optimal IoTD-D and CUE-IoTG distances for each input. The experiments were run for different values of SINRth ranging from 0 to 20 dB. For each value of SINRth, the CUE-D distance was initialized to 1 m and incremented by half a meter for each record until the calculated IoTD-D and CUE-IoTG distances no longer yielded acceptable throughput and energy efficiency.
Considering the downlink communication, the output distances IoTG-CUE and BS-IoTD were generated for different values of SINRth, along with the distances of BS-CUE and IoTG-IoTD. The output distances were evaluated for each record to make sure that they met the required throughput and energy efficiency.
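The uplink sweep described above can be sketched in Python as follows. The real datasets were produced in MATLAB with the Lagrange optimizer, so the stand-in optimal_distance function (single dominant interferer) and the 0.4 IoTD-IoTG pairing rule below are illustrative placeholders only.

```python
import numpy as np

GAMMA_O, ALPHA = 1e-3, 3.0                  # illustrative path loss constants
NO_B = 3.98e-21 * 180e3                     # assumed noise power No*B (W)
P = 10 ** (23 / 10) * 1e-3                  # 23 dBm transmit power (W)

def optimal_distance(d_link, sinr_th_db):
    """Stand-in for the Lagrange step: minimum interference distance keeping
    the link at SINRth (the real datasets come from the MATLAB optimizer)."""
    budget = P * GAMMA_O * d_link ** (-ALPHA) / 10 ** (sinr_th_db / 10) - NO_B
    return (P * GAMMA_O / budget) ** (1 / ALPHA) if budget > 0 else np.inf

records = []
for sinr_th_db in range(0, 21):             # SINRth swept from 0 to 20 dB
    d_CD = 1.0                               # CUE-D distance initialized to 1 m
    while True:
        d_IG = 0.4 * d_CD                    # assumed IoTD-IoTG pairing rule
        d_ID = optimal_distance(d_CD, sinr_th_db)
        d_CG = optimal_distance(d_IG, sinr_th_db)
        if not (np.isfinite(d_ID) and np.isfinite(d_CG)):
            break                            # throughput/EE no longer acceptable
        records.append((sinr_th_db, d_CD, d_IG, d_ID, d_CG))
        d_CD += 0.5                          # incremented by half a meter per record

dataset = np.array(records)                  # columns: SINRth, d_CD, d_IG, d_ID, d_CG
```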
The statistical descriptions of all features in the two generated datasets, showing the minimum, maximum, mean, standard deviation and total number of records, are given in Table 2 and Table 3. The Spearman correlation of all input and output features for both the uplink and downlink communication was calculated and is presented in Figure 2. The effect of the correlation is further explained in the results section.

3.3. Proposed Deep Learning Model

One of the most commonly used deep learning architectures is the convolutional neural network (CNN), which has become almost a standard in a variety of two-dimensional (2D) data applications, especially image and video processing. Recently, a modification of the traditional 2D-CNN, namely the 1D-CNN, was proposed in [35] and later showed outstanding performance in numerous studies given a limited amount of signal data. Examples of such applications are biomedical classification, speech recognition, motor fault detection and several others [36].
There are several advantages to using a 1D-CNN. Compared to ordinary deep learning methods, it has proved to generate good results even when training records are scarce. The 1D-CNN has a low computational complexity, making it much easier and faster to train. It is very well suited to real-time applications on mobile devices, as it consumes minimal processing and battery power [36]. Consequently, the 1D-CNN was deployed in this research work as the feature extraction methodology for the proposed model. In this section, the proposed deep learning model is introduced. The presented model is to be implemented on sending devices, the BS and the IoTG separately to calculate the optimal distance required to reduce the interference.

3.3.1. Network Structure

In this work, a distributed deep learning network is proposed, comprising two sub-models, each consisting of input, 1D-CNN, fully connected and output layers. The proposed trained network is intended to be used by each device independently, thus predicting the optimal distance for minimal interference and therefore the best throughput and energy efficiency. Two models are trained, one for uplink communication and the other for downlink communication, each using one of the generated datasets.
The choice of the number of hidden layers was based on multiple experiments, leading to the network depicted in Figure 3. The figure shows the input to the proposed model at a single time instant; any device using this model inputs new entries each time it needs to calculate the optimal distance to avoid interference. Each sub-model takes as input the values of SINRth, input distance 1 (I-Dist1) and input distance 2 (I-Dist2). I-Dist1 represents CUE-D for uplink communication and BS-CUE for downlink communication, while I-Dist2 denotes IoTD-IoTG for uplink communication and IoTG-IoTD for downlink communication.
Each sub-model is then trained independently to learn to predict one of the output distances, O-Dist1 or O-Dist2. O-Dist1 represents IoTD-D for uplink and IoTG-CUE for downlink, while O-Dist2 represents CUE-IoTG for uplink and BS-IoTD for downlink. The values are input to the 1D-CNN layers for feature extraction, followed by fully connected layers that estimate the distances as a regression problem. For each sub-model, the layers are defined as follows (an illustrative sketch follows the list):
  • An abstract input layer that takes the current values of the input and passes it to the 1D-CNN layers
  • The first 1D-CNN is 3 × 1 having 32 filters, with a kernel size of 3
  • The second 1D-CNN is 1 × 1 having 16 filters, with a kernel size of 1
  • A flattening layer to reshape the 1D-CNN output so that it can be input to the fully connected layers
  • A 32-neuron fully connected layer
  • A 16-neuron fully connected layer
  • An output layer to produce the regression distance result
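A minimal sketch of one sub-model in Keras, assuming TensorFlow as the framework (the paper does not name its library); the "same" padding on the first convolution and the placement of the PReLU activations are assumptions:

```python
from tensorflow.keras import layers, models

def build_submodel():
    """One distance-predicting sub-model: three scaled inputs -> two 1D-CNN
    layers -> flatten -> 32- and 16-neuron dense layers -> regression output."""
    model = models.Sequential([
        layers.Input(shape=(3, 1)),                        # SINRth, I-Dist1, I-Dist2
        layers.Conv1D(32, kernel_size=3, padding="same"),  # first 1D-CNN, 32 filters
        layers.PReLU(),                                    # learnable slope below zero
        layers.Conv1D(16, kernel_size=1),                  # second 1D-CNN, 16 filters
        layers.PReLU(),
        layers.Flatten(),                                  # reshape for dense layers
        layers.Dense(32),
        layers.PReLU(),
        layers.Dense(16),
        layers.PReLU(),
        layers.Dense(1),                                   # predicted output distance
    ])
    model.compile(optimizer="adam", loss="mae")            # Adam + MAE (Section 3.3.4)
    return model

# One independent sub-model per output distance (O-Dist1 and O-Dist2)
uplink_submodels = [build_submodel() for _ in range(2)]
```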

3.3.2. Data Scaling

To achieve the best results when learning the deep learning network parameters, the data must be normalized. In this work, the data were normalized using min-max normalization, which applies the following equation to each feature:

$$x_{scaled} = \frac{x - x_{min}}{x_{max} - x_{min}} \tag{41}$$

where $x$ represents the data to be normalized, $x_{min}$ is the minimum value and $x_{max}$ the maximum value of the feature addressed, and $x_{scaled}$ is the output normalized value ranging from 0 to 1.
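As an illustration, Equation (41) corresponds to scikit-learn's MinMaxScaler (assuming that library is available); the sample values below are arbitrary:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Arbitrary sample rows of (SINRth, distance) pairs, for illustration only
X = np.array([[0.0, 100.0],
              [5.0, 600.5],
              [20.0, 836.0]])

scaler = MinMaxScaler()             # implements Equation (41) per feature
X_scaled = scaler.fit_transform(X)  # every column now lies in [0, 1]

# The scaling is inverted after prediction to recover distances in meters
X_restored = scaler.inverse_transform(X_scaled)
```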

3.3.3. Activation Function

Since the presented network aims to predict the distance between devices, it cannot have a negative output. This led to the choice of the Parametric Rectified Linear Unit (PReLU) activation function, first proposed in [37] as a generalization of the traditional Rectified Linear Unit (ReLU), which applies the equation:

$$F(w_i) = \begin{cases} w_i, & \text{if } w_i > 0 \\ a_i w_i, & \text{if } w_i \le 0 \end{cases} \tag{42}$$

where $F(w_i)$ represents the output of the activation function, $w_i$ represents the i-th input and $a_i$ is the i-th slope coefficient. PReLU adaptively learns from the training data the suitable $a_i$ for the values below zero propagating through the network.
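A direct NumPy rendering of Equation (42), with an assumed fixed slope a_i = 0.25 (in the actual network, each a_i is learned during training):

```python
import numpy as np

def prelu(w, a=0.25):
    """Equation (42): identity for positive inputs, slope a for w <= 0."""
    return np.where(w > 0, w, a * w)

print(prelu(np.array([-2.0, -0.5, 0.0, 1.5])))  # [-0.5, -0.125, 0.0, 1.5]
```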

3.3.4. Optimization Function

The optimization method applied in this network is adaptive moment estimation (Adam), proposed in [38] as an improvement over stochastic gradient descent (SGD), since it adaptively handles gradients in sparse data. The loss function chosen with Adam was the mean absolute error, as it gives a real estimate of how far the average distance prediction is from the actual data, thus helping the network minimize it.

3.3.5. Parameter Optimization

The optimum parameter choice was based on a grid search over different batch sizes [64, 128, 256, 512] and epoch counts [50, 100, 150, 200, 250] for both sub-models. The parameters most suitable for both sub-models were a batch size of 128 and 100 epochs, but the experiments were run for 200 epochs to ensure the models learned sufficiently.
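A hedged sketch of such a search, written as a plain loop (the paper does not state whether a library routine such as scikit-learn's GridSearchCV or a manual loop was used); build_submodel is the hypothetical constructor from the sketch in Section 3.3.1, and X_train, y_train, X_val, y_val are assumed to be scaled splits of a generated dataset:

```python
import itertools

batch_sizes = [64, 128, 256, 512]
epoch_counts = [50, 100, 150, 200, 250]

best_params, best_mae = None, float("inf")
for batch_size, epochs in itertools.product(batch_sizes, epoch_counts):
    model = build_submodel()                   # hypothetical constructor, see Section 3.3.1
    history = model.fit(X_train, y_train,
                        validation_data=(X_val, y_val),
                        batch_size=batch_size, epochs=epochs, verbose=0)
    val_mae = history.history["val_loss"][-1]  # the loss is MAE (Section 3.3.4)
    if val_mae < best_mae:
        best_params, best_mae = (batch_size, epochs), val_mae

print("best (batch size, epochs):", best_params)  # reported optimum was (128, 100)
```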

4. Results and Discussion

In this section, the performance evaluation of the deep learning architecture compared to selected benchmarks is presented. Furthermore, the performance of the proposed approach was examined in terms of optimized energy efficiency and optimized overall system throughput through MATLAB and Python simulations.

4.1. Deep Learning Model Results Evaluation

In order to evaluate the goodness of fit of the proposed model, 10-fold cross validation is used to compare the average results produced by the proposed model to those of other benchmarks based on the following metrics:
  • Mean Absolute Error (MAE), which measures the average absolute difference between actual and predicted values:

$$MAE = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right| \tag{43}$$

  • Root Mean Squared Error (RMSE), which calculates the square root of the average of the squared differences between actual and predicted values:

$$RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2} \tag{44}$$

where $n$ is the number of records in the test subset, $y_i$ is the actual value, and $\hat{y}_i$ is the predicted value.
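Both metrics are readily computed, e.g., with scikit-learn; the distance values below are illustrative only:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_true = np.array([647.33, 317.20, 240.60])  # illustrative actual distances (m)
y_pred = np.array([646.80, 316.72, 241.00])  # illustrative predicted distances (m)

mae = mean_absolute_error(y_true, y_pred)           # Equation (43)
rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # Equation (44)
print(f"MAE = {mae:.3f} m, RMSE = {rmse:.3f} m")
```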
Multiple grid search experiments were performed to obtain optimal parameters for the models used in the benchmark comparisons. Table 4 and Table 5 show the generated optimal parameters for each benchmark using both the uplink and downlink datasets, respectively. The benchmarks used in our comparisons were the support vector regressor, random forest regressor, Adaboost regressor and multilayer perceptron.
Table 6 shows the average over all folds when comparing the trained uplink model to the support vector regressor, random forest regressor, Adaboost regressor and multilayer perceptron using their optimal parameters. The results show that the random forest regressor tended to produce an overfitted model, as its training error was much lower than its testing error. The proposed model outperformed all the other methods in testing while maintaining a small difference from the training error, thus showing no signs of overfitting. It can be noted that the results produced for the IoTD-D distance tended to have a higher error than that of the CUE-IoTG in all the models. This can be related to the correlation presented in Section 3.2, where the correlation of IoTD-D with the inputs was lower than that of CUE-IoTG, making it harder to predict.
Table 7 shows the same comparison but using the downlink dataset to calculate the distances IoTG-CUE and BS-IoTD that minimize interference. It can be deduced from the results that the proposed model in this comparison resembles the performance of the support vector regressor, random forest regressor, and the multilayer perceptron when applying the optimal parameters previously declared in Table 5.
Another experiment involved splitting the dataset into a two-thirds training and one-third testing scheme to train a single network and further analyze the results. The training data were used to build the model while keeping 20% for validation. Figure 4a,b shows the mean absolute error produced while training and validating both models. Both figures show that after epoch 100 the results hardly changed, thus not requiring any further training. It can be noted from the figures that the models were not overfitted, since the training and validation errors were around the same values for each output independently.

4.2. Analytical Evaluation

In this section, further analytical evaluation of the results obtained when splitting the dataset into two-thirds training and one-third testing is performed. The records used for training the model were not used in the testing phase, to ensure that the analysis of the results was not inflated by data the model had already learned from. The conditions for predicting the optimal required distance for controlling interference were revealed by analyzing the results obtained. The same experimental assumptions as in [14] were considered. The network parameters considered for simulation are listed in Table 1.
Figure 5 depicts the predicted required distance of any IoTD-D link for the uplink data communication and of BS-IoTD for the downlink data communication, for different values of SINRth, using the analytical and deep learning models. For the uplink data communication, it was assumed that all transmitting devices, whether CUEs or IoTDs, always had a maximum transmission power equal to 23 dBm. It can be noticed from Figure 5a that, for both the analytical and deep learning models, the optimum required distance between IoTD-D (dID) to decrease the interference at the destination increased as the distance between CUE-D (dCD) increased. In addition, for dCD to reach its maximum value, SINRth had to decrease gradually, since increasing the transmission distance increases the losses in the communication link; consequently, decreasing the transmission distance increases SINRth. For example, for a communication link with SINRth equal to 0 dB, the transmission distance dCD remained effective until it exceeded 836 m. Additionally, when dCD was 600.5 m, dID had to be greater than or equal to 647.33 m analytically and 646.8 m using the deep learning model. On the other hand, when the required SINRth for any communication link was 20 dB, the maximum transmission distance for effective communication was 261.5 m. It can also be noticed that the required distance between IoTD-D was equal to 317.2 m and 316.72 m, assuming dCD was 99.64 m, using the analytical and deep learning models, respectively.
For the downlink data communication, the transmission powers of the BS and IoTG were 46 dBm and 43 dBm, respectively, which is high compared with the CUE and IoTD transmission power. Therefore, it can be observed from Figure 5b that increasing the required SINRth led to an increase in the required distance between IoTG-CUE to avoid interference. For example, when SINRth was 0 dB and the downlink transmission distance (dBC) was 600.5 m, the required distance between IoTG-CUE (dGC) to avoid interference was 505.52 m numerically and 506.95 m using deep learning, while when SINRth was 5 dB, dGC had to be 677.5 m and 677.8 m numerically and using deep learning, respectively. On the other hand, when SINRth was 20 dB and dBC was 99.5 m, dGC had to be in the range of 265 m to avoid interference.
It is worth mentioning from Figure 5 that, when comparing the uplink and downlink results, the higher transmission power of the BS and IoTG and the shorter distance between IoTD-IoTG led to decreased interference at the destination. On the other hand, decreasing the transmission power led to increasing interference, which required an increase in the distance between any interfering device and the transmission link.
The same result was obtained when the IoT system performance was evaluated through uplink and downlink data communication, as demonstrated in Figure 6, to decrease the interference at the IoTG (uplink) and IoTD (downlink). To increase the system reliability, the distance between any transmitting CUE and the IoTG for the uplink (dCG) and for the downlink (dGC) must be greater than the distance between the IoTD and IoTG for the uplink (dIG) and for the downlink (dGI). For example, as shown in Figure 6a for the uplink, when dIG was equal to 240.2 m, the distance dCG had to be 240.6 m analytically and 241 m using deep learning when the required SINRth was 0 dB, while when dIG was equal to 44 m and SINRth was 20 dB, dCG was 139.17 m analytically and 139.63 m using deep learning. On the other hand, for the downlink, as demonstrated in Figure 6b, when the required SINRth was 0 dB and dGI was equal to 240.2 m, the distance dGC had to be 285.48 m analytically and 286.35 m using deep learning, while when dGI was equal to 44 m, dGC had to be 165.37 m analytically and 165.33 m using deep learning when SINRth was 20 dB. Additionally, it can be noticed that decreasing the distance between any source and destination link decreases the path loss and increases the SINRth. In the proposed model, it is assumed that the transmission distance between CUE-BS (uplink) and BS-CUE (downlink) is greater than the transmission distance between IoTD-IoTG (uplink) and IoTG-IoTD (downlink). It can be concluded from Figure 6 that the interference distance (dGC) during the downlink must be greater than the interference distance (dCG) during the uplink; this is due to the higher transmission power of the IoTG and the shorter distance between IoTG-IoTD, which increase the interference at the CUE. That is why the interference distance during the downlink (dGC) should increase compared with the interference distance during the uplink (dCG).
The predicted required dID and dGC distances for decreasing the interference at any D (uplink) and at the CUE (downlink) were examined again in Figure 7 for the analytical and deep learning models for different transmission distances against SINRth. Different uplink (dCD) and downlink (dBC) transmission distance values were assumed, namely 66, 140 and 260 m, against SINRth varying from 0 to 20 dB, to predict dID and dGC. As can be observed, when SINRth increases, the predicted required uplink dID and downlink dGC must be greater than or equal to the uplink dCD and downlink dBC, respectively, to decrease the interference while satisfying the system requirement in terms of SINRth. For example, as shown in Figure 7a, when dCD was 260 m, the optimum required dID was 260.6 m analytically and 264.24 m using deep learning at SINRth = 0 dB, while at SINRth = 18 dB for the same dCD, the predicted required distance dID was 907.72 m analytically and 903.74 m using deep learning. On the other hand, when dCD was 66 m and SINRth = 4 dB, dID had to be 83.09 m and 85.75 m analytically and using deep learning, respectively. Furthermore, for the downlink data communication shown in Figure 7b, for SINRth = 0 dB and dBC = 260 m, the required distance to avoid interference dGC was 218.77 m and 219.24 m using the analytical and deep learning algorithms, respectively, while for the same transmission distance dGC had to be in the range of 619 m to avoid interference and fulfil the required SINRth of 18 dB. Additionally, when dBC was 66 m, the required distance for avoiding interference had to be in the range of 55.6 m for SINRth = 0 dB and 176 m for SINRth = 20 dB.
Furthermore, Figure 8 demonstrates the predicted required uplink and downlink distances (dCG and dBI) for decreasing the interference at the IoTG (uplink) and IoTD (downlink), respectively. A different scenario was proposed to evaluate the system performance for the uplink and downlink data communication, assuming that dIG and dGI were 104 m, 56 m and 26.4 m for different values of SINRth. As shown in Figure 8a, for the uplink, when the required SINRth increased for a given transmission distance dIG, the required distance dCG increased. For example, when dIG was equal to 104 m and SINRth was equal to 7 dB, the optimum required dCG for the analytical and deep learning models was 155.65 m and 155.54 m, respectively. However, for the same transmission distance, when SINRth was equal to 18 dB, dCG was 294.2 m analytically and 294.46 m using the deep learning model. The same behavior was obtained when the system was evaluated during the downlink, as shown in Figure 8b; for example, when SINRth was equal to 7 dB and dGI was equal to 104 m, the optimum required dBI for the analytical and deep learning models was 185 m, while for the same transmission distance dBI was 348 m and 348.86 m analytically and using the deep learning model, respectively, when SINRth was equal to 18 dB. Figure 7 and Figure 8 show that the required SINRth is an important parameter affecting the predicted required interference distance at any destination. Additionally, using these results, and based on the system requirements and environmental conditions, an adaptive smart system could be engaged to enhance the system performance for both CUE and IoT networks.
As mentioned earlier, since the dataset used in the analysis was split into two-thirds training and one-third testing, the results shown in Figure 9 and Figure 10 are based only on records available in the testing data. As a result, some SINRth values did not exist in the testing records. Figure 9 demonstrates the optimized system throughput of the proposed approach for the uplink and downlink data communication using the analytical and deep learning models for different randomly chosen SINRth values. For a fair performance evaluation, three different scenarios were considered, in which the distance combinations of the uplink distances (dCD and dIG) and the downlink distances (dBC and dGI) were the same: 260 m and 104 m, 140 m and 56 m, and 66 m and 26.4 m, respectively. The chosen distances represent long, intermediate and short distances. As observed for the uplink and downlink data communication, represented in Figure 9a,b, respectively, for the three different scenarios, the optimized system throughput increased as SINRth increased. Additionally, it can be noticed that for the three scenarios, for any SINRth, the optimized system throughput was approximately identical for the analytical and deep learning models. This means that the proposed approach is capable of reaching the maximum system throughput regardless of whether the transmission distance is long, intermediate or short, as the aim of the proposed model is to predict the interference distance between any interfering node and any destination so as to prevent interference and increase system reliability.
The same behavior was obtained when the optimized energy efficiency for the uplink and downlink was examined for the three assumed scenarios in Figure 10. As depicted in Figure 10a (uplink data communication) and Figure 10b (downlink data communication), the optimized energy efficiency always increased with increasing SINRth for the analytical and deep learning models. The proposed model succeeded in keeping the optimized energy efficiency approximately the same for the three assumed transmission distances. Figure 10 correlates with the results obtained in Figure 9. These two results show the effectiveness of the proposed model in predicting the positions of the interfering nodes, as knowing the distance between them and any destination helps prevent the interference, thus increasing the system performance.
As an extra assessment of the proposed model, the distances obtained from both the analytical and deep learning models were input to Equations (19) and (20) to calculate the throughput and energy efficiency for different values of transmission power. The optimized system throughput of the proposed approach was evaluated once again in Figure 11 for four different randomly chosen SINRth values with different transmission powers for the CUE (PC) and IoTD (PI). The assumed values of SINRth were 5, 10, 15 and 20 dB. As depicted in Figure 11, for any SINRth, increasing the transmission power increases the system throughput for the analytical and deep learning models. Since the system is always limited by channel noise, path loss and interference, the transmission power is one of the parameters that can overcome the channel conditions; thus, increasing or decreasing the transmission power must be considered according to the channel conditions and the required system QoS. Furthermore, by comparing the four different SINRth values, it can be found that increasing SINRth increases the overall system throughput, which correlates with the results obtained in Figure 9.
Moreover, the optimized energy efficiency is analyzed in Figure 12 for the same chosen SINRth values as in Figure 9 and with different PC and PI. As shown in Figure 12, each value of SINRth yields a maximum transmission power that leads to the optimum energy efficiency. For example, when SINRth = 0 dB, the maximum transmission power for any sender node to reach the optimum energy efficiency is 2 or 4 dBm, while when SINRth = 5 dB, the maximum power is 4 or 6 dBm. On the other hand, when SINRth is 20 dB, the maximum transmission power for reaching the optimum energy efficiency is 8 dBm. It can be deduced from this figure that increasing the transmission power may decrease the energy efficiency, as the increment in transmission power increases the system cost. Comparing Figure 11 and Figure 12, increasing the transmission power increases the overall system throughput but at the same time may decrease the energy efficiency. Thus, to obtain the maximum system throughput with the highest energy efficiency, the two metrics can be considered jointly to achieve the required system performance.

5. Conclusions

A novel interference avoidance system was proposed for 5G networks and IoT using analytical and deep learning techniques. First, an analytical model was created and simulated using MATLAB to calculate the optimal distance required between IoTD and D; in addition, the model calculated the optimal distance between CUE and IoTG. A deep learning model adopting the 1D-CNN was then proposed. The 1D-CNN was recently introduced and has been proven to have low computational complexity, thus conserving processing and battery power, which makes it very well suited for deployment on devices in real-time applications. Consequently, the deep learning model, on the CUE and IoTD for the uplink and on the BS and IoTG for the downlink, could generate the appropriate interference distance to meet the near-optimal result. This model was assessed with a 10-fold cross-validation data split using data generated from the MATLAB simulations and produced very low mean absolute error and root mean squared error compared to various benchmarks. Next, the analysis of the results of predicting the minimum acceptable interference distance between IoTD-D and CUE-IoTG for the uplink and between BS-IoTD and IoTG-CUE for the downlink, achieving near-optimal throughput and energy efficiency, was demonstrated. Based on the results obtained in terms of system throughput and energy efficiency, it has been shown that the proposed model exhibits the best performance under different environmental conditions. The problem of interference was discussed and solved using the Lagrange optimization technique and deep learning; both techniques were used to predict the optimum interference distance between CUE-IoTG and IoTD-D for the uplink and between IoTG-CUE and BS-IoTD for the downlink. Additionally, based on the analytical and deep learning results, it was shown that the interference distance must be greater than the transmission distance of the CUE-D and IoTD-IoTG links to avoid or decrease the interference at any destination (BS or CUE receiver). It was also shown that, during the downlink data communication, the high BS and IoTG transmission powers lead to a decrease in the required interference distance, as increasing the transmission power helps overcome the interference on any communication link. Furthermore, the effects of SINRth and the transmission power on predicting the maximum required interference distance were investigated. It was shown that increasing SINRth increases the interference distances between CUE-IoTG, IoTD-D, IoTG-CUE and BS-IoTD. Moreover, it was shown that increasing the transmission power increases the overall system performance, and that among different values of transmission power, one can reach the maximum energy efficiency. The obtained results show that the proposed model can achieve the maximum system throughput and energy efficiency with suitable system reliability.

Author Contributions

Conceptualization, R.A.O.; Formal analysis, R.A.O.; Investigation, R.A.O. and S.N.S.; Methodology, R.A.O. and S.N.S.; Resources, R.A.O., S.N.S. and Y.N.M.S.; Software, R.A.O. and S.N.S.; Validation, R.A.O. and S.N.S.; Writing—original draft, R.A.O., S.N.S. and Y.N.M.S.; Writing—review & editing, R.A.O., S.N.S. and Y.N.M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, H.; Xiong, D.; Wang, P.; Liu, Y. A Lightweight XMPP Publish/Subscribe Scheme for Resource-Constrained IoT Devices. IEEE Access 2017, 5, 16393–16405. [Google Scholar] [CrossRef]
  2. Ahmad, I.; Kumar, T.; Liyanage, M.; Okwuibe, J.; Ylianttila, M.; Gurtov, A. Overview of 5G Security Challenges and Solutions. IEEE Commun. Stand. Mag. 2018, 2, 36–43.
  3. Wazid, M.; Das, A.K.; Shetty, S.; Gope, P.; Rodrigues, J.J.P.C. Security in 5G-Enabled Internet of Things Communication: Issues, Challenges, and Future Research Roadmap. IEEE Access 2021, 9, 4466–4489.
  4. Luong, N.C.; Hoang, D.T.; Wang, P.; Niyato, D.; Kim, D.I.; Han, Z. Data Collection and Wireless Communication in Internet of Things (IoT) Using Economic Analysis and Pricing Models: A Survey. IEEE Commun. Surv. Tutor. 2016, 18, 2546–2590.
  5. Chettri, L.; Bera, R. A Comprehensive Survey on Internet of Things (IoT) Toward 5G Wireless Systems. IEEE Internet Things J. 2020, 7, 16–32.
  6. Alhajri, M.I.; Ali, N.T.; Shubair, R.M. Classification of Indoor Environments for IoT Applications: A Machine Learning Approach. IEEE Antennas Wirel. Propag. Lett. 2018, 17, 2164–2168.
  7. Abbasi, M.; Shahraki, A.; Taherkordi, A. Deep Learning for Network Traffic Monitoring and Analysis (NTMA): A Survey. Comput. Commun. 2021, 170, 19–41.
  8. Hansen, E.B.; Bøgh, S. Artificial intelligence and internet of things in small and medium-sized enterprises: A survey. J. Manuf. Syst. 2021, 58, 362–372.
  9. Hattab, G.; Cabric, D. Energy-Efficient Massive IoT Shared Spectrum Access Over UAV-Enabled Cellular Networks. IEEE Trans. Commun. 2020, 68, 5633–5648.
  10. Seo, J.-B.; Jung, B.C.; Jin, H. Online Backoff Control for NOMA-Enabled Random Access Procedure for Cellular Networks. IEEE Wirel. Commun. Lett. 2021, 10, 1158–1162.
  11. Liu, Y.; Liu, K.; Han, J.; Zhu, L.; Xiao, Z.; Xia, X.-G. Resource Allocation and 3-D Placement for UAV-Enabled Energy-Efficient IoT Communications. IEEE Internet Things J. 2021, 8, 1322–1333.
  12. Kolawole, O.Y.; Biswas, S.; Singh, K.; Ratnarajah, T. Transceiver Design for Energy-Efficiency Maximization in mmWave MIMO IoT Networks. IEEE Trans. Green Commun. Netw. 2019, 4, 109–123.
  13. Khodakhah, F.; Mahmood, A.; Österberg, P.; Gidlund, M. Multiple Access-Enabled Relaying with Piece-Wise and Forward NOMA: Rate Optimization under Reliability Constraints. Sensors 2021, 21, 4783.
  14. Osman, R.A.; Zaki, A.I. Energy-Efficient and Reliable Internet of Things for 5G: A Framework for Interference Control. Electronics 2020, 9, 2165.
  15. Budhiraja, I.; Kumar, N.; Tyagi, S. Deep-Reinforcement-Learning-Based Proportional Fair Scheduling Control Scheme for Underlay D2D Communication. IEEE Internet Things J. 2021, 8, 3143–3156.
  16. Sakib, S.; Tazrin, T.; Fouda, M.M.; Fadlullah, Z.M.; Nasser, N. An Efficient and Lightweight Predictive Channel Assignment Scheme for Multiband B5G-Enabled Massive IoT: A Deep Learning Approach. IEEE Internet Things J. 2021, 8, 5285–5297.
  17. Xiao, L.; Zhang, H.; Xiao, Y.; Wan, X.; Liu, S.; Wang, L.-C.; Poor, H.V. Reinforcement Learning-Based Downlink Interference Control for Ultra-Dense Small Cells. IEEE Trans. Wirel. Commun. 2019, 19, 423–434.
  18. Gu, B.; Zhang, X.; Lin, Z.; Alazab, M. Deep Multiagent Reinforcement-Learning-Based Resource Allocation for Internet of Controllable Things. IEEE Internet Things J. 2021, 8, 3066–3074.
  19. Kim, J.; Park, J.; Noh, J.; Cho, S. Autonomous Power Allocation Based on Distributed Deep Learning for Device-to-Device Communication Underlaying Cellular Network. IEEE Access 2020, 8, 107853–107864.
  20. Xiao, Y.; Niu, G.; Xiao, L.; Ding, Y.; Liu, S.; Fan, Y. Reinforcement learning based energy-efficient internet-of-things video transmission. Intell. Converg. Netw. 2020, 1, 258–270.
  21. Azari, A.; Masoudi, M. Interference management for coexisting Internet of Things networks over unlicensed spectrum. Ad Hoc Netw. 2021, 120, 102539.
  22. Babich, F.; Buttazzoni, G.; Vatta, F.; Comisso, M. Energy-Constrained Design of Joint NOMA-Diversity Schemes with Imperfect Interference Cancellation. Sensors 2021, 21, 4194.
  23. Alexandre, L.C.; De Souza Filho, A.L.; Sodré, A.C. Indoor Coexistence Analysis Among 5G New Radio, LTE-A and NB-IoT in the 700 MHz Band. IEEE Access 2020, 8, 135000–135010.
  24. Li, Y.; Liang, Y.; Liu, Q.; Wang, H. Resources Allocation in Multicell D2D Communications for Internet of Things. IEEE Internet Things J. 2018, 5, 4100–4108.
  25. Fu, S.; Su, Z.; Jia, Y.; Zhou, H.; Jin, Y.; Ren, J.; Wu, B.; Huq, K.M.S. Interference Cooperation via Distributed Game in 5G Networks. IEEE Internet Things J. 2017, 6, 311–320.
  26. Siddiqui, M.U.A.; Qamar, F.; Ahmed, F.; Nguyen, Q.N.; Hassan, R. Interference Management in 5G and Beyond Network: Requirements, Challenges and Future Directions. IEEE Access 2021, 9, 68932–68965.
  27. Zhang, Q.; Zhang, L.; Liang, Y.-C.; Kam, P.-Y. Backscatter-NOMA: A Symbiotic System of Cellular and Internet-of-Things Networks. IEEE Access 2019, 7, 20000–20013.
  28. Abrardo, A.; Moretti, M. Distributed Power Allocation for D2D Communications Underlaying/Overlaying OFDMA Cellular Networks. IEEE Trans. Wirel. Commun. 2017, 16, 1466–1479.
  29. Fan, B.; Tian, H.; Jiang, L.; Vasilakos, A.V. A Social-Aware Virtual MAC Protocol for Energy-Efficient D2D Communications Underlying Heterogeneous Cellular Networks. IEEE Trans. Veh. Technol. 2018, 67, 8372–8385.
  30. Elhalawany, B.M.; Ruby, R.; Wu, K. D2D Communication for Enabling Internet-of-Things: Outage Probability Analysis. IEEE Trans. Veh. Technol. 2019, 68, 2332–2345.
  31. Huang, Y.; Liu, M.; Liu, Y. Energy-Efficient SWIPT in IoT Distributed Antenna Systems. IEEE Internet Things J. 2018, 5, 2646–2656.
  32. Chae, S.H.; Jeon, S.-W.; Jeong, C. Efficient Resource Allocation for IoT Cellular Networks in the Presence of Inter-Band Interference. IEEE Trans. Commun. 2019, 67, 4299–4308.
  33. Liu, C.-H.; Shen, Y.-H.; Lee, C.-H. Energy-Efficient Activation and Uplink Transmission for Cellular IoT. IEEE Internet Things J. 2020, 7, 906–921.
  34. Staniec, K.; Kucharzak, M.; Jóskiewicz, Z.; Chowański, B. Measurement-Based Investigations of the NB-IoT Downlink Performance in Fading Channels. IEEE Wirel. Commun. Lett. 2021, 10, 1780–1784.
  35. Kiranyaz, S.; Ince, T.; Hamila, R.; Gabbouj, M. Convolutional Neural Networks for patient-specific ECG classification. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 2608–2611.
  36. Kiranyaz, S.; Avci, O.; Abdeljaber, O.; Ince, T.; Gabbouj, M.; Inman, D.J. 1D convolutional neural networks and applications: A survey. Mech. Syst. Signal Process. 2021, 151, 107398.
  37. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In Proceedings of the International Conference on Computer Vision (ICCV), Las Condes, Chile, 11–18 December 2015; pp. 1026–1034.
  38. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
Figure 1. Proposed network schematic. (a) Uplink data communication; (b) downlink data communication.
Figure 2. Spearman correlation of all features generated for (a) the uplink dataset and (b) the downlink dataset.
Figure 3. Proposed deep learning network, used by each device independently to calculate the optimal output distances.
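For concreteness, the following is a minimal sketch of a per-device distance-prediction network of the kind depicted in Figure 3, assuming TensorFlow/Keras. The layer widths, kernel size, and three-feature input are illustrative assumptions rather than the paper's exact architecture; the He initialization [37], Adam optimizer [38], and MAE loss follow the cited components and the training curves of Figure 4.

```python
# Illustrative sketch only: layer sizes and input width are assumptions.
from tensorflow.keras import layers, models

def build_distance_model(n_features=3):
    # Each sample is a short 1-D feature vector (e.g., SINRth plus the two
    # known link distances), shaped (steps, channels) for Conv1D [35,36].
    inputs = layers.Input(shape=(n_features, 1))
    x = layers.Conv1D(32, kernel_size=2, activation='relu',
                      kernel_initializer='he_normal')(inputs)  # He init [37]
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation='relu',
                     kernel_initializer='he_normal')(x)
    # Two regression outputs: O-Dist1 and O-Dist2.
    outputs = layers.Dense(2)(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer='adam', loss='mae')  # Adam [38]; MAE as in Figure 4
    return model
```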
Figure 4. Mean absolute error generated by the training and validation data when calculating the output distances O-Dist1 and O-Dist2 for: (a) uplink communication dataset and (b) downlink communication dataset.
Figure 5. (a) Distance between CUE-D versus predicted distance between IoTD-D (uplink); (b) distance between BS-CUE versus predicted distance between IoTG-CUE (downlink).
Figure 6. (a) Distance between IoTD-IoTG versus predicted distance between CUE-IoTG (uplink); (b) distance between IoTG-IoTD versus predicted distance between BS-IoTD (downlink).
Figure 7. (a) Signal-to-interference-plus-noise ratio threshold (SINRth) versus predicted distance between IoTD-D (uplink); (b) SINRth versus predicted distance between IoTG-CUE (downlink).
Figure 8. (a) Signal-to-interference-plus-noise ratio threshold (SINRth) versus predicted distance between CUE-IoTG (uplink); (b) SINRth versus predicted distance between BS-IoTD (downlink).
Figure 9. Signal-to-interference-plus-noise ratio threshold (SINRth) versus optimized system throughput: (a) uplink; (b) downlink.
Figure 10. Signal-to-interference-plus-noise ratio threshold (SINRth) versus optimized energy efficiency: (a) uplink; (b) downlink.
Figure 11. Transmission powers Pc and PI versus optimized system throughput.
Figure 12. Transmission powers Pc and PI versus optimized energy efficiency.
Table 1. Simulation Parameters.

Parameter   Value
No          −174 dBm [31]
B           10 MHz
SINRth      20 dB [32]
Pc          23 dBm [32]
PI          23 dBm [32]
PB          46 dBm [9,25]
PG          43 dBm [33,34]
α           4
γo          10^−1 [32]
fc          2 GHz
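As a worked example of how the entries in Table 1 combine, the sketch below converts the dB-scale parameters to linear units and evaluates a Shannon-style throughput at the SINR threshold. The d^−α received-power expression is a simplifying assumption for illustration, not necessarily the paper's exact channel model.

```python
import math

# Parameters from Table 1.
No_dBm = -174        # noise power spectral density, dBm/Hz [31]
B = 10e6             # bandwidth, Hz
SINR_th_dB = 20      # SINR threshold [32]
Pc_dBm = 23          # CUE transmit power [32]
alpha = 4            # path-loss exponent

def dbm_to_watt(dbm):
    return 10 ** (dbm / 10) / 1000

# Total noise power over the band: No + 10*log10(B) in dB terms.
noise_dBm = No_dBm + 10 * math.log10(B)   # = -104 dBm for B = 10 MHz
noise_W = dbm_to_watt(noise_dBm)

# Illustrative received power under a simple d^-alpha path-loss model
# (assumption for this sketch only).
def received_power_W(pt_dbm, d):
    return dbm_to_watt(pt_dbm) * d ** (-alpha)

# Shannon-style throughput for a link operating exactly at the threshold.
sinr_lin = 10 ** (SINR_th_dB / 10)
throughput_bps = B * math.log2(1 + sinr_lin)
print(f"noise: {noise_dBm:.1f} dBm, throughput: {throughput_bps / 1e6:.1f} Mbit/s")
```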
Table 2. Statistical description of features in the generated uplink dataset.

                     SINRth    CUE-D     IoTD-IoTG   IoTD-D    CUE-IoTG
Number of records    21,055    21,055    21,055      21,055    21,055
Minimum              0.00      1.00      0.40        1.00      0.40
Maximum              20.00     840.00    336.00      4644.00   338.65
Mean                 7.94      281.23    112.49      501.97    168.77
Standard deviation   5.84      190.82    76.33       395.39    97.39
Table 3. Statistical description of features in the generated downlink dataset.

                     SINRth    BS-CUE    IoTG-IoTD   IoTG-CUE   BS-IoTD
Number of records    21,055    21,055    21,055      21,055     21,055
Minimum              0         1         0.4         0.84       0.48
Maximum              20        840       336         709        400
Mean                 7.94      281.23    112.49      354.39     200.15
Standard deviation   5.84      190.82    76.33       204.09     115.22
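The rows of Tables 2 and 3 correspond to the standard summary statistics produced by pandas; a minimal sketch, assuming the generated samples are stored in a CSV file (the file name is hypothetical):

```python
import pandas as pd

# Hypothetical file; columns named as in Table 2 (uplink dataset):
# SINRth, CUE-D, IoTD-IoTG, IoTD-D, CUE-IoTG.
df = pd.read_csv('uplink_dataset.csv')

# count, min, max, mean, and std map onto the five rows of Table 2.
print(df.describe().loc[['count', 'min', 'max', 'mean', 'std']])
```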
Table 4. Optimal parameters generated for the benchmarks used in the uplink model evaluation.

Support vector regressor
  IoTD-D:   kernel = 'rbf', C = 220, gamma = 40
  CUE-IoTG: kernel = 'rbf', C = 200, gamma = 50
Random forest regressor
  IoTD-D:   max_depth = 100, max_features = 3, min_samples_leaf = 3, min_samples_split = 8, n_estimators = 1000
  CUE-IoTG: max_depth = 90, max_features = 3, min_samples_leaf = 3, min_samples_split = 8, n_estimators = 1000
Adaboost regressor
  IoTD-D:   learning_rate = 0.01, loss = 'linear', n_estimators = 150
  CUE-IoTG: learning_rate = 1, loss = 'linear', n_estimators = 150
Multilayer perceptron
  IoTD-D:   activation = 'tanh', alpha = 0.05, solver = 'sgd', hidden_layer_sizes = (300,), learning_rate = 'adaptive'
  CUE-IoTG: activation = 'tanh', alpha = 0.05, solver = 'sgd', hidden_layer_sizes = (300,), learning_rate = 'adaptive'
Table 5. Optimal parameters generated for the benchmarks used in the downlink model evaluation.

Support vector regressor
  IoTG-CUE: kernel = 'rbf', C = 220, gamma = 40
  BS-IoTD:  kernel = 'rbf', C = 200, gamma = 50
Random forest regressor
  IoTG-CUE: max_depth = 100, max_features = 3, min_samples_leaf = 3, min_samples_split = 8, n_estimators = 1000
  BS-IoTD:  max_depth = 90, max_features = 3, min_samples_leaf = 3, min_samples_split = 8, n_estimators = 1000
Adaboost regressor
  IoTG-CUE: learning_rate = 0.1, loss = 'square', n_estimators = 100
  BS-IoTD:  learning_rate = 1, loss = 'linear', n_estimators = 100
Multilayer perceptron
  IoTG-CUE: activation = 'tanh', alpha = 0.05, solver = 'sgd', hidden_layer_sizes = (100,), learning_rate = 'adaptive'
  BS-IoTD:  activation = 'tanh', alpha = 0.05, solver = 'sgd', hidden_layer_sizes = (100,), learning_rate = 'adaptive'
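The parameter names in Tables 4 and 5 map one-to-one onto scikit-learn constructor arguments. Below is a minimal sketch instantiating the benchmarks with the uplink IoTD-D settings of Table 4, assuming scikit-learn; the dictionary keys are illustrative.

```python
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor, AdaBoostRegressor
from sklearn.neural_network import MLPRegressor

# Uplink IoTD-D settings from Table 4; the other columns follow analogously.
benchmarks = {
    'svr': SVR(kernel='rbf', C=220, gamma=40),
    'random_forest': RandomForestRegressor(
        max_depth=100, max_features=3, min_samples_leaf=3,
        min_samples_split=8, n_estimators=1000),
    'adaboost': AdaBoostRegressor(learning_rate=0.01, loss='linear',
                                  n_estimators=150),
    'mlp': MLPRegressor(activation='tanh', alpha=0.05, solver='sgd',
                        hidden_layer_sizes=(300,), learning_rate='adaptive'),
}
```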
Table 6. Average result of the 10-fold cross-validation method comparing the proposed uplink model versus various benchmarks including the support vector regressor, random forest regressor, Adaboost regressor, and multilayer perceptron.

                          IoTD-D                          CUE-IoTG
                          MAE             RMSE            MAE             RMSE
Benchmarks                Train   Test    Train   Test    Train   Test    Train   Test
Support vector regressor  12.83   15.14   96.29   94.28   0.07    0.75    0.07    1.14
Random forest regressor   2.52    11.63   35.32   64.84   0.11    0.83    0.18    1.16
Adaboost regressor        128.06  129.21  215.24  216.70  18.13   18.36   21.69   21.90
Multilayer perceptron     21.86   24.64   77.00   80.97   0.16    0.78    0.26    1.16
Proposed model            9.59    9.84    66.09   63.43   0.77    0.77    1.01    1.06
Table 7. Average result of the 10-fold cross-validation method comparing the proposed downlink model versus various benchmarks including the support vector regressor, random forest regressor, Adaboost regressor, and multilayer perceptron.

                          IoTG-CUE                        BS-IoTD
                          MAE             RMSE            MAE             RMSE
Benchmarks                Train   Test    Train   Test    Train   Test    Train   Test
Support vector regressor  0.17    1.56    0.24    2.37    0.14    0.89    0.20    1.34
Random forest regressor   0.26    1.74    0.39    2.43    0.16    0.98    0.24    1.38
Adaboost regressor        40.39   40.75   49.66   50.16   21.36   21.69   25.52   25.83
Multilayer perceptron     0.59    1.73    0.84    2.50    0.29    0.93    0.42    1.38
Proposed model            1.64    1.47    2.16    2.06    0.94    0.89    1.25    1.24
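For reference, the following is a minimal sketch of the 10-fold evaluation behind Tables 6 and 7, assuming scikit-learn; `model`, `X`, and `y` are placeholders for any of the regressors above and the generated datasets (as NumPy arrays).

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import mean_absolute_error, mean_squared_error

def cross_validate(model, X, y, n_splits=10):
    maes, rmses = [], []
    for train_idx, test_idx in KFold(n_splits=n_splits, shuffle=True).split(X):
        model.fit(X[train_idx], y[train_idx])
        pred = model.predict(X[test_idx])
        maes.append(mean_absolute_error(y[test_idx], pred))
        rmses.append(np.sqrt(mean_squared_error(y[test_idx], pred)))
    # Tables 6 and 7 report the averages across the 10 folds.
    return np.mean(maes), np.mean(rmses)
```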