Article

Spectrum Sensing Algorithm Based on Self-Supervised Contrast Learning

1 School of Electronic and Information, Hangzhou Dianzi University, Hangzhou 310020, China
2 State Key Lab of Information Control Technology in Communication System of No. 36, Jiaxing 314000, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(6), 1317; https://doi.org/10.3390/electronics12061317
Submission received: 19 February 2023 / Revised: 7 March 2023 / Accepted: 8 March 2023 / Published: 9 March 2023
(This article belongs to the Special Issue Applications of AI in Wireless Communication)

Abstract

Traditional deep-learning-based spectrum sensing algorithms require a large number of labeled samples for model training, which are difficult to obtain in actual sensing scenarios. To solve this problem, this paper applies self-supervised contrast learning and proposes a spectrum sensing algorithm based on self-supervised contrast learning (SSCL). The algorithm consists of two stages: pre-training and fine-tuning. In the pre-training stage, data augmentation methods designed according to the characteristics of communication signals are used to obtain positive sample pairs, and the features of these pairs of unlabeled samples are extracted by self-supervised contrast learning to obtain a feature extractor. In the fine-tuning stage, the parameters of the feature extraction layer are frozen, a small number of labeled samples are used to update the parameters of the classification layer, and the learned features are associated with the labels to obtain the spectrum sensing classifier. The simulation results demonstrate that the SSCL algorithm has better detection performance than the semi-supervised algorithm and the traditional energy detection algorithm. When the number of labeled samples is only 10% of that used by the supervised algorithm and the SNR is higher than −12 dB, the detection probability of the SSCL algorithm exceeds 97%, which is only slightly lower than that of the supervised algorithm.

1. Introduction

With the popularity of wireless communication networks and the proliferation of wireless communication services, the demand for spectrum resources is increasing rapidly. Traditional "static spectrum allocation" [1], in which spectrum resources are allocated to a primary user (PU) according to certain requirements so that only the PU can use the band and secondary users (SU) cannot access it, leads to a serious waste of spectrum resources and can no longer meet the demand for spectrum. Cognitive radio (CR) [2,3,4] allows an SU to use spectrum holes that are not currently occupied by the PU, so that spectrum utilization is improved. Spectrum sensing (SS) [5] is a key technology of CR: it finds spectrum holes for the SU to use and simultaneously monitors PU activity to ensure that the SU does not affect a PU that is using an authorized channel. Therefore, realizing efficient spectrum detection is of great significance for alleviating the scarcity of spectrum resources.

Traditional spectrum sensing methods mainly include the energy detection method [6,7], the matched filter detection method [8], the cyclic feature detection method [9], and the blind detection method based on random matrix theory [10]. Among them, the energy detection algorithm is one of the classic spectrum sensing algorithms due to its low complexity and good detection performance. However, the detection performance of these algorithms is greatly affected by manually designed detection statistics and thresholds. Spectrum sensing algorithms based on deep learning (DL) [11] automatically extract signal features, and their detection performance is better than that of traditional methods. However, existing DL-based spectrum sensing algorithms require a large number of labeled samples, which limits their practical application. In practice, it is easy to obtain a large number of unlabeled samples, but labeling samples is time-consuming and laborious. Therefore, it is particularly important to realize spectrum sensing with only a small number of labeled samples.

The self-supervised contrastive learning framework BYOL pre-trains the model with sample pairs obtained by data augmentation of unlabeled samples, which is an effective way to pre-train models. According to the characteristics of communication signals and the spectrum sensing task, we study six data augmentation methods to obtain positive sample pairs. A feature extraction network is designed, and the feature extractor is trained with the positive sample pairs by self-supervised contrastive learning. Finally, a spectrum sensing classifier is obtained by using a few labeled samples to fine-tune the classification layer. In summary, the contributions of this paper are as follows:
  • A spectrum sensing algorithm based on self-supervised contrast learning (SSCL) is proposed. A residual network is designed as the backbone network in the BYOL framework; a large number of unlabeled samples are used to pre-train the backbone network by self-supervised contrastive learning, and only a small number of labeled samples are used to fine-tune the linear layer. Compared with existing supervised spectrum sensing algorithms, the proposed algorithm greatly reduces the dependence of model training on labeled samples.
  • In order to improve the spectrum sensing performance and ensure the effect of model pre-training, six data augmentation methods are designed according to the characteristics of communication signals by adding complex white Gaussian noise, frequency offset or Rayleigh fading, and clipping. The simulation results show that the most effective data augmentation method is the combination of adding noise and symmetric clipping.
  • The performance of the algorithm was evaluated by a large number of simulation experiments. The experimental results show that the performance of the proposed algorithm is better than that of the existing semi-supervised and energy detection algorithms. When only 10% of the labeled samples of the pre-training dataset are used to fine-tune the linear layer and the SNR is higher than −10 dB, the detection probability of the proposed algorithm can reach 100%, and its detection performance is close to that of existing supervised learning algorithms.

2. Related Work

As shown in Table 1, many scholars have used deep learning to automatically extract signal features for spectrum sensing in recent years. A convolutional neural network (CNN) is applied to spectrum sensing in [12,13,14]. In [12], the sample covariance matrices of each frequency band were concatenated as the input of a CNN; this method does not require model assumptions and improves spectrum sensing performance by learning hidden correlation features between sub-bands. In [13], the PU activity pattern was used for spectrum detection: in the offline training stage, the covariance matrices of the sensing data of the current and historical frames, together with the labeled PU state data, were used to train the CNN parameters; in the online recognition stage, the trained CNN performed real-time detection based on the current and historical sensing data. In [14], signal spectrograms and a CNN were used to detect the existence of PU signals. Although CNN-based spectrum sensing algorithms achieve better detection than traditional algorithms, the sample features extracted by shallow CNN models are limited, which confines the improvement of the detection performance. Reference [15] used a CNN and a long short-term memory network (LSTM) to extract local features and temporal features, respectively, and performed feature fusion; the resulting spectrum sensing performance is better than that using only a CNN. In [16,17], the feature extraction ability of the model is enhanced by auxiliary residual blocks to improve spectrum sensing performance. Reference [16] used a dense network with shortcuts added at both ends and grayscale maps of the covariance matrix of the received signals as the network input; the detection performance is better than that of spectrum sensing algorithms based on CNNs and support vector machines (SVM). In [17], the normalized signal power spectrum was used as the input of a residual network, and eight kinds of modulated signals plus noise were used to train the network; the generalization ability of the model is good, and when the SNR is −10 dB, the detection probability for signals outside the eight training signals can reach more than 90%. The Convolutional Block Attention Module (CBAM) was used as the feature extraction network in [18], and the sensing data of historical time slots and the current time slot were jointly used to identify the spectrum state of the current time slot; the method is robust to noise power uncertainty, and the detection probability can reach 90% when the SNR is −12 dB. Compared with the supervised CNN and LSTM algorithms, the CBAM algorithm achieves better detection performance.
The above spectrum sensing algorithms are all based on supervised learning, which requires a large number of labeled samples to train the model in order to achieve better detection performance than the traditional algorithms. Although it is not difficult to obtain a large number of unlabeled samples, labeling samples is time-consuming and laborious, and the over-reliance on labeled samples limits the application of deep learning in the field of spectrum sensing. To address the difficulty of obtaining labeled samples in real scenes and to weaken the dependence of model training on labeled samples, semi-supervised learning and a small number of labeled samples were used to design the SSDNN (Semi-Supervised Deep Neural Network) algorithm in [19]. First, features of a small number of labeled samples were extracted; then, a large number of unlabeled samples were used for self-training, and those with high confidence were given pseudo-labels to expand the labeled sample set; finally, the extended sample set was used to retrain the network to obtain a classifier for spectrum sensing. Different from traditional supervised algorithms, the SSDNN algorithm reduces the dependence of model training on labeled samples and achieves detection performance close to that of the supervised algorithm. However, its complexity is high, and the whole network needs to be trained several times. An unsupervised deep spectrum sensing algorithm was proposed in [20], which used a variational auto-encoding Gaussian mixture model to cluster signal features. However, the high detection probability of this method depends on a large number of antennas and strong correlation between the received signals of the antennas, so its cost is large.
Self-supervised contrastive learning [21] optimizes the network through a loss function based on the feature differences between positive and negative samples; it can fully exploit the features of unlabeled data, so model training does not depend on sample labels. It has been widely used in the image [22] and audio [23] fields; especially in the image field, its performance approaches that of supervised learning. Positive and negative samples are obtained by data augmentation: the two samples obtained by augmenting the same sample are called a positive sample pair, and the two samples obtained by augmenting different samples are called a negative sample pair. Different algorithms use different sample pairs to train the network. Self-supervised contrastive learning frameworks such as MoCo [24], SimCLR [25], and BYOL [26] have been proposed successively since 2020. MoCo and SimCLR use both positive and negative sample pairs to train the network [27], which places higher requirements on hardware and incurs higher computational complexity; BYOL uses only positive sample pairs [26] and, to avoid model collapse, adopts different upper and lower branch networks and different parameter update methods for the two branches. In order to reduce the dependence of model training on labeled samples, make full use of unlabeled samples for feature extraction, and keep the complexity and cost of the algorithm as low as possible, a spectrum sensing algorithm based on self-supervised contrastive learning (SSCL) is proposed. It makes full use of a large number of unlabeled samples and self-supervised contrastive learning to extract the sample features needed for the spectrum sensing task, and the spectrum sensing classifier is then obtained by fine-tuning the classification layer with a very small number of labeled samples.

3. Proposed Algorithm

3.1. Problem Description

In the single-node spectrum sensing environment, the binary hypothesis model in Equation (1) can be used to determine whether the primary user exists:
$$y(n) = \begin{cases} u(n), & H_0 \\ h(n)s(n) + u(n), & H_1 \end{cases} \qquad n = 0, 1, 2, \ldots, N-1. \tag{1}$$
where $y(n)$ is the complex signal received by the SU; $u(n)$ is independent and identically distributed Gaussian white noise following $N(0, \sigma_u^2)$; $s(n)$ is the transmitted signal of the PU and is independent of $u(n)$; $h(n)$ represents the channel gain between the PU and the SU; $H_0$ represents the hypothesis that no PU is present in the frequency band; $H_1$ represents the hypothesis that the frequency band is occupied by the PU; and $N$ is the signal length. The detection probability $P_d$ and the false alarm probability $P_f$ in Equation (2) are the two indexes used to measure the spectrum sensing performance of the algorithm:
$$P_f = P(H_1 \mid H_0), \qquad P_d = P(H_1 \mid H_1). \tag{2}$$
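As a concrete illustration of the signal model and the two performance indexes, the following Python sketch generates received samples under $H_0$ and $H_1$ and estimates $P_d$ and $P_f$ by Monte Carlo for an arbitrary detector function. The QPSK PU waveform, unit-power noise, and flat channel ($h(n)=1$) are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def received_signal(N=1024, snr_db=-10, pu_active=True):
    """Generate one received sample y(n) under H0 (noise only) or H1 (PU signal + noise)."""
    noise = (np.random.randn(N) + 1j * np.random.randn(N)) / np.sqrt(2)   # unit-power CSCG noise u(n)
    if not pu_active:
        return noise                                                       # H0
    symbols = np.random.choice([1+1j, 1-1j, -1+1j, -1-1j], N) / np.sqrt(2)  # illustrative QPSK PU signal s(n)
    amplitude = 10 ** (snr_db / 20)                                         # scale the signal to the target SNR
    return amplitude * symbols + noise                                      # H1

def empirical_pd_pf(detector, n_trials=1000, **kwargs):
    """Estimate P_d = P(decide H1 | H1) and P_f = P(decide H1 | H0) by Monte Carlo."""
    pd = np.mean([detector(received_signal(pu_active=True, **kwargs)) for _ in range(n_trials)])
    pf = np.mean([detector(received_signal(pu_active=False, **kwargs)) for _ in range(n_trials)])
    return pd, pf
```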

3.2. Algorithm Design

The block diagram of the SSCL algorithm based on the BYOL framework is shown in Figure 1. The SSCL algorithm mainly includes a pre-training stage and a fine-tuning stage. In the pre-training stage, positive sample pairs are obtained by data augmentation of unlabeled samples, and the positive sample pairs are then used to pre-train the BYOL framework networks to obtain the feature extraction network $f_\theta$. Then, a small number of labeled samples $Y_L$ are used to fine-tune and update the parameters $\tau$ of the linear layer $f_\tau$, and the spectrum sensing classifier $G(w,b)^*$ is obtained. Finally, the test dataset is input to realize spectrum sensing.

3.2.1. Data Augmentation

Data augmentation is an important means for contrastive learning to obtain positive and negative samples, and different data augmentation methods should be used for different signals to obtain better contrastive learning results. Orthogonal demodulation of communication signals is commonly used to obtain the in-phase branch signal $I(n)$ and the quadrature branch signal $Q(n)$; therefore, the zero-IF communication signal $y(n)$ is obtained as shown in Equation (3), assuming that $N$ is even.
$$y(n) = I(n) + jQ(n), \qquad n = 0, 1, 2, \ldots, N-1. \tag{3}$$
Combining the characteristics of communication signals and considering the final spectrum sensing task, Equations (4)–(10) can be used for data augmentation. By adding complex white Gaussian noise $c(n)$ to $y(n)$, we obtain $y_c(n) = y(n) + c(n)$. The frequency offset processing of Equation (4) can then be applied, where $\Delta f_\omega$ represents the frequency offset, $f_s$ represents the sampling frequency, and their ratio is set to 0.1. The effect of the Rayleigh fading channel $b(n)$ is shown in Equation (5). The real parts $r_{0,n}$ and imaginary parts $r_{1,n}$ are obtained after normalization, and the real and imaginary parts are arranged into the two-dimensional matrix $R$ in Equation (6). Two different clipping methods, given by Equations (7) and (8) and by Equations (9) and (10), can be used to obtain the positive sample pair $v_0$ and $v_1$ from the data matrix $R$. The six designed data augmentation methods are shown in Figure 2.
$$y_{\mathrm{offset}}(n) = y_c(n)\,\exp\!\left(j\,2\pi\,\frac{\Delta f_\omega}{f_s}\,n\right). \tag{4}$$

$$y_{\mathrm{rayle}}(n) = y_c(n) * b(n). \tag{5}$$

$$R = \begin{bmatrix} r_{0,0} & r_{0,1} & \cdots & r_{0,N-1} \\ r_{1,0} & r_{1,1} & \cdots & r_{1,N-1} \end{bmatrix}. \tag{6}$$

$$v_0 = \begin{bmatrix} r_{0,0} & r_{0,1} & \cdots & r_{0,\frac{N}{2}-1} \\ r_{1,0} & r_{1,1} & \cdots & r_{1,\frac{N}{2}-1} \end{bmatrix}. \tag{7}$$

$$v_1 = \begin{bmatrix} r_{0,\frac{N}{2}} & r_{0,\frac{N}{2}+1} & \cdots & r_{0,N-1} \\ r_{1,\frac{N}{2}} & r_{1,\frac{N}{2}+1} & \cdots & r_{1,N-1} \end{bmatrix}. \tag{8}$$

$$v_0 = \begin{bmatrix} r_{0,0} & r_{0,1} & \cdots & r_{0,\frac{N}{2}-1} \\ r_{0,\frac{N}{2}} & r_{0,\frac{N}{2}+1} & \cdots & r_{0,N-1} \end{bmatrix}. \tag{9}$$

$$v_1 = \begin{bmatrix} r_{1,0} & r_{1,1} & \cdots & r_{1,\frac{N}{2}-1} \\ r_{1,\frac{N}{2}} & r_{1,\frac{N}{2}+1} & \cdots & r_{1,N-1} \end{bmatrix}. \tag{10}$$
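To make the augmentation pipeline concrete, the following Python sketch implements the operations of Equations (4)–(10) for one received sample. The noise level, the normalization by the maximum absolute value, and the helper names are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def add_noise(y, sigma=0.1):
    """Add complex white Gaussian noise c(n): y_c(n) = y(n) + c(n)."""
    c = sigma * (np.random.randn(len(y)) + 1j * np.random.randn(len(y))) / np.sqrt(2)
    return y + c

def frequency_offset(y_c, ratio=0.1):
    """Eq. (4): multiply by exp(j*2*pi*(df/fs)*n) with df/fs = 0.1."""
    n = np.arange(len(y_c))
    return y_c * np.exp(1j * 2 * np.pi * ratio * n)

def to_matrix(y):
    """Normalize and stack the real and imaginary parts into the 2 x N matrix R of Eq. (6)."""
    r = np.stack([y.real, y.imag])
    return r / np.max(np.abs(r))

def symmetric_clip(R):
    """Eqs. (7)-(8): split R column-wise into two 2 x N/2 positive samples."""
    half = R.shape[1] // 2
    return R[:, :half], R[:, half:]

def row_clip(R):
    """Eqs. (9)-(10): build one 2 x N/2 sample from the real row and one from the imaginary row."""
    half = R.shape[1] // 2
    v0 = np.stack([R[0, :half], R[0, half:]])
    v1 = np.stack([R[1, :half], R[1, half:]])
    return v0, v1

# Example: augmentation method 1 (add noise, then symmetric clipping).
y = (np.random.randn(1024) + 1j * np.random.randn(1024)) / np.sqrt(2)  # placeholder received signal
v0, v1 = symmetric_clip(to_matrix(add_noise(y)))
```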

3.2.2. Backbone Network Structure in SSCL Framework

The BYOL framework in Figure 1 consists of a target network on the upper branch and an online network on the lower branch. The target network consists of an embedding layer and a projection layer, and the online network consists of an embedding layer, a projection layer, and a prediction layer. The embedding layer is the feature extraction layer; its specific structure is shown in Table 2, which lists the dimension changes of the positive samples during pre-trained feature extraction. The residual blocks adopted are shown in Figure 3: residual block (a) has one more convolutional layer than residual block (b), and residual block (a) reduces the feature dimension while increasing the channel dimension. The main layers and parameters are described as follows. For example, for a signal length of 1024 and a batch size of 64, the inputs of the target network and the online network are both 64 × 2 × 512 three-dimensional tensors; 1 × 15, Conv1d, 32 denotes a one-dimensional convolution layer with 32 kernels of size 1 × 15; 1 × 3, MaxPool1d and 1 × 3, AvgPool1d denote the 1D maximum pooling layer and the 1D average pooling layer with a 1 × 3 kernel, respectively; residual block (a), c1 and residual block (b), c2 denote that the number of one-dimensional convolution kernels in residual block (a) is c1 and in residual block (b) is c2, respectively. The projection layer and the prediction layer have exactly the same structure, namely the Multilayer Perceptron (MLP) shown in Figure 4, which consists of two linear layers with a Batch Normalization (BN) layer only after the first linear layer. The target network parameters $\xi$ include the parameters of the embedding layer $f_\xi$ and the projection layer $g_\xi$, and the online network parameters $\theta$ include the parameters of the embedding layer $f_\theta$, the projection layer $g_\theta$, and the prediction layer $q_\theta$.
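The following PyTorch sketch shows the flavor of the two residual blocks in Figure 3 and the MLP head in Figure 4. The kernel sizes, strides, and hidden dimensions are illustrative placeholders and do not reproduce the exact configuration of Table 2.

```python
import torch
import torch.nn as nn

class ResidualBlockB(nn.Module):
    """Residual block (b): two 1-D convolutions with an identity shortcut; output size unchanged."""
    def __init__(self, channels, kernel=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel, padding=kernel // 2),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel, padding=kernel // 2),
            nn.BatchNorm1d(channels))
    def forward(self, x):
        return torch.relu(self.body(x) + x)

class ResidualBlockA(nn.Module):
    """Residual block (a): extra strided convolution that shrinks the feature length and raises the channel count."""
    def __init__(self, in_ch, out_ch, kernel=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel, stride=2, padding=kernel // 2),
            nn.BatchNorm1d(out_ch), nn.ReLU(),
            nn.Conv1d(out_ch, out_ch, kernel, padding=kernel // 2),
            nn.BatchNorm1d(out_ch))
        self.shortcut = nn.Conv1d(in_ch, out_ch, 1, stride=2)  # match dimensions on the skip path
    def forward(self, x):
        return torch.relu(self.body(x) + self.shortcut(x))

class MLPHead(nn.Module):
    """Projection/prediction head: two linear layers, BN only after the first (Figure 4)."""
    def __init__(self, in_dim, hidden=512, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim))
    def forward(self, x):
        return self.net(x)
```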

3.2.3. Pre-Training

The pre-training stage is an important stage of the self-supervised contrastive learning algorithm. The purpose of pre-training is to use a large number of unlabeled samples to train the parameters of the feature extraction module $f_\theta$ in the online network; the rest of the network structure only assists pre-training and algorithm implementation. The positive sample pair $v_i, v_j$ ($i, j = 0, 1$ or $1, 0$) is input into the target network and the online network, respectively, and Equations (11) and (12) are used to obtain $z_0, z'_1$ and $z_1, z'_0$. The distance between the normalized pairs $(z_0, z'_1)$ and $(z_1, z'_0)$, expressed as their negative cosine similarity, is taken as the loss function in Equation (13). Equation (14) is used to calculate the loss gradient of each batch, where $I$ denotes the batch size; the parameters of the online network are updated by Equation (15); finally, the momentum update of Equation (16) is used to update the target network parameters, where $\mu$ is a constant between 0 and 1, taken as 0.99 in this paper.
$$z_0 \leftarrow q_\theta\big(g_\theta(f_\theta(v_0))\big), \qquad z'_1 \leftarrow g_\xi\big(f_\xi(v_1)\big). \tag{11}$$

$$z_1 \leftarrow q_\theta\big(g_\theta(f_\theta(v_1))\big), \qquad z'_0 \leftarrow g_\xi\big(f_\xi(v_0)\big). \tag{12}$$

$$l = -2\left(\frac{\langle z_0, z'_1\rangle}{\|z_0\|_2\,\|z'_1\|_2} + \frac{\langle z_1, z'_0\rangle}{\|z_1\|_2\,\|z'_0\|_2}\right). \tag{13}$$

$$\nabla_\theta \leftarrow \frac{1}{I}\sum_{k=1}^{I}\frac{\partial l_k}{\partial \theta}. \tag{14}$$

$$\theta \leftarrow \mathrm{optimizer}(\theta, \nabla_\theta). \tag{15}$$

$$\xi \leftarrow \mu\,\xi + (1-\mu)\,\theta. \tag{16}$$
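A minimal PyTorch sketch of one pre-training step is given below, assuming the loss of Equation (13) in its negative-cosine form and the momentum update of Equation (16). The module names (online_net, target_net) and the commented step sequence are assumptions about how the pieces fit together, not the paper's exact code.

```python
import torch
import torch.nn.functional as F

def byol_loss(z0, z1_t, z1, z0_t):
    """Symmetric negative-cosine loss of Eq. (13): online predictions vs. (detached) target projections."""
    return -2 * (F.cosine_similarity(z0, z1_t.detach(), dim=-1)
                 + F.cosine_similarity(z1, z0_t.detach(), dim=-1)).mean()

@torch.no_grad()
def momentum_update(target_net, online_net, mu=0.99):
    """Eq. (16): xi <- mu*xi + (1 - mu)*theta for each target-network parameter."""
    for p_t, p_o in zip(target_net.parameters(), online_net.parameters()):
        p_t.data.mul_(mu).add_((1 - mu) * p_o.data)

# One pre-training step (f, g, q denote the embedding, projection, and prediction modules):
#   z0   = q(g(f(v0)));     z1   = q(g(f(v1)))       # online branch, Eqs. (11)-(12)
#   z1_t = g_xi(f_xi(v1));  z0_t = g_xi(f_xi(v0))    # target branch, no gradient
#   loss = byol_loss(z0, z1_t, z1, z0_t); loss.backward(); optimizer.step()   # Eqs. (14)-(15)
#   momentum_update(target_net, online_net)                                   # Eq. (16)
```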

3.2.4. Fine-Tuning

The network in the fine-tuning stage is composed of two parts: the embedding layer $f_\theta$ of the online network and a linear layer $f_\tau$ with two output channels. The linear layer transforms the feature dimension into the classifier dimension. The purpose of fine-tuning is to associate the learned features with labels, so as to obtain the spectrum sensing classifier $G(w,b)^*$. The parameters of the embedding layer $f_\theta$ are frozen during fine-tuning, and only the parameters $\tau$ of the linear layer $f_\tau$ are updated. During fine-tuning, the I and Q channels of the signal are spliced into a $2 \times N$ matrix, and the labeled samples $Y_L$ are input into the fine-tuning network in Figure 1. The cross-entropy loss of each sample in a batch is given by Equation (17), where $L_k$ is the sample label: $L_k = 1$ indicates that the PU is present in the frequency band, and $L_k = 0$ indicates that only noise is present; $prob(Y_k, H_1)$ and $prob(Y_k, H_0)$ represent the probability that the PU is detected on the channel and the probability that only noise is detected, respectively, with $prob(Y_k, H_0) + prob(Y_k, H_1) = 1$. The linear layer parameters are updated for each batch by Equations (18) and (19).
$$\phi_k = -\big[L_k \log prob(Y_k, H_1) + (1 - L_k)\log prob(Y_k, H_0)\big], \qquad Y_k \in Y_L. \tag{17}$$

$$\nabla_\tau \leftarrow \frac{1}{I}\sum_{k=1}^{I}\frac{\partial \phi_k}{\partial \tau}. \tag{18}$$

$$\tau \leftarrow \mathrm{optimizer}(\tau, \nabla_\tau). \tag{19}$$
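The fine-tuning procedure can be sketched in PyTorch as follows. The feature dimension, module names, and optimizer settings are illustrative assumptions; only the linear-layer parameters $\tau$ receive gradient updates, as described above.

```python
import torch
import torch.nn as nn

def build_finetune_net(f_theta, feat_dim=256 * 7):
    """Frozen embedding f_theta followed by a 2-output linear layer f_tau (feat_dim is illustrative)."""
    for p in f_theta.parameters():
        p.requires_grad = False                      # freeze the pre-trained feature extractor
    linear = nn.Linear(feat_dim, 2)                  # classification layer f_tau
    return nn.Sequential(f_theta, nn.Flatten(), linear), linear

def finetune_step(net, batch, labels, optimizer):
    """One fine-tuning update: cross-entropy of Eq. (17), gradient step on tau only (Eqs. (18)-(19))."""
    logits = net(batch)                              # batch: (I, 2, N) labeled IQ matrices from Y_L
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                 # updates only the linear-layer parameters
    return loss.item()

# optimizer = torch.optim.Adam(linear.parameters(), lr=1e-3)   # only tau is trainable
```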

3.3. Spectrum Sensing Algorithm

After pre-training and fine-tuning, the spectrum sensing classifier $G(w,b)^*$ is obtained. Given the false alarm probability $P_f$, the decision threshold $\gamma$ can be determined. The I and Q branches of a sample $X$ are spliced into a $2 \times N$ matrix and input to the classifier, and the detection criterion is shown in Equation (20), where $prob(X, H_1)$ represents the probability that the spectrum is occupied.
$$\begin{cases} H_0: & prob(X, H_1) < \gamma \\ H_1: & prob(X, H_1) \geq \gamma \end{cases} \tag{20}$$
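The decision rule of Equation (20) can be sketched as follows. Choosing $\gamma$ as an empirical quantile of the classifier output on noise-only samples is one possible way to meet a target $P_f$ (an assumption here, since the paper does not specify how the threshold is obtained).

```python
import torch

@torch.no_grad()
def detect(classifier, X, gamma):
    """Eq. (20): declare H1 if prob(X, H1) >= gamma, otherwise H0."""
    prob_h1 = torch.softmax(classifier(X), dim=-1)[:, 1]   # probability that the band is occupied
    return (prob_h1 >= gamma).long()                        # 1 -> H1, 0 -> H0

@torch.no_grad()
def threshold_for_pf(classifier, noise_only_samples, target_pf=0.1):
    """Pick gamma empirically so that the false-alarm rate on H0 samples is about target_pf."""
    prob_h1 = torch.softmax(classifier(noise_only_samples), dim=-1)[:, 1]
    return torch.quantile(prob_h1, 1 - target_pf).item()
```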
In summary, the obtained SSCL algorithm is shown as Algorithm 1:
Algorithm 1 SSCL Algorithm.
Require: unlabeled samples Y, a small number of labeled samples Y_L, the number of self-supervised training rounds e_0, the number of fine-tuning training rounds e_1;
Ensure: optimal parameters of the feature extraction network f_θ and the spectrum sensing classifier G(w,b)*;
1: for epoch = 1 to e_0 do
2:    Perform data augmentation on the unlabeled samples Y to obtain positive sample pairs and input them into the target network and the online network, respectively;
3:    Compute the feature representations of the positive sample pair according to Equations (11) and (12), and compute the forward-propagation loss according to Equation (13);
4:    Compute the loss gradient of each batch according to Equation (14); update the online network parameters and the target network parameters according to Equations (15) and (16), respectively;
5: end for
6: Save only the parameters of the feature extraction layer f_θ of the online network;
7: Load the parameters of the feature extraction layer f_θ of the online network;
8: Append a linear layer f_τ after the feature extraction layer f_θ to obtain the network to be fine-tuned;
9: for epoch = 1 to e_1 do
10:    Input the labeled samples Y_L into the fine-tuning network in Figure 1 and freeze the parameters of the feature extraction layer f_θ;
11:    Compute the loss according to Equation (17); update the parameters of the linear layer according to Equations (18) and (19);
12: end for
13: Output the spectrum sensing classifier G(w,b)*.

4. Experimental Results

The performance of the proposed spectrum sensing algorithm based on self-supervised contrast learning (SSCL) is analyzed by simulation and compared with the spectrum sensing algorithm based on the attention mechanism (CBAM) [18], the spectrum sensing algorithm based on a semi-supervised deep neural network (SSDNN) [19], and the energy detection method (ED) [6].

4.1. Dataset

The RML2016 dataset from [28] was used in this paper, in which the PU signal adopts the BPSK, QPSK, 16QAM, and 64QAM modulation types, the carrier frequency is 902 MHz, and the noise is additive white Gaussian noise. The pre-training sample set consists of 42,000 samples, including 21,000 signal samples and 21,000 noise samples; the SNR ranges from −20 dB to 20 dB with an interval of 2 dB, and 250 samples of each kind of signal are generated for each SNR. Two fine-tuning datasets are generated, in which the SNR of the signal samples ranges from −20 dB to 20 dB with an interval of 2 dB. Fine-tuning dataset 1 contains 4% of the total number of pre-training samples, i.e., 1680 samples (840 signal samples and 840 noise samples). Fine-tuning dataset 2 contains 10% of the total number of pre-training samples, i.e., 4200 samples (2100 signal samples and 2100 noise samples). The test set has a total of 13,000 signal samples; the SNR ranges from −20 dB to 4 dB with an interval of 2 dB, and 250 signal samples of each type are generated for each SNR.

4.2. Simulation Environment

MATLAB software was used to generate the datasets. The PyTorch 1.12.0 framework and Python 3.9 were used for the experimental simulation; the CPU was an AMD Ryzen 7 5800H with Radeon Graphics, and the GPU was an NVIDIA GeForce RTX 3060 Laptop GPU, with 16 GB of memory.

4.3. The Influence of Different Data Augmentation Methods on P d

With a signal length of N = 1024 and $P_f = 0.01$, and with fine-tuning dataset 2 used to fine-tune the network, the detection probability $P_d$ curves of the SSCL algorithm with the six data augmentation methods are shown in Figure 5.
It can be found that, for the detection probability to reach 90%, the required SNR of the SSCL algorithm using the six data augmentation methods is −10.5 dB, −10.4 dB, −8.9 dB, −6.6 dB, −0.9 dB, and 1.6 dB, respectively, so the most effective data augmentation method is the first one: first add complex Gaussian noise, then splice the IQ signal into a two-dimensional matrix, and finally cut it into two symmetric two-dimensional matrices. Symmetric clipping after adding noise to the signal samples can improve the contrastive learning effect during pre-training, so that the model extracts higher-level features, and it improves the robustness of the model. All the following experiments adopt the first data augmentation method.

4.4. The Selection of Pre-Training Hyperparameters

Table 3 shows the hyperparameters of model pre-training. The batch size is set to 64, and the Adam optimizer is selected. In order to improve the training efficiency and bring the model parameters as close as possible to the optimal solution, a variable learning rate is adopted in the pre-training phase. The learning rate is set to 0.01 for the first 15 rounds of training and then decreases to 0.1 times its previous value every 10 rounds, for a total of 35 rounds of training. The relationship between the loss function and the number of training rounds is shown in Figure 6. The loss function gradually converges as the number of training rounds increases, and there is a sudden change in the loss value at the 15th round, caused by the learning rate changing from 0.01 to 0.001.
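The variable learning-rate schedule described above can be reproduced with a standard PyTorch scheduler, as in the following sketch; the placeholder online_net and the omitted training-loop body are assumptions for illustration.

```python
import torch
import torch.nn as nn

# "online_net" stands in for the actual BYOL online network (hypothetical placeholder module here).
online_net = nn.Sequential(nn.Linear(1024, 128))
optimizer = torch.optim.Adam(online_net.parameters(), lr=0.01)            # Adam, initial learning rate 0.01
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[15, 25], gamma=0.1)

for epoch in range(35):                                                    # 35 pre-training rounds
    # ... one epoch of BYOL pre-training with batch size 64 goes here ...
    scheduler.step()                                                       # lr: 0.01 -> 0.001 at round 15 -> 0.0001 at round 25
```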

4.5. Influence of Signal Length N and False Alarm Probability P f on Algorithm Performance

When the signal length N is 512, 1024, and 2048, and $P_f$ is 0.06 and 0.1, the detection probability of the proposed algorithm under different SNRs is shown in Figure 7, Figure 8, and Figure 9, respectively.
As can be seen from the figures, when $P_f$ is 0.1, the network is fine-tuned with fine-tuning dataset 2, the SNR is −12 dB, and the signal length N is 512, 1024, and 2048, the detection probability of the proposed algorithm is 89.2%, 97.6%, and 100%, respectively. As the signal length increases, the detection performance improves, but the longer the signal length, the higher the algorithm complexity. Therefore, as a compromise between performance and complexity, the signal length N = 1024 is chosen in the following experiments. At the same time, it can be seen that when the number of fine-tuning samples and the signal length are fixed and $P_f$ increases from 0.06 to 0.1, $P_d$ also increases.
It can also be seen from the figures that, when the false alarm probability and signal length remain unchanged, the detection probability increases with the number of fine-tuning samples. When the signal length is 2048, the influence of the number of fine-tuning samples on the detection probability is much smaller than when the signal length N is 512 or 1024. It can be observed from Figure 9 that, when the false alarm probability is 0.1, the detection probability curves for different numbers of fine-tuning samples almost coincide. This is because, when the signal length is long enough, pre-training can learn more useful features to distinguish signal from noise, so the features learned in pre-training can be separated using only a small number of labeled samples in fine-tuning. We conclude that, by using self-supervised contrastive learning, the dependence of model training on labeled samples is reduced and the data features of unlabeled samples are fully mined and used as prior knowledge for downstream spectrum sensing.

4.6. Performance Comparison of Different Algorithms

When the false alarm probability $P_f$ is 0.1 and the signal length N is 1024, the spectrum sensing performance of the proposed SSCL algorithm, the residual-network-based supervised algorithm (ResNet), the attention-mechanism-based supervised algorithm (CBAM) [18], the semi-supervised algorithm (SSDNN) [19], and the energy detection algorithm (ED) [6] is shown in Figure 10. The number of pre-training rounds and the learning-rate settings of the SSCL algorithm are the same as in the experiment in Section 4.4. The training dataset used by the two supervised algorithms (ResNet and CBAM) is the same as the pre-training sample set of the SSCL algorithm. The residual network used by the ResNet algorithm is the same as the feature extraction module of the SSCL algorithm. The labeled sample set used by the SSDNN algorithm is 15% of the number of pre-training samples of the SSCL algorithm.
It can be seen from Figure 10 that the performance of the two supervised algorithms is better than that of the other algorithms, while the performance of the energy detection algorithm is the worst. This is because supervised learning can learn the features of the PU signal and noise from a large number of labeled samples. The detection performance of the ResNet algorithm is better than that of the CBAM algorithm, which indicates that the residual network used in this paper has strong fitting ability. At the same time, it can be seen that the SSCL algorithm uses fewer labeled samples than the SSDNN algorithm, yet its spectrum sensing performance is much better. This is because the SSCL algorithm can learn useful signal features from a large number of unlabeled samples during pre-training, which removes the excessive dependence of model training on labeled samples, whereas in the SSDNN algorithm only samples with high confidence are given pseudo-labels and participate in model training, so the model obtains limited information from the limited labeled samples. It can also be seen that, although the performance of the SSCL algorithm is worse than that of the ResNet-based supervised algorithm at low SNR, the number of labels used by the SSCL algorithm is only 4% or 10% of that of the supervised algorithm, and when the SNR is greater than −14 dB, the performance of the SSCL algorithm is comparable to that of the CBAM algorithm. When the SNR is −12 dB, the detection probability of the SSCL algorithm is 97.6%, and that of the CBAM algorithm is 98.1%.

5. Conclusions

Aiming at the problem that it is difficult to obtain labeled received-signal samples, a spectrum sensing algorithm based on self-supervised contrast learning (SSCL) is proposed, which consists of two stages: pre-training and fine-tuning. In order to obtain pre-trained positive sample pairs, six data augmentation methods are designed according to the characteristics of communication signals. A residual network with strong fitting ability is designed as the feature extraction module in the BYOL framework, and a spectrum sensing classifier is obtained through self-supervised contrastive learning after pre-training and fine-tuning. The experimental results show that, within the proposed framework, the most effective data augmentation method is the combination of adding noise and symmetric clipping; the performance of the proposed SSCL algorithm is better than that of the existing semi-supervised algorithm and the energy detection algorithm; and when the labeled samples used by the SSCL algorithm are only 10% of those used by the CBAM supervised algorithm, its performance is close to that of the CBAM supervised algorithm. Both during model training and during sensing, the state of the PU in this paper is assumed to remain either active or silent. However, in an actual communication environment, the PU may arrive or leave at any time, which would seriously affect the detection performance of the algorithm. In future work, we will study how to improve the detection probability for dynamic primary users when labeled samples are insufficient.

Author Contributions

Conceptualization, X.L.; methodology, X.L.; software, X.L. and Y.Z.; validation, S.Z. and S.D.; formal analysis, S.Z.; investigation, S.D.; resources, Y.Z.; data curation, S.Z.; writing—original draft preparation, X.L.; writing—review and editing, X.L. and Z.Z.; visualization, X.L.; supervision, Z.Z.; project administration, Z.Z.; funding acquisition, Z.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant U19B2016 (Research on Intelligent Spread Spectrum Anti-Interference Technology in Complex Electromagnetic Environments).

Data Availability Statement

The data that support the findings of this study are available from the corresponding authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Christodoulopoulos, K.; Varvarigos, E. Static and dynamic spectrum allocation in flexi-grid optical networks. In Proceedings of the 2012 14th International Conference on Transparent Optical Networks (ICTON), Coventry, UK, 2–5 July 2012; pp. 1–5. [Google Scholar]
  2. Haykin, S.; Setoodeh, P. Cognitive radio networks: The spectrum supply chain paradigm. In Proceedings of the 2012 14th International Conference on Transparent Optical Networks, Coventry, UK, 2–5 July 2012; Volume 1, pp. 3–28. [Google Scholar]
  3. Bayrakdar, M.E.; Atmaca, S.; Karahan, A. A slotted aloha-based cognitive radio network under capture effect in rayleigh fading channels. Turk. J. Electr. Eng. Comput. Sci. 2016, 24, 3. [Google Scholar] [CrossRef]
  4. Alnabelsi, S.H.; Salameh, H.B.; Saifan, R.R.; Darabkh, K.A. A multi-layer hyper-graph routing with jamming-awareness for improved throughput in full-duplex cognitive radio networks. Eur. J. Inform. Syst. 2022, 1, 3. [Google Scholar] [CrossRef]
  5. Eltabie, M.; Abdelkader, F.; Ghuniem, A. Incorporating primary occupancy patterns in compressive spectrum sensing. IEEE Access 2019, 7, 29096–29106. [Google Scholar] [CrossRef]
  6. Zheng, Y.; Xia, Y.; Wang, H. Spectrum sensing performance based on improved energy detector in cognitive radio networks. In Proceedings of the 2020 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA), Dalian, China, 27–29 June 2020; pp. 405–408. [Google Scholar]
  7. Turkyilmaz, Y.; Senturk, A.; Bayrakdar, M. Employing machine learning based malicious signal detection for cognitive radio networks. Concurr. Comput. Pract. Exp. 2023, 35, e7457. [Google Scholar] [CrossRef]
  8. Zhang, X.; Gao, F.; Chai, R.; Jiang, T. Matched filter based spectrum sensing when primary user has multiple power levels. China Commun. 2015, 12, 21–31. [Google Scholar] [CrossRef]
  9. Sherbin, K.; Sindhu, V. Cyclostationary feature detection for spectrum sensing in cognitive radio network. In Proceedings of the 2019 International Conference on Intelligent Computing and Control Systems, Madurai, India, 15–17 May 2019; pp. 1250–1254. [Google Scholar]
  10. Zili, W.; Xiaoou, S.; Xiaorong, W. Spectrum sensing detection algorithm based on eigenvalue variance. In Proceedings of the 2019 IEEE 8th Joint International Information Technology and Artificial Intelligence Conference, Chongqing, China, 24–26 May 2019; pp. 1656–1659. [Google Scholar]
  11. IEEE Draft Framework and Process for Deep Learning Evaluation; IEEE: Piscataway, NJ, USA, 2022; pp. 1–30.
  12. Zhang, J.; He, Q.; Rui, H.; Xu, X. Multiband joint spectrum sensing via covariance matrix-aware convolutional neural network. IEEE Commun. Lett. 2022, 26, 1578–1582. [Google Scholar] [CrossRef]
  13. Xie, D.; Liu, C.; Liang, Y. Activity pattern aware spectrum sensing: A CNN-based deep learning approach. IEEE Commun. Lett. 2019, 23, 1025–1028. [Google Scholar] [CrossRef]
  14. Chew, D.; Cooper, B. Spectrum sensing in interference and noise using deep learning. In Proceedings of the 2020 54th Annual Conference on Information Sciences and Systems, Princeton, NJ, USA, 18–20 March 2020. [Google Scholar]
  15. Xu, M.; Yin, Z.; Wu, M.; Wu, Z.; Zhao, Y.; Gao, Z. Spectrum sensing based on parallel CNN-LSTM network. In Proceedings of the 2020 IEEE 91st Vehicular Technology Conference, Antwerp, Belgium, 25–28 May 2020; pp. 1–5. [Google Scholar]
  16. Jianxin, G.; Xianfeng, X.; Ruixiang, N.; Jingyi, W. Spectrum sensing method for residual-dense networks. J. Commun. 2021, 42, 182–191. [Google Scholar]
  17. Zheng, S.; Chen, S.; Qi, P.; Zhou, H.; Yang, X. Spectrum sensing based on deep learning classification for cognitive radios. China Commun. 2020, 17, 138–148. [Google Scholar] [CrossRef]
  18. Cong, Z.; Changwen, J.; Rong, D. Spectrum perception scheme based on convolutional neural network and attention mechanism. Wirel. Commun. Tech. 2022, 31, 1–5. [Google Scholar]
  19. Yupei, Z.; Zhiji, Z. Limited data spectrum sensing based on semi-supervised deep neural network. IEEE Access 2021, 9, 166423–166435. [Google Scholar]
  20. Xie, J.; Fang, J.; Liu, C.; Yang, L. Unsupervised deep spectrum sensing: A variational auto-encoder based approach. IEEE Trans. Veh. Technol. 2020, 69, 5307–5319. [Google Scholar] [CrossRef]
  21. LeKhac, P.; Healy, G.; Smeaton, A. Contrastive representation learning: A framework and review. IEEE Access 2020, 8, 193907–193934. [Google Scholar] [CrossRef]
  22. Kaiming, H.; Haoqi, F.; Yuxin, W.; Saining, X.; Girshick, R. Momentum contrast for unsupervised visual representation learning. arXiv 2019, arXiv:1911.05722. [Google Scholar]
  23. Niizumi, D.; Takeuchi, Y.; Ohishi, N.; Kashino, K. BYOL for audio: Self-supervised learning for general-purpose audio representation. In Proceedings of the 2021 International Joint Conference on Neural Networks, Shenzhen, China, 18–22 July 2021; pp. 1–8. [Google Scholar]
  24. Chen, X.; Fan, H.; Girshick, R.; He, K. Improved baselines with momentum contrastive learning. arXiv 2020, arXiv:2003.04297. [Google Scholar]
  25. Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A simple framework for contrastive learning of visual representations. arXiv 2020, arXiv:2002.05709. [Google Scholar]
  26. Jeanbastien, G.; Florian, S.; Florent, A.; Corentin, T.; Pierre, R. Bootstrap your own latent: A new approach to self-supervised learning. Adv. Neural Inf. Process. Syst. 2020, 33, 21271–21284. [Google Scholar]
  27. Khan, A.; AlBarri, S.; Manzoor, M. Contrastive Self-supervised learning: A survey on different architectures. In Proceedings of the 2022 2nd International Conference on Artificial Intelligence, Islamabad, Pakistan, 30–31 March 2022; pp. 1–6. [Google Scholar]
  28. Shea, J.; West, N. Radio machine learning dataset generation with GNU radio. In Proceedings of the GNU Radio Conference, Virtually, 26–30 September 2016. [Google Scholar]
Figure 1. SSCL framework.
Figure 2. Six data augmentation methods.
Figure 3. Two residual blocks: residual block (a) and residual block (b).
Figure 4. MLP.
Figure 5. The P_d corresponding to the six data augmentation methods.
Figure 6. Pre-training loss.
Figure 7. Detection probability when N = 512.
Figure 8. Detection probability when N = 1024.
Figure 9. Detection probability when N = 2048.
Figure 10. Comparison of detection performance of different algorithms.
Table 1. Spectrum sensing algorithms based on deep learning.

References | Learning Ways | Networks | Input Features
[12] | supervised | CNN | covariance matrix
[13] | supervised | CNN | covariance matrix
[14] | supervised | CNN | spectrogram
[15] | supervised | CNN+LSTM | IQ
[16] | supervised | ResNet | grayscale map
[17] | supervised | ResNet | power spectrum
[18] | supervised | CBAM | covariance matrix
[19] | semi-supervised | CNN | IQ
[20] | unsupervised | VAE | RGB image
Table 2. Residual network structure.

Indexes | Network Layers | Output Dimensions
1 | input | 2 × 512
2 | 1 × 15, Conv1d, 32 | 32 × 249
3 | 1 × 15, Conv1d, 32 | 32 × 124
4 | residual block (b), 32 | 32 × 124
5 | residual block (b), 32 | 32 × 124
6 | residual block (a), 64 | 64 × 126
7 | residual block (b), 64 | 64 × 126
8 | residual block (a), 128 | 128 × 31
9 | residual block (b), 128 | 128 × 31
10 | residual block (a), 256 | 256 × 16
11 | residual block (b), 256 | 256 × 16
12 | 1 × 3, AvgPool1d | 256 × 7
Table 3. The setting of the hyperparameters.

Hyperparameters | Values
Epochs | 35
Initial learning rate | 0.01
Learning rate decline cycle | 10
Coefficient of learning rate decline | 0.1
Optimizer | Adam
Batch size | 64