Article

Automatic Modulation Classification Using Hybrid Data Augmentation and Lightweight Neural Network

Fan Wang, Tao Shang, Chenhan Hu and Qing Liu
1 National Key Laboratory of Integrated Service Networks, Xidian University, Xi’an 710071, China
2 China Research Institute of Radiowave Propagation, Qingdao 266107, China
3 Glasgow College, University of Electronic Science and Technology of China, Chengdu 611731, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(9), 4187; https://doi.org/10.3390/s23094187
Submission received: 10 March 2023 / Revised: 17 April 2023 / Accepted: 18 April 2023 / Published: 22 April 2023

Abstract

Automatic modulation classification (AMC) plays an important role in intelligent wireless communications. With the rapid development of deep learning in recent years, neural network-based automatic modulation classification methods have become increasingly mature. However, the high complexity and large number of parameters of neural networks make them difficult to deploy on receiver devices and in scenarios with strict low-latency and storage requirements. Therefore, this paper proposes a lightweight neural network-based AMC framework. To improve classification performance, the framework combines complex convolution with residual networks. To achieve a lightweight design, depthwise separable convolution is used. To compensate for the performance loss caused by the lightweight design, a hybrid data augmentation scheme is proposed. Simulation results demonstrate that the lightweight AMC framework reduces the number of parameters by approximately 83.34% and the FLOPs by approximately 83.77%, without degrading performance.

1. Introduction

Adapting the modulation to changing channel conditions is a crucial way to maximize the throughput of a communication link and maintain continuous link reliability in a mobile environment [1,2,3,4]. By changing the modulation scheme, the transmitted signal can adapt to the changing channel environment, minimizing the gap between channel capacity and throughput without sacrificing robustness [5]. Automatic modulation classification (AMC) [6,7,8,9,10,11] is a crucial technology for non-cooperative communications, serving as an intermediate step between signal detection and demodulation. AMC has a wide range of applications in both the military and civilian domains. In the military domain, identifying the modulation of an enemy signal enables targeted jamming, while in the civilian domain, AMC is the basis for spectrum allocation and signal demodulation [12]. Given today's increasingly complex electromagnetic environment, AMC technology has garnered significant attention from researchers.
The general AMC process comprises two primary steps: signal preprocessing and classification. Signal preprocessing involves noise reduction and the estimation of signal parameters, such as carrier frequency and signal power. The classification step can be broadly categorized into likelihood-based methods [13,14] and feature-based methods [15,16,17]. Likelihood-based methods [18] establish a likelihood function from the modulated signal and compare the computed likelihood ratio against a threshold to classify the signal. They yield the optimal solution, but at the cost of high computational complexity, and they require channel state information of the communication system. Feature-based methods have relatively low computational complexity but rely heavily on specialized knowledge to construct complex feature engineering; commonly used features include higher-order statistics, parametric statistics [19], constellation diagrams, the cyclic spectrum [20], and time-frequency representations. A classifier is then designed for the chosen features, such as an artificial neural network (ANN), a support vector machine (SVM), or K-nearest neighbors (KNN).
In recent years, deep learning techniques have made significant breakthroughs in image processing [21] and natural language processing [22], and many researchers have therefore applied deep learning to AMC. Artificial intelligence has numerous applications in wireless communication, including signal detection [23,24,25], channel estimation [26,27,28,29,30], radio frequency fingerprint identification [31,32,33], and beamforming [34,35]. Before deep learning, classical methods designed features using signal processing techniques and then applied machine learning classifiers for identification. However, features designed from expert knowledge and experience have many shortcomings, such as weak performance, poor environmental adaptability, and difficulty coping with increasingly complex and numerous wireless transmitters. From a processing-flow perspective, deep learning-based AMC is built either on image transformations or on the original sampled signal. Image-based AMC converts the sampled signal into an image, such as a time-frequency diagram [36,37,38] or a constellation map [39], and then classifies the modulation as an image classification task. Alternatively, deep learning can be applied directly to the sampled signal [40]: the sampled signal already contains the information necessary for AMC and avoids the computational overhead of image conversion, making this approach suitable for deployment on embedded computing platforms.
Convolutional neural networks (CNNs) are the most widely used network structures in image processing, and long short-term memory (LSTM) networks [41,42] are widely used in natural language processing (NLP); both are gradually attracting attention in the field of automatic modulation classification. O'Shea et al. [43] first introduced CNNs into automatic modulation classification and proposed a CNN-based AMC method. West et al. [44] introduced network structures combining CNN and LSTM, such as CLDNN [45], with an accuracy of more than 80% for the classification of 11 modulations when the signal-to-noise ratio (SNR) is ≥ 0 dB. O'Shea et al. [46] adapted the two-dimensional convolutions of VGG [47] and ResNet [48], and Lin et al. [49] used one-dimensional convolution to extract time-domain features, achieving an accuracy of over 90% for the classification of 24 modulations when SNR ≥ 10 dB. Krzyston et al. [50] fully considered the characteristics of electromagnetic signals, treating in-phase/quadrature (IQ) signals as complex signals and designing a convolutional neural network in complex form; its accuracy exceeded 80% when SNR ≥ 10 dB in the classification of 11 schemes.
However, the above methods compare only classification accuracy and do not examine parameter counts, computational cost, or complexity. Due to size and power-consumption limitations, the storage space and computational power of lightweight equipment are significantly restricted, while deep neural networks have large numbers of parameters and high computational complexity, limiting their application in communication systems. To reduce model size, decrease complexity, and increase inference speed, researchers in image processing [51] have proposed lightweight network structures, such as SqueezeNet [52], MobileNet [53], and ShuffleNet [54], obtained by optimizing the structure of neural networks.
In this paper, we propose a lightweight complex-valued residual network (CVResNet)-based AMC method. A hybrid data augmentation method was designed to compensate for the decrease in classification performance resulting from the lightweight design. The main contributions of this paper are summarized as follows.
  • A lightweight residual network based on complex-valued operations is proposed, named CVResNet, aiming to address the high computational effort and complexity of traditional non-lightweight networks.
  • A hybrid data augmentation method is designed to compensate for the potential performance degradation caused by the lightweight network.
  • Comparative experiments are conducted in the same scenario, and the results demonstrate that CVResNet can significantly reduce computational complexity and effort while achieving better classification performance.

2. Signal Model and Problem Formulation

2.1. Signal Model

The signal data in this paper take into account the effects of the phase shift and additive Gaussian noise; the signal model can be expressed as
$$x(n) = h e^{j\theta(n)} s(n) + w(n), \quad n \in \{0, 1, 2, \ldots, N-1\}, \tag{1}$$
where $s(n)$ denotes the transmitted baseband signal sequence, $h$ denotes the channel gain, $\theta(n)$ denotes the time-varying phase offset, and $w(n)$ represents additive white Gaussian noise. $N$ is the number of sampling points of the signal. The radio signal $x(n)$ received at the receiver consists of in-phase (I) and quadrature (Q) components, which can be viewed as the real and imaginary parts of $x(n)$, respectively. Thus $x(n)$ can be expressed as
$$x(n) = \begin{bmatrix} I \\ Q \end{bmatrix} = \begin{bmatrix} \operatorname{real}(x(n)) \\ \operatorname{imag}(x(n)) \end{bmatrix}. \tag{2}$$
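For concreteness, the following NumPy sketch simulates a received sample under this model and stacks it into the 2 × N IQ format used throughout the paper. The random-walk model for the phase offset $\theta(n)$ and all parameter names are our illustrative assumptions, not specifications from the paper.

```python
import numpy as np

def received_signal(s, h=1.0, phase_std=0.01, snr_db=2.0, rng=None):
    """Simulate x(n) = h * exp(j*theta(n)) * s(n) + w(n) and return a 2 x N IQ array."""
    rng = rng or np.random.default_rng()
    n = len(s)
    theta = np.cumsum(rng.normal(0.0, phase_std, n))   # time-varying phase offset (assumed random walk)
    signal = h * np.exp(1j * theta) * np.asarray(s, dtype=complex)
    sig_power = np.mean(np.abs(signal) ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))    # set AWGN power from the target SNR
    w = np.sqrt(noise_power / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))
    x = signal + w
    return np.stack([x.real, x.imag])                  # rows: in-phase (I) and quadrature (Q)
```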

2.2. System Model

The framework of the deep learning-based AMC system is shown in Figure 1. In this system, the classifier is critical, as it needs to classify the modulation scheme based on the pre-processed data.
In the training process, the goal is to obtain a mapping function $f \in F: X \to Y$ with a small number of parameters, where $X$ and $Y$ represent the sample space and category space, respectively. Because the amount of data is limited, the neural network approximates the mapping between $X$ and $Y$ by learning the relationship between the existing data and labels in the dataset. The optimization target can be expressed as
$$\min_{f \in F} \; \mathbb{E}_{(x, y) \sim D} \left\{ L_{ce}\left[ f(x), y \right] \right\}, \tag{3}$$
where $D$ is the existing dataset, $y$ is the modulation category label of $x$, and $L_{ce}$ is the cross-entropy loss.
In the testing process, the neural network is employed as the classifier. For a communication system, an accurate and fast classifier facilitates correct demodulation downstream and the stability of the whole system. The classifier infers the probability that the input signal belongs to each modulation type in the modulation type pool $P = \{p_j, \; j = 1, 2, \ldots, J\}$, where $J$ is the number of modulation types and $p_j$ is the $j$-th modulation type. The inference of the classifier is given by
$$\hat{p} = \arg\max_{p_j \in P} P(p_j \mid x), \tag{4}$$
where $P(p_j \mid x)$ is the probability that the input signal $x$ belongs to modulation type $p_j$, and $\hat{p}$ is the classification result.
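Rendered in code, the inference step is a softmax over the classifier's outputs followed by an argmax over the modulation pool. This is a minimal PyTorch sketch, where `model` is assumed to be any trained classifier with a $J$-way output.

```python
import torch

def classify(model, x):
    """Return the index of the most likely modulation type: argmax_j P(p_j | x)."""
    model.eval()
    with torch.no_grad():
        logits = model(x)                      # shape: (batch, J)
        probs = torch.softmax(logits, dim=-1)  # P(p_j | x) for each type in the pool
        return probs.argmax(dim=-1)            # predicted modulation index per sample
```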

3. Our Proposed AMC Method

In this section, the proposed AMC method is described in four subsections, including the framework of the proposed AMC method, details of the lightweight CVResNet, hybrid data augmentation, and the training procedure.

3.1. Framework of the Proposed AMC Method

The framework of the proposed AMC method is shown in Figure 2. It comprises a training process and a testing process. In the training process, the parameters of the lightweight network are trained using the hybrid data augmentation (DA) method. In the testing process, the trained lightweight network is used to classify modulated signals.

3.2. Details of the Lightweight CVResNet

3.2.1. Deep Residual Neural Network

In neural network models, the depth of the network has a significant impact on performance. Deeper networks can perform more complex and powerful feature extraction, but depth also leads to problems such as vanishing gradients, exploding gradients, and network degradation. To avoid these problems, this paper chooses the residual network as the feature extraction network for AMC. The residual network and the residual unit (Resunit) are shown in Figure 3. In the residual network, the Resunit extracts the features, the average pooling (AvgPool) layer reduces the feature dimension, and the 'Linear' layer is the fully connected layer that classifies the features to obtain the modulation classification results.
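The following PyTorch sketch shows the skip-connection pattern of a residual unit operating on 2 × N IQ data. The layer sizes and layer count here are illustrative assumptions; the paper's exact Resunit configuration is given in Figure 3.

```python
import torch.nn as nn

class ResUnit(nn.Module):
    """Illustrative residual unit: two conv layers plus an identity skip connection."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2  # 'same' padding so the skip and body shapes match
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.body(x) + x)  # output = F(x) + x, easing gradient flow
```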

3.2.2. Complex Convolution

Regular neural networks are defined in the real-valued domain: their weight parameters and the data passed through the network are real-valued. A complex-valued neural network can be considered an extension of the conventional neural network from the real-valued domain to the complex-valued domain, in which the network parameters are complex-valued and the network accepts complex-valued inputs. It can therefore process modulated signals in IQ format directly and better retain the information contained in the modulated signals.
To limit the complexity introduced by complex-valued neural networks, only complex convolution [55] is used in this paper. As shown in Figure 4, the idea is to treat the real and imaginary parts of a complex number as logically distinct real-valued components and to simulate the complex operation internally with real-valued arithmetic. Defining the input complex-valued data as $S = I + iQ$ and the convolution kernel as $K = a + ib$, where $a$ and $b$ are independent real convolution kernels, the convolution of $S$ with $K$ can be expressed as
$$S * K = (I * a - Q * b) + i(Q * a + I * b), \tag{5}$$
where $*$ is the convolution operation. This formula shows that complex convolution decomposes the convolution of two complex quantities into four convolutions of real quantities and recombines the results, so complex convolution can be implemented without modifying the underlying real-valued convolution operation.
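A direct PyTorch rendering of Equation (5) follows: two real convolution layers hold the kernels $a$ and $b$, and the real and imaginary outputs are assembled exactly as in the formula. The class name and hyperparameters are illustrative; this is a sketch of the idea, not the paper's exact layer.

```python
import torch.nn as nn

class ComplexConv1d(nn.Module):
    """Complex convolution via real convolutions: S*K = (I*a - Q*b) + i(Q*a + I*b)."""
    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        self.conv_a = nn.Conv1d(in_channels, out_channels, kernel_size, **kwargs)  # real kernel a
        self.conv_b = nn.Conv1d(in_channels, out_channels, kernel_size, **kwargs)  # real kernel b

    def forward(self, i, q):
        real = self.conv_a(i) - self.conv_b(q)  # I*a - Q*b
        imag = self.conv_a(q) + self.conv_b(i)  # Q*a + I*b
        return real, imag
```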

3.2.3. Depthwise Separable Convolution

Despite the powerful feature extraction capability of complex-convolutional residual networks, their complex structure, large number of parameters, and long training time make them difficult to deploy on actual lightweight devices for AMC. To solve this problem, this paper combines complex convolution with depthwise separable convolution and proposes a lightweight depthwise separable complex convolution. This is done by replacing the real convolutions inside the complex convolution with depthwise separable convolutions [56,57], which significantly reduces the number of parameters in the convolutional layers. As shown in Figure 5, a depthwise separable convolution consists of a depthwise convolution and a pointwise convolution.
In the depthwise convolution, each convolution kernel is responsible for one channel, and each channel is convolved by only one kernel, so the resulting feature map has the same number of channels as the input. The pointwise convolution is similar to a regular convolution, with a kernel of size $1 \times 1 \times W$, where $W$ is the number of channels in the previous layer; it combines the maps from the previous layer along the depth direction with learned weights to generate a new feature map.
Although depthwise separable convolution reduces computational complexity and parameter count, it provides performance similar to that of ordinary convolution in practical applications. This is mainly because, in many automatic modulation recognition tasks, the correlation between spatial features and channel features is relatively weak; by factorizing these two types of features, depthwise separable convolution significantly reduces computational cost with minimal performance loss.
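In PyTorch, this factorization amounts to a grouped convolution (one kernel per channel) followed by a 1 × 1 convolution, as in the minimal sketch below; the hyperparameters are illustrative. Substituting a module like this for `conv_a` and `conv_b` in the complex convolution sketch above yields a depthwise separable complex convolution in the spirit of the proposed design.

```python
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    """Depthwise conv (groups=C_in, one kernel per channel) + 1x1 pointwise conv."""
    def __init__(self, in_channels, out_channels, kernel_size, padding=0):
        super().__init__()
        self.depthwise = nn.Conv1d(in_channels, in_channels, kernel_size,
                                   padding=padding, groups=in_channels)     # per-channel filtering
        self.pointwise = nn.Conv1d(in_channels, out_channels, kernel_size=1)  # channel mixing

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```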

3.3. Hybrid Data Augmentation

Data augmentation methods can prevent models from overfitting, improve model robustness, and address sample imbalance. To compensate for the potential performance degradation caused by the lightweight network, we improve classification performance with a hybrid data augmentation method, whose flow is shown in Figure 6. Specifically, the hybrid method combines rotation, which expands the number of samples in the dataset, with RandMix, which serves as a regularization method that enhances model robustness and generalization performance. The two components are described below.

3.3.1. Rotation

The rotation data augmentation method works directly on the original signal dataset and expands the number of samples in the dataset to improve the generalization ability of the model. The principle of rotation data augmentation is shown in Figure 7. Specifically, for a given original signal sample $(I, Q)$, the augmented sample $(I', Q')$ is obtained through the following rotational transformation:
$$\begin{bmatrix} I' \\ Q' \end{bmatrix} = \begin{bmatrix} r \cos(\alpha + \beta) \\ r \sin(\alpha + \beta) \end{bmatrix} = \begin{bmatrix} r \cos\alpha \cos\beta - r \sin\alpha \sin\beta \\ r \sin\alpha \cos\beta + r \cos\alpha \sin\beta \end{bmatrix} = \begin{bmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{bmatrix} \begin{bmatrix} I \\ Q \end{bmatrix}, \tag{6}$$
where $\alpha$ is the rotation angle, and $r$ and $\beta$ are the amplitude and phase of the original signal sample, so that $(I, Q) = (r\cos\beta, r\sin\beta)$. In this paper, the augmented signals are obtained by rotating the original signal counterclockwise by 0, 90, 180, and 270 degrees, so each set of original signal samples is augmented into four sets of signal samples.
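The transformation in Equation (6) takes only a few lines of NumPy. For the four angles used here, the rotation matrix entries are exactly 0 or ±1, so the augmentation introduces no rounding error; the function name is ours.

```python
import numpy as np

def rotate_iq(sample, k):
    """Rotate a 2 x N IQ sample counterclockwise by k * 90 degrees (k in {0, 1, 2, 3})."""
    alpha = k * np.pi / 2
    rot = np.array([[np.cos(alpha), -np.sin(alpha)],
                    [np.sin(alpha),  np.cos(alpha)]])
    return np.rint(rot) @ sample  # rint snaps entries to exact 0 / +1 / -1

# One original sample yields four augmented samples:
# augmented = [rotate_iq(x, k) for k in range(4)]
```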

3.3.2. RandMix

The RandMix data augmentation method operates on the signal dataset after rotation augmentation and serves as an effective regularization method that improves the generalization performance and robustness of the model. The principle of RandMix data augmentation is shown in Figure 8. A given signal sample is first sliced into $M$ equal-length segments based on its total length; these $M$ segments are then recombined in random order to create an augmented signal sample.
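A minimal NumPy sketch of this slice-and-shuffle operation is given below, assuming the sample length $N$ is divisible by $M$; the helper name is ours.

```python
import numpy as np

def randmix(sample, m, rng=None):
    """Slice a 2 x N IQ sample into m equal-length segments and reassemble them randomly."""
    rng = rng or np.random.default_rng()
    segments = np.split(sample, m, axis=-1)  # m slices of shape 2 x (N/m); requires m | N
    order = rng.permutation(m)               # random recombination order
    return np.concatenate([segments[j] for j in order], axis=-1)
```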

3.4. Training Procedure

In summary, based on the above modules, the full training procedure for the proposed AMC method is described in Algorithm 1.
Algorithm 1 Training procedure of the proposed AMC method.
Require:
  •    $D$: training dataset;
  •    $T$: number of training iterations;
  •    $B$: number of batches in a training iteration;
  •    $N$: length of each datum;
  •    $M$: number of segments for each datum;
  •    $\theta$: parameters of the lightweight CVResNet;
  •    $lr$: learning rate.
     Data augmentation:
 1: $D_{rotate} = \{D, D_{\pi/2}, D_{\pi}, D_{3\pi/2}\} \leftarrow \mathrm{Rotate}(D)$;
     Training procedure:
 2: for $t = 0$ to $T-1$ do
 3:       for $b = 0$ to $B-1$ do
 4:             Sample a training batch $(x, y)$ from $D_{rotate}$.
                Forward propagation:
 5:             Evenly split $x$: $X = \{x_0^{N/M-1}, x_{N/M}^{2N/M-1}, \ldots, x_{(M-1)N/M}^{N-1}\}$;
 6:             Randomly mix $X$: $x^* \leftarrow \mathrm{RandMix}(X)$;
 7:             Get the output of the lightweight CVResNet: $\hat{y} = f(\theta_{t,b}; x^*)$;
 8:             Calculate the loss: $L = -\mathbb{E}[y \log \hat{y}]$;
                Backward propagation:
 9:             $\theta \leftarrow \mathrm{Adam}(\theta, L, lr)$;
10:       end for
11: end for
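Algorithm 1 condenses to a standard PyTorch training loop. The sketch below assumes `loader` iterates over the rotation-augmented dataset $D_{rotate}$ and that `randmix_batch` is a hypothetical batched wrapper around the RandMix sketch from Section 3.3.2; neither name comes from the paper.

```python
import torch
import torch.nn.functional as F

def train(model, loader, epochs, m, lr=1e-3, device="cpu"):
    """Train the lightweight CVResNet following Algorithm 1 (illustrative sketch)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):                    # t = 0 .. T-1
        for x, y in loader:                    # b = 0 .. B-1, batches from D_rotate
            x, y = x.to(device), y.to(device)
            x_star = randmix_batch(x, m)       # split each datum into m segments and mix (hypothetical helper)
            logits = model(x_star)             # forward pass: y_hat = f(theta; x*)
            loss = F.cross_entropy(logits, y)  # L = -E[y log y_hat]
            opt.zero_grad()
            loss.backward()                    # backward propagation
            opt.step()                         # Adam update of theta
```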

4. Simulation Results and Analysis

4.1. Experiment Environment and Parameters

The simulation was conducted using PyTorch on a GTX 1080Ti platform. To assess the classification performance and model complexity of the lightweight CVResNet, we used the open-source dataset RadioML 2016.10A [58]. Data for five modulation schemes were selected from this dataset, giving the modulation pool $P = \{\mathrm{BPSK}, \mathrm{QPSK}, \mathrm{8PSK}, \mathrm{QAM16}, \mathrm{QAM64}\}$ at an SNR of 2 dB. The dataset contains 1000 samples per modulation type per SNR; of these, 420, 180, and 400 samples were used for training, validation, and testing, respectively. During training, the model that performed best on the validation set was saved as the final classification model. The simulation parameters are detailed in Table 1.
The dataset parameters are shown in Table 2.

4.2. Simulation Results

4.2.1. Comparative Experiments

This paper compares the proposed AMC method with three baselines: a support vector machine (SVM), CVResNet, and the lightweight CVResNet. The SVM classifies the original samples directly, CVResNet implements classification with a non-lightweight network structure, and the lightweight CVResNet classifies signal samples with the lightweight network structure but without the hybrid data augmentation.

4.2.2. Classification Accuracy: Proposed Method vs. Comparative Methods

The simulation results are shown in Figure 9 and Table 3; the proposed method achieves the highest classification accuracy and F1-score. Compared with the SVM and the lightweight CVResNet without augmentation, the proposed method achieves at least a 30% improvement; compared with the non-lightweight CVResNet, it achieves about a 2% improvement.

4.2.3. Confusion Matrix: Proposed Method vs. Comparative Methods

The confusion matrix is an important metric for measuring classification performance, as it intuitively shows how the samples of each class are classified. Figure 10 shows the confusion matrices of the four methods; the proposed method has the best classification performance.

4.2.4. Ablation Experiments

In this paper, ablation experiments were carried out to verify the effectiveness of the two data augmentation methods. The experimental results are shown in Figure 11 and Table 4. It can be seen that both rotation and RandMix methods can effectively improve the classification performance of the proposed lightweight CVResNet.

4.2.5. Features Visualization

To observe the feature extraction abilities of the different methods more intuitively, the extracted features can be reduced to two dimensions and visualized using t-distributed stochastic neighbor embedding (t-SNE) [59]. The feature maps and silhouette coefficients (SCs) are shown in Figure 12.

4.2.6. Complexity Analysis

The depthwise separable convolution splits a standard convolution into two convolutions (depthwise and pointwise), reducing both the number of parameters and the total computational cost. Since the lightweight model in this paper is based on depthwise separable convolution, the space and time complexity of the standard convolution and the depthwise separable convolution are analyzed here. Space complexity is measured by the number of parameters in the convolutional layer, and time complexity by floating-point operations (FLOPs). The space and time complexity of the standard convolution are
$$P_{std} \sim O(K_h K_w C_{in} C_{out}), \tag{7}$$
$$T_{std} \sim O(K_h K_w C_{in} C_{out} G_h G_w), \tag{8}$$
where $P_{std}$ and $T_{std}$ denote the parameter count and FLOPs, respectively; $K_h$ and $K_w$ are the height and width of the convolution kernel; $C_{in}$ and $C_{out}$ are the numbers of input and output channels; and $G_h$ and $G_w$ are the height and width of the output feature map. By comparison, the space and time complexity of the depthwise separable convolution can be expressed as
$$P_{sep} \sim O(K_h K_w C_{in} + C_{in} C_{out}), \tag{9}$$
$$T_{sep} \sim O(K_h K_w C_{in} G_h G_w + C_{in} C_{out} G_h G_w). \tag{10}$$
The ratios of the space complexity and time complexity of depthwise separable convolution and standard convolution are as follows.
$$\frac{P_{sep}}{P_{std}} = \frac{K_h K_w C_{in} + C_{in} C_{out}}{K_h K_w C_{in} C_{out}} = \frac{1}{C_{out}} + \frac{1}{K_h K_w}, \tag{11}$$
$$\frac{T_{sep}}{T_{std}} = \frac{K_h K_w C_{in} G_h G_w + C_{in} C_{out} G_h G_w}{K_h K_w C_{in} C_{out} G_h G_w} = \frac{1}{C_{out}} + \frac{1}{K_h K_w}. \tag{12}$$
From Equations (11) and (12), the numbers of parameters and FLOPs of the depthwise separable convolution are much smaller than those of the standard convolution, so the model can be made substantially more lightweight. A detailed comparison of the complexity of the lightweight CVResNet and the standard CVResNet is given in Table 5: the number of parameters of the lightweight CVResNet is reduced by about 83.34%, and the FLOPs by about 83.77%, compared with the standard CVResNet.
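The ratio in Equations (11) and (12) is easy to check numerically. The short sketch below counts convolution parameters both ways; the example values are ours, not layer sizes from the paper.

```python
def separable_vs_standard(kh, kw, c_in, c_out):
    """Parameter counts of standard vs. depthwise separable convolution, plus their ratio."""
    p_std = kh * kw * c_in * c_out         # Eq. (7)
    p_sep = kh * kw * c_in + c_in * c_out  # Eq. (9)
    return p_std, p_sep, p_sep / p_std     # ratio = 1/c_out + 1/(kh*kw), Eq. (11)

# Example: a 3x3 kernel with 32 input and 64 output channels keeps
# 1/64 + 1/9 ~= 12.7% of the standard convolution's parameters.
print(separable_vs_standard(3, 3, 32, 64))  # (18432, 2336, 0.1267...)
```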

5. Conclusions

This article proposes a lightweight and efficient AMC method and validates it on a modulated signal dataset. The experimental results demonstrate that the method achieves higher classification accuracy, stronger robustness, and better generalization than the three comparison methods, with the accuracy improved by at least 2%. The ablation experiments further confirm the effectiveness of each module, with an accuracy improvement of about 8%. In the complexity analysis, the proposed lightweight AMC method outperforms the non-lightweight method in both model complexity and computational complexity. However, the effect of noise on feature extraction is not considered in this paper, so future work will explore a range of signal-to-noise ratios and other schemes, such as Transformer- or LSTM-based architectures, for effective feature extraction from time-series data. Overall, the proposed method has broad applications in IQ signal-based electromagnetic signal identification tasks, including radiation source identification. Built on complex-valued separable convolution, the network framework can be easily modified or extended to suit target task requirements, and further performance improvements can be achieved by combining it with other data augmentation methods.

Author Contributions

Methodology, F.W. and T.S.; Software, F.W., C.H. and Q.L.; Validation, C.H.; Investigation, T.S.; Writing—original draft, F.W.; Visualization, Q.L.; Supervision, T.S.; Project administration, Q.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under grant number U20B2038.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The research data are available from the authors upon request by email.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Huang, H.; Guo, S.; Gui, G.; Yang, Z.; Zhang, J.; Sari, H.; Adachi, F. Deep learning for physical-layer 5G wireless techniques: Opportunities, challenges and solutions. IEEE Wirel. Commun. 2020, 27, 214–222. [Google Scholar] [CrossRef]
  2. Gui, G.; Liu, M.; Tang, F.; Kato, N.; Adachi, F. 6G: Opening new horizons for integration of comfort, security and intelligence. IEEE Wirel. Commun. 2020, 27, 126–132. [Google Scholar]
  3. Dobre, O.A. Signal identification for emerging intelligent radios: Classical problems and new challenges. IEEE Instrum. Meas. Mag. 2015, 18, 11–18. [Google Scholar] [CrossRef]
  4. Tu, Y.; Lin, Y.; Zha, H.; Zhang, J.; Wang, Y.; Mao, S. Large-scale real-world radio signal recognition with deep learning. Chin. J. Aeronaut. 2022, 35, 35–48. [Google Scholar] [CrossRef]
  5. Banerjee, S.; Santos, J.; Hempel, M.; Ghasemzadeh, P.; Sharif, H. A novel method of near-miss event detection with software defined radar in improving railyard safety. Safety 2019, 5, 55. [Google Scholar] [CrossRef]
  6. Smith, A.; Fei, Z.; Evans, M.; Downey, J. Modulation classification of satellite communication signals using cumulants and neural networks. In Proceedings of the IEEE Cognitive Communications for Aerospace Applications Workshop (CCAA), Cleveland, OH, USA, 27–28 June 2017; pp. 1–8. [Google Scholar]
  7. Chang, S.; Huang, S.; Zhang, R.; Feng, Z.; Liu, L. Multi-task learning based deep neural network for automatic modulation classification. IEEE Internet Things J. 2022, 9, 2192–2206. [Google Scholar] [CrossRef]
  8. Ke, Z.; Vikalo, H. Real-time radio technology and modulation classification via an LSTM Auto-Encoder. IEEE Trans. Wirel. Commun. 2022, 21, 370–382. [Google Scholar] [CrossRef]
  9. Zhang, X.; Wang, Y.; Lin, Y.; Gui, G. A comprehensive survey of deep learning-based automatic modulation recognition methods. Radio Commun. Technol. 2022, 48, 697–710. [Google Scholar]
  10. Zhang, X.; Zhao, H.; Zhu, H.; Adebisi, B.; Gui, G.; Gacanin, H.; Adachi, F. NAS-AMR: Neural architecture search based automatic modulation recognition method for integrating sensing and communication system. IEEE Trans. Cogn. Commun. Netw. 2022, 8, 1374–1386. [Google Scholar] [CrossRef]
  11. Fu, X.; Gui, G.; Wang, Y.; Gacanin, H.; Adachi, F. Automatic modulation classification based on decentralized learning and ensemble learning. IEEE Trans. Veh. Technol. 2022, 71, 7942–7946. [Google Scholar] [CrossRef]
  12. Zhao, N.; Yu, F.R.; Sun, H.; Li, M. Adaptive power allocation schemes for spectrum sharing in interference alignment-based cognitive radio networks. IEEE Trans. Veh. Technol. 2018, 65, 3700–3714. [Google Scholar] [CrossRef]
  13. Sills, J.A. Maximum-likelihood modulation classification for PSK/QAM. In Proceedings of the Military Communications (MILCOM), Atlantic City, NJ, USA, 31 October–3 November 1999; pp. 217–220. [Google Scholar]
  14. Wei, W.; Mendel, J.M. Maximum-likelihood classification for digital amplitude-phase modulations. IEEE Trans. Commun. 2000, 48, 189–193. [Google Scholar] [CrossRef]
  15. Hong, L.; Ho, K.C. Identification of digital modulation types using the wavelet transform. In Proceedings of the Military Communications (MILCOM), Atlantic City, NJ, USA, 31 October–3 November 1999; pp. 427–431. [Google Scholar]
  16. Swami, A.; Sadler, B.M. Hierarchical digital modulation classification using cumulants. IEEE Trans. Commun. 2000, 48, 416–429. [Google Scholar] [CrossRef]
  17. Hatzichristos, G.; Fargues, M.P. A hierarchical approach to the classification of digital modulation types in multipath environments. In Proceedings of the IEEE Conference Record of Thirty-Fifth Asilomar Conference on Signals, Systems and Computers (ACSSS), Pacific Grove, CA, USA, 4–7 November 2001; pp. 1494–1498. [Google Scholar]
  18. Panagiotou, P.; Anastasopoulos, A.; Polydoros, A. Likelihood ratio tests for modulation classification. In Proceedings of the IEEE Architectures and Technologies for Information Superiority (ATIS), Los Angeles, CA, USA, 22–25 October 2000; pp. 670–674. [Google Scholar]
  19. Nandi, A.K.; Azzouz, E.E. Algorithms for automatic modulation recognition of communication signals. IEEE Trans. Commun. 1998, 46, 431–436. [Google Scholar] [CrossRef]
  20. Gardner, W.A.; Spooner, C.M. Cyclic spectral analysis for signal detection and modulation recognition. In Proceedings of the IEEE Military Communications Conference (MILCOM), San Diego, CA, USA, 23–26 October 1988; pp. 419–424. [Google Scholar]
  21. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems (NIPS), Lake Tahoe, NV, USA, 3–6 December 2012; Volume 1, pp. 1097–1105. [Google Scholar]
  22. Arisoy, E.; Sethy, A.; Ramabhadran, B.; Chen, S. Bidirectional recurrent neural network language models for automatic speech recognition. In Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, Australia, 19–24 April 2015; pp. 5421–5425. [Google Scholar]
  23. Ye, H.; Li, G.Y.; Juang, B.-H. Power of deep learning for channel estimation and signal detection in OFDM systems. IEEE Wirel. Commun. Lett. 2018, 7, 114–117. [Google Scholar] [CrossRef]
  24. Qin, Z.; Ye, H.; Li, G.Y.; Juang, B.-H.F. Deep learning in physical layer communications. IEEE Wirel. Commun. 2019, 26, 93–99. [Google Scholar] [CrossRef]
  25. Gui, G.; Liu, F.; Sun, J.; Yang, J.; Zhou, Z.; Zhao, D. Flight delay prediction based on aviation big data and machine learning. IEEE Trans. Veh. Technol. 2020, 69, 140–150. [Google Scholar]
  26. He, H.; Wen, C.; Jin, S.; Li, G.Y. Deep learning-based channel estimation for beamspace mmWave massive MIMO systems. IEEE Wirel. Commun. Lett. 2018, 7, 852–855. [Google Scholar] [CrossRef]
  27. Soltani, M.; Pourahmadi, V.; Mirzaei, A.; Sheikhzadeh, H. Deep learning-based channel estimation. IEEE Commun. Lett. 2019, 23, 652–655. [Google Scholar] [CrossRef]
  28. Fan, G.; Sun, J.; Gui, G.; Gacanin, H.; Adebisi, B.; Ohtsuki, T. Limited CSI feedback using fully convolutional neural networks for FDD massive MIMO systems. IEEE Trans. Cogn. Commun. Netw. 2022, 8, 672–682. [Google Scholar] [CrossRef]
  29. Gui, G.; Wang, J.; Yang, J.; Liu, M.; Sun, J.-L. Frequency division duplex massive multiple-input multiple-output downlink channel state information acquisition techniques based on deep learning. J. Data Acquis. Process. 2022, 37, 502–511. [Google Scholar]
  30. He, Z.; Yin, J.; Wang, Y.; Gui, G.; Adebisi, B.; Ohtsuki, T.; Gacanin, H.; Sari, H. Edge device identification based on federated learning and network traffic feature engineering. IEEE Trans. Cogn. Commun. Netw. 2022, 8, 1898–1909. [Google Scholar] [CrossRef]
  31. Peng, Y.; Liu, P.; Wang, Y.; Gui, G.; Adebisi, B.; Gacanin, H. Radio frequency fingerprint identification based on slice integration cooperation and heat constellation trace figure. IEEE Wirel. Commun. Lett. 2022, 11, 543–547. [Google Scholar] [CrossRef]
  32. Shen, G.; Zhang, J.; Marshall, A.; Cavallaro, J.R. Towards scalable and channel-robust radio frequency fingerprint identification for LoRa. IEEE Trans. Inf. Forensics Secur. 2022, 17, 774–787. [Google Scholar] [CrossRef]
  33. Tao, M.; Wang, C.; Fu, X.; Wang, Y. Survey of few-shot learning methods for specific emitter identification. J. Nantong Univ. 2023; early access. [Google Scholar]
  34. Huang, H.; Liu, M.; Gu, G.; Haris, G.; Adachi, F. Unsupervised learning-inspired power control methods for energy-efficient wireless networks over fading channels. IEEE Trans. Wirel. Commun. 2022, 21, 9892–9905. [Google Scholar] [CrossRef]
  35. Huang, H.; Peng, Y.; Yang, J.; Xia, W.; Gui, G. Fast beamforming design via deep learning. IEEE Trans. Veh. Technol. 2020, 69, 1065–1069. [Google Scholar] [CrossRef]
  36. Zhang, M.; Diao, M.; Guo, L. Convolutional neural networks for automatic cognitive radio waveform recognition. IEEE Access 2017, 5, 11074–11082. [Google Scholar] [CrossRef]
  37. Zhou, X.; He, X.; Zheng, C. Radio signal recognition based on image deep learning. J. Commun. 2019, 40, 114–125. [Google Scholar]
  38. Qu, Z.; Wang, W.; Hou, C.; Hou, C. Radar signal intra-pulse modulation recognition based on convolutional denoising autoencoder and deep convolutional neural network. IEEE Access 2019, 7, 112339–112347. [Google Scholar] [CrossRef]
  39. Peng, S.; Jiang, H. Modulation recognition using hierarchical deep neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 718–727. [Google Scholar] [CrossRef]
  40. Wang, Y.; Gui, G.; Gacanin, H.; Ohtsuki, T.; Sari, H.; Adachi, F. Transfer learning for semi-supervised automatic modulation classification in ZF-MIMO systems. IEEE J. Emerg. Sel. Top. Circuits Syst. 2020, 10, 231–239. [Google Scholar] [CrossRef]
  41. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  42. Li, Z.; Hoiem, D. Learning without forgetting. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 2935–2947. [Google Scholar] [CrossRef] [PubMed]
  43. O’Shea, T.J.; Corgan, J.; Clancy, T.C. Convolutional radio modulation recognition networks. In Proceedings of the International Conference on Engineering Applications of Neural Networks (EANN), Aberdeen, UK, 2–5 September 2016; pp. 213–226. [Google Scholar]
  44. West, N.E.; O’Shea, T. Deep architectures for modulation recognition. In Proceedings of the 2017 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), Baltimore, MD, USA, 6–9 March 2017; pp. 1–6. [Google Scholar]
  45. Sainath, T.N.; Vinyals, O.; Senior, A.; Sak, H. Convolutional, long short-term memory, fully connected deep neural networks. In Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, Australia, 19–24 April 2015; pp. 4580–4584. [Google Scholar]
  46. O’Shea, T.J.; Roy, T.; Clancy, T.C. Over-the-air deep learning based radio signal classification. IEEE J. Sel. Top. Signal Process. 2018, 12, 168–179. [Google Scholar] [CrossRef]
  47. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar] [CrossRef]
  48. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  49. Lin, Y.; Tu, Y.; Dou, Z.; Wu, Z. The application of deep learning in communication signal modulation recognition. In Proceedings of the 2017 IEEE/CIC International Conference on Communications in China (ICCC), Qingdao, China, 22–24 October 2017; pp. 1–5. [Google Scholar]
  50. Krzyston, J.; Bhattacharjea, R.; Stark, A. Complex-valued convolutions for modulation recognition using deep learning. In Proceedings of the 2020 IEEE International Conference on Communications Workshops (ICC Workshops), Dublin, Ireland, 7–11 June 2020; pp. 1–6. [Google Scholar]
  51. Cui, T.S.; Cui, K.; Huang, Y.H.; An, J. Convolutional neural network satellite signal automatic modulation recognition algorithm. J. Beijing Univ. Aeronaut. Astronaut. 2022, 48, 986–994. [Google Scholar]
  52. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50× fewer parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360. [Google Scholar]
  53. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  54. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 6848–6856. [Google Scholar]
  55. Wang, Y.; Gui, G.; Gacanin, H.; Ohtsuki, T.; Dobre, O.A.; Poor, H.V. An efficient specific emitter identification method based on complex-valued neural networks and network compression. IEEE J. Sel. Areas Commun. 2021, 39, 2305–2317. [Google Scholar] [CrossRef]
  56. Chollet, F. Xception: Deep Learning With Depthwise Separable Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258. [Google Scholar]
  57. Fu, X.; Gui, G.; Wang, Y.; Ohtsuki, T.; Adebisi, B.; Gacanin, H.; Adachi, F. Lightweight network and model Aggregation for automatic modulation classification in wireless communications. In Proceedings of the 2021 IEEE Wireless Communications and Networking Conference (WCNC), Nanjing, China, 29 March–1 April 2021; pp. 1–6. [Google Scholar]
  58. O’Shea, T.J.; West, N. Radio machine learning dataset generation with GNU radio. In Proceedings of the GNU Radio Conference (GRCon), Boulder, CO, USA, 12–16 September 2016; pp. 1–5. [Google Scholar]
  59. Maaten, L.; Hinton, G. Visualizing Data Using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
Figure 1. System framework of AMC based on the neural network.
Figure 2. The framework of the lightweight CVResNet.
Figure 3. The structure of ResNet.
Figure 4. Complex convolution.
Figure 5. Depthwise separable convolution: (a) depthwise convolution; (b) pointwise convolution.
Figure 6. Hybrid data augmentation.
Figure 7. Principle of rotation.
Figure 8. Principle of RandMix.
Figure 9. The classification accuracy of the proposed AMC method and comparative AMC methods.
Figure 10. The confusion matrices of the proposed AMC method and comparative AMC methods.
Figure 11. Results of ablation experiments for different methods.
Figure 12. Feature visualization.
Table 1. Simulation parameters.

| Simulation Parameters | Values |
| --- | --- |
| Data dimension | 2 × 128 |
| Number of $D$ | 3000 |
| Number of $D_{val}$ | 2000 |
| Optimizer | Adam |
| Loss | Cross-entropy loss |
| Epochs | 100 |
| Batch size | 512 |
| Learning rate | 0.001 |
Table 2. Dataset parameters.

| Simulation Parameters | Contents |
| --- | --- |
| Modulation types | BPSK, QPSK, 8PSK, 16QAM, 64QAM |
| Data format (I/Q) | 2 × 128 |
| Sampling rate offset standard deviation | 0.01 Hz |
| Maximum sampling rate offset | 50 Hz |
| Carrier frequency offset standard deviation | 0.01 Hz |
| Maximum carrier frequency offset | 500 Hz |
| No. of sine waves in frequency-selective fading | 8 |
| Sampling rate | 200 kHz |
| Noise | AWGN |
Table 3. The classification accuracy of the proposed AMC method and comparative AMC methods.

| Methods | SVM | CVResNet | Lightweight CVResNet | Proposed |
| --- | --- | --- | --- | --- |
| Accuracy | 51.50% | 94.30% | 65.95% | 96.40% |
| F1-Score | 0.441 | 0.943 | 0.636 | 0.964 |
Table 4. Results of ablation experiments.

| Methods | Rotate | RandMix | Rotate + RandMix (Proposed) |
| --- | --- | --- | --- |
| Accuracy | 88.85% | 67.40% | 96.40% |
Table 5. The parameters, FLOPs, and model sizes of CVResNet and the lightweight CVResNet.

| Network | Parameters | FLOPs | Size (KB) |
| --- | --- | --- | --- |
| CVResNet | 1,848,965 | 117,750,272 | 7376 |
| Lightweight CVResNet | 308,117 (83.34% ↓) | 19,115,008 (83.77% ↓) | 1424 (80.69% ↓) |