Article

An Efficient Data Augmentation Method for Automatic Modulation Recognition from Low-Data Imbalanced-Class Regime

1 College of Information and Communication, National University of Defense Technology, Wuhan 430000, China
2 School of Electrical Engineering, Naval University of Engineering, Wuhan 430000, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(5), 3177; https://doi.org/10.3390/app13053177
Submission received: 4 February 2023 / Revised: 24 February 2023 / Accepted: 26 February 2023 / Published: 1 March 2023
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

The application of deep neural networks to address automatic modulation recognition (AMR) challenges has gained increasing popularity. Despite the outstanding capability of deep learning in automatic feature extraction, predictions based on low-data regimes with imbalanced classes of modulation signals generally result in low accuracy due to an insufficient number of training examples, which hinders the wide adoption of deep learning methods in practical applications of AMR. The identification of minority-class samples can be crucial, as they tend to be of higher value. However, in AMR tasks, the problem of imbalanced classes in a low-data regime has received little attention and lacks effective solutions. In this work, we present a practical automatic data augmentation method for radio signals, called SigAugment, which incorporates eight individual transformations and effectively improves the performance of AMR tasks without additional searches. It surpasses existing data augmentation methods and mainstream methods for solving low-data and imbalanced-class problems on multiple datasets. By simply embedding SigAugment into the training pipeline of an existing model, it can achieve state-of-the-art performance on benchmark datasets and dramatically improve the classification accuracy of minority classes in the low-data imbalanced-class regime. SigAugment can be applied uniformly across different types of models and datasets and works out of the box.

1. Introduction

With extensive research and application of deep neural networks in various fields, the performance of deep learning-based automatic modulation recognition (AMR) has improved tremendously over traditional signal processing methods [1]. The key enablers of deep learning are the availability of large datasets, the advent of GPUs, and the continued development of deep network architectures [2]. AMR tasks based on deep learning cannot be performed without high-quality labeled data, which are required for both algorithmic research and practical deployment. In existing open-source datasets, the amount of data is the same for all modulation types [1,3,4]. However, deep learning-based AMR tasks face three real-world challenges: (1) insufficient data; (2) imbalanced classes; and (3) high computational demands when computational power is limited. Sensor data are typically skewed by class imbalance, i.e., a small number of classes have a large number of sample points, while the other classes have only a few samples, and the low-data regime problem may need to be addressed at the same time [5]. Data are crucial not only in communications but also in high-tech fields such as finance, automation, and blockchain [6,7,8]. Deep learning models struggle to achieve excellent performance when confronted with class imbalance in a low-data regime, resulting in poor classification performance for minority classes with insufficient data. This is a major challenge because misidentifying the minority class carries a higher cost: it represents valuable samples that are rare or expensive in nature.
A number of recent studies have focused on the problem of insufficient data and imbalanced classes, using techniques such as resampling and reweighting based on the sample size of each class. Resampling rebalances the data by increasing the weight of the minority class while decreasing the weight of the majority class. Upsampling is the most commonly used approach in recent years [9,10,11,12], but it can increase the likelihood of overfitting and significantly increase training complexity, particularly at higher oversampling rates, making it difficult for classifiers to achieve good generalization performance in testing or in practical applications [13,14]. Reweighting sets the sample weights inversely proportional to the class frequencies; its most successful application is in target detection, for balancing the background class against the other classes [15]. Recent research on large, real-world long-tail datasets has revealed that this strategy performs poorly [16]. In contrast, data augmentation methods expand the dataset by transforming the original data to generate synthetic samples [17,18,19,20,21]. It is a simple and practical technique, more widely used than other methods because it can be applied effectively to various types of class imbalance and low-data regime problems. Although the straightforward application of data augmentation may exacerbate the class imbalance, the method is extremely effective at suppressing overfitting. Furthermore, data augmentation can be combined with class rebalancing and module improvement [22]. In this paper, we focus only on effective data augmentation for AMR in the low-data imbalanced-class regime; the accompanying balancing methods and model designs are beyond the scope of this paper.
Research into data augmentation methods for modulated signals has received little attention in comparison to computer vision and speech [18]. Existing research has shown that data augmentation can significantly improve the classification performance of deep learning models under class-balanced conditions. The main question at this point is how to create effective data augmentation methods and strategy combinations to improve modulation recognition performance in low-data regimes with class imbalance. Domain characteristics, independence from the model and data distribution, and computational complexity must all be considered when developing data augmentation methods for modulated signals. Signal samples differ from image data in their intrinsic characteristics, the most notable of which are temporal dependency and spatial dependency, where spatial correlation refers to the correlation between the I and Q components [23]. Signal samples can be transformed in both the time and frequency domains, so corresponding data augmentation methods can be designed in the transformed domains, but the temporal complexity of these transformations needs to be considered [24]. Only two augmentation methods have been shown to be effective in the current literature, namely rotation and flip [18], and further research is needed to design more effective methods. Existing data augmentation methods have yet to be tested for adaptability to different models and datasets. Data augmentation methods for signal data in simple channel environments, for example, may be ineffective for data in complex channel environments; methods designed for recurrent neural networks (RNNs) might not work for convolutional neural networks (CNNs), and methods for frequency domain data might not work for time domain data. Existing research has not investigated data augmentation methods that are effective for multiple types of models. As a result, selecting and combining augmentation algorithms is challenging.
Despite numerous works devoted to providing optimal recognition accuracy, model design solutions, and data processing means for AMR tasks, current research on these tasks suffers from the following problems: (1) There has not been enough attention paid to the problems of insufficient data and imbalanced classes, which are prevalent in practical applications; (2) There are many methods used to deal with the aforementioned problems in the image domain, but they are all difficult to transfer directly to modulated signal classification tasks. (3) Existing data augmentation methods can vary significantly in their performance for different models.
In this paper, we focus on the problems of AMR tasks in the low-data imbalanced-class regime. Our approach is to use automated data augmentation techniques: we design a set of individual data transformations based on the characteristics of the modulated signal, and the augmentation sequence for each data sample is then randomly selected from this set. We present four new augmentation transformations and one combination method based on radio signal characteristics: channel shuffle, inversion, split and permutation, scaling, and flip and channel shuffle. In experiments, empirical results on several datasets with imbalanced classes and small sample sizes show that our method significantly outperforms existing state-of-the-art (SOTA) methods. Furthermore, extensive validation experiments and ablation studies corroborate this preliminary finding.
Our approach differs from previous work in the following ways: (1) When compared to RandAugment data augmentation, our approach does not require setting hyperparameters or grid search; additionally, RandAugment targets image data while SigAugment targets communication signal data; (2) When compared to existing individual data augmentation methods, SigAugment automatically selects augmentation sequences during training, which is more adaptive to the model and data; (3) When compared to existing resampling methods, our method implements online data transformation with little to no additional training overhead and is better at combating overfitting and improving model generalization performance.
We summarize the key contributions of this paper as follows.
  • To the best of our knowledge, this is the first study of class imbalance modulation recognition in low-data regimes, and it will serve as a reference for researchers and the community in order to better understand the problem of class imbalance modulation recognition using data augmentation methods;
  • We demonstrate that existing rebalancing methods are limited in their ability to solve the class imbalance and small sample size problems in AMR tasks, because they increase training costs while providing only very limited improvements in recognition performance;
  • We propose a new automated data augmentation method for modulated signal data, called SigAugment, which can be used without additional hyperparameters compared to existing methods and can be adapted to models with different structures. Experimental results demonstrate that SigAugment can outperform existing SOTA methods on different types of datasets. In particular, the gain is significant on small datasets with unbalanced classes.

2. Related Works

Due to the accumulation of large data volumes and the widespread use of deep neural networks, deep learning-based AMR methods have outperformed traditional methods in terms of performance. Existing research has concentrated on developing deep neural network models based on open-source datasets and conducting studies on improving recognition accuracy, real-time performance, and the trade-off between the two [1,3,4,23,25,26,27,28,29,30,31,32,33,34]. Studies on the representation of radio signal data have also begun, which will provide additional domain knowledge for deep learning-based automatic modulation identification methods [24]. Furthermore, data augmentation techniques have been shown to improve model classification accuracy [18,30]. In practice, however, AMR as a time series classification task is frequently subject to class imbalance and low-data regime problems. Recent studies have used few-shot learning techniques to address modulated signal recognition with small sample sizes [5]. In contrast, little attention has been paid to class imbalance in modulated signal data, even though the class imbalance problem has received extensive attention and discussion in the fields of computer vision, speech, and text [13,14,16,35,36]. The synthetic minority oversampling technique (SMOTE), which oversamples to artificially mitigate the imbalance, is a conventional approach to the imbalanced classification problem [37]. This method is widely used, and there are numerous variations of it [9,10,11,12].
However, the resampling strategy may alter the distribution of the original data, compromising the model’s performance. Over-sampling, in particular, generates a large amount of redundant data, reducing model training efficiency, and the strategy overfits minority classes, making it difficult to transfer the model’s excellent performance on the training set to the test data, often resulting in poor generalization performance on very imbalanced data. On the other hand, downsampling can result in significant information loss in the sample and even underfitting. The cost-sensitive reweighting technique is another alternative. Reweighting is the process of assigning different weights to different classes or samples, primarily to solve the class imbalance problem by reweighting the losses of different classes [15,35]. Samples from minority classes have a higher loss than samples from majority classes because the features learned in minority classes tend to be weaker. This can easily lead to the model overfitting the minority class [36]. A side effect of assigning higher weights to hard samples is the focus on noisy data. Assigning high weights to samples in modulation recognition tasks, which frequently contain a large number of samples with very low signal-to-noise ratios (SNRs), may reduce overall recognition performance. Other approaches are discussed in the recent review [13].
Data augmentation techniques are a simpler and more effective approach that is widely used for tasks such as classification, target detection, and speech recognition [18,19,20,21,30,38,39], and they are especially effective for class imbalance and low-data regime problems [40]. To the best of our knowledge, rotation and flip are the known effective modulation signal augmentation methods [18], and it has been experimentally demonstrated that, in low-data regimes, training with data augmentation can achieve higher recognition accuracy than training on the entire dataset without any data augmentation. Modulated signal data augmentation for imbalanced-class and low-data regimes, on the other hand, has not been thoroughly investigated.

3. Methods

3.1. Signal Model

A typical communication system is made up of three parts: a transmitter, a channel, and a receiver. To transmit wireless signals over long distances in the air, the transmitter must modulate the low-frequency baseband signal onto the high-frequency carrier signal in a specific way. After receiving the modulated signal, the receiver applies the inverse process to separate the baseband signal from the modulated signal. Identifying the signal modulation type is a critical step in signal demodulation. Signals received by the receiver are typically represented by in-phase and quadrature (IQ) components; two signals are in quadrature when their phases are 90 degrees apart. In mathematical terms, the samples obtained after sampling the signals can be expressed as L-dimensional arrays with two channels, denoted as follows:
$$x = \begin{bmatrix} I \\ Q \end{bmatrix}$$
where $I = [i_1, i_2, \ldots, i_L]$ and $Q = [q_1, q_2, \ldots, q_L]$. In addition to the IQ representation, amplitudes and phases (APs) can also be used to represent the signals:
$$x = \begin{bmatrix} A \\ \phi \end{bmatrix}$$
where $A = \sqrt{I^2 + Q^2}$ and $\phi = \arctan2(Q, I)$. A single sample in a deep learning-based AMR task is typically represented as $(x, y)$, where $y$ denotes the label of sample $x$.
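For concreteness, the following minimal numpy sketch shows the IQ-to-AP conversion defined above; the function name `iq_to_ap` and the toy sample are ours for illustration.

```python
import numpy as np

def iq_to_ap(x):
    """Convert a 2xL IQ sample (row 0 = I, row 1 = Q) to its
    amplitude/phase (AP) representation of the same shape."""
    i, q = x[0], x[1]
    amplitude = np.sqrt(i ** 2 + q ** 2)
    phase = np.arctan2(q, i)  # four-quadrant arctangent
    return np.stack([amplitude, phase])

# Toy sample of length L = 4
x = np.array([[1.0, 0.0, -1.0, 0.5],
              [0.0, 1.0,  0.0, 0.5]])
print(iq_to_ap(x).shape)  # (2, 4)
```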

3.2. Data Augmentation Methods for AMR

Data augmentation is a set of techniques for artificially increasing the amount of data by generating new samples from existing data. This includes applying small transformations to data, combining data samples, and generating new data samples with deep learning models. Data augmentation is frequently used to prevent overfitting, improve model generalization, address class imbalances, and reduce the cost of collecting and labeling data. A number of traditional data augmentation methods have been successfully applied to computer vision and speech-related tasks, achieving SOTA performance. The basic linear data augmentation methods are rotation, scale, shear, and flip, while non-linear data augmentation methods include mosaic [19] and mixup [20]. Non-linear augmentation methods and their combinations are typically more effective than simple linear augmentation methods in tasks such as image classification and target detection. The same data augmentation methods that have been used successfully in the image domain are now widely used in speech. It is standard practice to first transform the raw speech signal into a Mel spectrogram before applying a data augmentation transform [41]. If an end-to-end architecture is used and the raw audio data are directly fed into a deep neural network model, widely used and proven methods for audio data augmentation include pitch shifting, time stretching, and random frequency filtering [21]. In the medical field, various data augmentation methods and combinations of methods for data from wearable sensors have been validated. These data augmentation methods are based on domain knowledge and thus enable efficient expansion of the data.
Modulated signal data augmentation can be seen as the injection of a priori knowledge about the invariant properties of radio signal data under certain transformations. In realistic scenarios, the data used for model training are obtained from a limited number of settings, while the target application of the model may operate under different conditions, such as fluctuating channel environments. The amount of data therefore has a decisive impact on the performance of the model. Deep learning models, moreover, usually contain huge numbers of parameters in order to enhance their representational capabilities. If there is a mismatch between the amount of data and the size of the model, the model is prone to overfitting. Data augmentation can expand the input space not covered by the data, implicitly increasing the amount of data and extending its diversity, preventing the model from overfitting the training data, and improving the generalization ability of the deep learning model on the test data.
However, the most critical issue facing practical applications is how to design data augmentation methods for the model so as to obtain greater improvements in the model's performance. This process relies on domain knowledge. Modulated signal data are a type of time series data, but unlike electrocardiogram data in the medical field [42], the goal is not to distinguish an event within a regular signal sequence but to recognize a pattern of signal sample points over a period of time, which is difficult to detect intuitively in the absence of expert knowledge. Therefore, when designing augmentation methods for modulated signals, we must not only focus on the commonalities with other time-series data but also take into account the unique characteristics of modulated signals. In particular, a modulated signal sample consists of I and Q components, which are closely related at each point in time, as is evident from the AP representation of the signal; it may therefore be difficult to obtain useful synthetic data by transforming the I or Q component alone.
Based on this principle, we propose four new label-preserving transformations and one combination transformation for modulated signals. Three augmentation methods were proposed in existing work [18], i.e., jitter, rotation, and flip, and five augmentation methods are proposed in this paper, i.e., channel shuffle, inversion, split and permutation, scaling, and flip and channel shuffle. As shown in Figure 1, we randomly selected CPFSK and QPSK samples from the dataset to demonstrate the effect of the eight augmentation transformations. A brief description of each transformation is given below.
The Jitter (Jit) method augments the original samples with noise. Deep neural networks typically overfit when they learn high-frequency features that may or may not be useful. Gaussian noise with a zero mean contains data points at almost every frequency, effectively distorting high-frequency features; lower-frequency components are distorted as well. Model learning can thus be improved by adding a moderate amount of noise. Unlike previous research, in which Gaussian noise is added with a fixed standard deviation, our augmentation method is sample dependent: the standard deviation $\sigma$ of the Gaussian noise is set to 5% of each sample's standard deviation. The augmented sample $\tilde{x}$ is denoted as follows:
$$\tilde{x} = x + \mathcal{N}(0, \sigma^2)$$
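A minimal sketch of Jit, assuming our reading that "sample dependent" means the noise scale is tied to each sample's own standard deviation; the function name is ours.

```python
import numpy as np

rng = np.random.default_rng()

def jitter(x, ratio=0.05):
    """Sample-dependent jitter: add zero-mean Gaussian noise whose
    standard deviation is `ratio` (5%) of the sample's own std."""
    sigma = ratio * x.std()
    return x + rng.normal(0.0, sigma, size=x.shape)
```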
Rotation (Rot) is currently considered the most efficient transformation for AMR tasks [18]. Rot rotates the constellation diagram of the signal clockwise by $\theta$, $\theta \in \{0, \pi/2, \pi, 3\pi/2\}$. In terms of the constellation diagram, the rotation transformation is thus spatially invariant. We can then obtain an augmented signal sample $(\tilde{A}, \tilde{\phi})$ [18]:
$$\begin{bmatrix} \tilde{A} \\ \tilde{\phi} \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} A \\ \phi \end{bmatrix}$$
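An equivalent way to realize the constellation rotation is to multiply the complex baseband signal by $e^{j\theta}$, which rotates every (I, Q) point by $\theta$; the sketch below takes that view (function name ours):

```python
import numpy as np

rng = np.random.default_rng()

def rotate(x):
    """Rotate the constellation by a random theta in {0, pi/2, pi, 3pi/2}.
    For this symmetric set of angles the rotation direction is immaterial."""
    theta = rng.choice([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
    c = (x[0] + 1j * x[1]) * np.exp(1j * theta)
    return np.stack([c.real, c.imag])
```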
Flip inverts the sign of the IQ component [18], so the transformation exists in four cases: the I component is converted to a negative value, the Q component is converted to a negative value, both the I and Q components are converted to negative values, and both the I and Q components are held constant. This can be expressed mathematically as follows:
$$\begin{bmatrix} \tilde{I} \\ \tilde{Q} \end{bmatrix} = \begin{bmatrix} \alpha I \\ \beta Q \end{bmatrix}$$
where $\alpha, \beta \in \{-1, 1\}$.
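A sketch of Flip with $\alpha$ and $\beta$ drawn independently, covering all four cases above (function name ours):

```python
import numpy as np

rng = np.random.default_rng()

def flip(x):
    """Flip: independently negate (or keep) the I and Q components,
    i.e., draw alpha, beta from {-1, +1}."""
    alpha, beta = rng.choice([-1.0, 1.0], size=2)
    return np.stack([alpha * x[0], beta * x[1]])
```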
Channel shuffle (CS) swaps the channels of the IQ array. This transformation does not change the correspondence between the I and Q components at each time point, and the values do not change; only the channels are shuffled. Swapping the two channels does not change the labels of the samples, and this transformation reduces the dependence of the model on the relationship between the locations of the IQ components. The augmented data are represented as follows:
$$\begin{bmatrix} \tilde{I} \\ \tilde{Q} \end{bmatrix} = \begin{bmatrix} Q \\ I \end{bmatrix}$$
Inversion (Inv) reverses the time series. This transformation exploits the fact that the AMR task must discover patterns within the cycles of the IQ data, and these patterns do not change when the sequence is reversed. Reversing the sequence also reduces the model's reliance on the order of preceding and following points and shifts its focus to the signal's global properties:
$$\begin{bmatrix} \tilde{I} \\ \tilde{Q} \end{bmatrix} = \begin{bmatrix} i_L, i_{L-1}, \ldots, i_1 \\ q_L, q_{L-1}, \ldots, q_1 \end{bmatrix}$$
Split and permutation (SP) divides the IQ sample into S segments, which are then randomly permuted, with S ranging from 1 to 8. The greater the value of S, the more the intrinsic temporal pattern of the sample is disrupted, reducing the model's reliance on high-frequency useless features and shifting its focus to local period features. However, S should not be too large: for higher-order modulated signals that require more sample points to form their intrinsic mode, such as QAM64, slicing the signal too thinly will prevent the model from effectively capturing the full modulation pattern.
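A sketch of SP, assuming S is drawn uniformly from 1 to 8 per call and that I and Q are cut at the same time indices so the components stay aligned (function name ours):

```python
import numpy as np

rng = np.random.default_rng()

def split_and_permute(x, max_segments=8):
    """Split-and-permutation (SP): cut the (2, L) sample into S segments
    along the time axis and shuffle their order; S = 1 is a no-op."""
    num_segments = rng.integers(1, max_segments + 1)
    segments = np.array_split(x, num_segments, axis=1)
    order = rng.permutation(len(segments))
    return np.concatenate([segments[k] for k in order], axis=1)
```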
The scaling (Sca) transformation scales the IQ components by a random number with a mean of 1 and a variance equal to the sample variance. The scaled sample is as follows:
$$\tilde{x} = x \cdot \mathcal{N}(1, \sigma^2)$$
The flip and channel shuffle (FCS) transformation is a combination of the Flip and CS transformations, which means it can take eight different forms.
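Sketches of the remaining transformations follow; for Sca we use a single scalar factor per sample, matching "a random number with a mean of 1", and for FCS we compose a random flip with an optional channel swap, giving the eight forms noted above (function names ours):

```python
import numpy as np

rng = np.random.default_rng()

def channel_shuffle(x):
    """CS: swap the I and Q channels."""
    return x[::-1].copy()

def inversion(x):
    """Inv: reverse the sample along the time axis."""
    return x[:, ::-1].copy()

def scaling(x):
    """Sca: multiply the sample by one random factor with mean 1 and
    standard deviation equal to the sample's own std."""
    factor = rng.normal(1.0, x.std())
    return x * factor

def flip_channel_shuffle(x):
    """FCS: Flip followed by an optional channel swap (8 forms total)."""
    alpha, beta = rng.choice([-1.0, 1.0], size=2)
    flipped = np.stack([alpha * x[0], beta * x[1]])
    return flipped[::-1].copy() if rng.random() < 0.5 else flipped
```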

3.3. SigAugment

The goal of data augmentation is to cover, as far as possible, situations that are not covered by the original input data through efficient transformations that may occur in test situations. It is not feasible to simply stack the data generated by all data augmentation methods together to achieve an increase in data volume. Since each data augmentation method performs differently in different models, stacking the data may compromise the performance of the model. One possible approach is to use an automatic data augmentation strategy [38]. AutoAugment is a method for learning augmentation strategies from data that designs a search space in which each strategy consists of many sub-strategies and uses a search algorithm to find the best strategy that allows the model to produce the highest validation performance on the target dataset. However, the search process of AutoAugment is expensive, and therefore it usually finds optimized hyperparameters through proxy tasks on small datasets where it is doubtful whether these hyperparameters are optimal on the target dataset. RandAugment [39] simplifies the space for augmented policy search and allows the search for hyperparameters to be carried out directly on the target dataset.
Therefore, we propose an efficient automatic data augmentation method for radio signal data named SigAugment. The execution of SigAugment during the model training process is shown in Figure 2. For a batch of IQ samples, SigAugment selects the data augmentation sequence according to one of two strategies: (1) selecting a constant number of transformations from the set of transformations or (2) selecting a random number of transformations from the set of transformations. Depending on the strategy, we name these variants SigAugment-C and SigAugment-R, respectively. The proposed SigAugment approach consists of two steps:
1. The first step is to define a set of data transformations. In this paper, these transformations included the following: jitter, rotation, flip, channel shuffle, inversion, split and permutation, scaling, and flip and channel shuffle. This set is extensible.
2. The second step is to select either the SigAugment-R or SigAugment-C method to obtain the transformation sequence. If SigAugment-R is used, a random number of transformation sequences are obtained for each sample in each training batch. If SigAugment-C is used, a constant number of transformation sequences is obtained for each sample in each training batch.
With just one line of code, the method can be used as a plug-and-play component for training any deep learning-based AMR model. SigAugment is an online data augmentation method that provides two alternative transform selection modes, SigAugment-C and SigAugment-R, both of which use a fixed transform magnitude. SigAugment-C inherits from RandAugment the use of a fixed number of transformations, while SigAugment-R uses a random number of transformations. Suppose, for example, that the transformation set contains $T = 8$ methods and SigAugment-C employs $N = 4$ transformations. If the effect of the order of the transformations is ignored, the possible SigAugment-C combinations number $\binom{T}{N} = 70$, while the possible SigAugment-R combinations number $\sum_{i=0}^{T} \binom{T}{i} = 2^T = 256$. In SigAugment, we used the eight data augmentation methods described in the previous section to construct the transformation pool. SigAugment-R randomly generates the number $N$ of transformations used for augmentation at each epoch, with $N \in [0, T]$, so the number of transformations applied by SigAugment-R may differ from epoch to epoch. A minimal sketch of both selection modes is given below.
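The sketch assumes the eight transformation functions defined in Section 3.2 above; names and defaults are ours:

```python
import numpy as np

rng = np.random.default_rng()

# Transformation pool, reusing the eight sketches from Section 3.2.
TRANSFORMS = [jitter, rotate, flip, channel_shuffle,
              inversion, split_and_permute, scaling, flip_channel_shuffle]

def sig_augment(x, mode="R", n_const=4):
    """Apply SigAugment to a single (2, L) sample.

    mode "C" (SigAugment-C): a constant number n_const of transforms.
    mode "R" (SigAugment-R): a random number N of transforms, N in [0, T].
    """
    t = len(TRANSFORMS)
    n = n_const if mode == "C" else int(rng.integers(0, t + 1))
    ops = rng.choice(np.array(TRANSFORMS, dtype=object), size=n, replace=False)
    for op in ops:  # apply the drawn augmentation sequence in order
        x = op(x)
    return x
```

In training, `sig_augment` would be applied to every sample of every batch, so the same sample sees a different augmentation sequence at each epoch.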

4. Results

4.1. Datasets and Empirical Settings

The RADIOML 2016.10A dataset (2016A) [3] was generated in GNU Radio using the GNU Radio channel model. The 2016A dataset contains 220,000 signal samples across 11 classes. Considering that the AM-SSB signal in the dataset may be channel noise, we removed it from the dataset. In addition, existing studies have shown that the average accuracy of existing models is below 20% when the SNR is below −8 dB [43]. Therefore, we chose an SNR range of −8 dB to 18 dB for the data; the censored dataset therefore contains 140,000 signal samples. To control the degree of data imbalance, we use the imbalance factor β from [44] to describe the severity of the class imbalance problem, defined as the ratio of training samples in the most frequent class to those in the least frequent class, i.e., $\beta = N_{max}/N_{min}$. The imbalance factors we use in experiments are 10, 20, and 50.
Table 1 shows the composition of each dataset. Each dataset consists of a training set, a validation set, and a test set; the training and validation sets are used for training sessions. The 2016A and 2016A-1 datasets are class-balanced datasets. The 2016A dataset here is the full dataset with the AM-SSB signals removed but with the signal samples below −8 dB retained; it is used for comparative analysis with existing methods. We follow the mainstream approach in the existing literature by dividing this dataset into training, validation, and test sets in a 6:2:2 ratio. The second dataset is the baseline dataset under class-balanced conditions, denoted 2016A-1, which contains the same test dataset as the five imbalanced datasets and the remaining 70,000 samples as training and validation sets. The ratio of training to validation sets on the 2016A-1 and imbalanced datasets is 3:1. The 2016A-10, 2016A-10a, 2016A-10b, 2016A-20, and 2016A-50 datasets are class-imbalanced datasets with an imbalance degree β of 10, 10, 10, 20, and 50, respectively.
Table 2 shows the number of samples per class in the datasets used for training sessions. We sampled from the original dataset using the default class order in the 2016A dataset to form the imbalanced datasets. The most frequent class to the least frequent class are, in order, as follows: 8PSK, AM-DSB, BPSK, CPFSK, GFSK, PAM4, QAM16, QAM64, QPSK, and WBFM. The number of samples per class in 2016A-10a and 2016A-10b is half and one-fifth, respectively, of the number in the 2016A-10 dataset. The 2016A-10a and 2016A-10b datasets are used to validate the effect of the number of samples on the performance of the algorithm for the same degree of imbalance. In addition, the difference between the 2016A-10b dataset and the 2016A-50 dataset is a fivefold difference in the number of the most frequent class. They can be used to analyze the effect of the number of head classes on the model.
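The exact per-class sample counts are listed in Table 2; as a purely illustrative sketch, an exponential profile of the kind commonly used with the imbalance factor β in [44] could be generated as follows (the function and the numbers are hypothetical, not the paper's schedule):

```python
import numpy as np

def exponential_class_sizes(n_max, num_classes, beta):
    """Per-class training sizes decaying exponentially from the most to
    the least frequent class, so that beta = N_max / N_min."""
    return [int(n_max * beta ** (-i / (num_classes - 1)))
            for i in range(num_classes)]

print(exponential_class_sizes(5000, 10, 50))  # head 5000 ... tail 100
```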
The training process is terminated when the validation loss does not decrease after 25 epochs or when the number of training epochs reaches 300. We used the Adam [45] optimizer and a warm-up learning rate with an initial learning rate of 0.001 to minimize the cross-entropy loss. The model was built with Tensorflow [46] and trained with two GPU cards.
We use the average classification accuracy over all classes and all SNRs to evaluate algorithm performance in our tests, as the distribution of classes in the test dataset is balanced. We report the average results of three independent experiments; the standard deviation is also used to measure the stability of the model. Assuming $C$ is the number of classes and $A_{ij}$ is the classification accuracy of the $i$th class in the $j$th experiment on a single model, the comparison accuracy is expressed as follows:
$$\text{Mean\_Average\_Accuracy} = \frac{1}{3C} \sum_{j=1}^{3} \sum_{i=1}^{C} A_{ij}$$
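Since the formula is simply the mean of $A_{ij}$ over the three runs and $C$ classes, it reduces to one numpy call; a sketch with hypothetical accuracies:

```python
import numpy as np

def mean_average_accuracy(acc):
    """acc: (3, C) array of per-class accuracies A_ij for three runs;
    the mean over all entries equals (1/3C) * sum_j sum_i A_ij."""
    return float(np.mean(acc))

accs = np.array([[0.90, 0.80],
                 [0.92, 0.78],
                 [0.88, 0.82]])  # 3 runs, C = 2 (illustrative values)
print(mean_average_accuracy(accs))  # 0.85
```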

4.2. Deep Learning Models for Evaluating Data Augmentation Methods

Three representative SOTA models are selected for the evaluation of the proposed data augmentation algorithm. These three models are DAE [27], SE-MSFN [29], and LSTM2 [28]. The reasons for selecting them for the algorithm evaluation are as follows: First, the selected models represent three different types of network structures: DAE, LSTM2, and SE-MSFN are auto-encoder, RNN, and CNN structures, respectively; second, they are superior in performance; DAE is a lightweight neural network for computationally constrained platforms, while LSTM2 and SE-MSFN, as representatives of high-accuracy models, achieved SOTA performance on several modulation recognition benchmark datasets. A detailed description of the selected models is given below.
The Denoising Auto-Encoder (DAE) [27] is a lightweight network structure that is based on the denoising auto-encoder and RNNs. The DAE’s inputs are L2-normalized amplitudes, and the classifier’s encoder is a two-layer LSTM, while the decoder is a shared, fully connected layer. Since the network output includes both the decoder and the classification output, the loss function includes both the reconstruction and classification losses.
LSTM2 [28] uses amplitudes and phases (APs) as model inputs, as does DAE, and the backbone is stacked with two layers of LSTM. LSTM2 is a simple structure with reasonable classification performance that is widely used in existing studies as a benchmark model.
SE-MSFN [29] is a CNN classifier that uses large kernel convolution, multi-scale structure, and a combined attention mechanism to extract features from original IQ samples. It has SOTA performance on several benchmark datasets and has the advantages of fast convergence and high recognition accuracy. At the same time, its large number of parameters and model complexity make it suitable for use on high-capacity platforms.

4.3. Comparison Methods

A detailed empirical comparison of 85 variants of the synthetic minority oversampling technique (SMOTE) is presented in [47]. Based on the comprehensive ranking of over-sampling techniques in that paper, we use the top four SMOTE variants as representatives of upsampling methods: polynome-fit-SMOTE [9], ProWSyn [10], SMOTE-IPF [11], and Lee [12].
Focal loss was first introduced in RetinaNet to address the imbalance between background classes and other classes in target detection tasks [15]. Focal loss is useful for classification in cases where the classes are highly imbalanced. It down-weights well-classified examples and focuses on hard samples: for a sample misclassified by the classifier, the loss value is much higher than the corresponding loss value for a well-classified sample. The focal loss adds a modulating factor $\alpha_t (1 - p_t)^{\gamma}$ to the cross-entropy loss, with a tunable focusing parameter $\gamma$ and a balancing factor $\alpha_t$, where $p_t$ is the class probability estimated by the model. Focal loss is defined as follows:
$$FL(p_t) = -\alpha_t (1 - p_t)^{\gamma} \log(p_t)$$
As we have several unbalanced datasets in our experiments, we used the parameter settings recommended for focal loss, i.e., $\alpha_t = 0.25$ and $\gamma = 2.0$. We implement the focal loss method with TensorFlow's add-on interface, tensorflow_addons.losses.SigmoidFocalCrossEntropy.
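For reference, wiring this loss into a Keras training pipeline looks roughly as follows; the stand-in model is ours for illustration, and only the loss settings come from the text:

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Focal loss with the settings used in our experiments
# (alpha_t = 0.25, gamma = 2.0); labels are expected to be one-hot.
loss_fn = tfa.losses.SigmoidFocalCrossEntropy(alpha=0.25, gamma=2.0)

model = tf.keras.Sequential([tf.keras.layers.Dense(10)])  # stand-in model
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss=loss_fn)
```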
Data augmentation methods. The data augmentation methods used for comparison include individual data augmentation methods such as rotation, flip, and jitter [18], as well as the five augmentation methods proposed in this paper.

4.4. Baseline Experiments

We set up three sets of baseline experiments. The goals of the baseline experiments include determining the best hyperparameters, evaluating the performance of the baseline models, and assessing the effect of the datasets on the baseline models.
In the first set of experiments, we validated the accuracy and stability of the selected evaluation models DAE [27], SE-MSFN [29], and LSTM2 [28] for various batch sizes, since hyperparameter settings have a significant impact on the training of deep learning models [48,49]; in particular, we found during our experiments that the batch size strongly affects model training. The test results on the 2016A dataset are shown in Figure 3, where the vertical error bars represent the standard deviation of the three experiments; the shorter the error bars, the better the model stability. On the balanced dataset, we can see that (1) the recognition accuracy of the three models decreases as the batch size increases; (2) SE-MSFN outperforms LSTM2 and DAE in recognition accuracy; and (3) SE-MSFN and LSTM2 are more stable than the lightweight model. We set the batch size of all subsequent experiments to 128 to ensure a fair comparison and to account for performance, training efficiency, and stability.
In the second set of experiments, we analyze and compare the performance of the baseline models on the datasets 2016A, 2016A-1, 2016A-10, 2016A-20, and 2016A-50. As can be seen from Figure 4, the performance of the baseline models tends to decrease as the degree of data imbalance increases and the amount of data decreases. The best-performing model on each dataset is SE-MSFN, which not only has consistently optimal recognition accuracy but also good model stability. SE-MSFN and LSTM2 perform similarly on both the well-balanced and the slightly imbalanced datasets. At imbalance factors of 20 and 50, the accuracy of LSTM2 decreased sharply: the average recognition accuracy of SE-MSFN over the three experiments exceeded that of LSTM2 by 15.51% and 14.79%, and LSTM2 was 11.70% and 7.15% lower than the lightweight DAE model, respectively. Figure 5 shows the training process of LSTM2. The model was trained with high accuracy on the 2016A-1 and 2016A-50 datasets, and the validation loss started to increase at around 40 epochs, while the validation accuracy leveled off. We used an early stopping strategy to avoid overfitting by monitoring the validation loss. However, the validation accuracy of the model on 2016A-1 was significantly higher than on 2016A-50. As training progressed, the gap between the model's training and validation accuracy grew significantly larger on 2016A-50 than on 2016A-1, indicating that LSTM2 overfitted more severely on the small-scale imbalanced dataset.
In the third set of experiments, we investigate the two models with the best average recognition performance (SE-MSFN and LSTM2) on a class-balanced dataset for each category. Figure 6 shows the results. We can easily see that the average recognition accuracy of both models for WBFM signals, which are hard samples, is less than 40%. This is because the WBFM signal is modulated from a real audio stream that contains silent periods [28], which makes effective identification difficult for all existing methods. In addition, as can be seen from Figure A1, the models easily confuse QAM16 and QAM64, since the two belong to the same modulation family and QAM16 is a subset of QAM64. When the SNR is below 0 dB, the 8PSK and QPSK signals have lower recognition accuracy, resulting in an average recognition accuracy of less than 80%. According to our rules for constructing the class-imbalanced data, the signals of the four modulation types QAM16, QAM64, QPSK, and WBFM belong to the minority classes, the tail-end classes of the long-tailed distribution. As a result, it will be more difficult to identify these four signals in the class-imbalanced datasets.

4.5. Results of Data Augmentation Methods on the Balanced Datasets

We compared the performance of various data augmentation methods on two class-balanced datasets. These methods include Jit, Rot, Flip, CS, Inv, SP, Sca, FCS, and SigAugment. Experimental results are shown in Table 3 and Table 4.
On the benchmark dataset 2016A, SE-MSFN achieves an optimal recognition accuracy of 62.96%, whereas using the SigAugment proposed in this paper, a SOTA mean average accuracy of 64.44% can be achieved on the LSTM2 model without changing the model structure. The average accuracy of the DAE and SE-MSFN models also improved, from 58.45% and 62.96% to 62.63% and 63.94%, respectively. The improvement was greater for the lightweight model than for larger models such as SE-MSFN (4.18% vs. 0.98%). Individual augmentation transforms, for the most part, improve the model's recognition accuracy, but some have a negative effect on a specific model; for example, training SE-MSFN on the 2016A dataset with Rot and FCS decreased the accuracy by 0.24% and 0.68%, respectively. Individual transformations perform differently across models and datasets: the most helpful individual transformations for DAE, SE-MSFN, and LSTM2 are Rot, CS, and FCS on the 2016A dataset, and FCS, SP, and Rot on the 2016A-1 dataset, in that order. Our proposed SigAugment method, in contrast, works consistently, and its two transformation selection modes differ only slightly across datasets and models. This demonstrates that SigAugment can be applied effectively to a variety of datasets and deep neural network models.
On the 2016A-1 dataset, the gains from the three baseline models using SigAugment are 5.73%, 3.98%, and 5.48%, respectively, which are 1.55%, 3.00%, and 3.14% higher than the gains obtained on the 2016A dataset. On the 2016A dataset, which is widely used in existing studies, the best available average recognition accuracy of 64.44% is achieved without changing the model structure, using only SigAugment-C to train LSTM2. At 0 dB, the three experiments achieve accuracies of 91.05%, 91.60%, and 91.85%, respectively, compared to 88.30%, 87.05%, and 87.80% without any data augmentation. When used to train SE-MSFN, the two augmentation methods Rot and FCS performed significantly differently on the two datasets: they have a negative impact on SE-MSFN recognition performance on the 2016A dataset, which contains extremely low SNRs (SNR < −8 dB), but perform well on the 2016A-1 dataset. We suspect that applying the Rot and FCS transforms to signals with very low SNRs interferes with the model's training, resulting in poor performance.

4.6. Results on Class-Imbalanced Datasets

On the class-imbalanced datasets, the comparison is divided into two parts: the comparison of the proposed methods with the existing representative class rebalancing methods on the class-imbalanced datasets and the comparison of the representative methods in terms of the recognition accuracy of each modulation class.
On the three class-imbalanced datasets, Table 5 compares the performance of the proposed methods to the representative class rebalancing methods and individual transformations. Based on the experimental results in the table, the following discussion and analysis will expand from top to bottom and from left to right.
As the degree of imbalance in the dataset increases, the performance of the baseline models decreases; SE-MSFN outperforms DAE and LSTM2 in terms of stability. First, we investigate the performance of each model on each dataset. The performance gap between the lightweight model DAE and the other two models is larger than on the balanced dataset. Among the class rebalancing methods, focal loss helps recognition only when training a specific model on a specific dataset and is usually comparable to the baseline, i.e., training with the cross-entropy loss function. The four up-sampling methods performed very differently. When training LSTM2 on the 2016A-20 dataset, the most helpful methods are Polynom_fit_SMOTE_poly and SMOTE_IPF, with accuracy gains of 14.12% and 14.80%, respectively. In most other cases, the gains in recognition accuracy are negligible, and the models frequently struggle to converge when training LSTM2 and DAE. We calculate the mean accuracy values for each of the four up-sampling methods and compare them to the baseline, with gains of 1.74%, 2.58%, −34.43%, −1.28%, 1.12%, 3.63%, −25.66%, −5.32%, and 17.55% on the three datasets in that order. When training DAE and LSTM2, we can see that the up-sampling methods can cause significant decreases in accuracy on the three datasets. The overall improvement in performance is minor and highly variable. Furthermore, the up-sampling methods generate data offline, which can significantly increase the number of training samples and thus reduce model training efficiency.
Our proposed online data augmentation strategies do not explicitly increase the number of samples, but rather transform the original training data at each epoch of training, only increasing the computational cost associated with the transformations. Similarly, we compute the average accuracy of SigAugment-C and SigAugment-R across models and datasets, with gains of 3.70%, 9.78%, 13.24%, 6.65%, 14.01%, 31.46%, 0.62%, 12.60%, and 29.92% on the three datasets in that order. With an average gain of 24.88% across the three datasets, SigAugment had the greatest improvement for the LSTM2 model.
Furthermore, the greater the degree of imbalance in the dataset and the smaller the data size, the greater the gain from data augmentation. This is consistent with the expectation that smaller datasets require more regularization. Furthermore, when we compare the performance of SigAugment on the 2016A-50 dataset to the baseline model on the 2016A dataset, we find that even though the number of samples in the training set is reduced by a factor of 5 while the degree of sample imbalance increases by a factor of 5, on the same test set, our data augmentation strategies achieve recognition accuracies comparable to the baseline model (60.16% vs. 66.05%, 70.93% vs. 71.08%, and 72.61% vs. 70.06%). Figure 7 shows the loss and accuracy of LSTM2 training on the 2016A-50 dataset using the proposed SigAugment-R method. By comparing with Figure 5b, it is clear that the gap between the training loss and validation loss of LSTM2 is quite small by using SigAugment-R. This shows that SigAugment-R is effective in preventing overfitting.
In addition, we report the results of our experiments on 2016A-10a and 2016A-10b in the Appendix, and the results are shown in Table A1. We can draw similar conclusions. Comparing the results of the model on 2016A-10b and 2016A-50, we find that the sample size of the head category has a greater impact on the resampling approaches, as it can lead to more severe overfitting problems in the model. In contrast, it has less impact on the individual data augmentation transformations and almost no impact on our proposed automated data augmentation method.
The individual data augmentation methods are scored according to the following rules: we rank all transformations from highest to lowest gain, with the top three receiving one point each and the rest receiving none. The maximum total score across three models and three datasets is nine. Figure 8 shows the scoring results. The top three scoring transforms are Rot, SP, and FCS, with SP and FCS being new to this paper; the Sca and Jit transforms rarely outperform the other augmentation methods because they only add perturbations to the original signals.
Based on the average recognition accuracy of the models, we conducted a comparative analysis of the models and methods. Figure 9 shows the recognition accuracy of various modulation types at various SNRs on the 2016A-50 dataset. We analyze the model with the highest median average accuracy across the three experiments. The data augmentation methods proposed in this paper can effectively improve the performance of the model on class-imbalanced datasets. In particular, for the minority classes (PAM4, QAM16, QAM64, QPSK, and WBFM), training LSTM2 with SigAugment-R improved the average recognition accuracy on these five classes from 3.23% to 51.86%. When the SNR is above 0 dB, LSTM2 with SigAugment-R increased the average recognition accuracy of all classes from 49.94% to 83.38%.
Figure 10 shows the results for another model, SE-MSFN. Comparing with Figure 9, we see that the WBFM signal is a hard sample for both models. LSTM2 identifies QAM16 better than SE-MSFN, while SE-MSFN has better results for QAM64. Both models show significant recognition gains for minority class samples and high-SNR cases after using our data augmentation method. These results show that the proposed SigAugment effectively mitigates the effects of class imbalance and the low-data regime.

4.7. Ablation Study

We use the RandAugment [39] ablation experiment methodology to validate the contribution of each transformation to SigAugment: for each transformation, we measure the average change in test accuracy when it is added to the transformation set. Table 6 shows the results. SE-MSFN models were trained on the 2016A-10, 2016A-10a, 2016A-10b, 2016A-20, and 2016A-50 datasets with SigAugment-R for the ablation study. We can see that almost all of the transformations improve test accuracy, with the SP transformation being the most helpful for SigAugment-R. However, Table 5 shows that the Rot transformation alone achieves the top-two average recognition accuracy gains on the three datasets, whereas the gains from adding Rot to the pool (0.80%, 1.22%, 0.86%, 0.43%, and 2.98%) are relatively small compared to the other transforms for SigAugment-R, which does not appear to meet our intuitive expectations. Several experiments, however, confirmed this. We believe this is an open question, and the gain of individual transforms within combined augmentation merits a more in-depth investigation that is beyond the scope of this paper.
Figure 11 and Figure 12 depict in detail the effect of the number of transforms on the accuracy and stability of the SigAugment method. From the above four sets of experiments, we can see that the proposed SigAugment data augmentation methods are insensitive to hyperparameters. For different values of N, the variance in the results of the model tested on two small datasets is tiny. In general, SigAugment-R uses a maximum number of transforms equal to the total number of transforms in the transformation pool, and larger N values give higher gains, while for SigAugment-C, N = 4 seems to be a better choice. In practice, the transformation selection mode for SigAugment can thus be chosen based on the model and data.
Figure 8 depicts the top three transformations on the class-imbalanced dataset: Rot, SP, and FCS. To validate the effectiveness of the proposed method, we used the SE-MSFN to validate the performance of joint augmentation using the above three transformations on six datasets: 2016A-1, 2016A-10, 2016A-10a, 2016A-10b, 2016A-20, and 2016A-50. On all datasets, SigAugment outperforms the joint augmentation method, as shown in Table 7. Furthermore, when Table 5 and Table 7 are combined, we can see that the joint augmentation method outperforms the use of individual data augmentation methods.
In a real communication environment, the low-data category of modulation is not constant. We construct datasets with different majority and minority classes in two ways. (1) By reversing the default list of modulation classes, i.e., changing the order from 8PSK, AM-DSB, BPSK, CPFSK, GFSK, PAM4, QAM16, QAM64, QPSK, and WBFM to WBFM, QPSK, QAM64, QAM16, PAM4, GFSK, CPFSK, BPSK, 8PSK, and AM-DSB. (2) By randomly shuffling the list but fixing the random seeds during each training session so that all experiments are repeatable, the order of the modulation classes obtained is AM-DSB, CPFSK, WBFM, BPSK, 8PSK, QAM16, PAM4, GFSK, QPSK, and QAM64. This gives us six new unbalanced datasets. Detailed information can be found in Table A2 and Table A3 in Appendix A.3.
Based on previous experiments, the experimental protocol for these six datasets is to select one representative method from each of the rebalancing and data augmentation methods to experiment with. These two methods are Polynom_fit_SMOTE_poly and SigAugment-R. The experimental results are shown in Table A4 and Table A5. We obtained similar results to the previous experiments. However, we also note that for the DAE model, our proposed SigAugment method is not always valid. This may be related to the encoder structure of a network such as DAE. Overly complex data augmentation is difficult to encode and decode effectively for the DAE model.

5. Discussion

In this paper, we present an efficient data augmentation method for class imbalance modulation recognition in the low-data regime. In particular, we present four new label-preserving transformations and one combination transformation, as well as an efficient automatic data augmentation strategy that does not require a separate search for data augmentation policies. To validate their effectiveness, the proposed methods are thoroughly evaluated and compared using multiple types of datasets and three representative SOTA deep neural network models. Particularly in low-data, imbalanced-class regimes, the proposed method can significantly improve the recognition performance of deep neural network models.
There are still some intriguing unanswered questions. Despite the fact that we provide two methods for obtaining the final augmented sequence, there is no guarantee that the combination of augmentations obtained by these two methods is optimal. Is it then possible to use a multi-stage selection approach, such as selecting the more helpful transformations first, such as Rot and SP, and then randomly selecting other transformations with smaller contributions? Furthermore, questions such as how the individual augmentation methods in the SigAugment transform set interact with one another and whether some of these combinations cancel each other out need to be thoroughly investigated.
All of the transformations used in SigAugment are label-preserving. Label interpolation methods, such as mixup [20], have been shown to be more effective than label-preserving methods in a variety of tasks in the vision domain. Can label interpolation methods be extended to raw IQ or AP data in the communications signal domain? This could be a promising next direction. Based on our preliminary results, direct interpolation of samples and labels for mixing does not work; however, mixing samples from the same category while preserving the labels is worth exploring.
Based on the results in Table 5, we chose the best-performing class rebalancing method from each of the three baseline models on the 2016A-50 dataset and added these methods and their corresponding models to the SigAugment method for joint training, as shown in Table 8. We can see that this combined approach does not result in improved performance but rather lower recognition accuracy. It is an intriguing phenomenon that the two methods used in the combination both improve the model’s performance, but the combination fails. In order to find a solution, it may be worthwhile to examine the mechanism of the combination in greater detail in future work.
Furthermore, when we compare Figure 8 to Table 6, we see that the three best individual transformations are Rot, SP, and FCS, whereas for the automatic data augmentation method we evaluate the gain from a specific transformation by adding it to the transformation pool of SigAugment; by that measure, the top three methods are SP, Inv, and FCS, which is not the expected result. This highlights the fact that the relationship between the effectiveness of individual data augmentation methods and their combined overall effect is complicated. As a result, the expansion of the SigAugment transformation pool appears to be open and full of possibilities, and augmentation techniques that do not work well when applied independently might achieve notable benefits in SigAugment.

6. Conclusions

In this paper, we address the AMR task for class imbalance in the low-data regime from the perspective of extending the original data distribution. We do not follow the traditional idea of rebalancing the classes when the AMR tasks face both low-data and imbalanced-class regime problems, because this would lead to severe overfitting of the model, and the performance of different approaches varies widely across different datasets and deep neural networks. As a result, we propose a more fundamental data augmentation approach to address the problem of insufficient data and unbalanced classes. To augment the data online, we propose SigAugment, a practical and effective data augmentation strategy. Based on the 2016A dataset, we created four sub-datasets and thoroughly tested the algorithm on three representative SOTA models. The experimental results validate the effectiveness of the proposed method.
SigAugment can be seamlessly integrated into existing data generation pipelines with just one line of code, and the algorithm can be easily extended by researchers. We hope that the proposed method can be validated and applied in more practical scenarios.

Author Contributions

Conceptualization, S.W. and Z.W.; methodology, S.W.; software, S.W.; validation, S.W.; formal analysis, F.L.; resources, Z.W.; data curation, S.W.; writing—original draft preparation, S.W.; writing—review and editing, H.M., S.W., and Z.S.; visualization, F.L., Z.L., and H.M.; supervision, F.L., H.M., and Z.S.; project administration, H.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Key R&D Program of China under Grant No. 2022ZD0115300 and the Scientific Research Plan of the National University of Defense Technology under Grant No. ZK20-38.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available on request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Appendix A.1. Confusion Matrix and Recognition Accuracy for Each Modulation Class at Different SNRs

Figure A1. Confusion matrix of (a) SE-MSFN and (b) LSTM2 on the 2016A-1 dataset. Values less than 0.01 are omitted. Values greater than 0.5 are shown in purple; the darker the shade, the greater the confusion. Values between 0.05 and 0.5 are shown in yellow.

Appendix A.2. Results on Three Class-Imbalanced Datasets 2016A-10, 2016A-10a, and 2016A-10b

Table A1. On three class-imbalanced datasets, 2016A-10, 2016A-10a, and 2016A-10b, the proposed methods were compared to existing representative class rebalancing methods and individual transformations. The accuracy (%) is calculated as the mean of three experiments. The best-performing individual model results are shown in bold, while the best-performing dataset results are shown in bold and red.

Methods | 2016A-10 (DAE / SE-MSFN / LSTM2) | 2016A-10a (DAE / SE-MSFN / LSTM2) | 2016A-10b (DAE / SE-MSFN / LSTM2)
Baseline | 66.05 / 71.08 / 70.06 | 62.55 / 66.35 / 50.84 | 50.73 / 58.37 / 43.58
Focal loss | 65.84 / 73.55 / 70.25 | 62.47 / 67.29 / 51.09 | 58.32 / 58.17 / 43.25
Polynom_fit_SMOTE_poly | 66.95 / 74.04 / 35.26 | 63.36 / 68.35 / 46.51 | 44.37 / 59.53 / 47.44
ProWSyn | 67.28 / 73.46 / 18.19 | 63.69 / 67.25 / 63.03 | 50.22 / 61.16 / 26.32
SMOTE_IPF | 67.45 / 73.12 / 36.22 | 61.40 / 66.54 / 26.76 | 45.15 / 60.62 / 25.78
Lee | 69.47 / 74.05 / 52.86 | 62.49 / 67.91 / 57.72 | 44.75 / 61.63 / 34.38
Jit | 67.81 / 70.89 / 71.67 | 62.86 / 65.60 / 50.93 | 59.01 / 59.61 / 35.35
Rot | 72.94 / 77.78 / 81.15 | 65.01 / 73.28 / 73.16 | 58.90 / 67.07 / 59.07
Flip | 69.25 / 74.36 / 73.24 | 64.41 / 73.20 / 71.89 | 54.99 / 65.38 / 55.55
CS | 71.62 / 76.82 / 73.15 | 63.85 / 70.07 / 67.19 | 58.00 / 62.79 / 45.22
Inv | 67.29 / 74.07 / 70.84 | 63.72 / 69.27 / 62.29 | 50.62 / 60.86 / 37.12
SP | 69.21 / 77.57 / 76.52 | 64.59 / 73.14 / 75.27 | 59.75 / 66.42 / 55.72
Sca | 66.05 / 71.04 / 68.48 | 62.55 / 64.92 / 53.88 | 54.20 / 59.81 / 41.78
FCS | 70.39 / 75.41 / 76.69 | 64.76 / 66.76 / 66.82 | 49.59 / 68.04 / 67.27
SigAugment-C | 66.99 / 80.36 / 83.52 | 64.41 / 77.82 / 81.04 | 59.70 / 72.64 / 69.65
SigAugment-R | 72.51 / 81.36 / 83.07 | 64.92 / 78.49 / 80.18 | 61.50 / 73.26 / 69.01

Appendix A.3. Results on Six Class-Imbalanced Datasets 2016A-10-Re, 2016A-20-Re, 2016A-50-Re, 2016A-10-Ra, 2016A-20-Ra, and 2016A-50-Ra

Table A2. Number of samples per class in the datasets created by reversing the original class list.

Dataset | WBFM | QPSK | QAM64 | QAM16 | PAM4 | GFSK | CPFSK | BPSK | AM-DSB | 8PSK | Total Number
2016A-10-Re | 7000 | 6300 | 5600 | 4900 | 4200 | 3500 | 2800 | 2100 | 1400 | 700 | 38,500
2016A-20-Re | 7000 | 2800 | 2450 | 2100 | 1750 | 1400 | 1120 | 840 | 560 | 350 | 20,370
2016A-50-Re | 7000 | 1260 | 1120 | 980 | 840 | 700 | 560 | 420 | 280 | 140 | 13,300
Table A3. Number of samples per class in the datasets created by random shuffling of the original class list.

Dataset | AM-DSB | CPFSK | WBFM | BPSK | 8PSK | QAM16 | PAM4 | GFSK | QPSK | QAM64 | Total Number
2016A-10-Ra | 7000 | 6300 | 5600 | 4900 | 4200 | 3500 | 2800 | 2100 | 1400 | 700 | 38,500
2016A-20-Ra | 7000 | 2800 | 2450 | 2100 | 1750 | 1400 | 1120 | 840 | 560 | 350 | 20,370
2016A-50-Ra | 7000 | 1260 | 1120 | 980 | 840 | 700 | 560 | 420 | 280 | 140 | 13,300
Table A4. Results of the proposed SigAugment method trained on the 2016A-10-Re, 2016A-20-Re, and 2016A-50-Re datasets, reported as mean ± standard deviation accuracy (%). The top-1 mean average accuracy has been highlighted.

Methods | 2016A-10-Re (DAE / SE-MSFN / LSTM2) | 2016A-20-Re (DAE / SE-MSFN / LSTM2) | 2016A-50-Re (DAE / SE-MSFN / LSTM2)
Baseline | 66.25 ± 2.01 / 74.46 ± 1.03 / 65.55 ± 4.07 | 60.33 ± 0.27 / 63.25 ± 0.87 / 53.24 ± 0.14 | 46.93 ± 2.68 / 54.42 ± 0.60 / 48.77 ± 2.34
Polynom_fit_SMOTE_poly | 63.27 ± 1.53 / 43.61 ± 2.17 / 69.82 ± 1.72 | 62.25 ± 0.72 / 32.58 ± 4.44 / 61.66 ± 1.41 | 57.20 ± 1.63 / 25.11 ± 2.13 / 59.93 ± 1.43
SigAugment-R | 64.00 ± 0.09 / 79.41 ± 0.92 / 79.86 ± 0.17 | 61.52 ± 0.60 / 77.72 ± 0.14 / 77.76 ± 0.20 | 56.84 ± 1.99 / 71.30 ± 1.38 / 69.64 ± 0.79
Table A5. Results of the proposed SigAugment method trained on the 2016A-10-Ra, 2016A-20-Ra, and 2016A-50-Ra datasets, reported as mean ± standard deviation accuracy (%). The top-1 mean average accuracy has been highlighted.

Methods | 2016A-10-Ra (DAE / SE-MSFN / LSTM2) | 2016A-20-Ra (DAE / SE-MSFN / LSTM2) | 2016A-50-Ra (DAE / SE-MSFN / LSTM2)
Baseline | 65.34 ± 0.74 / 71.26 ± 0.29 / 70.46 ± 0.51 | 47.51 ± 0.57 / 62.14 ± 0.94 / 49.78 ± 1.98 | 48.81 ± 6.95 / 46.33 ± 3.47 / 44.82 ± 1.70
Polynom_fit_SMOTE_poly | 63.27 ± 1.53 / 43.61 ± 2.17 / 69.82 ± 1.72 | 59.16 ± 0.70 / 32.61 ± 4.31 / 59.32 ± 0.19 | 59.96 ± 0.13 / 23.62 ± 1.79 / 59.42 ± 0.53
SigAugment-R | 69.18 ± 1.56 / 80.51 ± 1.24 / 81.48 ± 0.21 | 64.11 ± 0.38 / 75.20 ± 0.74 / 74.28 ± 0.27 | 59.84 ± 0.55 / 66.87 ± 0.99 / 71.72 ± 0.54

References

  1. O’Shea, T.J.; Roy, T.; Clancy, T.C. Over-the-Air Deep Learning Based Radio Signal Classification. IEEE J. Sel. Top. Signal Process. 2018, 12, 168–179. [Google Scholar] [CrossRef] [Green Version]
  2. Bengio, Y.; Lecun, Y.; Hinton, G. Deep learning for AI. Commun. ACM 2021, 64, 58–65. [Google Scholar] [CrossRef]
  3. O’Shea, T.J.; West, N. Radio Machine Learning Dataset Generation with GNU Radio. In Proceedings of the GNU Radio Conference, Boulder, CO, USA, 12–16 September 2016; pp. 1–6. [Google Scholar]
  4. Tekbıyık, K.; Ekti, A.R.; Görçin, A.; Kurt, G.K.; Keçeci, C. Robust and fast automatic modulation classification with CNN under multipath fading channels. In Proceedings of the 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring), Antwerp, Belgium, 25–28 May 2020; pp. 1–6. [Google Scholar]
  5. Zhou, Q.; Zhang, R.; Mu, J.; Zhang, H.; Zhang, F.; Jing, X. AMCRN: Few-shot learning for automatic modulation classification. IEEE Commun. Lett. 2021, 26, 542–546. [Google Scholar] [CrossRef]
  6. Jin, T.; Xia, H. Lookback option pricing models based on the uncertain fractional-order differential equation with Caputo type. J. Ambient. Intell. Humaniz. Comput. 2021. [Google Scholar] [CrossRef]
  7. Ren, Z.; Han, X.; Yu, X.; Skjetne, R.; Leira, B.J.; Sævik, S.; Zhu, M. Data-driven simultaneous identification of the 6DOF dynamic model and wave load for a ship in waves. Mech. Syst. Signal Process. 2023, 184, 109422. [Google Scholar] [CrossRef]
  8. Xu, J.; Zhao, Y.; Chen, H.; Deng, W. ABC-GSPBFT: PBFT with grouping score mechanism and optimized consensus process for flight operation data-sharing. Inf. Sci. 2023, 624, 110–127. [Google Scholar] [CrossRef]
  9. Gazzah, S.; Amara, N.E.B. New oversampling approaches based on polynomial fitting for imbalanced data sets. In Proceedings of the 2008 the Eighth Iapr International Workshop on Document Analysis Systems, Nara, Japan, 16–19 September 2008; pp. 677–684. [Google Scholar]
  10. Barua, S.; Islam, M.M.; Murase, K. ProWSyn: Proximity weighted synthetic oversampling technique for imbalanced data set learning. In Proceedings of the Advances in Knowledge Discovery and Data Mining: 17th Pacific-Asia Conference, PAKDD 2013, Gold Coast, Australia, 14–17 April 2013; Springer: Berlin, Germany, 2013. Part II 17. pp. 317–328. [Google Scholar]
  11. Sáez, J.A.; Luengo, J.; Stefanowski, J.; Herrera, F. SMOTE–IPF: Addressing the noisy and borderline examples problem in imbalanced classification by a re-sampling method with filtering. Inf. Sci. 2015, 291, 184–203. [Google Scholar] [CrossRef]
  12. Lee, J.; Kim, N.R.; Lee, J.H. An over-sampling technique with rejection for imbalanced class learning. In Proceedings of the 9th International Conference on Ubiquitous Information Management and Communication, Bali, Indonesia, 8–10 January 2015; pp. 1–6. [Google Scholar]
  13. Branco, P.; Torgo, L.; Ribeiro, R.P. A survey of predictive modeling on imbalanced domains. ACM Comput. Surv. (CSUR) 2016, 49, 1–50. [Google Scholar] [CrossRef] [Green Version]
  14. Tarawneh, A.S.; Hassanat, A.B.; Altarawneh, G.A.; Almuhaimeed, A. Stop Oversampling for Class Imbalance Learning: A Review. IEEE Access 2022, 10, 47643–47660. [Google Scholar] [CrossRef]
  15. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
  16. Mahajan, D.; Girshick, R.; Ramanathan, V.; He, K.; Paluri, M.; Li, Y.; Bharambe, A.; Van Der Maaten, L. Exploring the limits of weakly supervised pretraining. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 181–196. [Google Scholar]
  17. Wen, Q.; Sun, L.; Yang, F.; Song, X.; Gao, J.; Wang, X.; Xu, H. Time series data augmentation for deep learning: A survey. arXiv 2020, arXiv:2002.12478. [Google Scholar]
  18. Huang, L.; Pan, W.; Zhang, Y.; Qian, L.; Gao, N.; Wu, Y. Data augmentation for deep learning-based radio modulation classification. IEEE Access 2019, 8, 1498–1506. [Google Scholar] [CrossRef]
  19. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
  20. Zhang, H.; Cisse, M.; Dauphin, Y.; Lopez-Paz, D. mixup: Beyond empirical risk minimization. In Proceedings of the 6th International Conference on Learning Representations (ICLR), Vancouver, BC, Canada, 30 April–3 May 2018; pp. 1–13. [Google Scholar]
  21. Salamon, J.; Bello, J.P. Deep convolutional neural networks and data augmentation for environmental sound classification. IEEE Signal Process. Lett. 2017, 24, 279–283. [Google Scholar] [CrossRef]
  22. Hu, Z.; Tan, B.; Salakhutdinov, R.R.; Mitchell, T.M.; Xing, E.P. Learning data manipulation for augmentation and weighting. Adv. Neural Inf. Process. Syst. 2019, 32, 15764–15775. [Google Scholar]
  23. Xu, J.; Luo, C.; Parr, G.; Luo, Y. A Spatiotemporal Multi-Channel Learning Framework for Automatic Modulation Recognition. IEEE Wirel. Commun. Lett. 2020, 9, 1629–1632. [Google Scholar] [CrossRef]
  24. Liu, X.; Li, C.J.; Jin, C.T.; Leong, P.H.W. Wireless Signal Representation Techniques for Automatic Modulation Classification. IEEE Access 2022, 10, 84166–84187. [Google Scholar] [CrossRef]
  25. Wang, Y.; Yang, J.; Liu, M.; Gui, G. LightAMC: Lightweight automatic modulation classification via deep learning and compressive sensing. IEEE Trans. Veh. Technol. 2020, 69, 3491–3495. [Google Scholar] [CrossRef]
  26. Tunze, G.B.; Huynh-The, T.; Lee, J.M.; Kim, D.S. Sparsely connected CNN for efficient automatic modulation recognition. IEEE Trans. Veh. Technol. 2020, 69, 15557–15568. [Google Scholar] [CrossRef]
  27. Ke, Z.; Vikalo, H. Real-Time Radio Technology and Modulation Classification via an LSTM Auto-Encoder. IEEE Trans. Wirel. Commun. 2022, 21, 370–382. [Google Scholar] [CrossRef]
  28. Rajendran, S.; Meert, W.; Giustiniano, D.; Lenders, V.; Pollin, S. Deep Learning Models for Wireless Signal Classification with Distributed Low-Cost Spectrum Sensors. IEEE Trans. Cogn. Commun. Netw. 2018, 4, 433–445. [Google Scholar] [CrossRef] [Green Version]
  29. Wu, X.; Wei, S.; Zhou, Y. Deep multi-scale representation learning with attention for automatic modulation classification. In Proceedings of the 2022 International Joint Conference on Neural Networks (IJCNN), Padua, Italy, 18–23 July 2022; pp. 1–8. [Google Scholar]
  30. Guo, L.; Wang, Y.; Hou, C.; Lin, Y.; Zhao, H.; Gui, G. Ultra Lite Convolutional Neural Network for Automatic Modulation Classification. arXiv 2022, arXiv:2208.04659. [Google Scholar]
  31. Huynh-The, T.; Hua, C.H.; Pham, Q.V.; Kim, D.S. MCNet: An Efficient CNN Architecture for Robust Automatic Modulation Classification. IEEE Commun. Lett. 2020, 24, 811–815. [Google Scholar] [CrossRef]
  32. Shi, F.; Yue, C.; Han, C. A lightweight and efficient neural network for modulation recognition. Digit. Signal Process. A Rev. J. 2022, 123, 103444. [Google Scholar] [CrossRef]
  33. West, N.E.; O’Shea, T. Deep architectures for modulation recognition. In Proceedings of the 2017 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), Baltimore, MD, USA, 6–9 March 2017; pp. 1–6. [Google Scholar] [CrossRef] [Green Version]
  34. Zhang, F.; Luo, C.; Xu, J.; Luo, Y. An Efficient Deep Learning Model for Automatic Modulation Recognition Based on Parameter Estimation and Transformation. IEEE Commun. Lett. 2021, 25, 3287–3290. [Google Scholar] [CrossRef]
  35. Megahed, F.M.; Chen, Y.J.; Megahed, A.; Ong, Y.; Altman, N.; Krzywinski, M. The class imbalance problem. Nat. Methods 2021, 18, 1270–1272. [Google Scholar] [CrossRef]
  36. Cui, Y.; Jia, M.; Lin, T.Y.; Song, Y.; Belongie, S. Class-balanced loss based on effective number of samples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–17 June 2019; pp. 9268–9277. [Google Scholar]
  37. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357. [Google Scholar] [CrossRef]
  38. Cubuk, E.D.; Zoph, B.; Mané, D.; Vasudevan, V.; Le, Q.V. AutoAugment: Learning Augmentation Strategies From Data. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–17 June 2019; pp. 113–123. [Google Scholar] [CrossRef]
  39. Cubuk, E.D.; Zoph, B.; Shlens, J.; Le, Q.V. Randaugment: Practical Automated Data Augmentation with a Reduced Search Space. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 13–19 June 2020; Volume 2020, pp. 3008–3017. [Google Scholar]
  40. Zhang, Y.; Kang, B.; Hooi, B.; Yan, S.; Feng, J. Deep long-tailed learning: A survey. arXiv 2021, arXiv:2110.04596. [Google Scholar]
  41. Liu, F.; Shen, T.; Luo, Z.; Zhao, D.; Guo, S. Underwater target recognition using convolutional recurrent neural networks with 3-D Mel-spectrogram and data augmentation. Appl. Acoust. 2021, 178, 107989. [Google Scholar] [CrossRef]
  42. Um, T.T.; Pfister, F.M.; Pichler, D.; Endo, S.; Lang, M.; Hirche, S.; Fietzek, U.; Kulić, D. Data augmentation of wearable sensor data for parkinson’s disease monitoring using convolutional neural networks. In Proceedings of the 19th ACM International Conference on Multimodal Interaction, Glasgow, UK, 13–17 November 2017; pp. 216–220. [Google Scholar]
  43. Zhang, F.; Luo, C.; Xu, J.; Luo, Y.; Zheng, F.C. Deep Learning Based Automatic Modulation Recognition: Models, Datasets, and Challenges. Digit. Signal Process. 2022, 129, 103650. [Google Scholar] [CrossRef]
  44. Zhou, B.; Cui, Q.; Wei, X.S.; Chen, Z.M. BBN: Bilateral-branch network with cumulative learning for long-tailed visual recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 9719–9728. [Google Scholar]
  45. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  46. Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. TensorFlow: A system for Large-Scale machine learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), Savannah, GA, USA, 2–4 November 2016; pp. 265–283. [Google Scholar]
  47. Kovács, G. An empirical comparison and evaluation of minority oversampling techniques on a large number of imbalanced datasets. Appl. Soft Comput. 2019, 83, 105662. [Google Scholar] [CrossRef]
  48. Keskar, N.S.; Mudigere, D.; Nocedal, J.; Smelyanskiy, M.; Tang, P.T.P. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv 2016, arXiv:1609.04836. [Google Scholar]
  49. Masters, D.; Luschi, C. Revisiting small batch training for deep neural networks. arXiv 2018, arXiv:1804.07612. [Google Scholar]
Figure 1. Example of 8 transforms for CPFSK and QPSK samples. For the raw IQ samples, the blue curve represents the I component, and the orange curve represents the Q component. Although several transformations, such as Rot, Flip, and SP, can take multiple forms, only one form of each is shown in the figure.
Figure 2. The proposed SigAugment and pipeline for online augmentation during model training.
Figure 3. Performance of the baseline model for recognition on the 2016A dataset at various batch sizes.
Figure 4. Comparison of recognition performance of the baseline models on five datasets.
Figure 5. Training curves of LSTM2 on the (a) 2016A-1 and (b) 2016A-50 datasets.
Figure 6. Classification accuracies (%) for (a) SE-MSFN and (b) LSTM2 at each SNR and modulation type on the 2016A dataset, with the values in parentheses in the legend being the average recognition accuracy for each class across the full SNRs.
Figure 7. Training curves of LSTM2 with SigAugment-R on the 2016A-50 dataset.
Figure 8. Score for individual transformations.
Figure 9. Recognition accuracy (%) of the (a) LSTM2 and (b) LSTM2 with SigAugment-R on the 2016A-50 dataset for each modulation class, with the values in parentheses in the legend representing the average recognition accuracy for each class across all SNRs.
Figure 10. Recognition accuracy (%) of the (a) SE-MSFN and (b) SE-MSFN with SigAugment-R on the 2016A-50 dataset for each modulation class, with the values in parentheses in the legend representing the average recognition accuracy for each class across all SNRs.
Figure 11. Validation on the 2016A-10b dataset of (a) the effect of the maximum number of transformations on SigAugment-R and (b) the effect of the number of transformations on SigAugment-C.
Figure 12. Validation on the 2016A-50 dataset of (a) the effect of the maximum number of transformations on SigAugment-R and (b) the effect of the number of transformations on SigAugment-C.
Table 1. Composition of individual datasets.

Dataset | β | Training Set | Validation Set | Testing Set
2016A | 1 | 120,000 | 40,000 | 40,000
2016A-1 | 1 | 52,500 | 17,500 | 70,000
2016A-10 | 10 | 28,875 | 9625 | 70,000
2016A-10a | 10 | 14,437 | 4813 | 70,000
2016A-10b | 10 | 5775 | 1925 | 70,000
2016A-20 | 20 | 15,277 | 5093 | 70,000
2016A-50 | 50 | 9975 | 3325 | 70,000
Table 2. The number of samples per class in the datasets used for the training sessions.

Dataset | 8PSK | AM-DSB | BPSK | CPFSK | GFSK | PAM4 | QAM16 | QAM64 | QPSK | WBFM | Total Number
2016A | 16,000 | 16,000 | 16,000 | 16,000 | 16,000 | 16,000 | 16,000 | 16,000 | 16,000 | 16,000 | 160,000
2016A-1 | 7000 | 7000 | 7000 | 7000 | 7000 | 7000 | 7000 | 7000 | 7000 | 7000 | 70,000
2016A-10 | 7000 | 6300 | 5600 | 4900 | 4200 | 3500 | 2800 | 2100 | 1400 | 700 | 38,500
2016A-10a | 3500 | 3150 | 2800 | 2450 | 2100 | 1750 | 1400 | 1050 | 700 | 350 | 19,250
2016A-10b | 1400 | 1260 | 1120 | 980 | 840 | 700 | 560 | 420 | 280 | 140 | 7700
2016A-20 | 7000 | 2800 | 2450 | 2100 | 1750 | 1400 | 1120 | 840 | 560 | 350 | 20,370
2016A-50 | 7000 | 1260 | 1120 | 980 | 840 | 700 | 560 | 420 | 280 | 140 | 13,300
Table 3. The average recognition accuracy (%) of the baseline models on the 2016A dataset using various data augmentation methods and the gain (%) when compared to no data augmentation. Values are mean ± standard deviation, with the gain in parentheses. The top-1 average accuracy and gain have been highlighted.

Methods | DAE | SE-MSFN | LSTM2
Baseline | 58.45 ± 2.75 (–) | 62.96 ± 0.13 (–) | 62.10 ± 0.18 (–)
Jit | 59.19 ± 2.21 (0.74) | 63.12 ± 0.04 (0.16) | 61.89 ± 0.63 (−0.21)
Rot | 60.89 ± 0.94 (2.44) | 62.72 ± 0.20 (−0.24) | 63.40 ± 0.13 (1.31)
Flip | 62.59 ± 0.22 (4.13) | 63.01 ± 0.30 (0.05) | 63.41 ± 0.19 (1.32)
CS | 61.46 ± 0.99 (3.01) | 63.47 ± 0.32 (0.50) | 62.49 ± 0.17 (0.39)
Inv | 62.56 ± 0.24 (4.11) | 63.22 ± 0.25 (0.26) | 62.79 ± 0.11 (0.70)
SP | 62.25 ± 0.24 (3.79) | 63.29 ± 0.07 (0.33) | 62.77 ± 0.13 (0.67)
Sca | 59.76 ± 2.18 (1.31) | 62.97 ± 0.18 (0.00) | 62.48 ± 0.28 (0.38)
FCS | 62.33 ± 0.27 (3.88) | 62.28 ± 0.23 (−0.68) | 63.52 ± 0.06 (1.42)
SigAugment-C | 62.42 ± 0.08 (3.97) | 63.08 ± 0.23 (0.12) | 64.44 ± 0.13 (2.34)
SigAugment-R | 62.63 ± 0.10 (4.18) | 63.94 ± 0.11 (0.98) | 64.30 ± 0.16 (2.20)
Table 4. The average recognition accuracy (%) of the baseline models on the 2016A-1 dataset using various data augmentation methods and the gain (%) when compared to no data augmentation. Values are mean ± standard deviation, with the gain in parentheses. The top-1 average accuracy and gain have been highlighted.

Methods | DAE | SE-MSFN | LSTM2
Baseline | 74.09 ± 0.26 (–) | 79.83 ± 0.53 (–) | 79.79 ± 0.46 (–)
Jit | 74.44 ± 0.17 (0.35) | 80.71 ± 0.89 (0.88) | 78.92 ± 3.70 (−0.86)
Rot | 76.71 ± 0.17 (2.63) | 82.85 ± 0.18 (3.02) | 83.96 ± 0.15 (4.17)
Flip | 76.16 ± 0.14 (2.07) | 82.27 ± 0.32 (2.43) | 83.49 ± 0.03 (3.71)
CS | 75.57 ± 0.09 (1.49) | 82.77 ± 0.11 (2.93) | 80.04 ± 3.79 (0.25)
Inv | 77.20 ± 1.40 (3.11) | 82.86 ± 0.37 (3.03) | 82.24 ± 0.15 (2.45)
SP | 78.46 ± 2.37 (4.38) | 82.96 ± 0.13 (3.13) | 83.02 ± 0.23 (3.24)
Sca | 75.13 ± 1.82 (1.05) | 79.70 ± 0.96 (−0.14) | 80.02 ± 0.51 (0.23)
FCS | 78.71 ± 2.61 (4.62) | 82.23 ± 0.20 (2.40) | 83.71 ± 0.35 (3.92)
SigAugment-C | 77.68 ± 1.27 (3.59) | 83.81 ± 0.29 (3.98) | 85.29 ± 0.05 (5.50)
SigAugment-R | 79.82 ± 1.23 (5.73) | 83.63 ± 0.46 (3.80) | 85.27 ± 0.12 (5.48)
Table 5. On three class-imbalanced datasets, 2016A-10, 2016A-20, and 2016A-50, the proposed methods were compared to existing representative class rebalancing methods and individual transformations. The accuracy (%) is calculated as the mean of three experiments. The best-performing individual model results are shown in bold, while the best-performing dataset results are shown in bold and red.

Methods | 2016A-10 (DAE / SE-MSFN / LSTM2) | 2016A-20 (DAE / SE-MSFN / LSTM2) | 2016A-50 (DAE / SE-MSFN / LSTM2)
Baseline | 66.05 / 71.08 / 70.06 | 56.58 / 63.64 / 48.00 | 59.35 / 54.05 / 42.59
Focal loss | 65.84 / 73.55 / 70.25 | 61.20 / 64.68 / 46.42 | 56.97 / 52.42 / 43.25
Polynom_fit_SMOTE_poly | 66.95 / 74.04 / 35.26 | 60.43 / 65.96 / 62.12 | 37.14 / 58.27 / 60.22
ProWSyn | 67.28 / 73.46 / 18.19 | 52.95 / 66.26 / 32.84 | 34.82 / 58.09 / 59.87
SMOTE_IPF | 67.45 / 73.12 / 36.22 | 53.04 / 61.85 / 62.80 | 29.34 / 20.99 / 60.39
Lee | 69.47 / 74.05 / 52.86 | 54.78 / 64.96 / 48.74 | 33.45 / 57.59 / 60.07
Jit | 67.81 / 70.89 / 71.67 | 62.97 / 62.93 / 58.27 | 57.88 / 54.80 / 34.49
Rot | 72.94 / 77.78 / 81.15 | 63.29 / 71.83 / 72.58 | 60.87 / 60.78 / 67.56
Flip | 69.25 / 74.36 / 73.24 | 63.61 / 70.87 / 71.35 | 50.87 / 60.52 / 67.13
CS | 71.62 / 76.82 / 73.15 | 62.11 / 68.76 / 68.05 | 46.10 / 56.88 / 45.16
Inv | 67.29 / 74.07 / 70.84 | 62.79 / 68.39 / 65.10 | 60.66 / 57.84 / 45.22
SP | 69.21 / 77.57 / 76.52 | 63.26 / 73.40 / 74.27 | 60.66 / 64.18 / 64.14
Sca | 66.05 / 71.04 / 68.48 | 51.12 / 62.69 / 51.97 | 49.72 / 54.10 / 43.51
FCS | 70.39 / 75.41 / 76.69 | 63.24 / 70.74 / 73.36 | 50.21 / 62.65 / 69.98
SigAugment-C | 66.99 / 80.36 / 83.52 | 63.25 / 76.97 / 79.34 | 59.77 / 62.38 / 72.47
SigAugment-R | 72.51 / 81.36 / 83.07 | 63.22 / 78.32 / 79.57 | 60.16 / 70.93 / 72.61
Table 6. The average improvement brought to SigAugment-R by each transformation. Values are mean ± standard deviation, with the gain in parentheses. In terms of gain, we highlighted the top two transformations.

Methods | 2016A-10 | 2016A-10a | 2016A-10b | 2016A-20 | 2016A-50
Baseline | 81.36 ± 0.07 (–) | 78.49 ± 0.31 (–) | 73.26 ± 0.13 (–) | 78.32 ± 0.99 (–) | 70.93 ± 1.37 (–)
Jit | 79.83 ± 0.44 (1.53) | 77.62 ± 0.72 (0.88) | 72.12 ± 1.48 (1.14) | 77.70 ± 0.68 (0.62) | 70.09 ± 1.53 (0.84)
Rot | 80.56 ± 0.46 (0.80) | 77.21 ± 1.17 (1.28) | 72.40 ± 0.74 (0.86) | 78.07 ± 0.21 (0.25) | 67.94 ± 1.41 (2.98)
Flip | 80.66 ± 0.07 (0.70) | 77.68 ± 0.20 (0.81) | 72.18 ± 0.61 (1.08) | 75.23 ± 4.88 (3.09) | 70.15 ± 0.61 (0.78)
CS | 80.38 ± 0.11 (0.98) | 77.31 ± 0.29 (1.19) | 72.07 ± 0.34 (1.18) | 76.89 ± 1.03 (1.43) | 68.44 ± 1.90 (2.48)
Inv | 79.59 ± 0.83 (1.77) | 76.42 ± 0.61 (2.07) | 69.67 ± 0.71 (3.59) | 76.93 ± 0.98 (1.39) | 69.73 ± 0.38 (1.20)
SP | 79.19 ± 0.45 (2.17) | 75.09 ± 0.60 (3.40) | 69.50 ± 0.43 (3.75) | 73.37 ± 2.79 (4.94) | 64.46 ± 1.35 (6.47)
Sca | 80.54 ± 0.47 (0.82) | 78.18 ± 0.59 (0.32) | 70.89 ± 0.47 (2.37) | 78.55 ± 1.01 (−0.23) | 70.39 ± 0.94 (0.54)
FCS | 80.50 ± 0.28 (0.86) | 77.72 ± 0.29 (0.77) | 73.26 ± 0.13 (0.96) | 78.55 ± 0.63 (−0.23) | 70.69 ± 0.55 (0.24)
Table 7. Comparison of the proposed method with the joint augmentation method. Values are mean ± standard deviation accuracy (%).

Methods | 2016A-1 | 2016A-10 | 2016A-10a | 2016A-10b | 2016A-20 | 2016A-50
Baseline | 79.83 ± 0.53 | 71.08 ± 0.82 | 66.35 ± 0.46 | 58.37 ± 0.38 | 63.64 ± 2.22 | 54.05 ± 0.72
Joint Augmentation | 83.12 ± 0.11 | 79.46 ± 0.79 | 76.06 ± 0.86 | 70.31 ± 0.48 | 77.23 ± 0.81 | 68.42 ± 1.16
SigAugment-R | 83.63 ± 0.46 | 81.36 ± 0.07 | 78.49 ± 0.31 | 73.26 ± 0.13 | 78.32 ± 0.99 | 70.93 ± 1.37
Table 8. Results of the proposed SigAugment method combined with the class rebalancing methods trained on the 2016A-50 dataset. The rebalancing method combined with each model is given in parentheses.

Method | DAE (Focal Loss) | SE-MSFN (Lee) | LSTM2 (Polynom_fit_SMOTE_poly)
Baseline | 56.97 | 57.59 | 60.22
Combination method | 57.91 | 24.30 | 67.91
SigAugment-R | 60.16 | 70.93 | 72.61
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
