Article

Compound Jamming Recognition Based on a Dual-Channel Neural Network and Feature Fusion

1 Wuhan Early Warning Academy, Wuhan 430019, China
2 Unit 61516 of PLA, Beijing 100071, China
3 Unit 95980 of PLA, Xiangyang 441021, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(8), 1325; https://doi.org/10.3390/rs16081325
Submission received: 27 February 2024 / Revised: 7 April 2024 / Accepted: 8 April 2024 / Published: 10 April 2024
(This article belongs to the Section AI Remote Sensing)

Abstract

Jamming recognition is a significant prior step to achieving effective jamming suppression, and the precise results of the jamming recognition will be beneficial to anti-jamming decisions. However, as the electromagnetic environment becomes more complex, the received signals may contain both suppression jamming and deception jamming, which is more challenging for existing methods focused on a single kind of jamming. In this paper, a recognition method for compound jamming based on a dual-channel neural network and feature fusion is proposed. First, feature images of compound jamming are extracted by the short-time Fourier transform and the wavelet transform. Feature images are then employed as inputs for the proposed network. During parallel processing in dual-channel, the proposed network can adaptively extract and learn task-relevant features via the attention modules. Finally, the output features in dual-channel are fused in the fusion subnetwork. Compared with existing methods, the proposed method can yield better recognition performance with less inference time. Additionally, compared with existing fusion strategies, the fusion subnetwork can further improve the recognition performance under low jamming-to-noise ratio conditions. Results with the semi-measured datasets also verify the feasibility and generalization performance of the proposed method.

Graphical Abstract

1. Introduction

In an increasingly complex and changeable battlefield environment, accurately obtaining true target information by radar is key to the outcome of modern war [1]. With the rapid development of radar jamming technology, there are many new types of jamming patterns [2]. The jamming style has also changed rapidly from the previous single jamming form to multi-class compound jamming. In particular, the feasibility of the existing radar anti-jamming methods for a single specific type may be greatly influenced when mainlobe and sidelobe jamming, deception, and suppression jamming are simultaneously compounded. As a significant prior step in effective jamming suppression, compound jamming sensing can accurately recognize the unknown jamming patterns in radar echoes, which can provide vital prior information for the best anti-jamming strategies. Then, the jamming will be suppressed effectively and the target detection ability will be improved [3].
In general, compound jamming mainly consists of mainlobe deception jamming and sidelobe suppression jamming. The typical practical scenario is that the enemy in the far area releases support suppression jamming received by victim radar from the sidelobe, while the enemy aircraft in the near area uses the mounted jamming pod to release self-defense deception jamming received by victim radar from the mainlobe [4]. The mainlobe and sidelobe jamming, deception, and suppression jamming are compounded to interfere with the victim radar. Since compound jamming recognition is mainly based on intra-pulse information [5], it should be completed before pulse compression. Existing anti-jamming strategies in modern radars are mainly adaptive beam-forming, which can suppress sidelobe jamming to a certain extent, but the suppression performance of mainlobe jamming is poor. Considering the strong energy of sidelobe suppression jamming [6] and the high deception characteristic of mainlobe deception in practice, we mainly focus on the residual sidelobe suppression jamming combined with mainlobe deception after adaptive beam-forming in this paper.
Existing jamming recognition methods generally rely on better feature extraction and the selection of classifiers to improve recognition performance. As for feature extraction, several kinds of features from different domains are commonly used, including features in the time-frequency domain, bi-spectral features, singular spectrum features, power spectrum features, and so on. Thanks to the rapid development of machine learning, classifiers such as the decision tree, the random forest (RF), the K-nearest neighbor (KNN) algorithm, and the support vector machine (SVM) are also widely used to classify and recognize these features of jamming signals. Focusing on smeared spectrum (SMSP) jamming and chopping and interleaving (C&I) jamming, a shape-feature-based recognition method via two-dimensional feature maps is proposed in [7]. The bi-spectral features of received jamming signals are calculated and then converted into gray-scale maps. An SVM is finally used to classify the features of the two jamming signals. In [8], an unconventional jamming recognition method for wideband radar based on visibility graphs is designed. The received time series are converted into visibility graphs and four types of features on visibility graphs are then extracted. The RF classifier is used to recognize five kinds of active jamming signals. Results show that the average accuracy is over 90% when the jamming-to-noise ratio (JNR) is 0 dB.
Later, learning from the successful experience of deep learning in image classification problems in the computer vision field and text classification problems, various neural networks are also introduced to jamming recognition problems. Considering the number of input dimensions, existing methods can be roughly categorized as feature sequence-based methods, feature image-based methods, and corresponding combination methods. In [9], a signal recognition method based on autocorrelation feature sequence is proposed. The backbone structure of the method is a bi-directional long short-term memory (BiLSTM) network enhanced by a self-attention module, and simulations verify its effectiveness. A jamming recognition method based on singular value decomposition (SVD) and a back-propagation (BP) neural network is proposed in [10]. The difference singular values of jamming signals are obtained by the SVD and then they are employed as inputs to the BP network. Simulation results show the average recognition accuracy of four kinds of active jamming signals is 90% when the JNR is 5 dB. However, it is required to select hyper-parameters manually. In [11], an LSTM network and a ResNet are combined to extract high-dimensional features of raw jamming signal sequences in the time domain and recognize four kinds of jamming signals. Simulations show the recognition method achieves more than 98.3% average accuracy when the JNR is 0 dB.
Due to the outstanding performance of convolutional neural networks (CNNs) in image classification, many methods based on CNNs and feature images have been proposed to realize jamming recognition. A compound jamming recognition method based on power spectrum feature images and JRNet is proposed in [12]. The method can effectively recognize ten kinds of compound jamming signals under low JNRs, while paying more attention to suppression jamming. Using the fractional Fourier transform, a multi-branch CNN enhanced by an attention mechanism is proposed to recognize eight types of jamming signals in [13]. Simulation results indicate that the proposed CNN achieves more than 99% accuracy when the JNR is −3 dB. In [14], a lightweight improved MobileViT for time-frequency images (TFIs) is employed to recognize six kinds of jamming signals, and the method can effectively reduce the computational complexity. Similarly, using TFIs obtained by the short-time Fourier transform (STFT), an inverse ResNet enhanced by a channel attention module is designed to recognize eight kinds of jamming signals in [15], where features in the time-frequency domain and the image channel domain are combined to promote recognition performance. Simulation results show the average recognition accuracy is close to 100% when the JNR is −8 dB. In [16], a jamming signal classification method for cognitive unmanned aerial vehicle radios via a generalized dynamic Bayesian network is investigated.
Furthermore, many methods try to incorporate the advantages of both sequence-based and image-based methods, combining features from multiple domains via various fusion algorithms to improve the robustness of recognition. A parallel network structure of a one-dimensional CNN and a two-dimensional CNN is designed in [17], which uses feature sequences in the frequency domain and TFIs as inputs. Simulation results show the parallel structure is capable of effectively recognizing ten kinds of compound jamming signals when the JNR is greater than 0 dB. Similarly focused on features in the frequency domain and time-frequency domain, a parallel network structure of a ResNet and an LSTM is proposed in [18]. Experiments indicate that the method can achieve 94.8% average accuracy for six kinds of active jamming signals. In [19], a recognition method based on Bayesian decision theory and feature fusion is designed, where multiple features are extracted by the bi-spectrum transformation. Kernel density estimation is then used to improve the Bayesian decision theory, and simulations verify that the method is capable of classifying three kinds of deception jamming.
In general, most existing recognition methods focus on a single kind of jamming signal instead of multiple compound jamming signals. Nevertheless, there may be more than one enemy jammer on the complex electromagnetic battlefield. In received jamming samples, suppression jamming and deception jamming are supposed to be additively compounded in the time domain, and recognition methods focused on a single kind of jamming may not be effective. On the other hand, suppression jamming signals are likely to cover some distinguishable features of deception jamming signals once the two coexist, which may cause performance degradation of recognition methods. Because of the various and complex characteristics of compound jamming signals, features in a single dimension can hardly reveal distinguishable differences between compound jamming signals.
Fortunately, feature fusion across multiple dimensions and domains provides a promising solution that is capable of taking full advantage of multiple features to describe compound jamming signals. Moreover, attention modules are supposed to adaptively strengthen significant features and suppress useless ones; that is, they offer a viable way to promote the learning ability of networks and yield recognition performance that is robust against noise suppression jamming. Thus, we design a novel dual-channel network architecture, which combines the advantages of feature fusion and the benefit of attention modules. In order to obtain a stable feature representation of jamming signals, we introduce time-frequency features and wavelet transform features simultaneously for jamming recognition. The main novelties and contributions are summarized as follows:
  • Compound jamming signals consisting of noise suppression jamming and deception jamming are considered. In order to enrich the feature space and boost the representation ability of compound jamming, features obtained by the time-frequency transform and the wavelet transform are simultaneously inputted in parallel to the designed dual-channel network.
  • To enhance the extraction and learning ability for task-relevant features, the diverse branch block (DBB) structure and a parameter-free attention module are incorporated into the proposed network. Then, a gated recurrent unit (GRU)-based subnetwork is designed for feature fusion to further improve the recognition performance.
  • Compared with the existing three recognition methods, the proposed method achieves higher recognition accuracy with lower time complexity under different JNRs. More importantly, we have used the semi-measured jamming signals to validate the feasibility and generalization ability of the proposed method.
The rest of this paper is organized as follows: the mathematical models of ten kinds of compound jamming and features in both the time-frequency domain and the wavelet domain are derived in Section 2. The detailed backbone structure of the proposed network and feature fusion subnetwork are introduced in Section 3. The results of the proposed method and comparisons with existing methods are analyzed in terms of recognition accuracy and computational complexity in Section 4. Some discussions about the performance with the semi-measured datasets are analyzed in Section 5. Finally, conclusions and future works are summarized in Section 6.

2. Materials

In this section, mathematical models of each kind of jamming are provided and two transforms used for feature extraction are introduced in detail, namely, the STFT and the wavelet transform.

2.1. Jamming Models

The radar jammer in modern electronic countermeasures often uses digital radio frequency memory (DRFM) technology, which samples and copies the received signal [20]. The sampled signal is then modulated and returned to the victim radar. With its wide application in electronic countermeasures, the jammer can generate jamming signals with flexible and complex modulation styles. Focused on the topic of compound jamming, we first introduce the mathematical models of seven kinds of single jamming and then provide the compound models.

2.1.1. Intermittent Sampling and Forwarding Jamming (ISFJ)

The ISFJ can effectively reduce the minimum forwarding delay of the jammer by low-speed sampling and forwarding the radar signal. Considering that the existing radar mainly transmits linear frequency modulation (LFM) signals, LFM signals s(t) can be expressed as
$$ s(t) = e^{i\pi k t^2}, \quad 0 \le t \le T, $$
where T is the pulse width and k is the frequency modulation slope. According to the principle of jamming generation, the intermittent sampling rectangular pulse train p ( t ) is
$$ p(t) = \mathrm{rect}\left(\frac{t - \tau/2}{\tau}\right) \otimes \sum_{n=0}^{N-1} \delta(t - nT_s), $$
where $T_s$ is the intermittent sampling period, $\tau$ is the sampling pulse width, $N$ is the number of pulses, and $\delta(\cdot)$ denotes the impulse function. The intermittent sampling signal $j(t)$ can be expressed as
$$ j(t) = s(t)\,p(t). $$
By controlling forwarding delay and times, j ( t ) could be interrupted-sampling direct forwarding jamming (ISDJ), interrupted-sampling repetitive forwarding jamming (ISRJ), and interrupted-sampling loop forwarding jamming (ISLJ). The mathematical models of each of the above jamming methods [21,22] are
$$ j_{\mathrm{ISDJ}}(t) = j(t - \tau), $$
$$ j_{\mathrm{ISRJ}}(t) = \sum_{m=1}^{M} j(t - m\tau), $$
$$ j_{\mathrm{ISLJ}}(t) = \sum_{r=0}^{R-1} j\big(t - \tau - r(\tau + T_s)\big), $$
where $M = \lfloor T_s/\tau \rfloor$ and $R = \min(N, M)$. The true target echo is cut into $Q$ slices, $Q = NM$. The diagram of the true target echo and the three kinds of ISFJ is shown in Figure 1 [20], where $N = 4$, $M = 5$, and $Q = 20$. There are two obvious differences between the true echo and the jamming signal. In detail, the true echo is continuous while the jamming signal is discontinuous in the time domain. Also, a jamming slice differs from the true echo slice at the same position. For example, slice 7 of the true echo corresponds to slice 6 of ISDJ, ISRJ, and ISLJ. Slice 8 of the true echo corresponds to slice 6 of ISRJ and slice 1 of ISLJ. These two characteristics provide a basis for jamming recognition.
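As a minimal sketch of how such a jamming sample might be simulated (the parameter values are illustrative, not the paper's dataset settings), the ISDJ model above can be written in numpy as follows; ISRJ and ISLJ follow similarly by forwarding each sampled slice repeatedly or looping previously stored slices:

```python
import numpy as np

def lfm(T=40e-6, k=1.25e12, fs=240e6):
    """LFM signal s(t) = exp(j*pi*k*t^2) on 0 <= t < T."""
    t = np.arange(0, T, 1 / fs)
    return np.exp(1j * np.pi * k * t ** 2)

def isdj(s, fs, tau, Ts):
    """ISDJ: sample a slice of width tau at the start of each period Ts
    and forward it once with delay tau, i.e., j_ISDJ(t) = j(t - tau)."""
    n_tau, n_Ts = int(round(tau * fs)), int(round(Ts * fs))
    j = np.zeros_like(s)
    for start in range(0, len(s) - n_Ts + 1, n_Ts):
        # the jammer is silent while sampling, then retransmits the slice
        j[start + n_tau:start + 2 * n_tau] = s[start:start + n_tau]
    return j
```

The gaps left during sampling produce exactly the time-domain discontinuity noted above.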

2.1.2. Chopping and Interleaving Jamming

C&I is similar to ISRJ, but C&I fills the entire pulse width. According to the C&I generation method, the sampling pulse train can be expressed as [23]
$$ p(t) = \mathrm{rect}\left(\frac{t}{\tau}\right) \otimes \sum_{n=-\infty}^{+\infty} \delta(t - nT_s). $$
The sampling signal is obtained by sampling the radar signal with p ( t ) ,
$$ j_1(t) = \mathrm{rect}\left(\frac{t}{T_p}\right) e^{i\pi k t^2} \sum_{n=-\infty}^{+\infty} \mathrm{rect}\left(\frac{t - nT_s}{\tau}\right). $$
By copying $j_1(t)$ $N$ times, C&I can be defined as
$$ j_{\mathrm{C\&I}}(t) = \sum_{n=0}^{N-1} j_1(t - n\tau), $$
where $N = \lfloor T_s/\tau \rfloor$ and $\lfloor \cdot \rfloor$ denotes the round-down operation. The comparison between C&I jamming and ISRJ is shown in Figure 2 [23].
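Under the same illustrative parameters as before, C&I differs from ISRJ in that each sampled slice is tiled until it fills the whole sampling period; a hedged numpy sketch:

```python
import numpy as np

def chopping_interleaving(s, fs, tau, Ts):
    """C&I: chop a slice of width tau out of every period Ts, then
    repeat (interleave) it until the period is completely filled."""
    n_tau, n_Ts = int(round(tau * fs)), int(round(Ts * fs))
    j = np.zeros_like(s)
    for start in range(0, len(s) - n_Ts + 1, n_Ts):
        # np.resize repeats the slice cyclically to exactly n_Ts samples
        j[start:start + n_Ts] = np.resize(s[start:start + n_tau], n_Ts)
    return j
```

Because every period is filled, the C&I output has no silent gaps, unlike the ISFJ family.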

2.1.3. Smeared Spectrum Jamming

By changing the internal form of the signal, SMSP consists of multiple LFM sub-signals in the time domain [24]. After forwarding once, high-density comb false target groups can be generated around the true target, which can deceive and suppress LFM pulse compression radars. According to the SMSP generation method, the first LFM sub-signal is
$$ j_1(t) = \mathrm{rect}\left(\frac{t - T_p/2N}{T_p/N}\right) e^{i\pi k_j t^2}, $$
where $k_j = Nk$ is the frequency modulation slope, $k = B/T_p$, $B$ is the bandwidth, and $N$ is the number of sub-signals. $j_1(t)$ is copied $N-1$ times and the copies are spliced to obtain the SMSP as
$$ j_{\mathrm{SMSP}}(t) = \sum_{n=0}^{N-1} \mathrm{rect}\left(\frac{t - T_p/2N - nT_p/N}{T_p/N}\right) e^{i\pi N k (t - nT_p/N)^2}. $$
The instantaneous frequency of SMSP [25] is
$$ f_j(t) = \sum_{n=0}^{N-1} \mathrm{rect}\left(\frac{t - T_p/2N - nT_p/N}{T_p/N}\right) (Nkt - nB). $$
The instantaneous frequency of SMSP consists of $N$ straight line segments with the same slope and different intercepts. The time domain width of each line segment is $T_p/N$, the slope is $Nk$, and the intercept of the $n$-th segment is $-nB$.
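The splicing of identical sub-pulses can be sketched directly (illustrative parameters; the constant-envelope, repeated-sub-pulse structure is what the time-frequency features pick up):

```python
import numpy as np

def smsp(Tp=40e-6, B=50e6, N=5, fs=240e6):
    """SMSP: N spliced LFM sub-pulses of width Tp/N with slope k_j = N*k,
    so each sub-pulse sweeps the full bandwidth B in Tp/N seconds."""
    k = B / Tp
    t_sub = np.arange(0, Tp / N, 1 / fs)
    sub = np.exp(1j * np.pi * (N * k) * t_sub ** 2)  # one sub-pulse
    return np.tile(sub, N)                           # splice N copies
```

In a time-frequency image this yields the $N$ parallel chirp segments described above.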

2.1.4. Noise Convolutional Jamming (NCJ)

Compared with traditional active suppression jamming, NCJ has the characteristics of adaptive radar signals. In NCJ signals, modulated noise is convolved with LFM radar signals, which can be expressed in the time domain [26] as
$$ j_{\mathrm{NCJ}}(t) = n(t) \otimes s(t), $$
where $n(t)$ represents Gaussian white noise, $s(t)$ indicates the LFM signal received by the jammer, and “$\otimes$” represents the convolution operator.

2.1.5. Noise Productive Jamming (NPJ)

In NPJ, modulated noise is multiplied with LFM radar signals, which can be expressed in the time domain [27] as
$$ j_{\mathrm{NPJ}}(t) = n(t) \times s(t). $$
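Both noise jamming models reduce to one line of numpy each; this sketch uses complex white Gaussian noise and keeps the convolution output at the signal length for convenience (an implementation detail not specified in the paper):

```python
import numpy as np

def ncj(s, rng):
    """NCJ: complex Gaussian noise convolved with the intercepted signal."""
    n = rng.standard_normal(len(s)) + 1j * rng.standard_normal(len(s))
    return np.convolve(n, s, mode="same")

def npj(s, rng):
    """NPJ: element-wise product of noise and the intercepted signal."""
    n = rng.standard_normal(len(s)) + 1j * rng.standard_normal(len(s))
    return n * s
```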

2.1.6. Compound Jamming Models

In this paper, we mainly focus on additive compound jamming that combines suppression jamming and deception jamming in the time domain. That is, compound jamming J ( t ) can be expressed as [28]
$$ J(t) = j_{\mathrm{supp}}(t) + j_{\mathrm{dece}}(t), $$
where $j_{\mathrm{supp}}(t)$ denotes one kind of noise suppression jamming introduced above and $j_{\mathrm{dece}}(t)$ denotes one kind of deception jamming introduced above. Two kinds of noise suppression jamming and five kinds of deception jamming are modeled above; thus, there are ten kinds of compound jamming under recognition. Furthermore, as discussed in the Introduction, we pay more attention to the scenario in which compound jamming consists of residual sidelobe suppression jamming and mainlobe deception jamming. Taking existing adaptive beam-forming under a certain error into consideration, the residual JNR of the sidelobe suppression jamming is about 10 dB, which is the premise of the following experiments.
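A sketch of how a compound sample could be assembled at prescribed JNRs (the helper names and the unit-noise-power convention are illustrative assumptions, not from the paper):

```python
import numpy as np

def set_jnr(j, jnr_db, noise_power=1.0):
    """Scale a jamming signal to a target JNR (dB) relative to a given
    receiver noise power."""
    p = np.mean(np.abs(j) ** 2)
    return j * np.sqrt(noise_power * 10 ** (jnr_db / 10) / p)

def compound(j_supp, j_dece, jnr_supp_db=10.0, jnr_dece_db=5.0):
    """Additive compound jamming J(t) = j_supp(t) + j_dece(t), with the
    residual suppression JNR fixed at about 10 dB as assumed above."""
    return set_jnr(j_supp, jnr_supp_db) + set_jnr(j_dece, jnr_dece_db)
```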

2.2. Feature Extraction

The goal of feature extraction for jamming signals is to find more distinct and distinguishable features from different dimensions, which is supposed to be conducive to jamming signal analysis and recognition. Herein, the STFT and the wavelet transform are introduced in brief.

2.2.1. The Short-Time Fourier Transform

The STFT is an important member of the time-frequency energy density function, which is a widely-used signal analysis tool with the advantages of simple calculation. Its time domain expression is
$$ \mathrm{STFT}(t, f) = \int_{-\infty}^{+\infty} x(\tau)\, \eta^*(\tau - t)\, e^{-j2\pi f \tau}\, d\tau, $$
where $t$ and $f$ represent time and frequency, respectively, $x(t)$ indicates the radar signal, “$*$” represents the conjugation of complex numbers, and $\eta(t)$ is the window function. When the window function is constantly 1, the STFT reduces to the traditional Fourier transform. Commonly used window functions include the Hanning, rectangular, and Hamming windows.
However, when the window function is fixed, the time-frequency resolution of the STFT is also fixed. For complicated and variable radar signals, time resolution or frequency resolution often cannot be in a good state at the same time. There is an unavoidable conflict between the time resolution and the frequency resolution. The STFT cannot keep the time-frequency resolution in an ideal state through adaptive adjustment.
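A minimal sketch of how an STFT feature image can be computed with a fixed Hanning window (window length and hop are illustrative; a library routine such as `scipy.signal.stft` would serve equally well):

```python
import numpy as np

def stft_image(x, win_len=256, hop=64):
    """Magnitude STFT: each column is the FFT of one Hanning-windowed
    segment, giving a (frequency x time) feature image."""
    win = np.hanning(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    frames = np.stack([x[i * hop:i * hop + win_len] * win
                       for i in range(n_frames)])
    return np.abs(np.fft.fft(frames, axis=1)).T
```

The fixed `win_len` is exactly the fixed time-frequency resolution discussed above: one window size trades time resolution against frequency resolution for the whole signal.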

2.2.2. The Continuous Wavelet Transform (CWT)

The CWT is capable of dealing with the conflict between time resolution and frequency resolution. The CWT is implemented by convolving the signal with a parent wavelet function that can be frequency-shifted and scaled. By adjusting the frequency shift and scaling parameters, the CWT can provide spectrum information at different scales, thus providing a multi-scale analysis of the local characteristics of the signal.
The time domain expression of the CWT is
$$ W_s(a, b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{+\infty} s(t)\, \psi^*\!\left(\frac{t - b}{a}\right) dt, $$
where $s(t)$ is the radar signal, $a$ and $b$ represent the scale and the time translation, respectively, and $\psi(t)$ is the mother wavelet function.
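The formula above can be sketched as a correlation of the signal with scaled, conjugated wavelets; this hedged example uses a Morlet mother wavelet and illustrative scales (the paper does not state which wavelet it uses):

```python
import numpy as np

def cwt_morlet(s, scales, fs, w0=6.0):
    """CWT via correlation with scaled Morlet wavelets:
    W(a, b) = (1/sqrt(a)) * sum_t s(t) * conj(psi((t - b) / a))."""
    out = np.empty((len(scales), len(s)), dtype=complex)
    for i, a in enumerate(scales):
        t = np.arange(-4 * a, 4 * a, 1 / fs)          # wavelet support
        psi = np.exp(1j * w0 * t / a - (t / a) ** 2 / 2)
        # correlation = convolution with the time-reversed conjugate
        out[i] = np.convolve(s, np.conj(psi)[::-1], mode="same") / np.sqrt(a)
    return out
```

Each row of the output corresponds to one scale $a$, which is the multi-scale analysis described above.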

3. Approach

3.1. The Structure of the Proposed Network

Since there is residual suppression jamming within the received signals under recognition, helpful features obtained by the time-frequency transform and the wavelet transform are likely to be covered by the noise characteristics of the residual suppression jamming. Thus, plain CNNs may be unable to capture helpful features, and it is necessary to boost the feature extraction and representation abilities of CNNs. On the other hand, useless noise and suppression jamming tend to pollute a large part of the area in the feature images, so it would be better for CNNs to capture task-relevant and significant features. To deal with these problems, the proposed network takes the ResNet as the backbone structure. Meanwhile, a DBB [29] structure and a simple parameter-free attention module [30] are incorporated to strengthen the extraction of vital features and the adaptive selection of task-relevant features. Concretely, the proposed network has two channels, and the two input features are processed in parallel, one in each channel. At the end of the proposed network, there is a subnetwork designed for feature fusion. The flowchart of the proposed method is shown in Figure 3, where “1 × 1” and “k × k” denote the sizes of convolution kernels, and “AVG” is the average pooling layer.
Learning from the success of the Inception network, the DBB combines the multi-branch and multi-scale idea with structural re-parameterization. By integrating multi-scale convolutions and obtaining receptive fields of different sizes, abundant feature spaces are gained to improve representation performance. In a single channel of the proposed network, let $X \in \mathbb{R}^{C \times H \times W}$ denote an input feature map with $C$ channels, where $H$ and $W$ denote the height and width of the feature map. The corresponding output $O \in \mathbb{R}^{D \times H \times W}$ can be calculated as
$$ O = X \otimes F + \mathrm{REP}(b), $$
where $D$ is the number of output channels and “$\otimes$” is the convolution operation. $F \in \mathbb{R}^{D \times C \times K \times K}$ is the convolution kernel, where $K$ is the size of the convolution kernel, and $\mathrm{REP}(b)$ denotes an optional bias item. Then the output $O_{j,h,w}$ at $(h, w)$ in the $j$-th channel is calculated as
$$ O_{j,h,w} = \sum_{i}^{C} \sum_{k}^{K} \sum_{m}^{K} F_{j,i,k,m}\, X_{i,h,w}^{k,m} + b_j, $$
where $X_{i,h,w} \in \mathbb{R}^{K \times K}$ is the patch of the $i$-th input channel under convolution at position $(h, w)$, $X_{i,h,w}^{k,m}$ is its element at $(k, m)$, and $b_j$ is a bias.
Further, the homogeneity of convolutions can be defined as
$$ X \otimes (pF) = p\,(X \otimes F), $$
where the equation holds for any $p \in \mathbb{R}$. As for two convolution kernels $F_1$ and $F_2$ with the same configuration (including the same number of channels, the same kernel size and zero-padding, and the same convolution stride), the additivity of convolutions is defined as
$$ X \otimes F_1 + X \otimes F_2 = X \otimes (F_1 + F_2). $$
Thanks to the above homogeneity and additivity of convolutions, the multi-branch and multi-scale convolutions in the DBB can be converted to an equivalent single-branch convolution by a series of linear combinations. The equivalent convolutions are employed during inference for the sake of deployment, which is supposed to reduce the inference time and total parameters.
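The homogeneity and additivity that make this branch-folding possible can be checked numerically; a small single-channel demonstration in plain numpy (not the network's actual multi-branch kernels):

```python
import numpy as np

def conv2d(X, F):
    """Valid 2-D convolution, single channel, stride 1, no padding."""
    H, W = X.shape
    K = F.shape[0]
    out = np.zeros((H - K + 1, W - K + 1))
    for h in range(out.shape[0]):
        for w in range(out.shape[1]):
            out[h, w] = np.sum(X[h:h + K, w:w + K] * F)
    return out

rng = np.random.default_rng(2)
X = rng.standard_normal((8, 8))
F1, F2 = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
# additivity: two parallel branches fold into one equivalent kernel
merged = conv2d(X, F1 + F2)
# homogeneity: a scalar factor (e.g., a fused BN scale) folds into the kernel
scaled = conv2d(X, 2.5 * F1)
```

Running two branches and summing their outputs gives exactly `merged`, which is why inference needs only the single equivalent kernel.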
On the other hand, to capture task-relevant and essential features and simultaneously suppress useless features, lots of attention modules have been designed recently. Most of these attention modules will introduce extra learnable parameters. Taking the requirements of real-time processing and lightweight deployment into consideration, a simple parameter-free attention module is employed to enhance the proposed network. For the input feature map X , an energy function in the attention module is defined as
$$ e_t(w_t, b_t, y, x_i) = (y_t - \hat{t})^2 + \frac{1}{M-1} \sum_{i=1}^{M-1} (y_o - \hat{x}_i)^2, \quad M = H \times W, $$
where $w_t$ and $b_t$ denote the weight and bias, respectively, $\hat{t} = w_t t + b_t$ denotes a linear transform of $t$, and $\hat{x}_i = w_t x_i + b_t$ denotes a linear transform of $x_i$. $t$ and $x_i$ denote the target neuron and the other neurons in the current area, respectively. $y_t$ and $y_o$ denote labels, and binary labels are used herein for simplicity, i.e., $y_t = 1$ and $y_o = -1$.
Then, the energy function can be rewritten as
$$ e_t(w_t, b_t, y, x_i) = \frac{1}{M-1} \sum_{i=1}^{M-1} \big(-1 - (w_t x_i + b_t)\big)^2 + \big(1 - (w_t t + b_t)\big)^2 + \lambda w_t^2, $$
where $\lambda w_t^2$ is the regularization item. Fortunately, there is an analytical minimum solution $e_t^*$ of the energy function as follows [31]:
$$ e_t^* = \frac{4(\hat{\sigma}^2 + \lambda)}{(t - \hat{\mu})^2 + 2\hat{\sigma}^2 + 2\lambda}, $$
where $\hat{\mu} = \frac{1}{M}\sum_{i=1}^{M} x_i$ and $\hat{\sigma}^2 = \frac{1}{M}\sum_{i=1}^{M} (x_i - \hat{\mu})^2$. A smaller value of $e_t^*$ indicates a more important feature in the target neuron. Further, the values of the energy functions of different neurons are scaled by a Sigmoid function, and these scaled values are directly multiplied with the corresponding features as follows:
$$ O_a = \mathrm{sigmoid}\left(\frac{1}{E}\right) \odot O_f, $$
where $O_a$ is the final output of the attention module and $O_f$ is the input feature. “$\odot$” indicates element-wise multiplication and $E$ groups the values of the energy function over all neurons.
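A hedged numpy sketch of this parameter-free attention (the module of [30] operates per channel inside a CNN; here a single `(C, H, W)` tensor and an illustrative $\lambda$ are assumed):

```python
import numpy as np

def parameter_free_attention(X, lam=1e-4):
    """Compute 1/e_t* per neuron from the closed-form minimum above,
    then scale the features by sigmoid(1/E)."""
    # X: (C, H, W); statistics are taken per channel
    M = X.shape[1] * X.shape[2]
    mu = X.mean(axis=(1, 2), keepdims=True)
    var = ((X - mu) ** 2).sum(axis=(1, 2), keepdims=True) / M
    # inverse of e_t* = 4(var + lam) / ((t - mu)^2 + 2 var + 2 lam)
    e_inv = ((X - mu) ** 2 + 2 * var + 2 * lam) / (4 * (var + lam))
    return X / (1 + np.exp(-e_inv))          # sigmoid(1/E) ⊙ X
```

Note that no learnable parameters are introduced: the attention weights come entirely from the per-channel statistics of the feature map itself.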
For clarity, detailed structural parameters in a single channel of the proposed network are listed in Table 1. Herein, “Module × 2” means there are two cascaded modules with similar structural parameters. “DBB” and “Attention” mean the DBB structure and the attention module are used in the corresponding layer, respectively. Finally, the outputs of the linear layer in two channels are concatenated and employed as inputs of the subnetwork for feature fusion.

3.2. The Subnetwork for Feature Fusion

An appropriate feature fusion is supposed to gain more improvement than recognition methods based on a single feature dimension, and it can make full use of the advantages of the features obtained by the STFT and the wavelet transform. Existing recognition methods based on features in multiple domains often use machine learning approaches for feature fusion, such as Bayesian theory, the random forest, and the SVM. However, these approaches can only fuse the outputs of the corresponding neural networks after those networks have been trained. In other words, two separate steps, namely processing by neural networks and a fusion strategy, are needed to complete the entire recognition process. In contrast, the subnetwork designed for feature fusion here is integrated into the whole recognition network, so it can be trained jointly with the preceding dual-channel network to achieve better fusion performance.
Two feature vectors of the dual-channel network are concatenated and employed as the inputs of the fusion subnetwork; hence, the task of the fusion subnetwork can be regarded as a sequence processing problem. Thanks to their special structure of recursions and nodes, recurrent neural networks (RNNs) have unique advantages in sequence processing. The GRU, famous for its concise structure and efficient training, can overcome short-term memory limitations and gradient explosion through its gate mechanism [31]. There are two gates in the GRU, namely the update gate and the reset gate.
Let O t denote the input feature vector of the GRU, then the outputs of the update gate u t and the reset gate r t are calculated as:
$$ u_t = \mathrm{sigmoid}(W_u \cdot [h_{t-1}, O_t]), $$
$$ r_t = \mathrm{sigmoid}(W_r \cdot [h_{t-1}, O_t]), $$
where $W_u$ and $W_r$ denote the weight matrices of $u_t$ and $r_t$, respectively, and $h_{t-1}$ is the hidden state at the previous moment. Then, the candidate hidden state $\tilde{h}_t$ and the hidden state $h_t$ at the current moment can be calculated as:
$$ \tilde{h}_t = \tanh\big(W_h \cdot [r_t \odot h_{t-1},\, O_t]\big), $$
$$ h_t = (1 - u_t) \odot h_{t-1} + u_t \odot \tilde{h}_t, $$
where “$\odot$” denotes the Hadamard product and $\tanh(\cdot)$ denotes the Tanh function. Next, the feature vectors processed by the GRU are employed as inputs for a linear layer, where the fused features are mapped to a lower dimension. Herein, there is one GRU in the designed subnetwork and the GRU has 128 hidden units. The numbers of input neurons and output neurons in the linear layer are 128 and 10, respectively.
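The gate equations above can be sketched as a single numpy GRU step (weight shapes are illustrative and biases are omitted for brevity; a real implementation would use, e.g., PyTorch's `nn.GRU`):

```python
import numpy as np

def gru_step(O_t, h_prev, W_u, W_r, W_h):
    """One GRU update following the update/reset-gate equations above."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    x = np.concatenate([h_prev, O_t])
    u = sigmoid(W_u @ x)                                  # update gate
    r = sigmoid(W_r @ x)                                  # reset gate
    h_cand = np.tanh(W_h @ np.concatenate([r * h_prev, O_t]))
    return (1 - u) * h_prev + u * h_cand                  # new hidden state
```

The output is a convex combination of the previous hidden state and the candidate state, which is what lets the GRU retain or discard fused features per dimension.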

3.3. Simulation and Training Configurations

Two kinds of suppression jamming and five kinds of deception jamming signals are considered in this paper, and these jamming signals are additively compounded in the time domain. As for detailed simulation parameters, the pulse width is 40 μs, the bandwidth ranges from 40 MHz to 60 MHz, the sampling frequency is 240 MHz, the JNR of the deception jamming ranges from 0 dB to 20 dB, and the residual JNR of the suppression jamming is 10 dB.
As for the hyper-parameters for network training, the initial learning rate is 1 × 10−3 and the cosine-annealing-warm-restart mechanism is employed to dynamically adjust the learning rate during training. A total of 80% of the simulation dataset is used for network training and 20% is used for validation. The batch size is 128 and the number of epochs is 100. The optimization algorithm is Adam [32] and the loss function is the cross-entropy loss. The software platform includes Python 3.8.5, PyTorch 1.7.1, and CUDA 11.0. The hardware platform includes an Intel Xeon Gold 6226R CPU with 256 GB of RAM and an Nvidia Quadro RTX 6000 GPU with 24 GB of video memory.
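The cosine-annealing-warm-restart schedule can be sketched as follows (the cycle length `T0` and multiplier `T_mult` are illustrative assumptions; the paper does not report its restart settings, and in PyTorch this corresponds to `torch.optim.lr_scheduler.CosineAnnealingWarmRestarts`):

```python
import numpy as np

def cosine_warm_restart_lr(epoch, eta_max=1e-3, eta_min=0.0, T0=10, T_mult=2):
    """SGDR-style schedule: cosine decay from eta_max to eta_min within a
    cycle, then restart; each cycle is T_mult times longer than the last."""
    T_i, t = T0, epoch
    while t >= T_i:          # locate the current cycle
        t -= T_i
        T_i *= T_mult
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + np.cos(np.pi * t / T_i))
```

The periodic restarts bring the learning rate back to its initial value, which can help the optimizer escape poor local minima during the 100-epoch training.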

4. Results

4.1. Recognition Performance of the Proposed Method

As introduced in Section 2, we have focused on compound jamming recognition under residual suppression jamming, where the JNR of the residual suppression jamming is 10 dB within the compound jamming. Thus, the recognition performance versus the JNR of the deception jamming within the compound jamming is analyzed and shown in Figure 4. The recognition performance when five kinds of deception jamming signals and the NPJ are compounded is provided in Figure 4a. When the JNR of the deception jamming is 0 dB, the recognition accuracy of the ISDJ + NPJ jamming is about 70%, while the recognition accuracies of the other four kinds of compound jamming signals are lower than 50%. The power of the residual NPJ is still high and the jamming noise overwhelms distinguishable features of the STFT and the wavelet transform. With the increase in JNRs, recognition accuracies of the proposed method for these five kinds of compound jamming steadily increase. When the JNR of the deception jamming is 4 dB, recognition accuracies are all close to 100%.
The recognition performance when five kinds of deception jamming signals and the NCJ are compounded is provided in Figure 4b. Since the frequency modulation slope of the SMSP is significantly different from that of other deception jamming, the time-frequency features of the SMSP are more obvious. Thereby, the recognition accuracy of the SMSP + NCJ is more than 90% even when the JNR is 0 dB. However, the recognition accuracies of the other four kinds of compound jamming are all lower than 40%. Since the residual NCJ has strong power in the time-frequency domain, it also overwhelms significant features of the deception jamming. With the increase in JNRs, the recognition performance of the proposed method also improves. When the JNR is greater than 8 dB, the recognition accuracies of the five kinds of compound jamming are close to 100%. However, when the JNR is relatively low, the recognition accuracies of the ISRJ + NCJ and the ISDJ + NCJ fluctuate rather than increasing monotonically with the JNR. This is because the suppression energy of the NCJ is concentrated densely around the features of the deception jamming in both the time-frequency domain and the wavelet transform domain, which seriously degrades those features, so the proposed method fails to capture important features of the above compound jamming. The phenomenon also reveals the significance and difficulty of compound jamming recognition.
On the other hand, as shown in Figure 4a,b, the recognition accuracies for the five kinds of compound jamming in Figure 4a reach 100% when the JNR is 4 dB, while those in Figure 4b reach 100% only when the JNR is greater than 8 dB. In other words, from the perspective of compound jamming recognition, compound jamming containing the NCJ is more difficult to recognize.
To further analyze the recognition performance of the proposed method at lower JNRs, the confusion matrix for the ten kinds of compound jamming at a JNR of 5 dB is shown in Figure 5. The proposed method achieves satisfactory recognition performance for ISRJ + NPJ, SMSP + NPJ, SMSP + NCJ, C&I + NPJ, ISLJ + NPJ, and ISDJ + NPJ. The recognition accuracy of ISRJ + NCJ is about 81.09%; among its test samples, about 11.44% are incorrectly recognized as C&I + NCJ and about 4.97% as ISDJ + NCJ. The recognition accuracy of ISLJ + NCJ is about 91.54%, with about 4.97% of its test samples incorrectly recognized as ISDJ + NCJ. The recognition performance for ISDJ + NCJ is the worst: about 18.90% of its test samples are incorrectly recognized as ISLJ + NCJ, about 13.43% as ISRJ + NCJ, and about 8.45% as C&I + NCJ. On the whole, the features of ISRJ + NCJ, ISLJ + NCJ, and ISDJ + NCJ are similar to each other at lower JNRs.
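The confusion analysis above can be reproduced mechanically from a confusion matrix. The sketch below uses a hypothetical 4×4 matrix over the NCJ-compound classes: the entries quoted in the text (81.09, 11.44, 4.97, 91.54, 18.90, 13.43, 8.45) are kept, and the remaining entries are invented placeholders chosen only so each row sums to 100:

```python
import numpy as np

# Rows: true class; columns: predicted class; values in percent.
classes = ["ISRJ+NCJ", "C&I+NCJ", "ISLJ+NCJ", "ISDJ+NCJ"]
cm = np.array([
    [81.09, 11.44,  2.50,  4.97],
    [ 5.00, 90.00,  2.00,  3.00],   # placeholder row
    [ 1.49,  2.00, 91.54,  4.97],
    [13.43,  8.45, 18.90, 59.22],
])

# Per-class accuracy is the diagonal normalized by the row sum.
per_class_acc = np.diag(cm) / cm.sum(axis=1) * 100.0

def top_confusions(cm, classes, k=3):
    """Return the k largest off-diagonal entries as (true, predicted, percent)."""
    off = cm.copy()
    np.fill_diagonal(off, 0.0)
    flat = np.argsort(off, axis=None)[::-1][:k]
    pairs = [np.unravel_index(f, off.shape) for f in flat]
    return [(classes[i], classes[j], off[i, j]) for i, j in pairs]

top = top_confusions(cm, classes)
print(top)  # largest confusion: ISDJ+NCJ mistaken for ISLJ+NCJ
```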
With the further increase in JNR, the energy of the deception jamming gradually increases, and its STFT and wavelet-transform features become increasingly distinct. The confusion matrix for the ten kinds of compound jamming at a JNR of 7 dB is shown in Figure 6. The recognition accuracies for nine kinds of compound jamming are close to 100%; only 1% of the test samples of ISRJ + NCJ are incorrectly recognized as C&I + NCJ. The recognition accuracy of ISDJ + NCJ is also significantly improved. Most of the incorrectly recognized ISDJ + NCJ samples are classified as ISLJ + NCJ, which indicates that the characteristics of the several kinds of interrupted-sampling jamming are very similar.
Since the proposed method is based on a dual-channel neural network and a fusion structure, it is necessary to assess the effectiveness of the designed fusion structure. We compare the recognition performance of the designed fusion structure with that of the single-channel methods without fusion, as shown in Figure 7. Herein, "STFT" and "CWT" denote the recognition performance using only the STFT features and only the wavelet-transform features, respectively, and "Fusion" denotes the recognition performance with the designed fusion structure. When the JNR is 0 dB, the accuracy after fusion is almost the same as that of the wavelet transform and higher than that of the STFT; that is, the wavelet transform may characterize the important features of compound jamming more effectively when the JNR is low. As the JNR increases, the recognition performance after fusion gradually surpasses that of either single domain. In particular, when the JNR is 3 dB, the accuracy after fusion is about 3.03% and 11.75% higher than that of the wavelet transform and the STFT, respectively. In summary, the accuracy after fusion is the highest at almost every JNR, and the performance after fusion is better than that obtained using only a single-domain feature. That is, the designed fusion structure can combine the advantages of the STFT and wavelet-transform features, thereby further improving the recognition performance of the proposed method.
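The core idea of feature-level fusion — combining the two channels' feature vectors before classification rather than fusing two separate decisions — can be sketched as a single forward pass. This is a minimal concatenation-fusion illustration with random stand-in weights; the actual fusion subnetwork in the paper has its own trained architecture:

```python
import numpy as np

rng = np.random.default_rng(42)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# 512-dimensional feature vectors from the STFT channel and the CWT channel
# (the 512 dimension follows Table 1; the vectors and weights are random stand-ins).
f_stft = rng.standard_normal(512)
f_cwt = rng.standard_normal(512)

n_classes = 10                                  # ten kinds of compound jamming
W = rng.standard_normal((n_classes, 1024)) * 0.01
b = np.zeros(n_classes)

fused = np.concatenate([f_stft, f_cwt])         # concatenate the two channels
probs = softmax(W @ fused + b)                  # class posteriors after fusion
pred = int(np.argmax(probs))
```

Because the fusion layer sits inside the network, its weights receive gradients from the classification loss, which is what allows joint training with the dual-channel front end.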

4.2. Comparisons with Existing Methods

4.2.1. Recognition Performance Comparison

For a fair comparison, several recognition methods based on neural networks and feature images are employed as comparison methods, namely, the JRNet [12], the MBv2 [14], and the IResNet [15]. The overall accuracy (OA) of each method for each compound jamming is listed in Table 2, where the OA is defined as:
OA = (1 / (N_JNR × N_test)) × Σ N_correct,
where N_JNR, N_test, and N_correct denote the number of JNR points, the size of the test dataset at each JNR, and the number of correctly recognized samples at a given JNR, respectively, and the sum runs over all JNRs. The mean OA (mOA) is calculated by averaging the OAs over the ten kinds of compound jamming. Herein, bold values indicate the best recognition performance in each row. Compared with the three existing methods, the proposed method achieves the best recognition performance for ISRJ + NPJ, SMSP + NPJ, SMSP + NCJ, ISLJ + NPJ, ISLJ + NCJ, and ISDJ + NPJ; for SMSP + NPJ, SMSP + NCJ, ISLJ + NPJ, and ISDJ + NPJ in particular, it gains more than 90% accuracy. Although the OAs of the JRNet for C&I + NPJ and C&I + NCJ are higher than those of the proposed method, the OA of the JRNet for ISLJ + NCJ is only 28.90%. Similarly, although the OAs of the IResNet for ISRJ + NCJ and ISDJ + NCJ are higher than those of the proposed method, its OA for C&I + NCJ is only 36.23%. Thus, judging by the mOA, which comprehensively accounts for the OAs of all ten kinds of compound jamming, the proposed method achieves the best recognition performance. To compare the recognition ability of each method more thoroughly, the F1-score metric is also employed. The F1-score values of each method for each jamming type are listed in Table 3. The F1-score values of the proposed method are the highest except for the SMSP + NCJ jamming, and its average F1-score of about 0.8478 is also the highest among the three existing methods.
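The two metrics are straightforward to compute; the snippet below evaluates them on toy counts (5 JNR points, 200 test samples per JNR for one jamming type — illustrative numbers, not the paper's dataset):

```python
import numpy as np

def overall_accuracy(correct_per_jnr, n_test):
    """OA for one jamming type: correct counts summed over all JNRs,
    divided by (number of JNRs x test-set size per JNR)."""
    correct = np.asarray(correct_per_jnr, dtype=float)
    return correct.sum() / (len(correct) * n_test)

def f1_score(tp, fp, fn):
    """Binary (one-vs-rest) F1 from true/false positives and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

oa = overall_accuracy([90, 140, 180, 195, 200], n_test=200)
f1 = f1_score(tp=805, fp=120, fn=195)
print(f"OA = {oa:.4f}")   # 805 / 1000 = 0.8050
print(f"F1 = {f1:.4f}")
```

Averaging these per-type values over the ten jamming types gives the mOA and the average F1-score reported in Tables 2 and 3.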
Next, we assess the recognition performance of each method versus the JNR of the deception jamming, as shown in Figure 8. When the JNR is 0 dB, the accuracy of the proposed method is about 44.18%, which is the highest. When the JNR is 3 dB, the accuracy of the proposed method exceeds 80%, while the accuracies of the three comparison methods are below 70%. When the JNR is 5 dB, the accuracy of the proposed method exceeds 93%, while the accuracies of the three comparison methods are below 82%. When the JNR is greater than 7 dB, the accuracy of the proposed method is close to 100%. To conclude, the proposed method outperforms the comparison methods, especially when the JNR is relatively low.

4.2.2. Fusion Strategy Comparison

Various fusion strategies have been used in recognition methods, such as the SVM [7], the random forest [8], and Bayesian decision theory [19]. The recognition performance of each fusion strategy versus the JNR is shown in Figure 9, where "Bayesian" denotes Bayesian decision theory. Herein, the inputs of each fusion strategy are the same as those of the designed fusion structure. When the JNR of the deception jamming is relatively low, the designed fusion structure gains the best recognition performance. Although Bayesian decision theory achieves slightly better performance than the designed fusion structure when the JNR is between 4 dB and 6 dB, its accuracy is much lower than that of the designed fusion structure when the JNR is between 0 dB and 1 dB.
On the whole, the proposed method has more stable and superior recognition performance compared with the three existing fusion strategies. More importantly, the three existing fusion strategies cannot be integrated into the end-to-end optimization of the network: each channel must first produce its own recognition result, and the results are then fused in a separate step. In contrast, the designed fusion structure can be trained jointly with the preceding dual-channel neural network to achieve the best fusion performance.

4.2.3. The Computational Complexity

In practical radar systems, real-time processing is a vital requirement in addition to recognition performance, so it is necessary to assess the computational complexity of the above recognition methods. For CNN-based methods, three commonly used indicators reveal the computational complexity: the inference time, the number of learnable parameters (LPs), and the number of floating-point operations (FLOPs). The computational complexity of each method is listed in Table 4. In terms of LPs, the proposed method has 11.4 M parameters, slightly fewer than the JRNet method. Because the MBv2 is a lightweight neural network specially designed by its authors, it has a small number of learnable parameters. In terms of FLOPs, the proposed method requires 1.82 G FLOPs, similar to the JRNet method; the IResNet method uses MobileNet as its backbone, so it requires the fewest FLOPs.
The inference time of the proposed method is only 11.38 ms, slightly shorter than that of the IResNet method, while the inference times of the MBv2 and the JRNet are about 24 ms. Although the IResNet method has the fewest FLOPs, its inverted residual structure and depth-wise convolutions occupy more inference time. The proposed method, in contrast, benefits from its parallel dual-channel processing. To conclude, although the proposed method requires more learnable parameters and FLOPs, its inference time is shorter than that of the comparison methods.
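Inference time should be measured as a wall-clock average after warm-up, since first calls often pay one-time costs (memory allocation, caching) that inflate the estimate. A generic measurement harness of the kind used for such comparisons can be sketched as follows, with a trivial matrix-multiply standing in for a real network:

```python
import time
import numpy as np

def benchmark(fn, x, warmup=5, runs=50):
    """Average wall-clock time of fn(x) in milliseconds, after warm-up calls."""
    for _ in range(warmup):
        fn(x)                      # discard one-time costs
    t0 = time.perf_counter()
    for _ in range(runs):
        fn(x)
    return (time.perf_counter() - t0) / runs * 1e3

# Stand-in "model": a fixed linear map on a flattened 224x224x3 input
# (the input size follows Table 1; the weights are random stand-ins).
rng = np.random.default_rng(0)
W = rng.standard_normal((10, 224 * 224 * 3)).astype(np.float32)
model = lambda x: W @ x.ravel()

x = rng.standard_normal((224, 224, 3)).astype(np.float32)
print(f"mean inference time: {benchmark(model, x):.3f} ms")
```

The same harness applied to each network on identical hardware yields directly comparable timings such as those in Table 4.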

5. Discussion

Although high-quality simulation datasets can partly evaluate the recognition performance of the proposed method, the actual electromagnetic environment is far more complex than the simulation conditions, so the feasibility of the proposed method should also be verified on measured datasets. However, actual jammers are typically highly classified military equipment, and countermeasure experiments are complicated to organize. It is therefore difficult to collect adequate measured compound jamming samples, which is also why most existing methods fail to analyze performance on measured datasets.
Thanks to countermeasure experiments with a military jammer organized by our laboratory, we were able to collect a large number of measured jamming samples. However, due to the limited hardware conditions, the measured samples all belong to the NPJ. Hence, simulated deception jamming samples are added to the measured NPJ to construct semi-measured compound jamming samples. In total, five types of semi-measured compound jamming signals are constructed, namely, ISRJ + NPJ, SMSP + NPJ, C&I + NPJ, ISLJ + NPJ, and ISDJ + NPJ.
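Constructing a semi-measured sample amounts to scaling the simulated deception signal so that its power relative to the measured NPJ matches a target JNR, then summing the two. A minimal sketch (the waveforms below are stand-ins, not the measured data):

```python
import numpy as np

def scale_to_jnr(jamming, noise, jnr_db):
    """Scale a simulated deception-jamming signal so that its power relative
    to the measured suppression jamming equals the target JNR in dB."""
    p_j = np.mean(np.abs(jamming) ** 2)
    p_n = np.mean(np.abs(noise) ** 2)
    gain = np.sqrt(p_n * 10 ** (jnr_db / 10) / p_j)
    return gain * jamming

rng = np.random.default_rng(1)
measured_npj = rng.standard_normal(4096)          # stand-in for a measured NPJ record
sim_deception = np.cos(0.2 * np.arange(4096))     # stand-in simulated deception signal

scaled = scale_to_jnr(sim_deception, measured_npj, jnr_db=5.0)
semi_measured = measured_npj + scaled             # semi-measured compound sample

# Verify the achieved deception-to-NPJ power ratio in dB.
achieved = 10 * np.log10(np.mean(scaled**2) / np.mean(measured_npj**2))
```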
The recognition confusion matrix of the proposed method for the five semi-measured compound jamming signals is shown in Figure 10. The recognition accuracies for SMSP + NPJ and ISLJ + NPJ are 100%, which indicates that the proposed method can effectively recognize these two types of semi-measured compound jamming. The accuracy for ISRJ + NPJ is 93.75%, with 6.25% of the test samples incorrectly recognized as C&I + NPJ. The accuracy for ISDJ + NPJ is 93.36%, with 3.91% of the test samples incorrectly recognized as ISLJ + NPJ. Over the five types of semi-measured compound jamming, the proposed method achieves an average recognition accuracy of about 97.18%, which verifies its feasibility and generalization ability on semi-measured data.

6. Conclusions

To deal with the problem of compound jamming recognition, a recognition method based on a dual-channel neural network and a feature fusion strategy is proposed in this paper. Taking the noise characteristics caused by suppression jamming into account, the proposed method uses STFT and wavelet-transform features to enrich the feature maps and enhance the representation of compound jamming. The DBB structure and the attention module are also incorporated into the designed dual-channel network to strengthen feature extraction and adaptive selection. Simulation results verify that the proposed method based on feature fusion outperforms the methods using only one feature. The proposed method gains an average recognition accuracy of more than 93% for the ten types of compound jamming when the JNR is 5 dB, and the average accuracy is close to 100% when the JNR is 7 dB, demonstrating better recognition performance with less inference time than the three existing methods. Furthermore, compared with the three existing fusion strategies, the designed fusion structure further improves recognition performance under low JNR conditions. The results with the semi-measured datasets also verify the potential feasibility and generalization ability of the proposed method. To conclude, the proposed method is capable of recognizing ten kinds of simulated compound jamming and five kinds of semi-measured compound jamming, thanks to the elaborately designed network architecture with the feature fusion strategy and the stable feature representation provided by the STFT and the wavelet transform. On the other hand, when the power of the suppression jamming is too high, the proposed method may fail to recognize the compound jamming correctly. The selection of input features is also significant for recognition, and inappropriate input features may degrade the performance of the feature fusion strategy.
Nevertheless, as the electromagnetic environment on the battlefield becomes increasingly complex, limited simulation and semi-measured datasets may be insufficient to fully assess the actual recognition performance. In the future, we will attempt to collect more comprehensive measured datasets to verify and improve the performance of the proposed method in real-world electromagnetic environments. Additionally, more advanced signal processing techniques and neural network architectures could be explored. Furthermore, more comparative studies with state-of-the-art techniques will be conducted to benchmark the proposed method's performance and identify areas for further improvement.

Author Contributions

Conceptualization, H.C. (Hao Chen) and Y.W.; methodology, H.C. (Hao Chen) and L.Z.; software, H.C. (Hui Chen) and J.Z.; validation, B.L., L.Z. and Y.W.; writing—original draft preparation, H.C. (Hao Chen); writing—review and editing, H.C. (Hui Chen), Z.L. and Y.W.; funding acquisition, B.L. and H.C. (Hao Chen). All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 62001510 and in part by the Enhance Foundation Project of the Wuhan Electronic Information Institute under Grant HJGC-2023-028.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Meng, Y.; Yu, L.; Wei, Y. Multi-Label Radar Compound Jamming Signal Recognition Using Complex-Valued CNN with Jamming Class Representation Fusion. Remote Sens. 2023, 15, 5180. [Google Scholar] [CrossRef]
  2. Lei, Z.; Zhang, Z.; Zhou, B.; Chen, H.; Dou, G.; Wang, Y. Transient Interference Suppression Method Based on an Improved TK Energy Operator and Fuzzy Reasoning. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5106214. [Google Scholar] [CrossRef]
  3. Zhang, H.; Yu, L.; Chen, Y.; Wei, Y. Fast Complex-Valued CNN for Radar Jamming Signal Recognition. Remote Sens. 2021, 13, 2867. [Google Scholar] [CrossRef]
  4. Lei, Z.; Qu, Q.; Chen, H.; Zhang, Z.; Dou, G.; Wang, Y. Mainlobe Jamming Suppression with Space–Time Multichannel via Blind Source Separation. IEEE Sens. J. 2023, 23, 17042–17053. [Google Scholar] [CrossRef]
  5. Zhou, H.; Wang, Z.; Wu, R.; Xu, X.; Guo, Z. Jamming Recognition Algorithm Based on Variational Mode Decomposition. IEEE Sens. J. 2023, 23, 17341–17349. [Google Scholar] [CrossRef]
  6. Lv, Q.; Quan, Y.; Sha, M.; Feng, W.; Xing, M. Deep Neural Network-Based Interrupted Sampling Deceptive Jamming Countermeasure Method. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 9073–9085. [Google Scholar] [CrossRef]
  7. Yang, X.; Ruan, H. A Recognition Method of Deception Jamming Based on Image Zernike Moment Feature of Time-frequency Distribution. Mod. Radar 2018, 40, 91–95. [Google Scholar]
  8. Du, C.; Tang, B. Novel Unconventional-Active-Jamming Recognition Method for Wideband Radars Based on Visibility Graphs. Sensors 2019, 19, 2344. [Google Scholar] [CrossRef]
  9. Wei, S.; Qu, Q.; Zeng, X.; Liang, J.; Shi, J.; Zhang, X. Self-Attention Bi-LSTM Networks for Radar Signal Modulation Recognition. IEEE Trans. Microw. Theory Tech. 2021, 69, 5160–5172. [Google Scholar] [CrossRef]
  10. Feng, M.; Wang, Z. Interference Recognition Based on Singular Value Decomposition and Neural Network. J. Electron. Inf. Technol. 2020, 42, 2573–2578. [Google Scholar]
  11. Shao, Z.; Xu, D.; Xu, W. Radar Active Jamming Recognition Based on LSTM and Residual Network. Syst. Eng. Electron. 2023, 45, 416–423. [Google Scholar]
  12. Qu, Q.; Wei, S.; Liu, S.; Liang, J.; Shi, J. JRNet: Jamming Recognition Networks for Radar Compound Suppression Jamming Signals. IEEE Trans. Veh. Technol. 2020, 69, 15035–15045. [Google Scholar] [CrossRef]
  13. Zhou, H.; Wang, L.; Guo, Z. Recognition of Radar Compound Jamming Based on Convolutional Neural Network. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 7380–7394. [Google Scholar] [CrossRef]
  14. Zou, W.; Xie, K.; Lin, J. Light-weight Deep Learning Method for Active Jamming Recognition Based on Improved MobileViT. IET Radar Sonar Navig. 2023, 17, 1299–1311. [Google Scholar] [CrossRef]
  15. Jin, Z.; Zhang, X.; Tan, S.; Zhang, X.; Wei, J. Jamming Identification Based on Inverse Residual Neural Network with Integrated Time-Frequency Channel Attention. J. Signal Process. 2023, 39, 343–355. [Google Scholar]
  16. Krayani, A.; Alam, A.S.; Marcenaro, L.; Nallanathan, A.; Regazzoni, C. Automatic Jamming Signal Classification in Cognitive UAV Radios. IEEE Trans. Veh. Technol. 2022, 71, 12972–12988. [Google Scholar] [CrossRef]
  17. Wang, P.Y.; Cheng, Y.F.; Xu, H.; Shang, G. Jamming Classification Using Convolutional Neural Network-Based Joint Multi-Domain Feature Extraction. J. Signal Process. 2022, 38, 915–925. [Google Scholar]
  18. Kong, Y.; Xia, S.; Dong, L.; Yu, X.; Cui, G. Intelligent Recognition Method of Radar Active Jamming Based on Parallel Deep Learning Network. Mod. Radar 2021, 43, 9–14. [Google Scholar]
  19. Zhou, H.; Dong, C.; Wu, R.; Xu, X.; Guo, Z. Feature Fusion Based on Bayesian Decision Theory for Radar Deception Jamming Recognition. IEEE Access 2021, 9, 16296–16304. [Google Scholar] [CrossRef]
  20. Greco, M.; Gini, F.; Farina, A. Radar Detection and Classification of Jamming Signals Belonging to a Cone Class. IEEE Trans. Signal Process. 2008, 56, 1984–1993. [Google Scholar] [CrossRef]
  21. Xiao, J.; Wei, X.; Sun, J. Interrupted-Sampling Multi-Strategy Forwarding Jamming with Amplitude Constraints Based on Simultaneous Transmission and Reception Technology. Digit. Signal Process. 2023, 147, 1051–2004. [Google Scholar] [CrossRef]
  22. Wei, J.; Li, Y.; Yang, R.; Wang, J.; Ding, M.; Ding, J. A Nonuniformly Distributed Multipulse Coded Waveform to Combat Azimuth Interrupted Sampling Repeater Jamming in SAR. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 9054–9066. [Google Scholar] [CrossRef]
  23. Zhang, L.; Wang, G.; Zhang, X.; Li, S.; Xin, T. Interrupted-Sampling Repeater Jamming Adaptive Suppression Algorithm Based on Fractional Dictionary. Syst. Eng. Electron. 2020, 42, 1439–1448. [Google Scholar]
  24. Zeng, L.; Chen, H.; Zhang, Z.; Liu, W.; Wang, Y.; Ni, L. Cutting Compensation in the Time-Frequency Domain for Smeared Spectrum Jamming Suppression. Electronics 2022, 11, 1970. [Google Scholar] [CrossRef]
  25. Han, X.; He, H.; Zhang, Q.; Yang, L.; He, Y.; Li, Z. Main-Lobe Jamming Suppression Method for Phased Array Netted Radar Based on MSNR-BSS. IEEE Sens. J. 2022, 22, 22972–22984. [Google Scholar] [CrossRef]
  26. Wang, Y.; Zhu, S.; Lan, L.; Xu, J.; Li, X. Suppression of Noise Convolution Jamming with FDA-MIMO Radar. J. Signal Process. 2023, 39, 191–201. [Google Scholar]
  27. Sun, G.; Xing, S.; Huang, D.; Li, Y.; Wang, X. Jamming Method of Intermittent Sampling Against SAR-GMTI Based on Noise Multiplication Modulation. Syst. Eng. Electron. 2022, 44, 3059–3071. [Google Scholar]
  28. Lv, Q.; Quan, Y.; Feng, W.; Sha, M.; Dong, S.; Xing, M. Radar Deception Jamming Recognition Based on Weighted Ensemble CNN With Transfer Learning. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5107511. [Google Scholar] [CrossRef]
  29. Ding, X.; Zhang, X.; Han, J.; Ding, G. Diverse Branch Block: Building a Convolution as an Inception-like Unit. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 10886–10895. [Google Scholar]
  30. Yang, L.; Zhang, R.Y.; Li, L.; Xie, X. SimAM: A Simple, Parameter-Free Attention Module for Convolutional Neural Networks. In Proceedings of the 38th International Conference on Machine Learning, Virtual, 18–24 July 2021; Volume 139, pp. 11863–11874. [Google Scholar]
  31. Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Gated Feedback Recurrent Neural Network. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 2067–2075. [Google Scholar]
  32. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
Figure 1. The diagram of true target echo and the ISFJ.
Figure 2. The comparison between C&I and ISRJ.
Figure 3. The flowchart of the proposed method.
Figure 4. Recognition performance of the proposed method versus different JNRs. (a): NPJ + deception jamming; (b): NCJ + deception jamming.
Figure 5. The recognition confusion matrix at 5 dB.
Figure 6. The recognition confusion matrix at 7 dB.
Figure 7. Recognition performance with the fusion strategy and without the fusion strategy.
Figure 8. Recognition performance of each method versus JNRs.
Figure 9. Recognition performance of each fusion strategy versus JNRs.
Figure 10. Recognition performance of the proposed method with the semi-measured datasets.
Table 1. Structure in one channel of the proposed network.
Input Size | Output Size | Layers/Modules | Kernel, Stride, Padding
224 × 224 × 3 | 112 × 112 × 64 | Conv-1 | 7, 2, 3
112 × 112 × 64 | 56 × 56 × 64 | Max pool | 3, 2, 1
56 × 56 × 64 | 56 × 56 × 64 | Module-1 × 2 | DBB 3, 1, 1 + Attention
56 × 56 × 64 | 28 × 28 × 128 | Module-2 × 2 | DBB 3, 1, 1 + Attention
28 × 28 × 128 | 14 × 14 × 256 | Module-3 × 2 | DBB 3, 1, 1 + Attention
14 × 14 × 256 | 7 × 7 × 512 | Module-4 × 2 | DBB 3, 1, 1 + Attention
7 × 7 × 512 | 1 × 1 × 512 | Average pool | 7, 1, 0
512 | 10 | Linear | -
Table 2. OA and mOA of four methods for each jamming.
Jamming Type | Proposed Method (%) | MBv2 (%) | JRNet (%) | IResNet (%)
ISRJ + NPJ | 88.69 | 86.20 | 82.81 | 82.14
ISRJ + NCJ | 72.59 | 54.14 | 47.04 | 77.02
SMSP + NPJ | 93.98 | 82.18 | 89.23 | 87.83
SMSP + NCJ | 99.14 | 99.59 | 71.23 | 98.37
C&I + NPJ | 86.07 | 77.02 | 89.14 | 82.86
C&I + NCJ | 79.10 | 45.77 | 90.95 | 36.23
ISLJ + NPJ | 91.81 | 82.41 | 75.40 | 83.94
ISLJ + NCJ | 74.90 | 71.28 | 28.90 | 43.19
ISDJ + NPJ | 93.22 | 92.63 | 89.60 | 90.64
ISDJ + NCJ | 67.80 | 65.90 | 37.99 | 75.58
mOA | 84.73 | 75.71 | 70.23 | 75.78
Table 3. The F1-score values of four methods for each jamming.
Jamming Type | Proposed Method | MBv2 | JRNet | IResNet
ISRJ + NPJ | 0.8915 | 0.845 | 0.8389 | 0.8405
ISRJ + NCJ | 0.7231 | 0.6401 | 0.5588 | 0.5856
SMSP + NPJ | 0.9475 | 0.8885 | 0.8239 | 0.9186
SMSP + NCJ | 0.9956 | 0.9979 | 0.8319 | 0.9917
C&I + NPJ | 0.8989 | 0.8453 | 0.8015 | 0.8454
C&I + NCJ | 0.8093 | 0.6163 | 0.5344 | 0.5257
ISLJ + NPJ | 0.9262 | 0.8361 | 0.8432 | 0.8652
ISLJ + NCJ | 0.7409 | 0.5848 | 0.4479 | 0.5691
ISDJ + NPJ | 0.8755 | 0.7993 | 0.8021 | 0.8118
ISDJ + NCJ | 0.6698 | 0.5532 | 0.4998 | 0.6071
Average | 0.8478 | 0.7603 | 0.6981 | 0.7561
Table 4. Computational complexity of each method.
Metric | Proposed Method | MBv2 | JRNet | IResNet
Time (ms) | 11.38 | 24.23 | 24.56 | 11.71
LPs (M) | 11.4 | 2 | 11.69 | 4.04
FLOPs (G) | 1.82 | 0.82 | 1.82 | 0.398

