Article

Seamless Optimization of Wavelet Parameters for Denoising LFM Radar Signals: An AI-Based Approach

1 Radar Department, Military Technical College, Cairo 11588, Egypt
2 Department of Electrical and Computer Engineering, University of Victoria, Victoria, BC V8P 5C2, Canada
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(22), 4211; https://doi.org/10.3390/rs16224211
Submission received: 12 September 2024 / Revised: 2 November 2024 / Accepted: 7 November 2024 / Published: 12 November 2024

Abstract:
Linear frequency modulation (LFM) signals are pivotal in radar systems, enabling high-resolution measurements and target detection. However, these signals are often degraded by noise, significantly impacting their processing and interpretation. Traditional denoising methods, including wavelet-based techniques, have been extensively used to address this issue, yet they often fall short in terms of optimizing performance due to fixed parameter settings. This paper introduces an innovative approach by combining wavelet denoising with long short-term memory (LSTM) networks specifically tailored for LFM signals in radar systems. By generating a dataset of LFM signals at various signal-to-noise ratios (SNRs) to ensure diversity, we systematically identified the optimal wavelet parameters for each noisy instance. These parameters served as training labels for the proposed LSTM-based architecture, which learned to predict the most effective denoising parameters for a given noisy LFM signal. Our findings reveal a significant enhancement in denoising performance, attributed to the optimized wavelet parameters derived from the LSTM predictions. This advancement not only demonstrates a superior denoising capability but also suggests a substantial improvement in radar signal processing, potentially leading to more accurate and reliable radar detections and measurements. The implications of this paper extend beyond modern radar applications, offering a framework for integrating deep learning techniques with traditional signal processing methods to optimize performance across various noise-dominated domains.

1. Introduction

Linear frequency modulation (LFM) signals, characterized by their ability to linearly vary in frequency over time, are fundamental to the field of radar systems. Primarily, LFM signals are renowned for their high range resolution, a critical factor in distinguishing between closely spaced objects in radar target-detection scenarios [1]. This capability is paramount in a wide array of applications, from military defense systems, where identifying and tracking objects with precision is vital, to civilian domains such as air traffic control and weather monitoring, where the clarity and accuracy of radar readings can significantly impact safety and operational efficiency [2,3]. Moreover, LFM signals exhibit robustness against interference and signal jamming, making them highly reliable in environments cluttered with unwanted signals or in scenarios where electronic countermeasures are employed [4]. In addition, the Doppler characteristics of LFM signals allow radar systems to maintain accuracy in measuring the velocity of moving targets, even when subjected to high levels of interference [5]. This resilience enhances the overall performance of radar systems, ensuring that critical data can be captured and interpreted accurately, even under adverse conditions.
The versatility of LFM signals is further demonstrated in their application across various radar platforms, including in synthetic aperture radar (SAR) for detailed Earth observation and ground mapping [6] as well as in automotive radar systems for enhancing vehicle safety through obstacle detection and collision avoidance systems [7].
However, like all radar signals, LFM signals are susceptible to noise, which can degrade their quality and reduce the effectiveness of the radar system [8]. Noise can come from a variety of sources, including environmental factors, system hardware limitations, and external electromagnetic interference, posing a challenge to maintaining the quality of the LFM signal. This paper addresses this challenge by integrating wavelet denoising methods with deep learning techniques to significantly improve the denoising performance of LFM signals and, by extension, the overall efficiency and reliability of radar systems. By combining the strengths of wavelet transforms and LSTM networks, the proposed method has the deep learning network predict the best wavelet parameters for denoising a given LFM signal, thereby moving from the classical approach of fixed, predefined wavelet parameters to an adaptive wavelet scheme in which the parameters, especially the threshold rule, adapt to the signal. This contribution offers a more robust solution to noise reduction, ultimately increasing the reliability and operational effectiveness of radar systems in complex and noisy environments.
Noise in LFM signals leads to several detrimental effects on signal processing. Firstly, it reduces the signal-to-noise ratio (SNR), a critical measure of signal quality. A lower SNR can obscure important features of the target signal, making it more challenging to accurately detect and identify objects. This reduction in signal clarity is particularly problematic in applications requiring high-resolution imaging or precise target tracking, where the ability to distinguish between closely spaced objects or detect small features is paramount.
Furthermore, noise could complicate the process of signal extraction and analysis. Radar systems rely on the accurate interpretation of reflected signals to determine object characteristics such as distance, speed, and shape. Noise may distort these signals, leading to errors in measurement and analysis.
Additionally, the presence of noise necessitates more complex and computationally intensive signal processing algorithms [9]. These algorithms must not only extract the desired signal from a noisy background but also accurately identify and mitigate the effects of noise. This requirement for advanced denoising techniques increases the processing time and computational resources needed, potentially limiting the real-time capabilities of radar systems.
Traditional methods, such as filtering and wavelet denoising, have been employed to mitigate these effects. However, these techniques often require manual parameter tuning, especially wavelet techniques, and may not be fully effective across the diverse range of conditions encountered in radar operations. As a result, there is a pressing need for innovative approaches that can adaptively and efficiently improve the quality of LFM signals, enhancing the performance and reliability of radar systems.
Classical wavelet techniques have long stood as a cornerstone in the denoising of signals [10]. These techniques leverage the wavelet transform’s ability to decompose signals into components that vary in scale, thereby isolating the noise from the signal of interest. This decomposition is instrumental in distinguishing between the high-frequency components, which are typically associated with noise, and the signal’s true underlying features, which are often manifested at lower frequencies. Thus, shallow decomposition levels may fail to remove some noise-associated components, while overly deep ones may remove some signal-associated structures [11].
The wavelet denoising process involves several key steps [12,13], as shown in Figure 1. Initially, the signal is passed through a wavelet transform, which breaks it down into a series of coefficients across different levels of resolution. This step is crucial for identifying the components of the signal that are most affected by noise. Following this, a thresholding technique is applied to the wavelet coefficients. Thresholding aims to reduce or eliminate the coefficients that are likely to be noise, a process that can be performed using various threshold rules [14]. The choice of thresholding rule and the determination of the threshold value are pivotal to the effectiveness of the denoising process. After thresholding, the inverse wavelet transform is applied to reconstruct the signal, ideally with the noise significantly reduced [15].
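To make these steps concrete, the following minimal MATLAB sketch (assuming the Wavelet Toolbox is available; the input signal noisy and the parameter values are illustrative placeholders rather than the configuration finally selected in this paper) runs the whole decompose, threshold, and reconstruct chain through the toolbox's wdenoise function:

```matlab
% One-call sketch of the wavelet denoising pipeline in Figure 1.
% 'noisy' is any noisy 1-D signal; the parameter values are illustrative.
xden = wdenoise(noisy, 7, ...                 % decomposition level
    'Wavelet',         'db8', ...             % mother wavelet
    'DenoisingMethod', 'Bayes', ...           % threshold rule
    'ThresholdRule',   'Soft');               % soft thresholding of the detail coefficients
```

Section 2 details the individual steps that such a call performs internally.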
Despite their effectiveness, classical wavelet techniques have limitations [16]. One significant challenge is the selection of optimal wavelet parameters. The selection of the wavelet family, decomposition levels, and the thresholding rule requires a subtle understanding of both the signal characteristics and the nature of the noise. Traditionally, these parameters are selected based on heuristic methods or exhaustive experimentation, which may not be feasible or efficient in operational settings and may not always yield optimal results, especially in dynamic or complex noise environments.
The limitations of classical wavelet denoising techniques underscore the need for more advanced and adaptive approaches. In [17,18], the authors demonstrated improved signal denoising using a classical wavelet approach with fixed wavelet parameters, but noted that adaptive parameters, especially in environments with a wide range of noise levels, would achieve greater noise removal while preserving the original signal. This has led to the exploration of integrating wavelet methods with emerging technologies, such as deep learning, to enhance denoising performance. By leveraging the pattern recognition and learning capabilities of deep neural networks, we aim to overcome the challenges of parameter selection and adaptability, paving the way for more effective and efficient denoising methods for LFM signals and beyond.
In this paper, deep learning, particularly through the use of LSTM networks, offers a compelling solution to this challenge. LSTM networks, known for their excellence in modeling time series data and capturing long-term dependencies, present an opportunity to learn the complex relationships between noisy LFM signals and the optimal wavelet denoising parameters. By training an LSTM network with a dataset comprised of LFM signals subjected to various noise conditions and their corresponding optimum wavelet parameters, the network can learn to predict the best wavelet parameters for a given noisy LFM signal. This process not only automates the parameter selection process but also adapts to the specific characteristics of the noise and the signal, potentially leading to a more effective denoising outcome.
Integrating wavelet denoising with deep learning offers several advantages. First, it enhances the adaptability of the denoising process, allowing for the real-time optimization of parameters in response to changing noise conditions. Second, it leverages the learning capability of deep neural networks, enabling them to understand the intricate dynamics between the signal, noise, and denoising parameters and potentially uncovering patterns not immediately apparent through traditional methods. Finally, this approach aims to improve the overall efficiency and efficacy of radar signal processing, leading to clearer, more accurate radar time-series signals, which is crucial for a wide array of applications.
By bridging the gap between classical signal processing techniques and cutting-edge artificial intelligence, this study seeks to advance the field of radar signal denoising, offering a scalable, adaptive, and highly effective solution to the perennial challenge of noise.
This paper is organized as follows: Section 2 presents a detailed review of wavelet denoising techniques in radar signal processing, highlighting the foundational methods. Section 3 provides an overview of deep learning in signal denoising, focusing on LSTM networks, discussing the advantages of LSTM networks in this domain. The methodology of the proposed algorithm is outlined in Section 4, where we describe the integration of wavelet denoising with deep learning. In Section 5, we present the results and discussion, offering insights into the performance of the proposed method under various conditions. Finally, Section 6 concludes the paper with a summary of the findings and potential directions for future research.

2. Wavelet Denoising Techniques in Radar Signal Processing

Wavelet denoising techniques have emerged as a fundamental aspect of signal processing, especially in radar system applications. These techniques exploit the wavelet transform’s capacity to break down a signal into a sequence of wavelet coefficients, representing the signal across different scales. This capability is pivotal for distinguishing noise from the signal by targeting coefficients primarily influenced by noise. The denoising process generally involves three key steps: First, the signal is decomposed using the wavelet transform; next, a thresholding process is applied to the wavelet coefficients in order to suppress noise; and finally, the signal is reconstructed using the inverse wavelet transform, ideally with the noise effectively reduced.

2.1. Wavelet Transform

The mathematical foundation of wavelet denoising is based on the wavelet transform, which, in its discrete form, is defined as
$$W_{j,k} = \sum_{n=0}^{N-1} f[n]\, \psi_{j,k}[n]$$
where:
  • $W_{j,k}$ represents the wavelet coefficient at scale $j$ and translation $k$,
  • $f[n]$ denotes the discrete signal to be analyzed,
  • $\psi_{j,k}[n]$ is the discrete mother wavelet function at scale $j$ and translation $k$.
The discrete wavelet transform is applied to decompose the signal into a wavelet representation, with scale $j$ adjusting the analysis level and with translation $k$ moving along the signal. The selection of the discrete mother wavelet, $\psi[n]$, is crucial, as it significantly influences the decomposition’s effectiveness in signal characterization and noise isolation.

2.2. Denoising Process

The denoising process encompasses decomposition, thresholding, and reconstruction:

2.2.1. Decomposition

The signal undergoes decomposition into wavelet coefficients via the discrete wavelet transform (DWT) [19], described for a discrete signal $x[n]$ as
$$\mathrm{DWT}\{x[n]\} = \big[(d_1, d_2, \ldots, d_N),\ (a_1, a_2, \ldots, a_N)\big]$$
Here, $d_i$ are the detail coefficients capturing high-frequency components (typically noise), and $a_i$ are the approximation coefficients representing the signal’s low-frequency components, where $i \in [1, N]$.
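As a brief illustration, a multilevel decomposition of this kind can be obtained with MATLAB's Wavelet Toolbox as sketched below; the signal x, the number of levels, and the wavelet name are assumptions chosen only for illustration:

```matlab
% Multilevel DWT decomposition sketch. wavedec packs all coefficients into a
% single vector C, with the bookkeeping vector L giving the length of each level.
N = 7;  wname = 'db8';                   % illustrative level and mother wavelet
[C, L] = wavedec(x, N, wname);

d1 = detcoef(C, L, 1);                   % finest detail coefficients d_1 (mostly noise)
dN = detcoef(C, L, N);                   % coarsest detail coefficients d_N
aN = appcoef(C, L, wname, N);            % approximation coefficients a_N (low-frequency content)
```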

2.2.2. Thresholding

Thresholding plays a critical role in the wavelet denoising process, aimed at reducing noise by modifying the wavelet coefficients. There are two primary types of thresholding: hard thresholding and soft thresholding.
  • Hard thresholding is defined as
    $$\hat{d}_i = \begin{cases} d_i & \text{if } |d_i| > \lambda \\ 0 & \text{otherwise} \end{cases}$$
    Coefficients smaller than the threshold value $\lambda$ are set to zero, while those larger are left unchanged. This method is straightforward and retains the coefficients that are considered significant, removing the rest as noise.
  • Soft thresholding is defined as
    $$\hat{d}_i = \begin{cases} d_i - \lambda & \text{if } d_i > \lambda \\ d_i + \lambda & \text{if } d_i < -\lambda \\ 0 & \text{otherwise} \end{cases}$$
    Wavelet coefficients with magnitudes below a threshold $\lambda$ are set to zero, while the remaining coefficients are reduced in magnitude by $\lambda$. This method eliminates noise components represented by small coefficients and reduces the effect of noise on larger coefficients, smoothing the signal and leading to a coherent reconstruction. Soft thresholding is particularly advantageous for signals with subtle structures or features, as it preserves essential characteristics while effectively suppressing noise. For the denoising of LFM signals in radar processing, soft thresholding is often preferred over hard thresholding [20]. The rationale behind this preference lies in the nature of radar signals, which may contain fine details needed for accurate target detection and characterization. Soft thresholding preserves these features while effectively reducing noise. A minimal sketch of both thresholding rules follows this list.
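The sketch below, continuing the decomposition example in Section 2.2.1, applies both rules to a vector of detail coefficients using the Wavelet Toolbox's wthresh function; the universal-threshold estimate of $\lambda$ is only one possible choice and is used here as an assumption:

```matlab
% Hard vs. soft thresholding of detail coefficients (d1 and x from the sketch above).
lambda  = median(abs(d1)) / 0.6745 * sqrt(2 * log(numel(x)));  % universal threshold estimate
d1_hard = wthresh(d1, 'h', lambda);   % keep |d| > lambda unchanged, zero the rest
d1_soft = wthresh(d1, 's', lambda);   % additionally shrink surviving coefficients by lambda
```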

2.2.3. Reconstruction

After applying thresholding to the detail coefficients $d_i$ (which typically contain noise), we obtain the modified coefficients $\hat{d}_i$. The denoised signal $\hat{x}[n]$ is then reconstructed using the inverse discrete wavelet transform (IDWT), which combines the modified detail coefficients $\hat{d}_i$ and the original approximation coefficients $a_i$ across all decomposition levels. The reconstruction process is mathematically expressed as
$$\hat{x}[n] = \mathrm{IDWT}\big[(\hat{d}_1, \hat{d}_2, \ldots, \hat{d}_N),\ (a_1, a_2, \ldots, a_N)\big]$$
In this equation, the IDWT function takes the modified detail coefficients $\hat{d}_i$ (where $i \in [1, N]$) and the unaltered approximation coefficients $a_i$ to reconstruct the denoised signal $\hat{x}[n]$. Essentially, the IDWT reverses the wavelet transform process, combining these coefficients to produce a time-domain signal where noise has been reduced or eliminated, while retaining the essential features of the original signal.
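A self-contained sketch of the full threshold-and-reconstruct step, again assuming MATLAB's Wavelet Toolbox and an illustrative universal threshold, is given below:

```matlab
% Soft-threshold every detail level, keep the approximation, and invert the DWT.
% 'x' is any noisy 1-D signal; level, wavelet, and threshold are illustrative.
N = 7;  wname = 'db8';
[C, L]  = wavedec(x, N, wname);
lambda  = median(abs(detcoef(C, L, 1))) / 0.6745 * sqrt(2 * log(numel(x)));

Cmod = C;
for k = 1:N
    d     = wthresh(detcoef(C, L, k), 's', lambda);   % modified details d_k
    first = sum(L(1:end-k-1)) + 1;                    % start index of level-k details in C
    Cmod(first:first+numel(d)-1) = d;                 % write the modified details back
end
xden = waverec(Cmod, L, wname);                       % inverse DWT -> denoised signal
```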

3. Using LSTM for Signal Denoising

Deep learning algorithms, recognized for their capacity to automatically learn complex hierarchical representations from data, have gained significant traction in signal denoising applications [21,22]. Unlike traditional methods that rely on predefined mathematical models, deep learning approaches adaptively learn how to denoise signals directly from the data during training. This adaptability allows these methods to handle a wide variety of noise types and signal characteristics, often surpassing classical techniques in both denoising performance and operational flexibility. Among the various deep learning models, LSTM [23], a specialized type of recurrent neural network (RNN), has demonstrated remarkable potential in processing time-series data, including signals used in radar systems [24].
LSTM networks are particularly well-suited for denoising time-series signals due to their unique architecture, which is designed to capture long-term dependencies in sequential data. The core innovation of LSTM networks lies in their memory cells, which can retain information across extended sequences, thereby addressing the limitations of traditional RNNs that struggle with long-term dependencies. This capability is critical for applications where understanding the temporal dynamics of the signal is essential for effective denoising.
Figure 2 illustrates the architecture of an LSTM memory cell, which is the building block of an LSTM network. The memory cell includes three primary gates: the input gate, the forget gate, and the output gate. These gates control the flow of information into the memory cell, decide which information to retain or discard, and determine which information should be passed on to the next time step.
The behavior of the LSTM cell is governed by the following equations:
$$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$$
$$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$$
$$\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)$$
$$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$$
$$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$$
$$h_t = o_t \odot \tanh(C_t)$$
  • Forget Gate ($f_t$): This gate decides which information from the previous cell state ($C_{t-1}$) should be forgotten or retained. It is computed using the previous hidden state ($h_{t-1}$) and the current input ($x_t$), followed by a sigmoid activation function, which outputs values between 0 and 1, where 1 indicates complete retention, and 0 indicates complete forgetting.
  • Input Gate ($i_t$): The input gate controls the extent to which new information (i.e., the candidate cell state $\tilde{C}_t$) should be added to the cell state. Like the forget gate, it uses a sigmoid function to output values between 0 and 1.
  • Candidate Cell State ($\tilde{C}_t$): This is a potential update to the cell state and is generated by passing the previous hidden state and the current input through a tanh function, which scales the values to lie between −1 and 1.
  • Cell State Update ($C_t$): The actual cell state is updated by combining the previous cell state (scaled by the forget gate) and the candidate cell state (scaled by the input gate).
  • Output Gate ($o_t$): The output gate determines the next hidden state ($h_t$) of the LSTM, which is also the output of the memory cell. It uses a sigmoid function to decide how much of the cell state should be output, and a tanh function is applied to the cell state to generate the hidden state.
  • Hidden State ($h_t$): This represents the output of the LSTM cell at time step $t$, which will be passed to the next cell in the sequence.
These components work together to enable the LSTM network to effectively capture and retain long-term dependencies in the data, making it an excellent choice for denoising tasks in radar signal processing. By learning from data that includes noisy LFM signals, the LSTM network can adaptively predict the optimal parameters for denoising, thereby improving the signal quality and enhancing the performance of radar systems.
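For reference, a single LSTM cell step implementing the gate equations above can be written directly in MATLAB as follows; this is a plain-matrix sketch, not the Deep Learning Toolbox layer, and the weight matrices W_f, W_i, W_C, W_o and biases b_f, b_i, b_C, b_o are assumed to be learned parameters of appropriate size:

```matlab
% One LSTM cell step following the gate equations above.
% h_prev and C_prev are the previous hidden and cell states; x_t is the current input.
sigmoid = @(z) 1 ./ (1 + exp(-z));

z       = [h_prev; x_t];                      % concatenated [h_{t-1}, x_t]
f_t     = sigmoid(W_f * z + b_f);             % forget gate
i_t     = sigmoid(W_i * z + b_i);             % input gate
C_tilde = tanh(W_C * z + b_C);                % candidate cell state
C_t     = f_t .* C_prev + i_t .* C_tilde;     % cell state update (elementwise products)
o_t     = sigmoid(W_o * z + b_o);             % output gate
h_t     = o_t .* tanh(C_t);                   % new hidden state / cell output
```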

4. Methodology of the Proposed Algorithm

The workflow for denoising LFM signals begins with the generation of a dataset of noisy LFM signals. Each signal is subjected to an exhaustive search to determine the optimal wavelet parameters for denoising, focusing on the mother wavelet function, decomposition level, and thresholding rule. This process identifies the parameters that maximize the correlation between the denoised and original LFM signals. The dataset is then constructed with these noisy LFM signals paired with their optimal wavelet parameters, forming the basis for training a deep learning LSTM network. After training, the model predicts the best wavelet parameters for new noisy LFM signals and applies them to denoise the signals.
As shown in Figure 3, the method involves several steps:
  • Generating LFM Signal: An LFM signal is generated with specific parameters.
  • Adding Noise: Additive white Gaussian noise (AWGN) with zero mean and unity variance is added to the LFM signal to simulate noisy conditions at various SNR levels.
  • Wavelet Parameter Optimization: An exhaustive search is conducted over combinations of mother wavelet functions, decomposition levels, and threshold rules to find the parameters that provide the highest correlation between the denoised and original signals.
  • Denoising and Correlation Evaluation: The noisy signals are denoised using these optimal parameters, and their correlation with the original signals is evaluated as discussed in Section 5.4.
  • Parameter Assignment and Dataset Construction: The optimal wavelet parameters are assigned to each corresponding noisy signal, creating a comprehensive dataset that pairs noisy LFM signals with their ideal denoising parameters.
  • Training the Deep Learning Model: This dataset is used to train an LSTM network, where the noisy signals serve as input and the optimal wavelet parameters as output.
  • Signal Denoising: The trained model predicts the optimal wavelet parameters for new noisy LFM signals and denoises them accordingly.
This integrated approach combines the adaptability of wavelet transforms with the learning capabilities of deep neural networks, resulting in enhanced denoising performance compared to traditional methods. More details are provided in Section 5.

4.1. LFM Signal Generation and SNR Level Adjustments

The generated LFM signal parameters are described in Table 1. Each LFM signal is characterized by a 2 MHz bandwidth and a 100 ms pulse width ($\tau$). These signals are sampled at a high frequency of $2^{24}$ Hz, allowing for detailed temporal analysis essential for radar signal processing. Subsequently, the signals are exposed to varying noise levels, with SNR spanning from −6 dB to +24 dB in increments of 6 dB. This SNR range simulates a spectrum of operational environments, from low- to high-noise scenarios, enabling an in-depth assessment of denoising techniques’ performance under diverse conditions. The dataset, inclusive of both pure and noisy signals, facilitates subsequent processing tasks of wavelet denoising and deep learning model training.
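A minimal MATLAB sketch of this signal generation and noise injection is given below, assuming the Signal Processing and Communications Toolboxes; note that awgn scales the noise power to the requested SNR, which here stands in for the AWGN injection described in Section 4:

```matlab
% LFM pulse generation and noise injection at the SNR levels of Table 1.
fs  = 2^24;                          % sampling frequency [Hz]
tau = 100e-3;                        % pulse width [s]
B   = 2e6;                           % sweep bandwidth [Hz]
t   = 0 : 1/fs : tau - 1/fs;         % time axis for one pulse

s = chirp(t, 0, tau, B);             % LFM (chirp) sweeping 0 -> B Hz over the pulse

snr_dB = -6 : 6 : 24;                % SNR levels from -6 dB to +24 dB in 6 dB steps
noisy  = awgn(s, snr_dB(1), 'measured');   % noisy copy at the first SNR level
```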

4.2. Process for Selecting the Best Wavelet Parameters for Denoising

The selection process for the optimal wavelet parameters involved a systematic examination of various combinations of mother functions, decomposition levels, and threshold rules, as shown in Table 2. Initially, a broad range of mother functions and threshold rules was considered, including Haar, Fejér–Korovkin, Daubechies, Symlet, and Coiflet for the mother functions, and Bayes, Minimax, Stein’s Unbiased Risk Estimate (SURE), and UniversalThreshold for the threshold rules [14,25,26,27]. Decomposition levels from 2 to 10 were also explored.
Through empirical analysis and previous research findings, it was determined that the threshold rules significantly impacted the denoising effectiveness in relation to the SNR of the LFM signals. Consequently, the focus narrowed down to using a fixed mother function (db8) and decomposition level (7) while varying the threshold rules, particularly Bayes and SURE, which were identified as the most effective. The process entailed denoising each signal from a dataset of the LFM signals, each subjected to different SNR levels, using the specified wavelet parameters in Table 3. The effectiveness of denoising was evaluated based on the correlation between the denoised signals and the original signals. The combination of parameters yielding the highest correlation was considered optimal for denoising the LFM signals, ensuring both noise reduction and preservation of signal quality.
The dataset was created by integrating noisy LFM signals with their optimal wavelet denoising parameters for training an LSTM network. This pairing of noisy signals with their ideal denoising parameters was designed to enable the LSTM network to learn and predict the most effective wavelet parameters for denoising unseen noisy LFM signals, as shown in Algorithm 1. The goal of this was to automate the wavelet parameter selection process for enhancing the efficiency and accuracy of radar signal processing through the application of deep learning techniques.
Algorithm 1: Denoise LFM signals with optimal wavelet parameters and train the proposed LSTM-based architecture.
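Since the algorithm listing appears as a figure in the published version, the following MATLAB-style sketch restates its logic as described in the text; the variable names, the use of corrcoef as the correlation measure, and the restriction to the two threshold rules of Table 3 are illustrative assumptions:

```matlab
% Sketch of Algorithm 1: exhaustive wavelet-parameter search and dataset construction.
% 'signals' is a cell array of noisy LFM pulses, 'clean' the corresponding clean pulses.
wname = 'db8';  lev = 7;                             % fixed per Table 3
rules = {'Bayes', 'SURE'};                           % candidate threshold rules

labels = strings(numel(signals), 1);
for n = 1:numel(signals)
    bestCorr = -Inf;
    for r = 1:numel(rules)
        den = wdenoise(signals{n}, lev, 'Wavelet', wname, ...
                       'DenoisingMethod', rules{r});
        R   = corrcoef(den, clean{n});
        c   = R(1, 2);                               % correlation with the clean signal
        if c > bestCorr
            bestCorr  = c;
            labels(n) = rules{r};                    % label = best-performing threshold rule
        end
    end
end
% The pairs (signals{n}, labels(n)) then form the training set for the LSTM network.
```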

4.3. LSTM Network Training Process and Network Architecture

To address the challenge of automating the selection of optimal wavelet parameters for denoising noisy LFM signals, we developed the proposed network that incorporates LSTM layers. This section describes the designed architecture of the proposed network and outlines the comprehensive training process.

4.3.1. Network Architecture

The architecture of the proposed network is designed to capture the temporal dynamics and dependencies inherent in LFM signals. The proposed network consists of the following layers, as shown in Figure 4:
  • A sequence input layer accommodating the single-dimensional nature of the input LFM signals.
  • A bidirectional LSTM (BiLSTM) layer with 128 hidden units, employing an ‘OutputMode’ of ‘sequence’ to utilize both past and future context for enhanced temporal analysis.
  • A batch normalization layer implemented to ensure the stability of the network by normalizing the layer activations across each mini batch.
  • An LSTM layer with 128 hidden units, also set to ‘sequence’ output mode, further distilling the temporal signal characteristics.
  • A dropout layer integrated to mitigate overfitting by randomly omitting a subset of features during training.
  • Another LSTM layer with 128 hidden units configured to output only the last sequence element, concentrating the network’s predictive focus.
  • A fully connected (dense) layer mapping the LSTM output to the classes representing unique combinations of wavelet parameters.
  • A softmax layer normalizing the output to a probabilistic distribution over the parameter classes.
  • A classification layer for the categorical prediction of the optimal wavelet parameter set.
It is worth mentioning the evolution of the designed network architecture: it began with a single BiLSTM layer, two LSTM layers were then stacked one at a time, and finally a dropout layer was introduced between them. This incremental construction was guided by monitoring the training progress and the convergence of the validation loss. A layer-stack sketch of the final architecture is given below.
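The following sketch expresses the layer list above in MATLAB Deep Learning Toolbox syntax; the number of classes and the dropout probability are illustrative assumptions rather than values stated in the text:

```matlab
% Layer stack mirroring the architecture of Figure 4.
numClasses = 2;                                       % e.g., Bayes vs. SURE threshold rule
layers = [
    sequenceInputLayer(1)                             % 1-D noisy LFM sequence
    bilstmLayer(128, 'OutputMode', 'sequence')        % bidirectional temporal context
    batchNormalizationLayer                           % normalize activations per mini-batch
    lstmLayer(128, 'OutputMode', 'sequence')          % further temporal feature extraction
    dropoutLayer(0.2)                                 % mitigate overfitting
    lstmLayer(128, 'OutputMode', 'last')              % keep only the final time step
    fullyConnectedLayer(numClasses)                   % map to wavelet-parameter classes
    softmaxLayer
    classificationLayer ];
```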

4.3.2. Training Process

The training procedure designed for the LSTM network is structured to optimize performance and accuracy. Key aspects of the training process include the following (a corresponding configuration sketch is given after the list):
  • Utilization of the Adam optimizer.
  • Iteration through a maximum of 80 epochs, with a mini-batch size of 64, balancing computational load and training stability.
  • Incorporation of validation data to monitor overfitting, with validation frequency set for every epoch.
  • Implementation of a dynamic learning rate schedule, starting at 0.01 and reducing periodically to refine learning precision.
  • Application of gradient thresholding to prevent the exploding gradient phenomenon, ensuring stable network convergence.
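Gathering these settings, a corresponding trainingOptions configuration might look as follows; this is a sketch assuming the Deep Learning Toolbox, in which the validation sets, the learning-rate drop period and factor, and the gradient threshold value are placeholders not specified in the text:

```matlab
% Training configuration sketch matching the settings listed above.
opts = trainingOptions('adam', ...
    'MaxEpochs',           80, ...
    'MiniBatchSize',       64, ...
    'InitialLearnRate',    0.01, ...
    'LearnRateSchedule',   'piecewise', ...            % periodic learning-rate reduction
    'LearnRateDropPeriod', 20, ...                     % illustrative drop period
    'LearnRateDropFactor', 0.5, ...                    % illustrative drop factor
    'GradientThreshold',   1, ...                      % clip gradients to avoid explosion
    'ValidationData',      {XVal, YVal}, ...
    'ValidationFrequency', floor(numel(XTrain)/64), ...% roughly once per epoch
    'Shuffle',             'every-epoch', ...
    'Plots',               'training-progress');

net = trainNetwork(XTrain, YTrain, layers, opts);      % 'layers' from the sketch in Section 4.3.1
```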

5. Results and Discussion

5.1. Justifying the Selection of Wavelet Parameters

In the initial phase of our investigation, we explored a comprehensive set of combinations of wavelet parameters, encompassing mother functions, decomposition levels, and threshold rules, to ascertain the optimal configuration for denoising LFM signals. This exploration revealed that while all wavelet parameters play a critical role in the denoising efficacy, the threshold rule emerged as the most influential parameter in adapting to changes in the SNR levels of the LFM signals. Consequently, our focus shifted towards identifying the most effective mother function and decomposition level for our dataset.
The wavelet parameters highlighted in Table 2 were subjected to an exhaustive combinatorial analysis, yielding a total of 144 distinct parameter combinations for each noisy signal. For each parameter set, the correlation value between the denoised and the original clean signal was calculated and recorded. Following this analysis, the combination of parameters that exhibited the highest correlation value was selected as the optimal denoising wavelet parameters.
Our analytical results indicated a pronounced prevalence of certain mother functions within the wavelet parameter space. Specifically, the ‘db8’ mother function emerged as the most frequently optimal choice, accounting for 53% of the cases, followed closely by ‘sym8’, which was optimal in 46% of instances. The remaining mother functions collectively constituted a mere 1%, as depicted in Figure 5a.
Further analysis revealed a persistent presence of both ‘db8’ and ‘sym8’ across all SNR levels, underscoring their robustness and versatility in denoising signals with varying noise intensities. This is illustrated in Figure 6a, where the frequency of the optimal mother function choice is plotted against SNR levels.
Given the higher occurrence rate of ‘db8’ across the spectrum, it was selected as the preferred mother function for subsequent analyses. This choice was further validated by its slight superiority in appearance frequency over ‘sym8’, suggesting a marginally better fit for our LFM signal denoising tasks across the varied SNR conditions encountered.
In evaluating the optimal decomposition levels (DL) for our dataset, levels 7 and 8 emerged as the most prevalent, collectively constituting 52% of the optimal parameter choices, as depicted in Figure 5b.
This dominance was further corroborated through an analysis of DL against SNR levels, which indicated that levels 7 and 8 consistently exhibited the highest frequency of selection across the majority of SNR levels, as shown in Figure 6b. However, level 2 was the most frequently selected only at the highest SNR level of +24 dB. This observation led to the selection of level 7, the decomposition level with the highest overall appearance rate, as the optimal parameter for denoising in our study.
The pie chart analysis of threshold rule selection reveals a clear preference for the Bayes and SURE rules, which constitute 58% and 37% of the optimal choices, respectively, with the Bayes rule slightly leading. The UniversalThreshold rule accounts for a smaller segment at 5%, and the Minimax rule is the least selected, appearing in less than 1% of the cases. This distribution is illustrated in Figure 5c, highlighting the significant inclination towards the Bayes and Stein’s Unbiased Risk Estimate methods for denoising within our dataset.
The analysis of threshold rule efficacy in relation to SNR levels reveals a dynamic interplay; the selection of the optimal threshold rule is closely linked to the SNR level. As such, a change in SNR level can precipitate a corresponding shift in the optimal threshold rule. It is also observed that within the same SNR level, the preferred threshold rule may vary, reflecting the impact of noise characteristics on the denoising process. This variable selection pattern is illustrated in Figure 6c, which showcases the intricate dependency between threshold rules and SNR levels, underpinning the complexity of achieving optimal denoising results.
In Figure 7a, the Gaussian distribution of SNR values for each mother function shows that the distribution of the mother function is similar at all levels of the SNR. Figure 7b shows the difference in the distribution of the threshold rule with the levels of the SNR.
Table 4 provides a detailed statistical analysis of SNR values for different threshold rules. The mean SNR for the Bayes rule is substantially higher at 16.339 with a standard deviation of 7.5754, indicating its preference at higher SNR levels. In contrast, the SURE rule exhibits a lower mean SNR of 1.775 with a standard deviation of 6.829, highlighting its utility at lower SNR levels. This statistical evidence corroborates the observed preference patterns, confirming that the Bayes rule excels in less noisy conditions, while the SURE rule adapts better to higher noise environments.
To statistically compare the distributions of SNR values corresponding to different threshold rules, we employed the Kolmogorov–Smirnov (K-S) test. This test evaluates whether two samples originate from the same distribution.
We specifically examined the SNR distributions for the Bayes and SURE threshold rules. The null hypothesis (H0) for the K-S test states that the SNR distributions for Bayes and SURE are the same, while the alternative hypothesis (H1) posits that the distributions are different. Using the K-S test, we obtained the following results:
  • Test Statistic: D = 0.655814 , which is the maximum difference between the empirical cumulative distribution functions (CDFs) of the two samples.
  • p-value: 0.
  • Reject the null hypothesis.
The K-S test resulted in a p-value of 0, which is significantly low. Consequently, we rejected the null hypothesis, meaning that the SNR distributions for the Bayes and SURE threshold rules are significantly different. The statistic $D = 0.655814$, the maximum difference between the empirical CDFs of the SNR values for the Bayes and SURE thresholds, is relatively large, indicating a substantial difference between the two distributions. Figure 8 visualizes the CDFs for the SNR values of the Bayes and SURE threshold rules.
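In MATLAB, this comparison can be reproduced with the two-sample K-S test from the Statistics and Machine Learning Toolbox, as sketched below; snrBayes and snrSURE are assumed vectors holding the SNR values of the signals for which each rule was optimal:

```matlab
% Two-sample Kolmogorov-Smirnov test sketch for the Bayes vs. SURE SNR distributions.
[h, p, D] = kstest2(snrBayes, snrSURE);   % h = 1 rejects H0 at the 5% significance level
fprintf('K-S statistic D = %.4f, p-value = %.3g, reject H0 = %d\n', D, p, h);
```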
To further investigate the differences in SNR distributions between the Bayes and SURE threshold rules, we conducted a z-test for the means of two independent samples. This test helps determine whether the observed difference in means is statistically significant.
First, we calculated the means and standard deviations of the SNR values for both the Bayes and SURE threshold rules:
  • Mean SNR for Bayes: $\bar{X}_{\text{Bayes}}$
  • Standard deviation for Bayes: $\sigma_{\text{Bayes}}$
  • Mean SNR for SURE: $\bar{X}_{\text{SURE}}$
  • Standard deviation for SURE: $\sigma_{\text{SURE}}$
We then calculated the z-value using the formula
$$z = \frac{\bar{X}_{\text{Bayes}} - \bar{X}_{\text{SURE}}}{\sqrt{\dfrac{\sigma_{\text{Bayes}}^2}{n_{\text{Bayes}}} + \dfrac{\sigma_{\text{SURE}}^2}{n_{\text{SURE}}}}}$$
where $n_{\text{Bayes}}$ and $n_{\text{SURE}}$ are the sample sizes for the Bayes and SURE threshold rules, respectively. The calculated z-value was $z = 78.1754$, indicating a substantial difference between the means of the two distributions.
To determine the statistical significance, we compared the z-value to the critical value at a 95% confidence level (two-tailed), $z_{\text{critical}} = 1.96$. Since $|z| > z_{\text{critical}}$, we rejected the null hypothesis and concluded that the difference in SNR values between the Bayes and SURE threshold rules was statistically significant.
Thus, the z-test confirmed that the SNR distributions for the Bayes and SURE threshold rules were significantly different, reinforcing the results of the Kolmogorov–Smirnov test and highlighting the distinct impact of each threshold rule on SNR values.
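The z-test itself reduces to a few lines, sketched below under the same assumption that snrBayes and snrSURE hold the SNR values associated with each rule:

```matlab
% Two-sample z-test sketch for the difference in mean SNR between Bayes and SURE.
mB = mean(snrBayes);  sB = std(snrBayes);  nB = numel(snrBayes);
mS = mean(snrSURE);   sS = std(snrSURE);   nS = numel(snrSURE);

z  = (mB - mS) / sqrt(sB^2/nB + sS^2/nS);   % z-statistic for independent samples
zc = 1.96;                                  % critical value, 95% confidence, two-tailed
significant = abs(z) > zc;                  % true -> reject the null hypothesis
```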
Upon closer examination of the two primary threshold rules, Bayes and SURE, a refined analysis was conducted. This focused assessment revealed that the SURE rule was predominantly favored at lower SNR levels, while the Bayes rule tended to be more prevalent as the SNR increased, consistent with the statistics in Table 4. This pattern demonstrates the decision-making process involved in selecting the most appropriate threshold rule based on the SNR level of the signal. The visualization in Figure 9 distinctly shows the shifting dominance between the two rules across the SNR spectrum.
This direct relationship between the SNR level and the optimal threshold rule signifies the importance of context-aware denoising strategies. It shows the fact that denoising is not merely a static procedure but a dynamic one that requires careful consideration of the noise environment. In the realm of radar signal processing, where precision and reliability are paramount, the ability to adapt the denoising approach to match the SNR level is invaluable. Such adaptability could lead to significant enhancements in signal clarity and, consequently, the accuracy of the information derived from the processed signals.
Upon determining the most favorable wavelet parameters for our study, a specialized dataset was meticulously selected for the purpose of deep learning model training. This dataset consisted of noisy LFM signals with the best-performing wavelet parameter combination presented in Table 3, as indicated by our extensive analysis. This deliberate pairing of noisy signals with their optimal denoising wavelet parameters underpins the model’s ability to accurately infer and apply the most effective noise reduction techniques. The dataset not only provides a comprehensive foundation for the model’s learning process but also encapsulates the empirical knowledge necessary for the advancement of automated denoising methods in the realm of signal processing.

5.2. Training of the Deep Learning Model

The deep learning model, as depicted in Figure 4, was trained to address a classification problem wherein the input, x, was a noisy LFM signal, and the target, y, was the best wavelet parameter, binarily classified into one of two classes based on the threshold rule.
The model’s performance was validated across a spectrum of SNR levels, from −6 dB to +24 dB. The validation accuracy achieved was commendable, averaging approximately 85% across all six SNR levels, as shown in Figure 10. However, the accuracy was further enhanced when the model was presented with a smaller set of SNR levels, specifically −6 dB and +24 dB, achieving nearly 100% accuracy for both the negative and positive SNR values, as shown in Figure 11.
To evaluate the performance of the deep learning model for this classification problem, we used the following metrics (a short computation sketch follows the list):
  • Accuracy: Measures the percentage of correctly classified samples.
    $$\text{Accuracy} = \frac{\text{Number of correct predictions}}{\text{Total number of predictions}}$$
    Accuracy is best suited when the classes are balanced. However, it may be misleading for imbalanced datasets.
  • Precision, Recall, and F1 Score:
    Precision: Measures how many of the predicted positive labels are actually correct.
    $$\text{Precision} = \frac{\text{True Positives}}{\text{True Positives} + \text{False Positives}}$$
    Recall (Sensitivity): Measures how many actual positive samples were correctly predicted.
    $$\text{Recall} = \frac{\text{True Positives}}{\text{True Positives} + \text{False Negatives}}$$
    F1 Score: The harmonic mean of precision and recall, used when both are important and you need a balance between them.
    $$\text{F1 Score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$
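The sketch below computes these metrics for the binary case from predicted and true labels; treating 'Bayes' as the positive class is an arbitrary choice made here only for illustration:

```matlab
% Classification metrics from predicted vs. true labels (categorical vectors).
tp = sum(yPred == 'Bayes' & yTrue == 'Bayes');   % true positives
fp = sum(yPred == 'Bayes' & yTrue ~= 'Bayes');   % false positives
fn = sum(yPred ~= 'Bayes' & yTrue == 'Bayes');   % false negatives

accuracy  = mean(yPred == yTrue);
precision = tp / (tp + fp);
recall    = tp / (tp + fn);
f1        = 2 * precision * recall / (precision + recall);
```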
The model’s performance was evaluated for the two different datasets with varying signal-to-noise ratio (SNR) values, as shown in Table 5:
  • For the dataset with 6 SNR values:
    Accuracy (82.91%): This is a strong result for a complex scenario with varying noise levels.
    Mean Precision (86.88%): A precision close to 87% means that the model is good at minimizing false positives.
    Mean Recall (82.79%): This indicates that the model is catching most of the true positives, which is crucial.
    Mean F1-Score (82.40%): This combines both precision and recall, showing the overall balance between the two. It is a solid performance considering the challenge of multiple SNRs.
  • For the dataset with 2 SNR values:
    Accuracy (99.95%), Precision, Recall, and F1-Score (99.95%): These near-perfect results suggest that the model is highly reliable when the SNR conditions are simpler (fewer SNR values).

5.3. Signal-to-Noise Ratio Improvement Analysis

In order to evaluate the improvement in SNR achieved by the denoising method, we performed both a quantitative analysis and a visual comparison. Specifically, two types of plots were generated: a box plot and a scatter plot.
The box plot compares the distribution of SNR values before and after denoising, labeled as “Noisy Signal SNR” and “Denoised Signal SNR”, respectively, as shown in Figure 12. This plot highlights the overall improvement in SNR by showing changes in the median, spread, and potential outliers in the data.
Figure 13 shows the scatter plot, which presents a direct comparison of each signal’s input SNR versus its output SNR, with a reference line marking where the output SNR equals the input SNR. Points above this line demonstrate an improvement in SNR post denoising.
The analysis reveals an average SNR improvement of 5.72 dB, indicating a significant enhancement in signal quality after applying the denoising technique. These results provide clear evidence of the effectiveness of the method, as shown by the consistent shift in SNR distributions and the positive trends in the scatter plot.

5.4. Proposed LSTM Approach and Classical Denoising Analysis

For the analysis of denoising performance, an evaluation was conducted to compare the proposed method against a suite of classical approaches utilizing various wavelet parameters. The performance metrics considered in this analysis included mean correlation, standard deviation, minimum correlation, and maximum correlation between the denoised signals and the original LFM signals. As shown in Table 6, the proposed method demonstrated superior performance across several key metrics when compared to the classical approaches.
The proposed method achieved a mean correlation of 0.9337, indicating a closer approximation to the original signal compared to all classical configurations tested. This high mean correlation signifies the method’s effectiveness in preserving the quality and features of the signal while reducing noise. Additionally, the proposed method exhibited a standard deviation of 0.167, reflecting consistent performance across different instances of noise in LFM signals. Notably, the proposed method maintained a minimum correlation of 0.512916, suggesting its robustness even in the least favorable conditions encountered during the testing phase. The maximum correlation achieved was 0.9994, underscoring the method’s potential to achieve near-perfect denoising under optimal conditions.
In contrast, the classical approaches performed worse: the db8 configuration with Bayes thresholding at decomposition level 7 showed a lower mean correlation of 0.763, and the db8 configuration with SURE thresholding showed a mean correlation of 0.833, highlighting the proposed method’s enhanced ability to closely match the original signal.
The empirical results represent the efficacy of the proposed denoising method, which leverages deep learning to surpass the limitations of traditional wavelet-based denoising approaches.

5.5. Spectrogram Analysis of LFM Signals

To visually assess the efficacy of the proposed denoising method, spectrograms of the LFM signals were analyzed at various stages: clear LFM signal, the signal with added noise at different SNR levels, and the signal post denoising with the deep learning-based method.
The first row of Figure 14 depicts the noisy LFM signals across a range of SNR levels, from −6 dB to +12 dB. The impact of noise on the signal’s spectral purity is observable, with spectral clarity degrading as the SNR decreases. The denoised signals, shown in the second row, exhibit a marked reduction in noise.
The denoised spectrograms closely resemble the reference clear-signal spectrogram, indicating restoration of the signal across all SNR levels. The preservation of the LFM signal structure is particularly noteworthy, emphasizing the proposed method’s capability of retaining essential signal features while mitigating noise.
The spectrogram analysis corroborates the statistical findings reported in Table 6, visually and quantitatively demonstrating the superior performance of the proposed denoising approach compared to classical wavelet methods. These results highlight the potential of deep learning-driven techniques in significantly enhancing the quality of signal processing in noisy environments.

5.6. Potential Applications and Limitations

The proposed deep learning-based denoising approach has significant implications for enhancing radar system performance, making it highly suitable for deployment in real-world radar applications. Traditional wavelet denoising relies on fixed parameters, which can lead to suboptimal performance when the noise characteristics vary with the radar’s environment. By contrast, our adaptive approach allows for the real-time adjustment of wavelet parameters according to the local noise profile, thereby providing consistent denoising performance across diverse settings.
One promising avenue for implementing this deep learning-based denoising model is through deployment on advanced FPGA (field-programmable gate array) platforms, such as the ZCU102 card. These FPGA platforms now offer powerful deep learning support, enabling both radar signal processing and adaptive denoising in a single system. The ZCU102, for example, provides a robust digital processing unit (DPU) where the trained model can be implemented, allowing real-time, high-speed denoising tailored to the specific conditions encountered by the radar system.
Deploying our model on an FPGA offers substantial flexibility and scalability, especially as FPGA-based systems can adapt to a range of noise intensities and types in the radar’s operating environment. This adaptability ensures that the radar system’s denoising algorithm dynamically adjusts wavelet parameters, preserving signal integrity and enhancing detection accuracy regardless of location or noise conditions. By integrating a deep learning model that responds to SNR variations, we eliminate the limitations associated with static wavelet parameters, offering a significant advancement in radar signal processing that can be broadly utilized across defense, aviation, automotive, and other sectors where radar technology is pivotal.
A key limitation of this work is the reliance on simulated data for training and testing the deep learning model. Due to the unavailability of real LFM radar data, we used data simulation to achieve initial results. However, to validate and enhance the model’s performance in practical applications, a large dataset of real radar signals is essential. Collecting such data would enable a more comprehensive evaluation, especially in varied operational scenarios, improving accuracy and reliability. To address this limitation, future work could include a small set of real data supplemented by generative deep learning techniques, such as generative adversarial networks (GANs). This approach would allow us to augment a smaller real dataset, creating a larger and more diverse dataset to improve model generalization.
Additionally, while we focused on linear frequency modulation (LFM), applying this approach to other modulation types could broaden its utility across different radar applications. Future research could explore adaptive deep learning techniques tailored to alternative modulation schemes, potentially enhancing flexibility and applicability in various radar signal processing contexts.

6. Conclusions

This paper has demonstrated a new algorithm that integrates wavelet-based denoising techniques with the proposed LSTM-based architecture to enhance the quality of LFM radar signals. The proposed LSTM-based architecture, trained on a meticulously selected dataset of noisy LFM signals, has shown a notable ability to predict the optimal denoising parameters that were previously determined through empirical analysis. The results show the superiority of the proposed method in denoising performance over classical wavelet techniques.
Specifically, the LSTM model has achieved a higher mean correlation coefficient alongside a lower standard deviation. These statistical measures indicate not only an enhanced similarity to the original signal but also a consistency in performance across various noise levels. Spectrogram analysis further confirmed these findings, revealing a clear restoration of signal integrity post denoising, particularly evident in the retention of the LFM signal’s characteristics. An improvement in SNR was also demonstrated, plotted graphically, and the average improvement in SNR was calculated, showing the efficiency of the proposed approach.
The significance of this research extends beyond the theoretical realm, promising substantial practical applications in the field of radar signal processing. By automating the selection of wavelet parameters and adapting to different SNR levels, the proposed method paves the way for more accurate and reliable radar detection. This could have far-reaching implications in various sectors, including but not limited to the defense, aviation, and automotive industries, where radar technology is pivotal.
The potential for integrating additional deep learning models and techniques could be explored to further refine denoising capabilities. Moreover, extending the application of this approach to other types of signals and noise environments could broaden its utility in the signal processing domain.

Future Work

While this study has shown the potential of the LSTM-based wavelet parameter selection approach, future research will aim to incorporate real-world radar data to further validate the method’s robustness in practical applications. Testing on field data from actual radar systems will provide a more thorough evaluation of the model’s denoising capabilities and its ability to handle diverse noise characteristics. Moreover, future iterations of this research will include a comparative analysis with state-of-the-art deep learning-based denoising methods in order to gain deeper insight into the advantages and limitations of the proposed method in comparison to contemporary approaches.

Author Contributions

Conceptualization, A.M. and A.Y.; methodology, A.M. and T.A.; software, A.M. and T.A.; validation, A.M.; formal analysis, A.M. and A.Y.; investigation, A.Y., A.M. and T.A.; data curation, A.Y.; writing—original draft preparation, A.M. and T.A.; writing—review and editing, A.Y. and P.F.D.; visualization, A.M., T.A., A.Y. and P.F.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Richards, M.A.; Scheer, J.; Holm, W.A.; Melvin, W.L. Principles of Modern Radar; Scitech Publishing, Inc.: Raleigh, NC, USA, 2010. [Google Scholar]
  2. Zaugg, E.C.; Long, D.G. Theory and Application of Motion Compensation for LFM-CW SAR. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2990–2998. [Google Scholar] [CrossRef]
  3. Xie, R.; Luo, K.; Jiang, T. Waveform Design for LFM-MPSK-Based Integrated Radar and Communication Toward IoT Applications. IEEE Internet Things J. 2022, 9, 5128–5141.
  4. Levanon, N.; Mozeson, E. Radar Signals; John Wiley & Sons: Hoboken, NJ, USA, 2004.
  5. Jia, W.; Cao, Y.; Zhang, S.; Wang, W.Q. Detecting High-Speed Maneuvering Targets by Exploiting Range-Doppler Relationship for LFM Radar. IEEE Trans. Veh. Technol. 2021, 70, 2209–2218.
  6. Painam, R.K.; Manikandan, S. A comprehensive review of SAR image filtering techniques: Systematic survey and future directions. Arab. J. Geosci. 2021, 14, 37.
  7. Li, J.; Stoica, P. MIMO Radar Signal Processing; John Wiley & Sons: Hoboken, NJ, USA, 2008.
  8. Wang, H.; Guo, Y.; Yang, L. Parameter Estimation of LFM Signals Based on FOTD-CFRFT under Impulsive Noise. Fractal Fract. 2023, 7, 822.
  9. Shi, D.; Lam, B.; Gan, W.S.; Wen, S. Block coordinate descent based algorithm for computational complexity reduction in multichannel active noise control system. Mech. Syst. Signal Process. 2021, 151, 107346.
  10. Gupta, A.; Mehra, D. Wavelet based denoising of LFM signals. In Proceedings of the International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 11–12 February 2016; pp. 567–572.
  11. Srivastava, M.; Anderson, C.L.; Freed, J.H. A New Wavelet Denoising Method for Selecting Decomposition Levels and Noise Thresholds. IEEE Access 2016, 4, 3862–3877.
  12. Waseem, A.; Shah, I.; Kamil, M.A.U. Advancements in Signal Processing: A Comprehensive Review of Discrete Wavelet Transform and Fractional Wavelet Filter Techniques. In Proceedings of the 2023 Second International Conference on Advances in Computational Intelligence and Communication (ICACIC), Puducherry, India, 7–8 December 2023; pp. 1–6.
  13. Halidou, A.; Mohamadou, Y.; Ari, A.A.A.; Zacko, E.J.G. Review of wavelet denoising algorithms. Multimed. Tools Appl. 2023, 82, 41539–41569.
  14. Zhu, H.; Ma, L. Pulse wave signal preprocessing based on improved threshold. In Proceedings of the International Conference on Signal Processing and Communication Security (ICSPCS 2024), Surfers Paradise, Australia, 16–18 December 2024; SPIE: Bellingham, WA, USA, 2024; Volume 13222, pp. 151–155.
  15. Ji, H.; Fermüller, C. Robust Wavelet-Based Super-Resolution Reconstruction: Theory and Algorithm. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 649–660.
  16. Guo, T.; Zhang, T.; Lim, E.; Lopez-Benitez, M.; Ma, F.; Yu, L. A review of wavelet analysis and its applications: Challenges and opportunities. IEEE Access 2022, 10, 58869–58903.
  17. Kumar, A.; Tomar, H.; Mehla, V.K.; Komaragiri, R.; Kumar, M. Stationary wavelet transform based ECG signal denoising method. ISA Trans. 2021, 114, 251–262.
  18. Youssef, A.; Driessen, P.F.; Gebali, F.; Moa, B. A Novel Framework for Combining Multiple Radar Waveforms Using Time Compression Overlap-Add. IEEE Trans. Signal Process. 2021, 69, 4371–4384.
  19. Onufriienko, D.; Taranenko, Y. Filtering and compression of signals by the method of discrete wavelet decomposition into one-dimensional series. Cybern. Syst. Anal. 2023, 59, 331–338.
  20. Donoho, D. De-noising by soft-thresholding. IEEE Trans. Inf. Theory 1995, 41, 613–627.
  21. Rasti-Meymandi, A.; Ghaffari, A. A deep learning-based framework for ECG signal denoising based on stacked cardiac cycle tensor. Biomed. Signal Process. Control 2022, 71, 103275.
  22. Brophy, E.; Redmond, P.; Fleury, A.; De Vos, M.; Boylan, G.; Ward, T. Denoising EEG signals for real-world BCI applications using GANs. Front. Neuroergon. 2022, 2, 805573.
  23. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780.
  24. Hewamalage, H.; Bergmeir, C.; Bandara, K. Recurrent neural networks for time series forecasting: Current status and future directions. Int. J. Forecast. 2021, 37, 388–427.
  25. Ngui, W.K.; Leong, M.S.; Hee, L.M.; Abdelrhman, A.M. Wavelet analysis: Mother wavelet selection methods. Appl. Mech. Mater. 2013, 393, 953–958.
  26. Guariglia, E.; Guido, R.C.; Dalalana, G.J. From wavelet analysis to fractional calculus: A review. Mathematics 2023, 11, 1606.
  27. Georgieva-Tsaneva, G. Wavelet based interval varying algorithm for optimal non-stationary signal denoising. In Proceedings of the 20th International Conference on Computer Systems and Technologies, Ruse, Bulgaria, 21–22 June 2019; pp. 200–206.
Figure 1. Block diagram of the DWT-based denoising process for LFM radar signals.
Figure 2. LSTM building block which acts as a memory cell by handling input, output, and forget gates.
Figure 3. The block diagram of the workflow integrating deep learning with wavelet denoising.
Figure 4. Proposed network architecture designed for classifying the best wavelet parameter for denoising LFM signals.
Figure 5. Distribution of optimal wavelet parameters.
Figure 6. Distribution of optimal wavelet parameters across SNR levels. (a) Histogram of optimal mother functions across SNR levels. (b) Histogram of optimal decomposition levels across SNR levels. (c) Histogram of threshold rule selection across SNR levels.
Figure 7. Gaussian distribution of SNR values for each mother function and for each threshold rule.
Figure 8. Empirical cumulative distribution functions (CDFs) of SNR values for Bayes and SURE threshold rules, illustrating the significant difference between the two distributions.
Figure 9. Focused analysis of the prevalence of Bayes and SURE threshold rules at different SNR levels, illustrating the adaptive nature of threshold rule selection in response to SNR variations.
Figure 10. The training progress and average loss of the deep learning model across a broad range of SNR levels.
Figure 11. The training progress and average loss of the deep learning model for a selective set of SNR levels.
Figure 12. Box plot of SNR distribution before and after denoising.
Figure 13. Scatter plot of noisy and denoised SNR of LFM signals.
Figure 14. Spectrograms of noisy and denoised LFM signals with various SNR levels; the top row is the noisy LFM signal, and the bottom row is the corresponding denoised LFM signal.
Table 1. Summary of parameters for LFM signal generation and SNR adjustments.
Parameter | Description | Value(s)
fs | Sampling frequency | 2^24 Hz
Ts | Sampling period | 1/fs
delta | Pulse width (τ) in microseconds | 100 μs
B | Bandwidth of the LFM signal | 2 MHz
SNR | Range of Signal-to-Noise Ratios | −6 dB to +24 dB
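For illustration, the short Python sketch below generates a baseband LFM pulse with the Table 1 values and adds white Gaussian noise at a prescribed SNR. It is an assumed reimplementation rather than the code used in the study; the real-valued cosine chirp form and the helper name add_awgn are hypothetical choices made only for this example.

```python
import numpy as np

# Table 1 parameters
fs = 2**24          # sampling frequency (Hz)
tau = 100e-6        # pulse width (s)
B = 2e6             # LFM sweep bandwidth (Hz)

t = np.arange(0, tau, 1 / fs)
k = B / tau                                   # chirp rate (Hz/s)
lfm = np.cos(2 * np.pi * 0.5 * k * t**2)      # baseband LFM pulse (assumed real-valued form)

def add_awgn(signal, snr_db, rng=None):
    """Add white Gaussian noise so the noisy signal has the requested SNR in dB."""
    rng = np.random.default_rng() if rng is None else rng
    p_signal = np.mean(signal**2)
    p_noise = p_signal / 10 ** (snr_db / 10)
    return signal + rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)

# Example: noisy copies across the −6 dB to +24 dB range listed in Table 1
noisy_signals = {snr: add_awgn(lfm, snr) for snr in range(-6, 25, 6)}
```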
Table 2. Overview of wavelet analysis parameters.
Parameter Category | Selected Values
Mother Function | Haar, fk8, db8, sym8
Threshold Rules | Bayes, Minimax, SURE, UniversalThreshold
Decomposition Levels | 2, 3, 4, 5, 6, 7, 8, 9, 10
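Table 2 defines the search space swept when each noisy signal is labeled with its best denoising parameters. The sketch below shows one plausible way to code such a sweep in Python with PyWavelets; it is a simplified stand-in, not the authors' procedure: only a universal soft threshold is implemented, the fk8 wavelet is omitted because PyWavelets does not provide it, and the full study also evaluates the Bayes, Minimax, and SURE rules (available, for example, through MATLAB's wdenoise).

```python
import itertools
import numpy as np
import pywt

def universal_denoise(x, wavelet, level):
    """Soft thresholding with the universal threshold as a simple illustrative rule."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # MAD noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))                  # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(d, thr, mode="soft") for d in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def best_parameters(noisy, clean, wavelets=("haar", "db8", "sym8"), levels=range(2, 11)):
    """Return the (wavelet, level) pair whose output best matches the clean reference."""
    best, best_corr = None, -np.inf
    for w, lvl in itertools.product(wavelets, levels):
        if lvl > pywt.dwt_max_level(len(noisy), pywt.Wavelet(w).dec_len):
            continue                                           # skip infeasible depths
        denoised = universal_denoise(noisy, w, lvl)
        corr = np.corrcoef(clean, denoised)[0, 1]              # similarity to clean reference
        if corr > best_corr:
            best, best_corr = (w, lvl), corr
    return best, best_corr
```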
Table 3. Final wavelet parameter selection for dataset construction.
Parameter | Selected Value
Mother Function | Daubechies (db8)
Threshold Rule | Bayes or SURE
Decomposition Level | 7
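As a companion to Table 3, the following is a minimal BayesShrink-style sketch of the selected configuration (db8, seven decomposition levels, soft thresholding). It is an assumed Python/PyWavelets analogue of the Bayes rule, which the paper presumably applied through MATLAB's wdenoise, and not the authors' implementation.

```python
import numpy as np
import pywt

def denoise_db8_bayes(x, wavelet="db8", level=7):
    """Wavelet denoising with the Table 3 settings and a BayesShrink-style threshold."""
    # Cap the level so the decomposition stays feasible for short signals.
    level = min(level, pywt.dwt_max_level(len(x), pywt.Wavelet(wavelet).dec_len))
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745         # MAD noise estimate from finest details
    out = [coeffs[0]]                                       # keep approximation coefficients
    for d in coeffs[1:]:
        var_signal = max(np.var(d) - sigma**2, 1e-12)       # estimated signal variance in this band
        thr = sigma**2 / np.sqrt(var_signal)                # BayesShrink threshold
        out.append(pywt.threshold(d, thr, mode="soft"))
    return pywt.waverec(out, wavelet)[: len(x)]

# Usage: denoised = denoise_db8_bayes(noisy_signal)
```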
Table 4. Statistical analysis of SNR values for different threshold rules.
Threshold Rule | Mean SNR (dB) | Standard Deviation (dB)
Bayes | 16.339 | 7.5754
SURE | 1.775 | 6.829
Table 5. Performance metrics for datasets with 6 and 2 SNR values.
Metric | 6 SNR Values | 2 SNR Values
Accuracy (%) | 82.91 | 99.95
Mean Precision (%) | 86.88 | 99.95
Mean Recall (%) | 82.79 | 99.95
Mean F1-Score (%) | 82.40 | 99.95
Table 6. Comparison of denoising performance between the proposed method and classical approaches.
Method | Mean Correlation | Standard Deviation | Min Correlation | Max Correlation
db8, Bayes, level 7 | 0.763 | 0.239 | 0.3464 | 0.9994
db8, SURE, level 7 | 0.83366 | 0.167 | 0.51292 | 0.99933
Proposed Method | 0.933702 | 0.167 | 0.512916 | 0.999397
