1. Introduction
In real signal acquisition environments, the effective signal in some scenes can be weak and accompanied by a large amount of environmental noise. This often makes the acquired signal non-stationary and increases the difficulty of accurately extracting the effective signal from the noisy background. Researchers have proposed adaptive filtering techniques to improve the signal-to-noise ratio of weak signals in complex noise environments. Reference [1] introduces an adaptive time-delay estimation algorithm for low-signal-to-noise-ratio environments that can accurately estimate the filtering results. Reference [2] proposed a method combining LMS and RLS filters, which obtains results close to Kalman-filter performance at low computational complexity. Reference [3] described the principle of active noise reduction and submitted a patent application. Reference [4] proposed a new normalized LMAT (NLMAT) algorithm that outperforms existing algorithms in various noise environments. Reference [5] proposed a fast and stable normalized least mean fourth (FSNLMF) algorithm with faster convergence. These techniques improve the signal-to-noise ratio during signal processing by dynamically adjusting the filtering parameters to adapt to environmental changes, increasing the accuracy and reliability of signal detection against complex noise backgrounds. Adaptive filtering techniques have therefore been widely applied and thoroughly studied in radar, sonar, communication, and navigation systems [6]. Widrow and his colleagues introduced the least mean square (LMS) algorithm in the early 1960s, simplifying the computational process, reducing the difficulty of implementation, and enhancing adaptability [7]. Owing to this performance, the algorithm has been applied in numerous fields, such as radar beamforming, adaptive interference and noise cancelation, and next-generation mobile communication technologies [8,9,10,11].
Traditional least mean square (LMS) algorithms use a fixed step size (hereafter, fixed-step-size LMS algorithms), which limits their ability to achieve fast convergence and maintain low steady-state error. Researchers have conducted extensive studies to address this issue and optimize the performance of fixed-step-size LMS algorithms. A variable-step-size LMS algorithm that employs an S-function to regulate the step size is introduced in [12]. This method dynamically adjusts the step size through the S-function, assigning a larger step size to approach the optimal solution quickly and decreasing the step size to improve accuracy as the convergence point approaches. This variable-step-size strategy significantly improves the convergence speed and reduces the steady-state error compared with the fixed-step-size LMS algorithm. However, the steep variation in the S-function near zero causes the step size to change too quickly as the algorithm approaches convergence, which increases the steady-state error. In [13], by applying translation and reflection transformations to the S-function and introducing new parameters to flatten the bottom of the function, the researchers improved the performance of the variable-step-size LMS algorithm. Although this approach brought a performance improvement, the complexity of the algorithm model reduced its flexibility. Subsequently, reference [14] explored a new variable-step-size LMS algorithm that draws on a Q-function with S-shaped curve properties and adjusts the step size using a compensation term based on the cross-correlation of the relative error. Due to the inherent characteristics of the Q-function, this algorithm faces the same challenge of significant steady-state error as it approaches the convergence point. Reference [15] employs a gradient statistical averaging approach to modulate the step-size factor, aiming to speed up convergence and optimize steady-state error performance. The drawback of this approach is its weak immunity to interference, caused by a judgment threshold that also increases the algorithm's complexity. Reference [16] introduced a variable-step-size LMS algorithm based on exponential functions, but the frequent execution of exponential operations makes the algorithm computationally expensive. To address this, reference [17] improved on [16] by applying the variable-step-size method to a partial update of the filter weight coefficients, improving the convergence speed and effectively reducing the algorithm's complexity. However, in low-signal-to-noise-ratio environments, this algorithm retains a large step size near convergence, and its steady-state performance is poor.
In summary, existing variable-step-size LMS algorithms cannot simultaneously solve the problems of noise interference, slow convergence, and high steady-state error. To effectively reduce the influence of environmental noise, speed up convergence to the optimal solution, and achieve a lower error level at steady state, this paper proposes a variable-step-size adaptive filter based on an improved Witch of Agnesi (versiera) function, using the proposed DSLMS adaptive filtering algorithm. The main contributions are as follows:
- (1)
A new variable-step-size adaptive filtering algorithm is proposed based on the Witch of Agnesi function. Combined with a noise cancelation system, it exploits the low correlation of the noise signal to effectively reduce the noise component in the error and improve the anti-noise-interference capability of the adaptive filter.
- (2)
The adaptive filter effectively improves the convergence speed and significantly reduces the steady-state error by adjusting the step factor of the weight-coefficient update in segments.
- (3)
Using an FPGA as the experimental platform, this study implements the DSLMS algorithm in modular form and compares it experimentally with the traditional LMS algorithm. The results show that the proposed algorithm performs better.
2. Related Work
2.1. Theoretical Analysis of Adaptive Filters
Adaptive filters can process weak signals without prior knowledge of the statistical characteristics of the input signal and noise; the filter learns or estimates the statistical characteristics of the signal during operation [18] and adjusts its parameters accordingly to achieve the optimal filtering effect under a specific cost function [19]. LMS filters are a kind of adaptive filter widely used in noise cancelation, echo cancelation, spectral line enhancement, channel equalization, system identification, etc. [20,21,22].
As shown in Figure 1, the LMS algorithm includes two essential parts: the filtering process and the adaptive process. In the filtering process, the input signal, $x(n)$, first passes through a multistage structure consisting of delay units and weights, $w_i(n)$, which weights the signal and accumulates the results to form the output signal, $y(n)$ [23]. The output of each stage is the product of the corresponding weight and a delayed version of the input signal, and these products are summed into the final output of the filter.
The adaptive adjustment mechanism is based on the error, $e(n)$, between the output signal, $y(n)$, and the desired signal, $d(n)$. This error signal guides the updating of the weights according to a gradient descent algorithm: the weight adjustment is proportional to the product of the error signal and the input signal, and a step factor ($\mu$) controls the update amplitude. This weight-updating process aims to gradually reduce the mean square error between the output and the desired signal, thus optimizing the overall performance of the filter [24].
The transversal filter input signal vector, $\mathbf{x}(n)$, is denoted as $\mathbf{x}(n) = [x(n), x(n-1), \ldots, x(n-M+1)]^{T}$, where $M$ is the order of the filter. The weight vector, $\mathbf{w}(n)$, of the filter is $\mathbf{w}(n) = [w_0(n), w_1(n), \ldots, w_{M-1}(n)]^{T}$; in this case, the output, $y(n)$, of the filter is calculated as shown in Equation (1):

$$y(n) = \mathbf{w}^{T}(n)\,\mathbf{x}(n) \quad (1)$$
Equation (2) is the estimation error at the $n$th moment:

$$e(n) = d(n) - y(n) = d(n) - \mathbf{w}^{T}(n)\,\mathbf{x}(n) \quad (2)$$
Equation (3) is the transversal filter's mean-square-error cost function:

$$J(n) = E\left[e^{2}(n)\right] \quad (3)$$
The gradient descent method adjusts the adaptive filter's weight coefficient vector along the negative gradient direction of the performance surface. Equation (4) is the iterative formula for the weight vector, $\mathbf{w}$:

$$\mathbf{w}(n+1) = \mathbf{w}(n) + 2\mu e(n)\,\mathbf{x}(n) \quad (4)$$

where $\mu$ is the step factor, and $\mu > 0$.
The transversal filter is optimized by seeking a weight vector that minimizes the mean-square-error cost function. For a generalized stationary process, as the number of iterations tends to infinity, the expectation of this weight vector estimate approaches the Wiener optimal solution, $\mathbf{w}_{opt}$ [25].
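As a concrete illustration of Equations (1)-(4), the following minimal NumPy sketch implements a fixed-step-size LMS filter. It is a simplified software model, not the paper's FPGA implementation; the function name and default parameter values are assumptions for illustration.

```python
import numpy as np

def lms_filter(x, d, M=8, mu=0.01):
    """Fixed-step-size LMS filter implementing Equations (1)-(4).
    x: input signal, d: desired signal, M: filter order, mu: step factor."""
    N = len(x)
    w = np.zeros(M)                         # weight vector w(n)
    y = np.zeros(N)                         # filter output y(n)
    e = np.zeros(N)                         # estimation error e(n)
    for n in range(M, N):
        xn = x[n - M + 1:n + 1][::-1]       # x(n) = [x(n), ..., x(n-M+1)]^T
        y[n] = w @ xn                       # Eq. (1): y(n) = w^T(n) x(n)
        e[n] = d[n] - y[n]                  # Eq. (2): estimation error
        w = w + 2 * mu * e[n] * xn          # Eq. (4): gradient-descent update
    return y, e, w
```

Because the update uses only the instantaneous product $e(n)\mathbf{x}(n)$, each iteration costs $O(M)$ operations, which is the simplicity that makes LMS attractive for hardware.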
Although the traditional least mean square (LMS) adaptive filtering algorithm performs well in many respects, it suffers from several shortcomings when processing weak signals. First, the traditional LMS algorithm uses the product of the instantaneous error and the input vector, $e(n)\mathbf{x}(n)$, as an approximation of the true gradient vector. Since this approximation omits the expectation in the weight-updating process, the iterative step lacks robustness under the influence of noise. Second, the step factor ($\mu$) plays a decisive role in the weight update, directly affecting the response speed of the filter parameter tuning as well as the rate at which the error is minimized. In a noisy signal environment, too small a value of $\mu$ slows the system's adaptation to changes in the signal environment and prolongs the time needed to converge to the desired signal. In less noisy environments, a larger value of $\mu$ speeds up the weight update but causes over-adjustment and oscillation. Such oscillation not only hinders convergence of the filter near the global optimum but also degrades the system's performance, leading to signal distortion and increased steady-state error.
2.2. Principle of Noise Cancelation
Noise cancelation techniques originated in the mid-1960s with a team of researchers at Stanford University and are now widely used in various environments. The basic principle of noise cancelation is shown in Figure 2 [26,27,28].
A noise cancelation system is a vital signal-processing technique that operates on two inputs: the primary input, $d(n)$, and the reference input, $x(n)$. The primary input signal, $d(n)$, combines the desired signal, $s(n)$, and the accompanying noise, $n_0(n)$. The reference input signal, $x(n)$, usually obtained from an ambient noise source, is correlated with the noise component, $n_0(n)$, in the primary input but independent of the desired signal, $s(n)$. In the noise cancelation system, the reference signal, $x(n)$, drives an adaptive filter whose output, $y(n)$, is designed to match and neutralize the primary signal's noise component, $n_0(n)$. The near-clean error signal, $e(n)$, is obtained by subtracting the estimated noise, $y(n)$, from the primary input signal, $d(n)$.
The noise cancelation technique can identify and remove unwanted noise components using a reference noise signal. Although the original signal, $s(n)$, is corrupted by the noise, $n_0(n)$, the cancelation system ensures that the final error signal, $e(n)$, maximally reflects the desired signal, $s(n)$, by accurately estimating and subtracting the noise estimate, $y(n)$, derived from the reference, which improves the signal-to-noise ratio of the desired signal [29].
Assuming that the signals $s(n)$, $n_0(n)$, and $x(n)$ are stationary, the expectation of the square of the output error, $e(n) = s(n) + n_0(n) - y(n)$, of the filter is

$$E\left[e^{2}(n)\right] = E\left[s^{2}(n)\right] + E\left[\left(n_0(n) - y(n)\right)^{2}\right] + 2E\left[s(n)\left(n_0(n) - y(n)\right)\right] \quad (5)$$

Since the useful signal $s(n)$ is uncorrelated with the noise signal $n_0(n)$ and the reference signal $x(n)$, the cross term vanishes, giving

$$E\left[e^{2}(n)\right] = E\left[s^{2}(n)\right] + E\left[\left(n_0(n) - y(n)\right)^{2}\right] \quad (6)$$

Because the power of the useful signal is not affected by the filter weight vector, $E[e^{2}(n)]$ reaches its minimum at the moment when $E[(n_0(n) - y(n))^{2}]$ reaches its minimum. When $y(n) = n_0(n)$, i.e., when $e(n) = s(n)$, the minimum value is obtained:

$$\min E\left[e^{2}(n)\right] = E\left[s^{2}(n)\right] \quad (7)$$
Equations (5)-(7) show that an ideal noise cancelation system can largely eliminate the difference between the estimated noise and the actual noise, leaving only the effective signal. However, in real, dynamically changing noise environments, the cancelation system may not respond effectively to changes in noise characteristics because it lacks real-time adjustability, which results in incomplete noise cancelation and prevents the system from adapting quickly when the noise characteristics change.
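To make the structure of Figure 2 and Equations (5)-(7) concrete, the sketch below drives the `lms_filter` routine from Section 2.1 in the noise cancelation configuration. The synthetic signals (a sine wave as $s(n)$, filtered white noise as $n_0(n)$) are illustrative assumptions, not the paper's test signals.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4000
s = np.sin(2 * np.pi * 0.01 * np.arange(N))           # desired signal s(n) (assumed)
noise_src = rng.standard_normal(N)                     # ambient noise source
n0 = np.convolve(noise_src, [0.8, 0.4], mode="same")   # noise n0(n) at the primary sensor
x_ref = noise_src                                      # reference x(n), correlated with n0(n)
d = s + n0                                             # primary input d(n) = s(n) + n0(n)

# The adaptive filter estimates n0(n) from x_ref; the error e(n) then approximates s(n).
y, e, w = lms_filter(x_ref, d, M=4, mu=0.005)
print("input SNR (dB):", 10 * np.log10(np.mean(s**2) / np.mean(n0**2)))
print("output SNR (dB):", 10 * np.log10(np.mean(s**2) / np.mean((e - s)**2)))
```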
3. DSLMS Adaptive Filter
In the field of weak signal processing, although the traditional fixed-step-size LMS algorithm is widely used due to its simplicity, its fixed weight-vector step factor makes it difficult to simultaneously improve the convergence speed and reduce the steady-state error when dealing with signals and noise in a changing environment [30].
To solve the above problems, this paper proposes a variable-step-size adaptive filter based on the improved Witch of Agnesi function. The filter adopts the DSLMS algorithm and, by optimizing the step-size adjustment mechanism, speeds up convergence, reduces the steady-state error, and effectively improves the processing of weak signals. By combining the DSLMS algorithm with the noise cancelation system, the signal can be separated from the noise more effectively, the anti-interference capability of the system is improved, and more accurate signal reconstruction increases the accuracy and efficiency of signal processing. The principle of the DSLMS adaptive filter is shown in Figure 3.
3.1. Algorithmic Principles
This study proposes a variable-step-size adjustment strategy based on the Witch of Agnesi function: taking the reciprocal of the independent variable ($x$) of the function yields a new step-size function, $\mu(x)$, as shown in Equation (8), where $x$ denotes the autocorrelation estimate of the current error with the error at the previous moment, i.e., $e(n)e(n-1)$, with $e(n)$ taken from Figure 3 and brought into the equation.
From Figure 4a, it can be seen that the $\mu(x)$ curve conforms to the step-size adjustment principle of an adaptive filtering algorithm when there is no noise interference or the noise interference is relatively small. In the early stage (the initial stage of convergence), the step-size adjustment function provides the algorithm with a larger step size ($\mu$) to improve the convergence speed and enable a quick transition to the convergence-completion stage. In the convergence-completion stage, a smaller step size ($\mu$) is provided to keep the algorithm more stable.
However, in environments with more severe noise interference, as shown in Figure 4b, the error, $e(n)$, contains the signal component, $s(n)$, and the residual noise remaining after processing by the cancelation system. If the residual noise is still considerable, the error, $e(n)$, remains large at all times, so that $e(n)e(n-1)$ cannot approach a minimal value; the adaptive algorithm is then unable to reach the optimal solution and can only fluctuate around it. In this case, the weight-vector step factor must be improved to eliminate the noise interference.
The DSLMS algorithm proposed in this paper employs an improved weight-vector step-size adjustment function, shown in Equation (9), which determines the step size based on the current iteration number, $n$, and a predefined threshold, $N_0$. When the iteration number $n$ is less than or equal to $N_0$, the step size is set to a constant, $\mu_0$. This provides a larger step size in the initial phase of the algorithm, which speeds up initial convergence.
When the iteration number exceeds $N_0$, the step size becomes a dynamic value dependent on the correlation of the error signal. Fine control of the step size is achieved by introducing the parameters $\alpha$, $\beta$, and $\gamma$. $\alpha$ is a scaling factor that controls the overall amplitude, determining the baseline magnitude of the step-size change. $\beta$ acts as a sensitivity factor, controlling how strongly the correlation of the error signal affects the step size; adjusting it controls how quickly the algorithm responds to dynamic changes in the error signal, i.e., the algorithm's adaptability and sensitivity. $\gamma$ ensures that the algorithm has a minimum step size, so that the step size does not drop to zero even when the error correlation is minimal, guaranteeing that the weights can continue to be updated.
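Since Equation (9) is not reproduced here, the following sketch uses an assumed reciprocal-argument versiera form, $\mu(x) = \alpha\,\beta x^{2}/(1+\beta x^{2}) + \gamma$, as a hypothetical stand-in for the dynamic branch; only the piecewise structure and the roles of $\mu_0$, $N_0$, $\alpha$, $\beta$, and $\gamma$ follow the description above.

```python
def dslms_step(n, x_corr, N0=100, mu0=0.05, alpha=2.0, beta=0.01, gamma=0.2):
    """Piecewise step-size control in the spirit of Equation (9).
    alpha, beta, gamma defaults follow the choices in Section 3.2;
    N0 and mu0 are assumed values. The dynamic branch below is a
    hypothetical versiera-type stand-in, not the paper's exact Eq. (9).
    n: iteration number; x_corr: error autocorrelation estimate e(n)e(n-1)."""
    if n <= N0:
        return mu0                          # constant step in the initial phase
    b = beta * x_corr ** 2
    return alpha * b / (1.0 + b) + gamma    # assumed dynamic branch with floor gamma
```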
In summary, the formulas of the variable-step-size adaptive filtering algorithm in the adaptive noise cancelation filter proposed in this paper can be summarized as

$$e(n) = d(n) - y(n) = s(n) + n_0(n) - \mathbf{w}^{T}(n)\,\mathbf{x}(n) \quad (10)$$

$$\mathbf{w}(n+1) = \mathbf{w}(n) + 2\mu(n)\,e(n)\,\mathbf{x}(n) \quad (11)$$

$$\mu(n) = \begin{cases} \mu_0, & n \le N_0 \\ \mu\left(e(n)e(n-1)\right), & n > N_0 \end{cases} \quad (12)$$

where $e(n)$ denotes the error signal, the difference between the desired signal and the filter output; $d(n)$ denotes the actual signal, the superposition of the desired output signal and the noise signal; $y(n)$ is the filter's actual output signal; $s(n)$ denotes the desired output signal; $n_0(n)$ is the noise signal; $\mathbf{x}(n)$ represents the reference signal vector; $\mathbf{w}^{T}(n)$ is the transpose of the filter weight vector; $\mathbf{w}(n+1)$ denotes the weight vector at the next moment; $\mathbf{w}(n)$ is the weight vector at the current moment; and $\mu(n)$ is the step factor, a positive value that controls the speed of weight adjustment, given by the improved step-size function of Equation (9).
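Putting Equations (10)-(12) together, a minimal software simulation of the DSLMS cancelation filter might look as follows. It reuses the hypothetical `dslms_step` sketch above, so the dynamic step branch remains an assumption rather than the paper's exact Equation (9).

```python
import numpy as np

def dslms_canceller(x_ref, d, M=8, **step_kwargs):
    """DSLMS noise canceller: Eqs. (10)-(11) with the variable step of Eq. (12).
    x_ref: reference input (noise-correlated); d: primary input s(n) + n0(n)."""
    N = len(d)
    w = np.zeros(M)
    e = np.zeros(N)
    for n in range(M, N):
        xn = x_ref[n - M + 1:n + 1][::-1]                    # reference vector x(n)
        y = w @ xn                                           # filter output y(n)
        e[n] = d[n] - y                                      # Eq. (10): error
        mu = dslms_step(n, e[n] * e[n - 1], **step_kwargs)   # Eq. (12): variable step
        w = w + 2 * mu * e[n] * xn                           # Eq. (11): weight update
    return e, w                                              # e(n) approximates s(n)
```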
3.2. Algorithm Performance Analysis
To better analyze the specific impact of the parameters of the step-size adjustment function on the algorithm's performance, this section focuses on the volatility of the step-size coefficient and on the roles of the parameters $\alpha$ and $\beta$ in the algorithm's convergence speed. On this basis, we discuss in detail the selection principle of each parameter and its applicable range. When analyzing the step-size adjustment strategy, we choose the specific condition $\gamma = 0$ to isolate the influence of the $\alpha$ and $\beta$ parameters.
Figure 5a shows the step-size coefficient, $\mu$, versus error, $e(n)$, adjustment curves when $\alpha$ differs and $\beta$ is the same; Figure 5b shows the corresponding curves when $\alpha$ is the same and $\beta$ differs; and Figure 6 shows the fluctuation in the algorithm's weight vectors when the parameters $\alpha$ and $\beta$ are fixed and $\gamma$ takes different values.
The three curves in Figure 5a are the step-size adjustment curves when $\alpha$ is taken as 0.5, 1, and 2 ($\beta = 0.010$). The larger the value of $\alpha$, the larger the step size provided in the early stage of convergence, but the stability of the algorithm decreases in the convergence-completion stage. The smaller the value of $\alpha$, the more stable the algorithm is in the convergence-completion stage, but the slower its convergence; to balance stability and convergence speed, $\alpha = 2$ is taken in this paper. The three curves in Figure 5b are the step-size adjustment curves when $\beta$ is 0.01, 0.02, and 0.05 ($\alpha = 2$). The smaller the value of $\beta$, the smaller the change in the step-size factor when the error is close to zero, but in the early stage of convergence the algorithm cannot obtain a large step size and converges more slowly. When $\beta$ takes a larger value, it provides the algorithm with a larger step size and thus a faster convergence speed, but the step factor is then still affected by the error even when the error is close to zero. The value of $\beta$ should lie in the range 0.005~0.500; this paper takes $\beta = 0.01$, with which the algorithm performs well in both convergence speed and steady-state error.
Figure 6 illustrates the effect of the parameter $\gamma$ on the convergence performance of the algorithm's weight vectors. A higher value of $\gamma$ accelerates the convergence of the weight vectors in the early stages but leads to larger fluctuations in the weight vectors at steady state. Conversely, a lower value of $\gamma$ slows convergence but reduces fluctuation at steady state and enhances the algorithm's stability. Choosing the value of $\gamma$ therefore requires a trade-off between convergence speed and stability, depending on the requirements of the application scenario. In this study, after comparative analysis that considered both convergence speed and steady-state weight fluctuation, we choose $\gamma = 0.2$ to ensure that the algorithm achieves good noise suppression while maintaining reasonable performance.
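The qualitative trade-offs shown in Figures 5 and 6 can be reproduced with the step-size form assumed earlier; the short sweep below is illustrative only and does not reproduce the paper's exact curves.

```python
import numpy as np

# Sweep the assumed dynamic branch over the error-correlation axis for the
# parameter values discussed above (gamma = 0 isolates alpha and beta).
x = np.linspace(-5, 5, 201)                      # error correlation e(n)e(n-1)
for alpha in (0.5, 1.0, 2.0):                    # Figure 5a: vary alpha, beta = 0.010
    mu = alpha * (0.010 * x**2) / (1 + 0.010 * x**2)
    print(f"alpha={alpha}: max step {mu.max():.3f}")
for beta in (0.01, 0.02, 0.05):                  # Figure 5b: vary beta, alpha = 2
    mu = 2.0 * (beta * x**2) / (1 + beta * x**2)
    print(f"beta={beta}: step at |x|=1 is {mu[np.abs(x - 1).argmin()]:.3f}")
```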
The DSLMS algorithm proposed in this paper uses time-series correlation analysis to quantify the correlation of the error signal at two consecutive moments by calculating $e(n)e(n-1)$. Exploiting the uncorrelated nature of noise, the method can effectively distinguish noise from the useful signal. The DSLMS algorithm can thus identify noise variations and reduce their effect when adjusting the step size, significantly enhancing the adaptive filter's anti-interference capability. In addition, the algorithm employs a dynamic step-size adjustment mechanism that ties the step size to the correlation of the error signal, allowing the algorithm to automatically reduce the step size as it approaches the optimal solution in response to the decreasing error. By combining the DSLMS algorithm with the noise cancelation system and fine-tuning the segmented step-size factor, the method used in this study effectively improves noise rejection and balances convergence speed with system stability while ensuring the algorithm's performance.
4. FPGA Implementation of the DSLMS Adaptive Filter
FPGAs (field-programmable gate arrays) excel in parallel processing power, fast data computation, on-demand reconfigurable flexibility, and strong customization adaptability, which make them an ideal hardware platform for implementing efficient and complex DSLMS algorithms [31,32].
The LMS algorithm requires many matrix and vector operations, and the FPGA's ability to perform multiple computational tasks, weight updates, and error calculations in parallel ensures smooth processing of the data stream through the filter, significantly accelerating the data processing rate [33]. In addition, the reconfigurability of FPGAs allows researchers to tailor the hardware logic to the LMS filter's requirements, which not only helps optimize the computational tasks but also allows the logic to be adjusted to the characteristics of the input data, improving processing efficiency [34].
For the DSLMS algorithm proposed in this paper, FPGAs can achieve precise and rapid step-size adjustments, thus enhancing the stability and performance of the algorithm. Given these significant advantages, this study uses an FPGA as the platform for the DSLMS filter design. In this design, the filter is divided into four modules: a filtering (fir) module, an error calculation (error_n) module, a step-size update (mu_update) module, and a weight update (w_update) module, which together realize filter coefficient updating, error data updating, and filter computation. With these four modules, the following five-step operation is realized:
- (1)
Filter calculations: $u_i(n) = w_i(n)\,x(n-i+1)$, $i = 1, \ldots, 8$.
- (2)
Calculate the filter result and output: $y(n) = \sum_{i=1}^{8} u_i(n)$.
- (3)
Calculation of errors: $e(n) = d(n) - y(n)$.
- (4)
Calculation of new weights: $w_i(n+1) = w_i(n) + 2\mu(n)\,e(n)\,x(n-i+1)$.
- (5)
Updating weights: $w_i(n) \leftarrow w_i(n+1)$.
In the data filtering module, after receiving the signal pulse, the input data are weighted and the results are truncated for output. Because of the feedback mechanism in the DSLMS algorithm, which is optimized for the correlation characteristics of the noise, the filtering process can cause the computation results to overflow. Proper decimal-point alignment is ensured in the addition module to prevent computational errors due to overflow. For integer bit-width mismatches, sign-bit padding is applied to the shorter operand to match the longer one; for fractional bit-width mismatches, zero padding is applied to align them. In the module design, a saturation truncation strategy is implemented before the module outputs its results, ensuring the accuracy of the calculation results, avoiding data overflow, and retaining as many valid bits as possible for the critical weights.
In the error calculation module, the filtering results from the previous stage are accumulated through eight rounds of addition. After these operations, saturation truncation is applied, and the computed result is compared with the target signal to calculate the difference. This process aims to ensure computational accuracy while reducing the distortion caused by numerical limits.
In the weight update module, a 16-bit by 16-bit multiplication computes the weight update, producing a 32-bit result. This result must then be combined with the existing filter coefficients, which involves truncation to match the coefficients' bit width. To handle this bit-width mismatch and avoid data overflow, a saturating truncation method is used in the system design: when the multiplication result exceeds the maximum value representable in the target bit width, the output is clamped to that maximum, maintaining the validity of the result. By precisely adjusting the filter coefficients, this module adapts to signal variations and noise characteristics, ensuring high output accuracy and stability.
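In software, the saturating truncation described for these modules can be sketched as follows; the Q1.15 interpretation of the 16-bit signals and coefficients is an assumption chosen to match the stated bit widths.

```python
def saturate_q15(acc32: int) -> int:
    """Clamp a 32-bit Q2.30 product/accumulator into the 16-bit Q1.15 range.
    Mirrors the saturating-truncation step applied before each module output."""
    value = acc32 >> 15                  # drop 15 fractional bits (truncation)
    q15_max, q15_min = 0x7FFF, -0x8000   # largest/smallest 16-bit values
    return max(q15_min, min(q15_max, value))

# Example: (-1.0) * (-1.0) in Q1.15 yields +1.0, which is not representable
# in Q1.15, so the result is clamped to the maximum instead of wrapping.
print(saturate_q15((-32768) * (-32768)))   # prints 32767 (0x7FFF)
```

Clamping rather than wrapping keeps a single overflowed product from flipping the sign of a weight update, which is what preserves the filter's stability at the cost of a small bias near full scale.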
Figure 7 shows a schematic diagram of the RTL structure of the DSLMS algorithm after synthesis of the generated Verilog code, implementing Equations (10)-(12). data_in[15:0] denotes the input signal; data_ref[15:0] denotes the desired signal; clk_i denotes the clock signal; rst_n_i denotes the reset signal (active low); error_o[15:0] denotes the error signal; mu3 denotes the step factor $\mu$ of Equation (12); and coef1 to coef8 denote the weight updates, with coef1 denoting the initial weight, which is set to one. data_o[15:0] denotes the final filtered output.
Since the RTL view of each module is too cumbersome, design schematics are used to show the modules individually: Figure 8 shows the filtering module, Figure 9 shows the weight update module, Figure 10 shows the error calculation module, and Figure 11 shows the step-size update module.