3.1. Least Mean Square Error Adaptive Phase Difference Estimation Algorithm
The essence of the LMS algorithm is to apply an orthogonal transformation to one signal to generate a new signal matched to another signal, and then to adaptively update the weight coefficients according to the least-mean-square-error criterion, thereby achieving adaptive estimation of the phase difference. The algorithm is described in detail below.
From a certain perspective, the adaptive filtering algorithm can also be regarded as a performance-surface search method: on the performance surface, it finds the optimal solution by continuously checking whether the current point is approaching the target value. One of the most widely used surface functions is the mean square error (MSE) function, which is expressed as follows:
$\xi(n) = E[e^{2}(n)] = E\{[d(n) - y(n)]^{2}\}$  (14)
Equation (14) is minimized under the minimum mean square error (MMSE) criterion, which is used to derive the Wiener filter. Expanding the criterion function gives:
$\xi(n) = E[d^{2}(n)] - 2P^{T}W(n) + W^{T}(n)RW(n)$
where $R = E[X(n)X^{T}(n)]$ is the autocorrelation matrix of the input signal and $P = E[d(n)X(n)]$ is the cross-correlation vector between the desired signal and the input.
From this equation, it can be seen that the mean square error is a quadratic function of the filter weight vector. The mean square error surface describes this mapping: the quadratic function of the weight vector forms a paraboloid. According to the MMSE criterion and the shape of the mean square error surface, at each time step the weight vector is updated along the direction of steepest descent of the mean square error, that is, along the negative gradient of the objective function $\xi(n)$. Since the mean square error performance surface has a unique minimum, as long as the convergence step size is properly selected, the weight vector converges to the minimum point of the error surface (or its neighborhood) regardless of its initial value. This method of solving the minimization problem along the direction opposite to the gradient of the objective function is generally called the steepest descent method, and its expression is as follows:
$W(n+1) = W(n) - \mu \nabla(n)$
where $\nabla(n) = \partial \xi(n) / \partial W(n)$ is the gradient of the objective function with respect to the weight vector, and $\mu$ is the convergence step size.
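As a concrete illustration, the steepest descent iteration can be run on a small quadratic MSE surface; the correlation matrix $R$, cross-correlation vector $P$, initial weights, and step size below are illustrative assumptions, not values from this paper.

```python
import numpy as np

# Illustrative quadratic MSE surface: xi(W) = E[d^2] - 2 P^T W + W^T R W.
# R (input autocorrelation) and P (cross-correlation) are assumed values.
R = np.array([[2.0, 0.5],
              [0.5, 1.0]])
P = np.array([1.0, 0.5])

W = np.array([5.0, -5.0])   # arbitrary initial weight vector
mu = 0.1                    # convergence step size

for _ in range(500):
    grad = 2 * (R @ W - P)  # gradient of the quadratic MSE surface
    W = W - mu * grad       # steepest-descent update W(n+1) = W(n) - mu*grad

W_wiener = np.linalg.solve(R, P)  # closed-form Wiener solution R^{-1} P
print(W, W_wiener)          # the iterate converges to the Wiener solution
```

Because the surface has a single minimum, the iterate reaches the Wiener solution from any starting point, provided the step size keeps the iteration stable.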
Replacing the exact gradient with its instantaneous (stochastic) estimate yields the least-mean-square (LMS) adaptive filtering algorithm, whose complete expression is given as follows:
$y(n) = W^{T}(n)X(n)$
$e(n) = d(n) - y(n)$
$W(n+1) = W(n) + 2\mu e(n)X(n)$
where $W(n)$ is the weight coefficient vector, $y(n)$ is the calculated estimated signal, $X(n)$ is the input signal to be filtered, $d(n)$ is the desired signal, and $e(n)$ is the error between the desired signal and the estimated signal.
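The three LMS equations above can be sketched as follows; the unknown 3-tap system, signal length, and step size are illustrative assumptions used only to show the update converging.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative unknown 3-tap system that the adaptive filter identifies.
h_true = np.array([0.5, -0.3, 0.1])

N = 5000
x = rng.standard_normal(N)                    # input signal
d = np.convolve(x, h_true, mode="full")[:N]   # desired signal d(n)

M = 3
w = np.zeros(M)   # weight vector W(n)
mu = 0.01         # step size

for n in range(M - 1, N):
    xn = x[n - M + 1:n + 1][::-1]  # current input vector X(n)
    y = w @ xn                     # estimated signal y(n) = W^T(n) X(n)
    e = d[n] - y                   # error e(n) = d(n) - y(n)
    w = w + 2 * mu * e * xn        # LMS update W(n+1) = W(n) + 2*mu*e(n)*X(n)

print(w)  # close to h_true after convergence
```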
Suppose the sampled sequences of two sinusoidal signals with the same frequency are $x_1(n)$ and $x_2(n)$; then
$x_1(n) = A\sin(\omega n + \varphi_1) + n_1(n)$
$x_2(n) = A\sin(\omega n + \varphi_2) + n_2(n)$,  $n = 0, 1, \ldots, N-1$
where $A$ is the amplitude of the signal, $\varphi_1$ and $\varphi_2$ are the initial phases of the signals, and $\omega = 2\pi f / f_s$ is the angular frequency, with $f$ and $f_s$ the frequency of the sine-wave signal and the signal sampling rate, respectively. $n_1(n)$ and $n_2(n)$ are Gaussian white noise, and $N$ is the sampling length. Since there is only a phase difference between $x_1(n)$ and $x_2(n)$, according to trigonometric identities one signal can be estimated from the other signal together with its quadrature (orthogonal) component. Without loss of generality, let
$x_1^{\perp}(n) = A\cos(\omega n + \varphi_1) + n_1^{\perp}(n)$
be the quadrature component of $x_1(n)$, where $n_1^{\perp}(n)$ is the phase-shifted version of the noise $n_1(n)$. Then, the estimate of $x_2(n)$ and the estimation error can be obtained as follows:
$\hat{x}_2(n) = w_1(n)x_1(n) + w_2(n)x_1^{\perp}(n)$
$e(n) = x_2(n) - \hat{x}_2(n)$  (20)
where $\Delta\varphi = \varphi_2 - \varphi_1$. Expanding the noise-free part of $x_2(n)$ gives $A\sin(\omega n + \varphi_1)\cos\Delta\varphi + A\cos(\omega n + \varphi_1)\sin\Delta\varphi$, so the phase difference $\Delta\varphi$ information is contained in the coefficients $w_1$ and $w_2$. The LMS algorithm is used to update $w_1$ and $w_2$ adaptively. When the mean square error $E[e^{2}(n)]$ reaches its minimum, $w_1$ and $w_2$ converge to $\cos\Delta\varphi$ and $\sin\Delta\varphi$, from which the phase difference $\Delta\varphi$ can be solved. The design of the phase difference adaptive estimator is shown in Figure 4.
The Hilbert transform is applied to $x_1(n)$ to obtain its analytic signal, whose real and imaginary parts yield $x_1(n)$ and its quadrature component $x_1^{\perp}(n)$, respectively. By applying Equation (20) to calculate the error $e(n)$, and using the least-mean-square algorithm to adaptively update $w_1$ and $w_2$, the phase difference can be obtained:
$\Delta\varphi = \arctan\left(w_2(n) / w_1(n)\right)$
The adaptive updating of $w_1$ and $w_2$ is achieved by the LMS algorithm, and the specific formulas are given as follows:
$w_1(n+1) = w_1(n) + 2\mu e(n)x_1(n)$
$w_2(n+1) = w_2(n) + 2\mu e(n)x_1^{\perp}(n)$
where $\mu$ is the step size of a single iteration.
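A minimal numerical sketch of the whole estimator, assuming noise-free sinusoids and illustrative values for the frequency, sampling rate, signal length, and step size. `scipy.signal.hilbert` supplies the quadrature component; since the Hilbert transform of a sine is a negative cosine, the imaginary part is negated to obtain the $A\cos(\cdot)$ quadrature signal used above.

```python
import numpy as np
from scipy.signal import hilbert

# Illustrative signal parameters (not from the paper).
fs, f, A, N = 1000.0, 50.0, 1.0, 4000
phi1, phi2 = 0.3, 1.1            # true initial phases; true delta = 0.8 rad
n = np.arange(N)
omega = 2 * np.pi * f / fs

x1 = A * np.sin(omega * n + phi1)
x2 = A * np.sin(omega * n + phi2)

analytic = hilbert(x1)           # analytic signal x1 + j*H[x1]
x1_q = -np.imag(analytic)        # quadrature A*cos(omega*n + phi1): H[sin] = -cos

w1, w2 = 0.0, 0.0                # adaptive coefficients
mu = 0.01                        # step size

for k in range(N):
    y = w1 * x1[k] + w2 * x1_q[k]     # estimate of x2(k)
    e = x2[k] - y                     # estimation error, Equation (20)
    w1 += 2 * mu * e * x1[k]          # LMS update of w1
    w2 += 2 * mu * e * x1_q[k]        # LMS update of w2

delta_phi = np.arctan2(w2, w1)   # phase difference from converged weights
print(delta_phi)                 # approaches phi2 - phi1
```

After convergence $w_1 \approx \cos\Delta\varphi$ and $w_2 \approx \sin\Delta\varphi$, so `arctan2` recovers the phase difference with the correct quadrant.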
3.2. Discrete Fourier Transform Solution Phase Difference
The phase difference of two measured sinusoidal signals with the same frequency can be calculated using the discrete Fourier transform (DFT). Specifically, the two measured sinusoidal signals are first sampled to obtain two discrete sequences, and the DFT is applied to each sequence to obtain its spectrum. The spectral line with the maximum amplitude in each spectrum is then located, and the initial phase of each sequence is calculated from the real and imaginary parts of the corresponding spectral-line value. Finally, the phase difference between the two measured signals is obtained by subtracting the two initial phase values. The specific derivation is as follows.
Suppose there are two sinusoidal signals:
$x_1(t) = A\sin(2\pi f t + \varphi_1)$
$x_2(t) = A\sin(2\pi f t + \varphi_2)$
where $t$ represents time, $f$ is the frequency of the sine wave, and $\varphi_1$ and $\varphi_2$ are the initial phases of $x_1(t)$ and $x_2(t)$, respectively.
Under ideal sampling conditions (i.e., whole-cycle sampling), the frequency of the sine wave falls exactly on one of the $N$ equal divisions of the sampling frequency, that is, $f = m f_s / N$, where $m$ is an integer in $\{0, 1, 2, \ldots, N/2\}$.
Sampling the above $x_1(t)$ and $x_2(t)$ at rate $f_s$ yields two discrete-time sequences, namely:
$x_1(n) = A\sin(2\pi m n / N + \varphi_1)$
$x_2(n) = A\sin(2\pi m n / N + \varphi_2)$,  $n = 0, 1, \ldots, N-1$
The N-point discrete Fourier transform (DFT) of $x_1(n)$ is defined as follows:
$X_1(k) = \sum_{n=0}^{N-1} x_1(n)\, e^{-j 2\pi k n / N}$,  $k = 0, 1, \ldots, N-1$
According to the orthogonality of complex exponential periodic sequences, $X_1(k) = 0$ when $k \neq m$ (for $k$ in the range $0 \le k \le N/2$). Only when $k = m$:
$X_1(m) = \dfrac{N A}{2}\, e^{j(\varphi_1 - \pi/2)}$
Therefore, for the sine sequence $x_1(n)$, its initial phase is given by
$\varphi_1 = \arctan\left(\dfrac{\mathrm{Im}[X_1(m)]}{\mathrm{Re}[X_1(m)]}\right) + \dfrac{\pi}{2}$  (30)
and for the sine sequence $x_2(n)$, its initial phase is given by:
$\varphi_2 = \arctan\left(\dfrac{\mathrm{Im}[X_2(m)]}{\mathrm{Re}[X_2(m)]}\right) + \dfrac{\pi}{2}$  (31)
Hence, the phase difference $\Delta\varphi$ between $x_1(n)$ and $x_2(n)$ can be expressed as:
$\Delta\varphi = \varphi_2 - \varphi_1 = \arctan\left(\dfrac{\mathrm{Im}[X_2(m)]}{\mathrm{Re}[X_2(m)]}\right) - \arctan\left(\dfrac{\mathrm{Im}[X_1(m)]}{\mathrm{Re}[X_1(m)]}\right)$  (32)
The above derivation shows that when $k = m$, the spectral values $X_1(k)$ and $X_2(k)$ reach their maximum amplitudes. According to Equations (30) and (31), the initial phases of the two tested sine signals can be calculated, and the phase difference between them is obtained by subtracting the two initial phase values using Equation (32). Moreover, when frequency instability and noise are not considered, the phase values detected by the DFT method under ideal (whole-cycle) sampling are exact; in this case, the calculated signal phase is correct.
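Under whole-cycle sampling, the DFT method reduces to reading the angle of the spectral line at $k = m$ in each spectrum; the amplitude, line index, sequence length, and phases below are illustrative assumptions.

```python
import numpy as np

# Whole-cycle sampling: f = m*fs/N exactly, so the tone sits on bin k = m.
N, m, A = 256, 10, 1.0
phi1, phi2 = 0.4, 1.5            # true initial phases; true delta = 1.1 rad
n = np.arange(N)

x1 = A * np.sin(2 * np.pi * m * n / N + phi1)
x2 = A * np.sin(2 * np.pi * m * n / N + phi2)

X1 = np.fft.fft(x1)
X2 = np.fft.fft(x2)

# The line at k = m equals (N*A/2) * exp(j*(phi - pi/2)); recover each
# initial phase from its angle (Equations (30) and (31)), then subtract.
p1 = np.angle(X1[m]) + np.pi / 2
p2 = np.angle(X2[m]) + np.pi / 2
delta_phi = p2 - p1              # Equation (32)
print(delta_phi)                 # equals phi2 - phi1 under ideal sampling
```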