Article

Least Mean p-Power-Based Sparsity-Driven Adaptive Line Enhancer for Passive Sonars Amid Under-Ice Noise

1 Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China
2 Key Laboratory of Science and Technology on Advanced Underwater Acoustic Signal Processing, Chinese Academy of Sciences, Beijing 100190, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2023, 11(2), 269; https://doi.org/10.3390/jmse11020269
Submission received: 24 December 2022 / Revised: 18 January 2023 / Accepted: 19 January 2023 / Published: 24 January 2023
(This article belongs to the Special Issue Underwater Target Detection and Recognition)

Abstract: In order to detect weak underwater tonals, adaptive line enhancers (ALEs) have been widely applied in passive sonars. Unfortunately, conventional ALEs cannot perform well amid the impulse noise generated by ice cracking, snapping shrimp or other sources. This kind of noise follows a different noise model than Gaussian noise and leads to a noise-model mismatch in conventional ALEs. To mitigate the resulting performance degradation amid under-ice impulse noise, a modified ALE for passive sonars is proposed in this study. The proposed ALE is based on the least mean p-power (LMP) error criterion and on prior information about frequency-domain sparsity to improve the enhancement performance under impulse noise. The signal-to-noise ratio (SNR) gain is chosen as the metric for evaluating the proposed ALE. The simulation results show that the output SNR gain of the proposed ALE was, respectively, 9.3 and 2.6 dB higher than that of the sparsity-based ALE (SALE) and the least mean p-power ALE (PALE) when the input generalized SNR (GSNR) was −12 dB. The results of processing under-ice noise data also demonstrate that the proposed ALE performs the best among the four ALEs considered.

1. Introduction

Noise radiated from underwater sources such as ship propellers consists of both broadband continuous and narrowband components [1,2]. The narrowband components are typically referred to as tonals and carry important information for passive sonars. Enhancing these tonals is a crucial step in passive sonar target detection. Conventional methods are based on the hypothesis of Gaussian noise, whereas under-ice noise exhibits non-Gaussian characteristics [3]. This noise-model mismatch degrades the performance of conventional ALE methods, because heavy-tailed non-Gaussian noise has no finite second-order or higher-order statistics. Detecting under-ice targets therefore requires more effective approaches.
Owing to their strong performance in enhancing tonals, ALEs have been applied in underwater acoustic signal processing. ALEs are typically employed as preprocessing steps that improve the signal-to-noise ratio (SNR) and thereby the detection performance of passive sonars [4,5,6]. ALEs have also been applied in speech enhancement, biomedical signal processing and other fields [7,8,9].
An ALE based on the least mean square (LMS) algorithm was proposed first and is often referred to as the conventional ALE (CALE) in studies on passive sonars [4,10]. The CALE, built on the statistical mean square error (MSE) criterion, was introduced by Widrow et al. in 1975 [11]. Its adaptive filter exploits the short correlation length of the noise and the long correlation of single-frequency components to suppress the noise, yet there remains much room for improvement in its enhancement performance [12,13,14]. Considering the sparsity of narrowband signals in the frequency domain, a sparsity-based ALE (SALE) and a fast implementation of the SALE (FSALE), both exploiting prior sparse information, were proposed to improve line-spectrum enhancement [15]. On the basis of the SALE, further sparsity-driven ALEs applying different sparse penalties have been proposed [16]. Sparsity-driven ALEs achieve higher SNR gains than the CALE amid Gaussian noise and thus better passive detectability [17,18]. Compared to Gaussian environmental noise, under-ice noise contains many sharp, high-power impulse bursts. Strong impulses are generated during ice formation, rupture, collision and friction, which produce a heavier tail in the probability density function and make the under-ice environmental noise non-Gaussian and nonstationary [19]. Impulse noise is also common in tropical waters, where it is caused by snapping shrimp and other aquatic animals [20]. However, the CALE and the sparsity-driven ALE methods cannot perform well in non-Gaussian noise, because random processes following heavy-tailed non-Gaussian distributions do not possess finite second-order or higher-order statistics [21]. To improve performance amid impulse noise, the least mean p-power (LMP) algorithm has been applied to adaptive filters [22,23]. Built on the CALE and the LMP algorithm, the least mean p-power ALE (PALE) is helpful for suppressing impulsive non-Gaussian noise: instead of the square operation in the cost function of the CALE, the PALE uses the p-power of the error to construct the cost function [22]. Time-varying step-size methods are feasible ways to accelerate convergence of the LMP algorithm; the generalized variable step-size GVSS-LMP algorithm has been proven robust in adaptive filters [24]. In the KCGLMP algorithm, the LMP error criterion and kernel adaptive filters are combined to improve filtering accuracy and computational efficiency [25]. For constrained adaptive filters, the constrained least mean p-power (CLMP) method has been proposed to deal with non-Gaussian signals [26]. NLMP and NLMAD (p = 1) algorithms have also been proposed to obtain better filtering results [27]. Adaptive processing based on the LMP error criterion has been applied to noise reduction of electrocardiogram (ECG) signals, speech signals and other data [21,22,28].
To improve the enhancement performance amid under-ice noise, a least mean p-power-based sparsity-driven ALE (PSALE) is proposed in this paper. The LMP error criterion is applied in the PSALE to reduce the effect of non-Gaussian noise on the enhancement, and, to exploit the prior sparsity of narrowband tonals from targets, the PSALE is implemented in the frequency domain with a sparse constraint imposed on the cost function. To strengthen the effect of this sparse constraint, a p-norm is applied to it. In the simulation, the output SNR gain of the proposed PSALE was, respectively, 9.3 and 2.6 dB higher than that of the SALE and the PALE when the input GSNR was −12 dB. The results of processing real data also show that the PSALE performed better than the three other ALEs.
The remainder of this paper is organized as follows. In Section 2, the principle of conventional ALE is introduced, and the proposed PSALE is presented. In Section 3, the performance of the PSALE is evaluated through simulation. In Section 4, the results of processing the experimental data are shown. In Section 5, a discussion is provided. Finally, Section 6 concludes the paper.

2. Methods and Materials

2.1. Principle of Conventional ALE

Line spectrum enhancement is widely used to enhance weak tonals through adaptive filtering. The ALE is based on the principle that narrowband signals and broadband noise have different correlation lengths. As shown in the block diagram of Figure 1, an adaptive filter is employed in the CALE. The original input x(n) of the CALE is a sum of narrowband signals contaminated by broadband noise. The input of the adaptive filter is the original CALE input delayed by a delay parameter, and the reference (desired) signal of the adaptive filter is the original CALE input.
The decorrelation delay, measured in sampling periods, is the prediction depth of the CALE. To keep the noise components in the reference and the input of the adaptive filter uncorrelated while keeping the signal components correlated, the delay should be greater than the correlation length of the broadband noise and smaller than the correlation length of the narrowband signals. The cost function of the LMS algorithm is based on the MSE criterion, and the filter coefficients are adjusted according to this cost function. The CALE has low algorithmic complexity and is easy to implement, but the SNR gain after processing can be further improved.
The original signal in the discrete time domain is expressed by x(n) = s(n) + u(n), where n is the time index and s(n) is the sum of the underwater acoustic tonals expressed by
$s(n) = \sum_{m=1}^{M} A_m \sin\!\left(2\pi f_m n + \varphi_m\right),$  (1)
where M is the number of narrowband signals, A_m is the amplitude of the m-th tonal, f_m is its frequency, φ_m is its initial phase and u(n) is the additive broadband noise. x(n − n_0) = [x(n − n_0), x(n − n_0 − 1), …, x(n − n_0 − L + 1)]^T is the delayed vector of x(n), and n_0 is the delay parameter.
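For illustration, the tonal model in Formula (1) can be generated numerically as in the following NumPy sketch; the function name, the sampling rate fs and the example values are assumptions of this sketch rather than quantities specified in the paper.

```python
import numpy as np

def tonal_sum(freqs_hz, amps, phases, fs, n_samples):
    """Sum of narrowband tonals, Formula (1): s(n) = sum_m A_m*sin(2*pi*f_m*n/fs + phi_m)."""
    n = np.arange(n_samples)
    s = np.zeros(n_samples)
    for a, f, phi in zip(amps, freqs_hz, phases):
        s += a * np.sin(2.0 * np.pi * f * n / fs + phi)
    return s

# Example: a single 192 Hz tonal sampled at 2000 Hz, as in the simulation section.
s = tonal_sum([192.0], [1.0], [0.0], fs=2000, n_samples=4000)
```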
The coefficient vector of the adaptive filter is w(n) = [w_0(n), w_1(n), …, w_{L−1}(n)]^T, where L is the filter length. The output of the adaptive filter is computed by
$z(n) = w^{T}(n)\,x(n - n_0),$  (2)
and the estimation error ε ( n ) of the adaptive filter is given by
$\varepsilon(n) = x(n) - z(n).$  (3)
The cost function J ( n ) of the LMS algorithm can be expressed by the following formula
$J(n) = E\left(|\varepsilon(n)|^{2}\right).$  (4)
The filter coefficients can be computed by applying the steepest descent method expressed by
$w(n+1) = w(n) + 2\mu\,\varepsilon(n)\,x(n - n_0),$  (5)
where the positive value μ is the iterative step-size parameter. The LMS adaptive weight vector converges toward the optimal weight vector, while the filter coefficients maintain a steady-state error, and this steady-state error limits the performance of the CALE. The optimal weight vector, also called the Wiener weight vector, is given by w_opt = R^{−1} p, where R is the covariance matrix of x(n − n_0) and p is the cross-correlation vector between x(n − n_0) and x(n), denoted by Formulas (6) and (7), respectively.
$R = E\left[x(n - n_0)\,x^{T}(n - n_0)\right].$  (6)
$p = E\left[x(n)\,x(n - n_0)\right].$  (7)
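The CALE recursion of Formulas (2), (3) and (5) can be summarized by the following minimal NumPy sketch; the function name, the loop bounds and the zero initialization are illustrative assumptions and not taken from the paper's implementation.

```python
import numpy as np

def cale_lms(x, n0, L, mu):
    """Conventional ALE: LMS prediction of x(n) from the delayed vector x(n - n0)."""
    N = len(x)
    w = np.zeros(L)                                   # adaptive weight vector w(n)
    z = np.zeros(N)                                   # enhanced output z(n)
    for n in range(n0 + L - 1, N):
        x_del = x[n - n0 - L + 1:n - n0 + 1][::-1]    # [x(n-n0), ..., x(n-n0-L+1)]
        z[n] = w @ x_del                              # Formula (2)
        eps = x[n] - z[n]                             # Formula (3)
        w = w + 2.0 * mu * eps * x_del                # Formula (5), steepest descent
    return z
```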

2.2. Proposed PSALE

In this section, a PSALE method is proposed by applying the LMP error criterion and sparse constraint on the ALE to improve the enhancement performance amid non-Gaussian noise.

2.2.1. LMP Error Criterion

The LMP algorithm is applied to the adaptive filter to improve the SNR gain amid non-Gaussian noise [22]. The filter coefficients in the CALE are updated by Formula (5). When the noise follows a Gaussian distribution, the LMS-based adaptive filter in the CALE can easily reduce the estimation error. However, when sharp spikes with high amplitude appear in the environment, the update of the filter coefficients is disturbed by the suddenly large error ε(n). In this case, the cost function can be constructed from the p-th-order distance of the error ε(n), expressed by
$J(n) = E\left(|\varepsilon(n)|^{p}\right),$  (8)
where p is a positive norm parameter satisfying 1 < p < 2. The filter coefficients are updated by
$w(n+1) = w(n) + \mu\,|\varepsilon(n)|^{p-1}\operatorname{sgn}(\varepsilon(n))\,x(n - n_0),$  (9)
where μ is the positive learning parameter and sgn(·) denotes the sign operation. When p = 2, the LMP algorithm degenerates to the LMS algorithm; when p = 1, it degenerates to the least mean absolute deviation (LMAD) algorithm. The p-th-order distance effectively suppresses impulse noise; thus, the LMP algorithm is more suitable than the LMS algorithm for processing data amid non-Gaussian noise.
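A minimal sketch of the resulting LMP-driven enhancer (the PALE update of Formula (9)) is given below; only the weight update differs from the LMS sketch above, and all names are illustrative.

```python
import numpy as np

def pale_lmp(x, n0, L, mu, p=1.1):
    """ALE driven by the LMP error criterion: the LMS error term is replaced by
    |eps|**(p-1) * sgn(eps), which bounds the influence of impulsive samples."""
    N = len(x)
    w = np.zeros(L)
    z = np.zeros(N)
    for n in range(n0 + L - 1, N):
        x_del = x[n - n0 - L + 1:n - n0 + 1][::-1]
        z[n] = w @ x_del
        eps = x[n] - z[n]
        w = w + mu * np.abs(eps) ** (p - 1) * np.sign(eps) * x_del   # Formula (9)
    return z
```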

2.2.2. Principle of PSALE

In the conventional ALE, the filter coefficients are updated in the time domain. Because the tonal signal is narrowband in the frequency domain, only a few elements of the frequency-domain filter coefficients contribute to the enhancement; the weight coefficients of the adaptive filter are therefore sparse in the frequency domain. To exploit this prior sparsity information, the proposed PSALE is implemented in the frequency domain. The working schematic is shown in Figure 2.
In the SALE, matrix F is the discrete Fourier operator given by Formula (10)
$F = \frac{1}{\sqrt{L}}\begin{pmatrix} 1 & 1 & \cdots & 1 \\ 1 & e^{-j2\pi/L} & \cdots & e^{-j2\pi(L-1)/L} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & e^{-j2\pi(L-1)/L} & \cdots & e^{-j2\pi(L-1)^{2}/L} \end{pmatrix},$  (10)
where we have
$F^{H}F = I.$  (11)
The frequency-domain forms of the adaptive filter input and of the filter coefficients can be denoted as x_F(n) = F x(n − n_0) and w_F(n) = F w(n), respectively. The output of the adaptive filter can then be represented by
$y(n) = w^{T}(n)\,x(n - n_0) = w^{T}(n)\,F^{H}F\,x(n - n_0) = w_F^{H}(n)\,x_F(n).$  (12)
The frequency-domain filter coefficients are updated according to the cost function; thus, constructing the cost function is central to the whole PSALE.
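The unitarity of F and the equivalence of the time-domain and frequency-domain filter outputs in Formulas (10)-(12) can be checked numerically as follows; the sketch assumes the unitary DFT convention with a negative exponent and uses random test vectors.

```python
import numpy as np

L = 8
l = np.arange(L)
F = np.exp(-2j * np.pi * np.outer(l, l) / L) / np.sqrt(L)   # DFT operator, Formula (10)

assert np.allclose(F.conj().T @ F, np.eye(L))               # Formula (11): F^H F = I

rng = np.random.default_rng(0)
w = rng.standard_normal(L)          # real time-domain weights w(n)
x_del = rng.standard_normal(L)      # delayed input vector x(n - n0)
w_F, x_F = F @ w, F @ x_del
# Formula (12): w^T(n) x(n - n0) equals w_F^H(n) x_F(n); the imaginary part is numerically zero
assert np.isclose(w @ x_del, np.vdot(w_F, x_F).real)
```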
The estimation error of the adaptive filter can be determined by
$\varepsilon(n) = x(n) - y(n).$  (13)
In the conventional ALE, the adaptive filter is based on the LMS algorithm and the cost function is an estimate of the MSE. A heavy-tailed non-Gaussian distribution, however, does not possess finite second-order or higher-order statistics, so the MSE criterion is not suitable for building the cost function under non-Gaussian noise. To suppress the non-Gaussian noise, we impose the LMP error criterion on the adaptive filter and obtain the first part of the cost function, expressed by Formula (14)
$J_1(n) = E\left(|\varepsilon(n)|^{p}\right),$  (14)
where 1 < p < 2 .
As analyzed above, the frequency-domain filter coefficients w_F(n) should be sparse. To utilize this sparsity, we impose a sparse penalty on w_F(n). The most widely used norms are l_0 and l_1, where l_0 counts the non-zero elements of a vector and l_1 is the sum of the absolute values of its elements. Because minimizing l_0 is an NP-hard problem and l_1 is the tightest convex relaxation of l_0, l_1 is commonly used as an approximate substitute for l_0. However, p-norm regularization with 0 < p < 1 has been proven to yield much sparser solutions than l_1 regularization [29]. Thus, we impose a p_1-norm sparse constraint on w_F(n). The sparse-constraint part of the cost function can be expressed by Formula (15)
$J_2(n) = k\,\lVert w_F(n)\rVert_{p_1},$  (15)
where 0 < p_1 < 1.
The term in Formula (15) is called the zero-forcing term: the noise components of the filter coefficients are suppressed so that the sparse components are maintained. The value of k controls how strongly the zero-forcing term suppresses noise: an excessively small k makes the zero-forcing term ineffective, whereas an excessively large k also suppresses the large weights.
Based on the above analysis, the optimized cost function can be expressed by
$J(n) = J_1(n) + J_2(n) = E\left(|\varepsilon(n)|^{p}\right) + k\,\lVert w_F(n)\rVert_{p_1}.$  (16)
The gradient of the cost function J(n) with respect to w_F^*(n) is given by
$\nabla_{w_F^{*}(n)} J(n) = \nabla_{w_F^{*}(n)} E\left(|\varepsilon(n)|^{p}\right) + k\,\nabla_{w_F^{*}(n)} \lVert w_F(n)\rVert_{p_1}.$  (17)
By computing the two gradient parts in Formula (17), we can obtain Formulas (18) and (19)
$\nabla_{w_F^{*}(n)} E\left(|\varepsilon(n)|^{p}\right) = -\,p\,|\varepsilon(n)|^{p-1}\operatorname{sgn}(\varepsilon(n))\,x_F(n),$  (18)
$\nabla_{w_F^{*}(n)} \lVert w_F(n)\rVert_{p_1} = \nabla_{w_F^{*}(n)} \left( \sum_{l=0}^{L-1} \lvert w_{F,l}(n)\rvert^{p_1} \right)^{1/p_1} = \frac{1}{p_1}\,\lVert w_F(n)\rVert_{p_1}^{1-p_1}\, p_1 \operatorname{sgn}(w_F(n)) \odot \lvert w_F(n)\rvert^{p_1-1} = \lVert w_F(n)\rVert_{p_1}^{1-p_1}\operatorname{sgn}(w_F(n)) \odot \lvert w_F(n)\rvert^{p_1-1},$  (19)
where the operator ⊙ denotes element-wise multiplication.
The updating formulation of the adaptive filter can be written as
$w_F(n+1) = w_F(n) + \mu_1\left[\,p\,|\varepsilon(n)|^{p-1}\operatorname{sgn}(\varepsilon(n))\,x_F(n) - k\,\lVert w_F(n)\rVert_{p_1}^{1-p_1}\operatorname{sgn}(w_F(n)) \odot \lvert w_F(n)\rvert^{p_1-1}\right] = w_F(n) + \mu\,|\varepsilon(n)|^{p-1}\operatorname{sgn}(\varepsilon(n))\,x_F(n) - \mu_1 k\,\lVert w_F(n)\rVert_{p_1}^{1-p_1}\operatorname{sgn}(w_F(n)) \odot \lvert w_F(n)\rvert^{p_1-1},$  (20)
where μ = pμ_1 is the iteration step size that controls the convergence speed of the adaptive algorithm, and μ_1 is an adjusting parameter.
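A single frequency-domain weight update in the spirit of Formula (20) can be sketched as below. The treatment of sgn(·) for complex weights (taken here as the unit phasor w/|w|), the small regularizer eps_reg and the merging of μ_1 and k into one coefficient are assumptions of this sketch, not prescriptions of the paper.

```python
import numpy as np

def psale_update(w_F, x_F, x_n, mu, k, p=1.1, p1=0.5, eps_reg=1e-12):
    """One PSALE iteration following Formula (20).

    w_F : complex frequency-domain weights w_F(n)
    x_F : transformed delayed input F @ x(n - n0)
    x_n : current reference sample x(n)
    k   : plays the role of mu_1 * k in Formula (20)."""
    y = np.vdot(w_F, x_F).real                        # y(n), Formula (12)
    err = x_n - y                                     # estimation error, Formula (13)
    lmp_term = np.abs(err) ** (p - 1) * np.sign(err) * x_F
    # Zero-forcing term from the p1-norm penalty, Formula (19)
    mag = np.abs(w_F) + eps_reg
    norm_p1 = np.sum(mag ** p1) ** (1.0 / p1)
    zero_forcing = norm_p1 ** (1.0 - p1) * (w_F / mag) * mag ** (p1 - 1.0)
    return w_F + mu * lmp_term - k * zero_forcing
```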

3. Simulation Performance

In this section, the α stable distribution is introduced, the performance of the proposed PSALE is compared with those of several adaptive line spectrum enhancers (CALE, PALE and SALE), and different situations are discussed.

3.1. The α Stable Distribution

The characteristics of the α stable distribution are governed by four parameters, α, β, γ and δ, which respectively determine its tail, skewness, dispersion and location. α ∈ (0, 2] is the characteristic exponent, which determines the tail thickness of the probability density function of a random sequence. β ∈ [−1, 1] is the skew parameter, which affects the symmetry of the probability density function: when β = 0, the distribution is symmetric; when α = 1 and β = 0, the α stable distribution reduces to the Cauchy distribution; and when α = 2, it becomes the Gaussian distribution. The smaller the value of α, the heavier the tail of the probability density function and the stronger the impulsivity of the random sequence. γ ∈ (0, +∞) is the dispersion parameter, which measures the spread of the samples around the mean and is similar in role to the variance. δ ∈ (−∞, +∞) is the location parameter, which determines the offset of the probability density function along the x-axis.
Figure 3 shows the influence of the different parameters on the probability density function of the α stable distribution. The inset in Figure 3a shows the tail behaviour of the probability density function for different values of the characteristic exponent α.
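For reference, impulsive samples from an α stable law can be drawn with SciPy as in the following sketch; SciPy's parameterization (alpha, beta, loc, scale) may differ in detail from the (α, β, γ, δ) convention described above, so the mapping shown is an assumption.

```python
import numpy as np
from scipy.stats import levy_stable

# alpha < 2 gives the heavy tail discussed above; beta = 0 gives a symmetric law.
alpha, beta, gamma, delta = 1.5, 0.0, 1.0, 0.0
noise = levy_stable.rvs(alpha, beta, loc=delta, scale=gamma, size=10_000,
                        random_state=0)
print(np.max(np.abs(noise)))   # occasional very large spikes reveal the heavy tail
```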

3.2. Simulation

The SNR under a non-Gaussian distribution is computed in a similar way to the Gaussian case. Assume that the noise in the discrete time domain is n[k], where k is the time index, the clean signal is x[k] and the signal length is K; then the signal power is P_x = (1/K) Σ_{k=1}^{K} x[k]^2. We use the generalized SNR (GSNR) to quantify the relationship between signal and noise, computed as
$\mathrm{GSNR} = 10\log_{10}\!\left(\frac{P_x}{\gamma^{\alpha}}\right)\ (\mathrm{dB}).$  (21)
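Under the reading of Formula (21) given above, the GSNR and the noise scaling needed to reach a target GSNR can be sketched as follows; the helper names and the use of the α-stable scale parameter as γ are assumptions of this sketch.

```python
import numpy as np

def gsnr_db(signal, gamma, alpha):
    """Formula (21): 10*log10(P_x / gamma**alpha), with P_x the clean-signal power."""
    p_x = np.mean(signal ** 2)
    return 10.0 * np.log10(p_x / gamma ** alpha)

def scale_noise_for_gsnr(signal, unit_scale_noise, alpha, target_gsnr_db):
    """Scale alpha-stable noise drawn with scale 1 so the mixture reaches a target GSNR."""
    p_x = np.mean(signal ** 2)
    gamma = (p_x * 10.0 ** (-target_gsnr_db / 10.0)) ** (1.0 / alpha)
    return gamma * unit_scale_noise
```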
First, the case of a single tonal is discussed. The original input was the sum of one narrowband signal and broadband impulse noise following the α stable distribution. The signal frequency was 192 Hz, the sampling frequency was 2000 Hz, the signal length was 200 s and the GSNR was −11 dB. The contaminated signal was processed with the different methods. The learning parameter μ was 1 × 10−10, and the norm parameter p was 1.1. The search interval of the parameter k in the SALE was 9 × 10−12, and the search intervals of the parameters k and p_1 in the PSALE were 2 × 10−13 and 0.5, respectively.
We use low-frequency analysis and recording (LOFAR) diagrams to evaluate the enhancement results here and in what follows. The output SNR of the processed signal is calculated from the LOFAR results in the frequency domain in three steps: (1) sum the power at the detected line-spectrum frequencies to obtain the signal power; (2) average the power of the broadband noise, excluding the signal bins, to obtain the noise power; and (3) divide the signal power by the noise power to obtain the output SNR. The LOFAR diagrams after the different processing are shown in Figure 4: the panels are the LOFAR spectra of the original input data and of the outputs of the CALE, PALE, SALE and PSALE, respectively. The output SNRs of the PALE, SALE and PSALE are 16.9, 16.7 and 19.0 dB higher than that of the CALE, respectively. The proposed PSALE, shown in Figure 4e, had the clearest line spectrum and the best enhancement performance.
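The three-step output SNR computation described above can be written compactly as follows; the function operates on one averaged LOFAR power spectrum, and its bin indexing is an assumption of this sketch.

```python
import numpy as np

def output_snr_db(spectrum_power, tonal_bins):
    """Output SNR from a LOFAR power spectrum: (1) sum the power in the detected
    line-spectrum bins, (2) average the remaining bins as the noise power,
    (3) return the ratio in dB."""
    tonal_bins = np.asarray(tonal_bins)
    signal_power = spectrum_power[tonal_bins].sum()                  # step (1)
    noise_mask = np.ones(len(spectrum_power), dtype=bool)
    noise_mask[tonal_bins] = False
    noise_power = spectrum_power[noise_mask].mean()                  # step (2)
    return 10.0 * np.log10(signal_power / noise_power)               # step (3)
```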
To compare the enhancement performance of the ALEs, the output SNR gains under different GSNRs were evaluated.
Table 1 lists the output SNR gains (in dB) of the various ALEs relative to the CALE. The GSNR was set to −12, −9, −6, −3 and 0 dB, the simulated signal lasted 200 s, and the values were averaged over 10 runs at each GSNR. The results show that the proposed PSALE always had the highest SNR gain; in particular, the output SNR gain of the PSALE was, respectively, 9.3 and 2.6 dB higher than that of the SALE and the PALE when the input GSNR was −12 dB.
The case of three tonals is also discussed. The frequencies of the three tonals were 86, 135 and 192 Hz, the sampling frequency was 2000 Hz, the signal length was 300 s and the GSNR was −10 dB. The four ALEs were used to process the contaminated input. The learning parameter μ was 1 × 10−12, and the norm parameter p was 1.1. The search interval of the parameter k in the SALE was 9 × 10−14, and the search intervals of the parameters k and p_1 in the PSALE were 2 × 10−15 and 0.5, respectively.
The LOFAR diagrams of the original input and of the outputs of the different ALEs are shown in Figure 5. The subgraphs, from top to bottom, are the original LOFAR diagram and the results after processing by the CALE, PALE, SALE and PSALE. The output SNRs of the PALE, SALE and PSALE were 13.7, 7.5 and 17.2 dB higher than that of the CALE, respectively. The LOFAR diagram after processing by the PSALE, shown in Figure 5e, exhibits the best performance among all of the ALEs. The proposed PSALE could efficiently suppress the non-Gaussian noise where the CALE and SALE failed.

4. Data Analysis

In this section, we use the environmental noise data recorded under ice to verify the performance of the proposed method and the CALE, PALE and SALE. The situations of one and three tonals are discussed.

4.1. Environmental Noise Characteristics

The under-ice noise data recorded at a depth of 95 m contained broadband noise with sharp bursts. The sampling frequency after downsampling was 4000 Hz. The noise data in the time domain and their LOFAR diagram are depicted in Figure 6a and Figure 7, respectively. Figure 7 shows that the strong non-Gaussian impulsive noise was not evenly distributed across frequency bands. To determine the frequency bands in which the non-Gaussian impulse noise was concentrated, we fitted an α stable distribution to the under-ice noise in every frequency band and obtained the fitted value of the parameter α; the fitting result is shown in Figure 8. From Section 3.1, we know that the smaller the parameter α, the stronger the non-Gaussian and impulsive character. Therefore, signals in the frequency bands with α ≤ 1.7 have stronger impulsivity. We chose the filtered noise between 650 and 850 Hz, which exhibited stronger non-Gaussian characteristics, as the non-Gaussian environmental noise for verifying the methods. The filtered noise data are shown in Figure 6b. Next, we analyzed the performance of the proposed PSALE in this frequency band and compared it with those of the three other ALEs amid impulse noise.
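A band-by-band α estimate of the kind shown in Figure 8 can be obtained along the lines of the following sketch, assuming SciPy's levy_stable.fit is available; the filter order, the sub-sampling used to keep the fit tractable and the function name are choices of this sketch, not of the paper.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.stats import levy_stable

def band_alpha(noise, fs, f_lo, f_hi):
    """Band-pass the recorded noise and fit an alpha-stable law to the filtered
    samples, returning the characteristic exponent alpha of that band."""
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, noise)
    alpha, beta, loc, scale = levy_stable.fit(band[::10])   # sub-sample: the fit is slow
    return alpha
```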

4.2. Experimental Data Analysis

Next, we added a narrowband component to the filtered noise data to form the under-ice received data. The frequency of the tonal was 750 Hz, and the impulse noise occupied the band between 650 and 850 Hz. The signal lasted 200 s. The learning parameter μ was 5 × 10−12, and the norm parameter p was 1.1. The search interval of the parameter k in the SALE was 2 × 10−22, and the search intervals of the parameters k and p_1 in the PSALE were 9 × 10−20 and 0.5, respectively. The LOFAR diagrams are depicted in Figure 9.
Figure 9a shows the original LOFAR diagram of the unprocessed signal with low SNR. The output SNRs of the PALE, SALE and PSALE were 2.5, 14.7 and 16.4 dB higher than that of the CALE, respectively. Figure 9e shows the processing result of the proposed PSALE: the broadband noise and impulse noise were strongly suppressed, and the line spectrum at 750 Hz is obvious. The line spectrum produced by the PSALE is much clearer than those of the other ALEs.
Finally, we discuss the case of three tonals under non-Gaussian noise for passive detection. We added three narrowband signals at 690, 750 and 810 Hz to the recorded environmental impulse noise to verify the proposed PSALE. The signal lasted 200 s. The learning parameter μ was 5 × 10−12, and the norm parameter p was 1.1. The search interval of the parameter k in the SALE was 1 × 10−22, and the search intervals of the parameters k and p_1 in the PSALE were 1 × 10−19 and 0.5, respectively. The processing results are shown in Figure 10.
The first subgraph is the original LOFAR spectrum. The output SNRs of the PALE, SALE and PSALE were 2.4, 9.5 and 12.2 dB higher than that of the CALE, respectively. Figure 10c,e show three clear tonals, while the tonals in Figure 10b,d are not sufficiently clear. The proposed PSALE thus had the highest output SNR and performed the best among the four ALEs.

5. Discussion

Non-Gaussian noise contains rich impulse components and has statistical characteristics different from those of Gaussian noise. Conventional ALEs are based precisely on the hypothesis of Gaussian noise, so when they are employed to suppress non-Gaussian noise, the model mismatch degrades the enhancement performance. Under-ice noise exhibits non-Gaussian characteristics. To improve the enhancement performance amid under-ice noise, we proposed the PSALE method.
First, the method was tested by simulation in the scenarios of one under-ice target and three under-ice targets. Second, the experimental data were processed in order to compare and discuss the results provided by other approaches and our approach. The results show that the PSALE had a higher SNR gain than the three other ALEs amid non-Gaussian noise.
In past studies, both the CALE and the SALE were based on the Gaussian noise model, and the SALE performed better than the CALE thanks to its frequency-domain sparsity penalty. Because of the noise-model mismatch, these two methods do not have a strong ability to suppress non-Gaussian noise, whereas our method is aimed at suppressing it. The PALE performs better than the CALE amid non-Gaussian noise owing to the LMP algorithm; the PSALE additionally imposes sparse penalties on top of the LMP criterion. Although the SALE also imposes sparse penalties, the PSALE uses p-norm sparse penalties in a cost function built to suppress non-Gaussian noise. There also exist the logarithmic sparse penalty and the CIM (correntropy-induced metric) sparse penalty, which can be applied in the cost function and are effective for overcoming the non-Gaussian problem [23]. Compared to the variable step-size methods [24], the PSALE does not have a variable step-size parameter, and all parameters must be provided manually; such studies may help to determine the step-size parameter of the PSALE automatically. In the PSALE, the gradient is computed with the steepest gradient (SG) algorithm. The conjugate gradient (CG) algorithm has also been used in adaptive filters to improve performance in non-Gaussian noise [25]; the CG algorithm converges faster than the SG algorithm, while the SG algorithm has lower algorithmic complexity.
In summary, we showed that the PSALE can efficiently suppress non-Gaussian noise. Several problems still need further study: the specific relationship between the norm parameter p and the characteristics of the non-Gaussian noise, which determines the optimal choice of p, is not yet clear. We will address these issues in future work.

6. Conclusions

In this paper, considering the performance degradation of conventional ALEs against a background of non-Gaussian noise, an improved method, the PSALE, was proposed. By suppressing non-Gaussian noise, the proposed PSALE yields a higher output SNR and superior performance. The simulations show that the proposed PSALE achieved the highest output SNR compared to the CALE, PALE and SALE; in particular, the output SNR gain of the PSALE was, respectively, 9.3 and 2.6 dB higher than that of the SALE and the PALE when the input GSNR was −12 dB. The results of processing the real data also support the superiority of the proposed PSALE amid impulse noise. In the future, the relationship between the parameters of the adaptive filter and the impulsivity of the noise will be investigated further. Although the PSALE performed well amid non-Gaussian noise, it still has several weaknesses. On the one hand, both parameter tuning and computation take a long time. On the other hand, the LOFAR results show that some impulse noise remains in the background, which means the enhancement of the PSALE can be improved further. As a next step, an auto-tuning algorithm could be designed based on the LOFAR results, and the cost function could be optimized further.

Author Contributions

Conceptualization, C.C. and H.H.; methodology, Y.L.; software, Y.L. and C.C.; validation, Y.L., C.C. and S.J.; formal analysis, Y.L. and S.J.; investigation, Y.L.; resources, H.H., C.C. and S.J.; writing—original draft preparation, Y.L.; writing—review and editing, Y.L. and C.C.; project administration, H.H. and C.C.; funding acquisition, H.H. All authors have read and agreed to the published version of the manuscript.

Funding

The research was funded by the Project of Arctic Multi-Node Cooperative Localization Technology, Project No. E21O130101.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, Y.; Ma, S.; Fan, Z.; Liang, G.; Li, Q. Robust DFT-based generalised likelihood ratio test for underwater tone detection. IET Radar Sonar Navig. 2017, 11, 1845–1853.
  2. Lourens, J.G. Passive Sonar Detection of Ships with Spectrograms. In Proceedings of the IEEE Symposium on Communications and Signal Processing, Johannesburg, South Africa, 29 June 1990.
  3. Baggeroer, A.B.; Scheer, E.K.; Colosi, J.A.; Cornuelle, B.D.; Dushaw, B.D.; Dzieciuch, M.A.; Howe, B.M.; Mercer, J.A.; Munk, W.H.; Spindel, R.C.; et al. Statistics and vertical directionality of low-frequency ambient noise at the North Pacific Acoustic Laboratory site. J. Acoust. Soc. Am. 2005, 117, 1643–1665.
  4. Yan, Z.; Niezrecki, C.; Cattafesta, L.N.; Beusse, D.O. Background noise cancellation of manatee vocalizations using an adaptive line enhancer. J. Acoust. Soc. Am. 2006, 120, 145–152.
  5. Guo, Y.; Zhao, J.; Chen, H. A novel algorithm for underwater moving-target dynamic line enhancement. Appl. Acoust. 2003, 64, 1159–1169.
  6. Hao, Y.; Qiu, L.; Chi, C.; Liang, G. Sparsity-inducing frequency-domain adaptive line enhancer for unmanned underwater vehicle sonar. Appl. Acoust. 2021, 173, 107689.
  7. Sanei, S.; Lee, T.K.M.; Abolghasemi, V. A New Adaptive Line Enhancer Based on Singular Spectrum Analysis. IEEE Trans. Biomed. Eng. 2012, 59, 428–434.
  8. Perez-Neira, A.; Anton-Haro, C.; Lagunas, M.A. An adaptive fuzzy logic enhancer for rejection of narrowband interference in DS-Spread Spectrum. In Proceedings of the 48th IEEE Vehicular Technology Conference, Ottawa, ON, Canada, 18–21 May 1998.
  9. Zhang, X.; Hui, J.; Li, J.; Li, H.; Bu, X. Application of adaptive line spectrum enhancer in adaptive matched filter. J. Acoust. Soc. Am. 2019, 145, 1732–1733.
  10. Koford, J.S.; Groner, G.F. The use of an adaptive threshold element to design a linear optimal pattern classifier. IEEE Trans. Inf. Theory 1966, 12, 42–50.
  11. Widrow, B.; Glover, J.R.; McCool, J.M.; Kaunitz, J.; Williams, C.S.; Hearn, R.H.; Zeidler, J.R.; Eugene Dong, J.; Goodlin, R.C. Adaptive noise cancelling: Principles and applications. Proc. IEEE 1975, 63, 1692–1716.
  12. Gharieb, R.; Horita, Y.; Murai, T.; Cichocki, A. Unity-gain cumulant-based adaptive line enhancer. In Proceedings of the 10th IEEE Workshop on Statistical Signal and Array Processing, Pocono Manor, PA, USA, 14–16 August 2000.
  13. Ibrahim, H.M.; Gharieb, R.R.; Hassan, M.M. A higher-order statistics-based adaptive algorithm for line enhancement. IEEE Trans. Signal Process. 1999, 47, 527–532.
  14. Ibrahim, H.M.; Gharieb, R.R. Two-dimensional cumulant-based adaptive enhancer. IEEE Trans. Signal Process. 1999, 47, 593–596.
  15. Hao, Y.; Chi, C.; Qiu, L.; Liang, G. Sparsity-based adaptive line enhancer for passive sonars. IET Radar Sonar Navig. 2019, 13, 1796–1804.
  16. Hao, Y.; Chi, C.; Liang, G. Sparsity-driven adaptive enhancement of underwater acoustic tonals for passive sonars. J. Acoust. Soc. Am. 2020, 147, 2192–2204.
  17. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. B 1996, 58, 267–288.
  18. Bradley, P.S.; Mangasarian, O.L. Feature selection via concave minimization and support vector machines. In Proceedings of the 15th International Conference on Machine Learning, Madison, WI, USA, 24–27 July 1998.
  19. Veitch, J.G.; Wilks, A.R. A characterization of Arctic undersea noise. J. Acoust. Soc. Am. 1985, 77, 989–999.
  20. Chitre, M.A.; Potter, J.R.; Ong, S. Optimal and Near-Optimal Signal Detection in Snapping Shrimp Dominated Ambient Noise. IEEE J. Ocean. Eng. 2006, 31, 497–503.
  21. Shao, M.; Nikias, C.L. Signal processing with fractional lower order moments: Stable processes and their applications. Proc. IEEE 1993, 81, 986–1010.
  22. Pei, S.; Tseng, C. Least mean p-power error criterion for adaptive FIR filter. IEEE J. Sel. Areas Commun. 1994, 12, 1540–1547.
  23. Ma, W.; Chen, B.; Qu, H.; Zhao, J. Sparse least mean p-power algorithms for channel estimation in the presence of impulsive noise. Signal Image Video Process. 2016, 10, 503–510.
  24. Rai, A.; Kohli, A.K. Adaptive Polynomial Filtering using Generalized Variable Step-Size Least Mean pth Power (LMP) Algorithm. Circuits Syst. Signal Process. 2014, 33, 3931–3947.
  25. Huang, X.; Xiong, K.; Wang, L.; Wang, S. The Robust Kernel Conjugate Gradient Least Mean p-Power Algorithm. In Proceedings of the 2019 2nd China Symposium on Cognitive Computing and Hybrid Intelligence, Xi'an, China, 21–22 September 2019.
  26. Peng, S.; Wu, Z.; Chen, B. Constrained least mean p-power error algorithm. In Proceedings of the 2016 35th Chinese Control Conference, Chengdu, China, 27–29 July 2016.
  27. Arikan, O.; Enis Cetin, A.; Erzin, E. Adaptive filtering for non-Gaussian stable processes. IEEE Signal Process. Lett. 1994, 1, 163–165.
  28. Li, G.; Zhang, H.; Wang, G.; Huang, F. Least mean p-power algorithms with generalized correntropy. Signal Process. 2021, 185, 108058.
  29. Xu, Z.; Chang, X.; Xu, F.; Zhang, H. L1/2 Regularization: A Thresholding Representation Theory and a Fast Solver. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 1013–1027.
Figure 1. Block diagram of the CALE.
Figure 2. Schematic of the PSALE.
Figure 3. The change of the probability density distribution function with different parameters.
Figure 4. LOFAR spectrums: (a) original input signal; (b) CALE output; (c) PALE output; (d) SALE output; (e) PSALE output.
Figure 5. LOFAR spectrums: (a) original input signal; (b) CALE output; (c) PALE output; (d) SALE output; (e) PSALE output.
Figure 6. Recorded under-ice noise data in the time domain.
Figure 7. LOFAR spectrum of the measured under-ice noise.
Figure 8. The fitting result of parameter α.
Figure 9. LOFAR spectrums: (a) original input signal; (b) CALE output; (c) PALE output; (d) SALE output; (e) PSALE output.
Figure 10. LOFAR spectrums: (a) original input signal; (b) CALE output; (c) PALE output; (d) SALE output; (e) PSALE output.
Table 1. SNR gains of the various ALEs relative to the CALE.

GSNR (dB)   PALE (dB)   SALE (dB)   PSALE (dB)
−12         16.3        9.6         18.9
−9          12.0        10.6        12.4
−6          10.5        7.7         10.6
−3          7.9         7.0         8.0
0           3.5         2.9         3.6

