Article

Regularization Factor Selection Method for l1-Regularized RLS and Its Modification against Uncertainty in the Regularization Factor

1 Department of Electrical Engineering, College of Electronics and Information Engineering, Sejong University, Gwangjin-gu, Seoul 05006, Korea
2 School of Electronics Engineering, College of IT Engineering, Kyungpook National University, Daegu 41566, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(1), 202; https://doi.org/10.3390/app9010202
Submission received: 5 December 2018 / Revised: 31 December 2018 / Accepted: 4 January 2019 / Published: 8 January 2019
(This article belongs to the Special Issue Modelling, Simulation and Data Analysis in Acoustical Problems)


Featured Application

This algorithm can be applied to various kinds of sparse channel estimations, e.g., room impulse response, early reflection, and underwater channel response.

Abstract

This paper presents a new l1-RLS method for estimating a sparse impulse response. A new regularization factor calculation method is proposed for l1-RLS that requires no prior information about the true channel response. In addition, we derive a new model to compensate for uncertainty in the regularization factor. Estimation results for several kinds of sparse impulse responses show that the proposed method, without a priori channel information, is comparable to the conventional method with a priori channel information.

1. Introduction

Room impulse response (RIR) estimation arises in many applications that use acoustic signal processing. RIR identification [1] is fundamental to various applications such as room-geometry-related spatial audio [2,3,4,5], acoustic echo cancellation (AEC) [6], speech enhancement [7], and dereverberation [8]. As described in [9], the RIR has relatively large magnitude values during the early part of the reverberation and fades to smaller values during the later part. This indicates that most RIR entries have values close to zero; therefore, the RIR has a sparse structure. The sparse RIR model is useful for estimating RIRs in real acoustic environments when the source is given a priori [10]. There has been recent interest in adaptive algorithms that exploit sparsity in various signals and systems [11,12,13,14,15,16,17,18,19,20,21,22]. Many adaptive algorithms based on least mean squares (LMS) [11,12] and recursive least squares (RLS) [14,15,16,17] have been reported with different penalty functions. In sparse estimation research, Eksioglu and Tanc [17] proposed a sparse RLS algorithm, l1-RLS, which is fully recursive like the plain RLS algorithm, together with a calculation method for its regularization factor. These recursive algorithms have the potential for sparse RIR estimation; however, the regularization factor must be established before applying them, and its calculation requires information about the true sparse channel response for good performance. The authors in [18,19] have also proposed recursive regularization factor selection methods; however, these methods still need the true impulse response in advance.
In this paper, we propose a new regularization factor calculation method for the l1-RLS algorithm in [17]. The new calculation requires no prior information about the true channel response, which makes it possible to apply the l1-RLS algorithm in various room environments. In addition, we derive a new model equation for l1-RLS in [17] with uncertainty in the regularization factor and show that it is similar to the total least squares (TLS) model, which compensates for uncertainty in the regularization factor calculated without the true channel response. For the performance evaluation, we simulate four different sparse channels and compare channel estimation performance. We show that, without any information about the true channel impulse response, the performance of the proposed algorithm is comparable to that of l1-RLS with knowledge of the true channel impulse response.
This paper is organized as follows. In Section 2, we summarize l1-RLS [17]. In Section 3, we summarize the measure of sparsity. In Section 4, we propose a new method for the regularization factor calculation. In Section 5, we show that l1-RLS with uncertainty in the regularization factor can be modeled as a TLS problem. In Section 6, we summarize the l1-RTLS (recursive total least squares) algorithm as a solution for l1-RLS with uncertainty in the regularization factor. In Section 7, we present simulation results showing the performance of the proposed algorithm. Finally, we conclude in Section 8.

2. Summary of l1-RLS

In the sparse channel estimation problem of interest, the system observes a signal represented by an $M \times 1$ vector $\mathbf{x}(k) = [x_k, \ldots, x_{k-M+1}]^T$ at time instant $k$, performs filtering, and obtains the output $y(k) = \mathbf{x}^T(k)\mathbf{w}_o(k)$, where $\mathbf{w}_o(k) = [w_k, \ldots, w_{k-M+1}]^T$ is the $M$-dimensional actual system with a finite impulse response (FIR) structure. For system estimation, an adaptive filter applies an $M$-dimensional vector $\mathbf{w}(k)$ to the same signal vector $\mathbf{x}(k)$, produces an estimated output $\hat{y}(k) = \mathbf{x}^T(k)\mathbf{w}(k)$, and calculates the error signal $e(k) = y(k) + n(k) - \hat{y}(k) = \tilde{y}(k) - \hat{y}(k)$, where $n(k)$ is the measurement noise, $y(k)$ is the output of the actual system, and $\hat{y}(k)$ is the estimated output. In order to estimate the channel impulse response, an adaptive algorithm minimizes the cost function defined by
$$\mathbf{w} = \arg\min_{\mathbf{w}} \; \frac{1}{2}\sum_{m=0}^{k} \lambda^{k-m} \left(e(m)\right)^2. \qquad (1)$$
From the gradient based minimization, Equation (1) becomes
$$\mathbf{R}(k)\,\mathbf{w}(k) = \mathbf{r}(k), \qquad (2)$$
where $\mathbf{R}(k) = \sum_{m=0}^{k} \lambda^{k-m}\,\mathbf{x}(m)\mathbf{x}^T(m)$ and $\mathbf{r}(k) = \sum_{m=0}^{k} \lambda^{k-m}\,\tilde{y}(m)\mathbf{x}(m)$. This is the normal equation for the least squares solution. In particular, $\mathbf{w}_o(k)$ is considered a sparse system when the number of nonzero coefficients $K$ is less than the system order $M$. In order to estimate the sparse system, most estimation algorithms exploit the non-zero coefficients in the system [11,12,13,14,15,16,17]. In [17], Eksioglu and Tanc proposed a fully recursive l1-regularized algorithm by minimizing the objective function shown in Equation (3).
$$J_k = \frac{1}{2}\,\varepsilon_k + \gamma_k \left\|\mathbf{w}\right\|_1, \qquad (3)$$
where $\varepsilon_k = \sum_{m=0}^{k} \lambda^{k-m}\left(e(m)\right)^2$. From the minimization of Equation (3), a modified normal equation is derived as shown in Equation (4).
$$\mathbf{R}(k)\,\mathbf{w}(k) = \mathbf{r}(k) - \gamma_k\,\mathbf{s}_{\|\mathbf{w}(k)\|_1} = \hat{\mathbf{p}}(k). \qquad (4)$$
When we solve Equation (4), we should select the regularization factor as shown in Equation (5).
$$\gamma_k = \frac{2\,\mathrm{tr}\!\left(\mathbf{R}^{-1}(k)\right)/M}{\left\|\mathbf{R}^{-1}(k)\,\mathbf{s}_{f(\mathbf{w}(k))}\right\|_2^2}\left[\left(f(\mathbf{w}(k)) - \rho\right) + \mathbf{s}_{f(\mathbf{w}(k))}^T\,\mathbf{R}^{-1}(k)\,\boldsymbol{\varepsilon}(k)\right], \qquad (5)$$
where $f(\mathbf{w}(k)) = \|\mathbf{w}(k)\|_1$ and the subgradient of $f(\mathbf{w})$ is $\mathbf{s}_{\|\mathbf{w}\|_1} = \mathrm{sgn}(\mathbf{w})$. In Equation (5), the regularization factor contains a parameter, $\rho$, which must be set beforehand. In [17], the parameter was set as $\rho = f(\mathbf{w}_{true}) = \|\mathbf{w}_{true}\|_1$, with $\mathbf{w}_{true}$ denoting the impulse response of the true channel; there was no further discussion of how to set $\rho$. However, it is not practical to assume that the true channel is known in advance.
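To make the computation concrete, Equation (5) can be sketched as follows. This is a minimal Python sketch: the name `regularization_factor` and its argument layout are ours, `R_inv` stands for $\mathbf{R}^{-1}(k)$, `eps` for $\boldsymbol{\varepsilon}(k)$, and the final clip to a non-negative value is an added safeguard, not part of Equation (5).

```python
import numpy as np

def regularization_factor(R_inv, w, eps, rho, M):
    """Regularization factor of Eq. (5) (sketch with hypothetical names)."""
    s = np.sign(w)                                  # subgradient of ||w||_1
    Rs = R_inv @ s
    scale = (2.0 * np.trace(R_inv) / M) / np.dot(Rs, Rs)
    gamma = scale * ((np.sum(np.abs(w)) - rho) + s @ (R_inv @ eps))
    # Clip to non-negative (an added safeguard, not shown in Eq. (5)).
    return max(gamma, 0.0)
```

For example, with $\mathbf{R}^{-1}(k) = \mathbf{I}$, a single-tap estimate, $\boldsymbol{\varepsilon}(k) = \mathbf{0}$, and $\rho = 0.5$, the factor reduces to $2\,(1 - 0.5) = 1$.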

3. Measure of Sparseness

In [20], the sparseness of a channel impulse response is measured by Equation (6).
$$\chi = \frac{L}{L - \sqrt{L}}\left(1 - \frac{\|\hat{\mathbf{w}}\|_1}{\sqrt{L}\,\|\hat{\mathbf{w}}\|_2}\right), \qquad (6)$$
where $\|\hat{\mathbf{w}}\|_p$ is the p-norm of $\hat{\mathbf{w}}$ and $L$ is the dimension of $\hat{\mathbf{w}}$. The range of $\chi$ is $0 \le \chi \le 1$, depending on the sparseness of $\hat{\mathbf{w}}$: as $\hat{\mathbf{w}}$ becomes sparser, the sparsity $\chi$ approaches 1, and as $\hat{\mathbf{w}}$ becomes denser, $\chi$ approaches 0. Note that $\chi$ is often small but nonzero even for a dense channel. For example, Figure 1 shows the relation between the value of $\chi$ and the percentage of nonzero components in $\hat{\mathbf{w}}$ with $L = 215$, considering all possible numbers of nonzero components in $\hat{\mathbf{w}}$.
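Equation (6) is straightforward to compute; the short Python sketch below (the function name `sparsity` is ours) illustrates the two extremes. A vector with a single nonzero tap attains $\chi = 1$, and an all-ones vector attains $\chi = 0$.

```python
import numpy as np

def sparsity(w):
    """Sparseness measure chi of Eq. (6): 0 (dense) up to 1 (maximally sparse)."""
    L = len(w)
    l1 = np.sum(np.abs(w))
    l2 = np.sqrt(np.sum(np.asarray(w, dtype=float) ** 2))
    return (L / (L - np.sqrt(L))) * (1.0 - l1 / (np.sqrt(L) * l2))

# One nonzero tap out of L = 215 is maximally sparse; all-ones is maximally dense.
w_sparse = np.zeros(215)
w_sparse[10] = 1.0
w_dense = np.ones(215)
print(sparsity(w_sparse))  # close to 1
print(sparsity(w_dense))   # close to 0
```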

4. New ρ Selection Method in the Sparsity Regularization Constant γ k

Section 2 showed that the regularization constant $\gamma_k$ in Equation (5) requires $\rho$ to be set as $\rho = \|\mathbf{w}_{true}\|_1$, the l1-norm of the true system impulse response. However, this setting is not practical because the true system is unknown, so a new method for selecting the constant is needed. This section proposes such a method.
For a practical constant selection, we can use the estimated vector $\hat{\mathbf{w}}$ instead of the true vector $\mathbf{w}_{true}$, because $\hat{\mathbf{w}}$, the l1-regularized solution, will be closer to the sparse true vector than the conventional RLS solution. As the iterations proceed, $\hat{\mathbf{w}}$ converges toward the true value; the conventional RLS solution also converges, but the l1-regularized solution stays closer to the sparse true value. Therefore, we can use the sparse estimate $\hat{\mathbf{w}}$ instead of $\mathbf{w}_{true}$ when setting $\rho$, and the uncertainty arising from this substitution is compensated through a TLS solution in the next section. When determining $\rho$ from the estimated $\hat{\mathbf{w}}$, we choose between the averaged $\rho$ and the current estimate $\|\hat{\mathbf{w}}\|_1$. Table 1 summarizes the $\rho$ selection steps.
The determination of the $\rho$ value in Table 1 proceeds as follows. In Step 1, the sparsity $\chi$ of the estimated $\hat{\mathbf{w}}$ is calculated; it expresses the sparseness of $\hat{\mathbf{w}}$ as a single number [23]. In Step 2, the l1-norm of the estimated $\hat{\mathbf{w}}$ is scaled, and the result is averaged with the previous $\rho$ value. The scaling factor approaches 1 as the sparsity $\chi$ approaches 1 and approaches $e^{-1} \approx 0.37$ as $\chi$ approaches 0; therefore, the scaling leaves the l1-norm essentially unchanged for a sparse $\hat{\mathbf{w}}$ and shrinks it for a dense $\hat{\mathbf{w}}$. In Step 3, the smaller of the averaged $\rho$ and the l1-norm of the estimated $\hat{\mathbf{w}}$ is selected as the new $\rho$ value. In this case, $\rho$ is replaced entirely if the l1-norm of the estimated $\hat{\mathbf{w}}$ is selected; otherwise, the previous trend is maintained. In Figure 1, the reference value 0.75 used in Step 3 corresponds to fewer than 16% of the impulse response taps being nonzero.
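The three steps of Table 1 can be sketched as one update function. This is a Python sketch: `update_rho` is a hypothetical name, and the estimate is assumed nonzero so the norms in Step 1 are well defined.

```python
import numpy as np

def update_rho(rho_prev, w_hat):
    """One pass through Steps 1-3 of Table 1 (sketch; w_hat must be nonzero)."""
    L = len(w_hat)
    l1 = np.sum(np.abs(w_hat))
    l2 = np.linalg.norm(w_hat)
    # Step 1: sparsity chi of Eq. (6)
    chi = (L / (L - np.sqrt(L))) * (1.0 - l1 / (np.sqrt(L) * l2))
    # Step 2: scale the l1-norm by exp(chi - 1) and average with the old rho
    rho = 0.99 * rho_prev + 0.01 * (np.exp(chi - 1.0) * l1)
    # Step 3: keep the smaller of the averaged rho and a fraction of ||w_hat||_1
    if chi > 0.75:
        return min(rho, 0.98 * l1)
    return min(rho, 0.999 * l1)
```

For a maximally sparse estimate ($\chi = 1$), the Step 3 branch caps $\rho$ at $0.98\,\|\hat{\mathbf{w}}\|_1$ regardless of the previous value.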

5. New Modeling for l1-RLS with Uncertainty in the Regularization Factor

If we set $\rho = \text{constant}$, the regularization factor becomes
$$\tilde{\gamma}_k = \frac{2\,\mathrm{tr}\!\left(\mathbf{R}^{-1}(k)\right)/M}{\left\|\mathbf{R}^{-1}(k)\,\mathbf{s}_{f(\mathbf{w}(k))}\right\|_2^2}\left[\left(f(\mathbf{w}(k)) - \text{constant}\right) + \mathbf{s}_{f(\mathbf{w}(k))}^T\,\mathbf{R}^{-1}(k)\,\boldsymbol{\varepsilon}(k)\right]$$
$$= \frac{2\,\mathrm{tr}\!\left(\mathbf{R}^{-1}(k)\right)/M}{\left\|\mathbf{R}^{-1}(k)\,\mathbf{s}_{f(\mathbf{w}(k))}\right\|_2^2}\left(f(\mathbf{w}(k)) - \|\mathbf{h}\|_1 + \|\mathbf{h}\|_1 - \text{constant}\right) + \frac{2\,\mathrm{tr}\!\left(\mathbf{R}^{-1}(k)\right)/M\;\mathbf{s}_{f(\mathbf{w}(k))}^T\,\mathbf{R}^{-1}(k)\,\boldsymbol{\varepsilon}(k)}{\left\|\mathbf{R}^{-1}(k)\,\mathbf{s}_{f(\mathbf{w}(k))}\right\|_2^2}, \qquad (7)$$
where $\mathbf{h}$ denotes the true system impulse response, so that $\|\mathbf{h}\|_1$ is the ideal value of $\rho$.
Then,
$$\tilde{\gamma}_k = \gamma_k + \frac{2\,\mathrm{tr}\!\left(\mathbf{R}^{-1}(k)\right)/M\,\left(\|\mathbf{h}\|_1 - \text{constant}\right)}{\left\|\mathbf{R}^{-1}(k)\,\mathbf{s}_{f(\mathbf{w}(k))}\right\|_2^2} = \gamma_k + \Delta\gamma. \qquad (8)$$
Using Equation (8), Equation (4) becomes
$$\mathbf{R}(k)\,\mathbf{w}(k) = \mathbf{r}(k) - \left(\gamma_k + \Delta\gamma\right)\mathbf{s}_{\|\mathbf{w}(k)\|_1}. \qquad (9)$$
The subgradient $\mathbf{s}_{\|\mathbf{w}\|_1} = \mathrm{sgn}(\mathbf{w})$ can be represented as
$$\mathbf{s}_{\|\mathbf{w}(k)\|_1} = \mathrm{diag}\!\left(\frac{1}{|w_i|}\right)\mathbf{w}(k). \qquad (10)$$
By applying Equation (10) to Equation (9),
By applying Equation (10) to Equation (9),
$$\left(\mathbf{R}(k) + \Delta\gamma\,\mathrm{diag}\!\left(\frac{1}{|w_i|}\right)\right)\mathbf{w}(k) = \mathbf{r}(k) - \gamma_k\,\mathbf{s}_{\|\mathbf{w}(k)\|_1}, \qquad (11)$$
where $w_i$ is the $i$-th element of $\mathbf{w}(k)$. This simplifies to
$$\left(\mathbf{R}(k) + \Delta\gamma\,\mathrm{diag}\!\left(\frac{1}{|w_i|}\right)\right)\mathbf{w}(k) = \hat{\mathbf{p}}(k). \qquad (12)$$
Equation (12) is very similar to the system model in Figure 2 that is contaminated by noise both in input and in output. Suppose that an example of the system in Figure 2 is represented as
$$\begin{bmatrix} x_k + n_{i,k} & \cdots & x_{k-N+1} + n_{i,k-N+1} \\ x_{k-1} + n_{i,k-1} & \cdots & x_{k-N} + n_{i,k-N} \\ \vdots & \ddots & \vdots \\ x_{k-N+1} + n_{i,k-N+1} & \cdots & x_{k-2N+2} + n_{i,k-2N+2} \end{bmatrix}\mathbf{w}(k) = \begin{bmatrix} y_k + n_{o,k} \\ y_{k-1} + n_{o,k-1} \\ \vdots \\ y_{k-N+1} + n_{o,k-N+1} \end{bmatrix}, \qquad (13)$$
where x k is x ( k ) , n i , k is n i ( k ) , and n o , k is n o ( k ) . Equation (13) is simplified as
$$\mathbf{A}\,\mathbf{w}(k) = \mathbf{b}. \qquad (14)$$
If we multiply Equation (14) by A H and average it, we get
$$E\!\left(\mathbf{A}^H\mathbf{A}\right)\mathbf{w}(k) = E\!\left(\mathbf{A}^H\mathbf{b}\right). \qquad (15)$$
We can rewrite Equation (15) as follows
$$\begin{bmatrix} r_{xx}(0) + \sigma_n^2 & r_{xx}(1) & \cdots & r_{xx}(N-1) \\ r_{xx}(1) & r_{xx}(0) + \sigma_n^2 & \cdots & r_{xx}(N-2) \\ \vdots & \vdots & \ddots & \vdots \\ r_{xx}(N-1) & r_{xx}(N-2) & \cdots & r_{xx}(0) + \sigma_n^2 \end{bmatrix}\mathbf{w}(k) = \begin{bmatrix} r_{xy}(0) \\ r_{xy}(1) \\ \vdots \\ r_{xy}(N-1) \end{bmatrix}. \qquad (16)$$
Then, it can be represented as
$$\left(\mathbf{R} + \sigma_n^2\,\mathbf{I}\right)\mathbf{w}(k) = \tilde{\mathbf{p}}(k). \qquad (17)$$
When we compare Equation (12) with Equation (17), the two system models have almost the same form. Therefore, the TLS method can feasibly be applied to Equation (12) [24,25,26,27,28,29,30]. We thus expect to obtain almost the same performance as l1-RLS with the true channel response if we apply the TLS method with the regularization factor using the new $\rho$ in Table 1. In the next section, we summarize the l1-RTLS (recursive total least squares) algorithm in [30].
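The bias structure of Equation (17) is easy to verify numerically: with additive white noise of variance $\sigma^2$ on the input, the sample normal-equation matrix of the noisy regressors approaches $\mathbf{R} + \sigma^2\mathbf{I}$. Below is a small Python check under an assumed white input (so that $\mathbf{R}$ is the identity); the dimensions and noise level are illustrative choices, not values from the paper.

```python
import numpy as np

# Numerical check of Eq. (17): input noise of variance sigma^2 inflates the
# diagonal of the sample normal-equation matrix by sigma^2.
rng = np.random.default_rng(0)
N, M, sigma = 200_000, 4, 0.5
x = rng.standard_normal(N)                              # white input, so R = I
X = np.lib.stride_tricks.sliding_window_view(x, M)      # clean regressor rows
Xn = X + sigma * rng.standard_normal(X.shape)           # noisy regressor rows
R_noisy = Xn.T @ Xn / len(Xn)
# The diagonal sits near r_xx(0) + sigma^2 = 1.25 instead of r_xx(0) = 1.
print(np.diag(R_noisy))
```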

6. Summary of l1-RTLS as the Solution of l1-RLS with Uncertainty in the Regularization Factor

Lim, one of the authors of this paper, has proposed a TLS solution for l1-RLS known as l1-RTLS [30]. In this section, we summarize l1-RTLS [30] as the solution of Equation (11).
The TLS system model assumes that both input and output are contaminated by additive noise as Figure 2. The output is given by
$$\tilde{y}(k) = \tilde{\mathbf{x}}^T(k)\,\mathbf{w}_o + n_o(k), \qquad (18)$$
where the output noise n o ( k ) is the Gaussian white noise with variance σ o 2 . The noisy input vector in the system is modeled by
$$\tilde{\mathbf{x}}(k) = \mathbf{x}(k) + \mathbf{n}_i(k) \in \mathbb{C}^{M\times 1}, \qquad (19)$$
where $\mathbf{n}_i(k) = [n_i(k), n_i(k-1), \ldots, n_i(k-M+1)]^T$ and the input noise $n_i(k)$ is Gaussian white noise with variance $\sigma_i^2$. For the TLS solution, we set the augmented data vector as
$$\bar{\mathbf{x}}(k) = \left[\tilde{\mathbf{x}}^T(k),\, \tilde{y}(k)\right]^T \in \mathbb{R}^{(M+1)\times 1}. \qquad (20)$$
The correlation matrix is represented as
$$\bar{\mathbf{R}} = \begin{bmatrix} \tilde{\mathbf{R}} & \mathbf{p} \\ \mathbf{p}^T & c \end{bmatrix}, \qquad (21)$$
where $\mathbf{p} = E\{\tilde{\mathbf{x}}(k)y(k)\}$, $c = E\{y(k)y(k)\}$, $\mathbf{R} = E\{\mathbf{x}(k)\mathbf{x}^T(k)\}$, and $\tilde{\mathbf{R}} = E\{\tilde{\mathbf{x}}(k)\tilde{\mathbf{x}}^T(k)\} = \mathbf{R} + \sigma_i^2\mathbf{I}$. In [27,28], the TLS problem reduces to finding the eigenvector associated with the smallest eigenvalue of $\bar{\mathbf{R}}$. Equation (22) is the typical cost function for finding that eigenvector.
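The batch form of this eigenvector characterization can be sketched on synthetic data. This is a Python sketch with assumed dimensions, system, and noise levels; equal input and output noise variances are used, as the classical TLS consistency argument requires.

```python
import numpy as np

# TLS solution as the eigenvector of the augmented correlation matrix R_bar
# associated with its smallest eigenvalue, on noisy-input/noisy-output data.
rng = np.random.default_rng(1)
M, N = 3, 100_000
sigma = 0.3                                     # equal input and output noise std
w_true = np.array([0.8, 0.0, -0.5])             # assumed example system
x = rng.standard_normal(N)
X = np.lib.stride_tricks.sliding_window_view(x, M)
y = X @ w_true
Xn = X + sigma * rng.standard_normal(X.shape)   # noisy input, cf. Eq. (19)
yn = y + sigma * rng.standard_normal(len(y))    # noisy output, cf. Eq. (18)
Z = np.column_stack([Xn, yn])                   # augmented data rows, cf. Eq. (20)
R_bar = Z.T @ Z / len(Z)                        # sample R_bar, cf. Eq. (21)
vals, vecs = np.linalg.eigh(R_bar)              # eigenvalues in ascending order
v = vecs[:, 0]                                  # eigenvector of smallest eigenvalue
w_tls = -v[:M] / v[M]                           # rescale so the last entry is -1
print(w_tls)                                    # close to w_true
```

The rescaling in the last step is sign-invariant, so the arbitrary sign of the eigenvector returned by `eigh` does not matter.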
$$J(k) = \frac{1}{2}\,\tilde{\mathbf{w}}^T(k)\,\bar{\mathbf{R}}(k)\,\tilde{\mathbf{w}}(k), \qquad (22)$$
where $\bar{\mathbf{R}}(k)$ is the sample correlation matrix at the $k$-th instant and $\tilde{\mathbf{w}}(k) = [\hat{\mathbf{w}}^T(k), -1]^T$, in which $\hat{\mathbf{w}}(k)$ is the estimate of the unknown system at the $k$-th instant. We modify the cost function by adding a penalty function in order to reflect prior knowledge about the sparsity of the true system.
$$J(k) = \frac{1}{2}\,\tilde{\mathbf{w}}^T(k)\,\bar{\mathbf{R}}(k)\,\tilde{\mathbf{w}}(k) + \lambda\left(\tilde{\mathbf{w}}^T(k)\,\tilde{\mathbf{w}}(k-1) - 1\right) + \gamma_k\, f(\tilde{\mathbf{w}}(k)), \qquad (23)$$
where $\lambda$ is the Lagrange multiplier and $\gamma_k$ is the regularization parameter as in [13]. We solve $\nabla_{\hat{\mathbf{w}}} J(k) = 0$ and $\partial J(k)/\partial\lambda = 0$ simultaneously. From $\nabla_{\hat{\mathbf{w}}} J(k) = 0$:
$$2\,\bar{\mathbf{R}}(k)\,\tilde{\mathbf{w}}(k) + \lambda\,\tilde{\mathbf{w}}(k-1) + 2\,\gamma_k\,\mathbf{s}_{f(\tilde{\mathbf{w}}(k))} = \mathbf{0}, \qquad (24)$$
and from $\partial J(k)/\partial\lambda = 0$:
$$\tilde{\mathbf{w}}^T(k)\,\tilde{\mathbf{w}}(k-1) = 1, \qquad (25)$$
where the subgradient of $f(\tilde{\mathbf{w}}) = \|\tilde{\mathbf{w}}\|_1$ is $\mathbf{s}_{\|\tilde{\mathbf{w}}\|_1} = \mathrm{sgn}(\tilde{\mathbf{w}})$. From Equation (24), we obtain
$$\tilde{\mathbf{w}}(k) = -\frac{\lambda}{2}\,\bar{\mathbf{R}}^{-1}(k)\,\tilde{\mathbf{w}}(k-1) - \gamma_k\,\bar{\mathbf{R}}^{-1}(k)\,\mathbf{s}_{f(\tilde{\mathbf{w}}(k))}. \qquad (26)$$
Substituting Equation (26) in Equation (25), we get
$$\left(-\frac{\lambda}{2}\,\bar{\mathbf{R}}^{-1}(k)\,\tilde{\mathbf{w}}(k-1) - \gamma_k\,\bar{\mathbf{R}}^{-1}(k)\,\mathbf{s}_{f(\tilde{\mathbf{w}}(k))}\right)^T \tilde{\mathbf{w}}(k-1) = 1, \qquad (27)$$
or
$$\lambda = -2\,\frac{1 + \gamma_k\,\mathbf{s}_{f(\tilde{\mathbf{w}}(k))}^T\,\bar{\mathbf{R}}^{-1}(k)\,\tilde{\mathbf{w}}(k-1)}{\tilde{\mathbf{w}}^T(k-1)\,\bar{\mathbf{R}}^{-1}(k)\,\tilde{\mathbf{w}}(k-1)}. \qquad (28)$$
Substituting Equation (28) for $\lambda$ in Equation (26) leads to
$$\tilde{\mathbf{w}}(k) = \frac{1 + \gamma_k\,\mathbf{s}_{f(\tilde{\mathbf{w}}(k))}^T\,\bar{\mathbf{R}}^{-1}(k)\,\tilde{\mathbf{w}}(k-1)}{\tilde{\mathbf{w}}^T(k-1)\,\bar{\mathbf{R}}^{-1}(k)\,\tilde{\mathbf{w}}(k-1)}\,\bar{\mathbf{R}}^{-1}(k)\,\tilde{\mathbf{w}}(k-1) - \gamma_k\,\bar{\mathbf{R}}^{-1}(k)\,\mathbf{s}_{f(\tilde{\mathbf{w}}(k))}. \qquad (29)$$
Equation (29) can be expressed in a simple form as
$$\tilde{\mathbf{w}}(k) = \alpha\,\bar{\mathbf{R}}^{-1}(k)\,\tilde{\mathbf{w}}(k-1) - \gamma_k\,\bar{\mathbf{R}}^{-1}(k)\,\mathbf{s}_{f(\tilde{\mathbf{w}}(k))}, \qquad (30)$$
where $\alpha = \dfrac{1 + \gamma_k\,\mathbf{s}_{f(\tilde{\mathbf{w}}(k))}^T\,\bar{\mathbf{R}}^{-1}(k)\,\tilde{\mathbf{w}}(k-1)}{\tilde{\mathbf{w}}^T(k-1)\,\bar{\mathbf{R}}^{-1}(k)\,\tilde{\mathbf{w}}(k-1)}$. Because asymptotically $\|\tilde{\mathbf{w}}(k)\| = 1$ as $k \to \infty$, Equation (29) can be approximated by the following two equations.
$$\tilde{\mathbf{w}}'(k) \approx \bar{\mathbf{R}}^{-1}(k)\,\tilde{\mathbf{w}}(k-1) - \gamma_k\left(\tilde{\mathbf{w}}^T(k-1)\,\bar{\mathbf{R}}^{-1}(k-1)\,\tilde{\mathbf{w}}(k-1)\right)\bar{\mathbf{R}}^{-1}(k)\,\mathbf{s}_{f(\tilde{\mathbf{w}}(k-1))}, \qquad (31)$$
$$\tilde{\mathbf{w}}(k) = \tilde{\mathbf{w}}'(k)/\left\|\tilde{\mathbf{w}}'(k)\right\|. \qquad (32)$$
Finally, we obtain the estimated parameters of the unknown system as
$$\hat{\mathbf{w}}(k) = -\tilde{\mathbf{w}}_{1:M}(k)/\tilde{\mathbf{w}}_{M+1}(k). \qquad (33)$$
For Equation (23), we can use the modified regularization factor $\gamma_k$ in [30]:
$$\gamma_k = \frac{2\,\mathrm{tr}\!\left(\bar{\mathbf{R}}^{-1}(k)\right)/M}{\left\|\bar{\mathbf{R}}^{-1}(k)\,\mathbf{s}_{f(\hat{\mathbf{w}}_{aug}(k))}\right\|_2^2}\left[\left(f(\hat{\mathbf{w}}_{aug}(k)) - \rho\right) + \mathbf{s}_{f(\hat{\mathbf{w}}_{aug}(k))}^T\,\bar{\mathbf{R}}^{-1}(k)\,\boldsymbol{\varepsilon}(k)\right], \qquad (34)$$
where $\hat{\mathbf{w}}_{aug}(k) = [\hat{\mathbf{w}}^T(k), -1]^T$, $\hat{\mathbf{w}}_{aug,RLS}(k) = [\hat{\mathbf{w}}_{RLS}^T(k), -1]^T$, $\boldsymbol{\varepsilon}(k) = \hat{\mathbf{w}}_{aug}(k) - \hat{\mathbf{w}}_{aug,RLS}(k)$, and $\hat{\mathbf{w}}_{RLS}(k)$ is the parameter estimated by recursive least squares (RLS). As $f(\hat{\mathbf{w}}) = \|\hat{\mathbf{w}}\|_1$, the subgradient of $f(\hat{\mathbf{w}}_{aug}(k))$ is
$$\mathbf{s}_{\|\hat{\mathbf{w}}_{aug}(k)\|_1} = \mathrm{sgn}\!\left(\hat{\mathbf{w}}_{aug}(k)\right). \qquad (35)$$
As mentioned in Section 4, we apply the new constant $\rho$ from Table 1 to the regularization factor $\gamma_k$ in Equation (34) instead of $\|\mathbf{w}_{true}\|_1$, where $\mathbf{w}_{true}$ is the true system impulse response.
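One iteration of the resulting recursion, Equations (31) and (32), can be sketched as follows. This is a Python sketch: `l1_rtls_step` is a hypothetical name, `P` tracks the inverse augmented correlation matrix via the matrix inversion lemma with forgetting factor λ, and, for simplicity, the current `P` is also used in the scalar of Equation (31), where the derivation uses the previous inverse.

```python
import numpy as np

def l1_rtls_step(P, w_tilde, x, y, gamma, lam=0.999):
    """One l1-RTLS iteration, Eqs. (31)-(32) (sketch).

    P approximates the inverse of R_bar(k) = lam * R_bar(k-1) + z z^T.
    """
    z = np.append(x, y)                              # augmented data vector, Eq. (20)
    Pz = P @ z
    P = (P - np.outer(Pz, Pz) / (lam + z @ Pz)) / lam  # matrix inversion lemma
    s = np.sign(w_tilde)                             # subgradient of ||w_tilde||_1
    Pw = P @ w_tilde
    w_new = Pw - gamma * (w_tilde @ Pw) * (P @ s)    # Eq. (31)
    w_new /= np.linalg.norm(w_new)                   # Eq. (32)
    return P, w_new

# After convergence, the system estimate follows Eq. (33):
#   w_hat = -w_new[:M] / w_new[M]
```

Because the update is a (regularized) power iteration on the inverse correlation matrix, the normalized vector converges toward the eigenvector associated with the smallest eigenvalue of the augmented correlation matrix.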

7. Simulation Results

This section evaluates the performance of the proposed algorithm in sparse channel estimation. In the first experiment, channel estimation performance is compared with other algorithms using randomly generated sparse channels, following the same scenario as the experiments in [17]. The true system vector w_true has 64 coefficients. To generate a sparse channel, we set the number of nonzero coefficients, S, among the 64 coefficients and randomly position the nonzero coefficients. The coefficient values are drawn from an N(0, 1/S) distribution, where N(·) denotes the normal distribution. In the simulation, we estimate the channel impulse response with the proposed algorithms, namely l1-RLS using the ρ in Table 1 and l1-RTLS using the ρ in Table 1. For comparison, we also estimate the channel impulse response with l1-RLS using the true channel response; in addition, we execute the regular RLS algorithm in an oracle setting (oracle-RLS) in which the positions of the true nonzero system parameters are assumed known. For the estimated channels, we calculate the mean square deviation (MSD), defined as MSD = E(‖ŵ − w_true‖²), where ŵ is the estimated channel response and w_true is the true channel response. For the performance evaluation, we simulate the algorithms on sparse channels with S = 4, 8, 16, and 32.
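The channel generation and the MSD metric of this setup can be sketched as follows (a Python sketch; the names are ours, and the MSD here averages over the coefficients of a single run rather than over an ensemble):

```python
import numpy as np

# Sparse test channel of the first experiment: 64 taps, S nonzero coefficients
# at random positions with values drawn from N(0, 1/S).
rng = np.random.default_rng(3)
M, S = 64, 8
w_true = np.zeros(M)
support = rng.choice(M, size=S, replace=False)   # random nonzero positions
w_true[support] = rng.normal(0.0, np.sqrt(1.0 / S), size=S)

def msd_db(w_hat, w_ref):
    """MSD in dB; here averaged over the coefficients of a single run."""
    return 10.0 * np.log10(np.mean(np.abs(w_hat - w_ref) ** 2))
```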
Figure 3 illustrates the MSD curves. For S = 4, Figure 3a shows that the estimation performance of l1-RTLS using the regularization factor with the ρ in Table 1 is almost the same as that of l1-RLS using the regularization factor with the true channel impulse response. However, the performance of l1-RLS using the regularization factor with the ρ in Table 1 gradually degrades and shows a kind of uncertainty accumulation effect. For the other values of S, we observe the same trend in the MSD curves. Therefore, we can confirm that the new regularization factor selection method and the new model for l1-RLS estimate the sparse channel as well as l1-RLS using the regularization factor with the true channel impulse response. In all simulation scenarios, the oracle-RLS algorithm produces the lowest MSD, as expected.
Table 2 summarizes the steady-state MSD values for S varying from 4 to 32. The results show that the proposed l1-RTLS with the new ρ is comparable to l1-RLS with the true channel.
In the second experiment, we compare channel estimation performance using a room impulse response. The room size is 7.49 × 6.24 × 3.88 m, the sound source is located at (1.53, 0.96, 1.12) m, and the receiver at (1.81, 5.17, 0.71) m. T60 is set to 100 ms and 400 ms, and the room impulse response is generated using the program in [31]. We focus on the direct-path and early reflection parts of the RIR because these parts have a sparse property. This is the part estimated in AEC applications [32], and it is also related to localization and clarity in room acoustics [33,34,35]. Comparing the impulse response (IR) generated with T60 = 100 ms to the channel with 65 coefficients used in the first experiment, it is equivalent to S = 4; in the same manner, the IR generated with T60 = 400 ms is equivalent to S = 10.
Table 3 summarizes the steady-state MSD values. The results also show the same trend as Table 2. In RIR estimation, the proposed l1-RTLS with the new ρ is also comparable to l1-RLS with the true channel.

8. Conclusions

In this paper, we have proposed a regularization factor selection method for recursive adaptive estimation that needs no prior knowledge of the true channel impulse response. We have also reformulated the recursive estimation algorithm in an l1-RTLS form, which is robust to uncertainty in the regularization factor without a priori knowledge of the true channel impulse response. Simulations show that the proposed regularization factor and the l1-RTLS algorithm provide performance comparable to l1-RLS with knowledge of the true channel impulse response.

Author Contributions

Conceptualization, J.L.; Methodology, J.L.; Validation, J.L. and S.L.; Formal analysis, J.L.; Investigation, J.L. and S.L.; Writing—original draft preparation, J.L.; Writing—review and editing, S.L.; Visualization, J.L.; Project administration, J.L.; Funding acquisition, J.L. and S.L.

Funding

This research received no external funding.

Acknowledgments

This research was supported by Agency for Defense Development (ADD) in Korea (UD160015DD).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Benichoux, A.; Simon, L.; Vincent, E.; Gribonval, R. Convex regularizations for the simultaneous recording of room impulse responses. IEEE Trans. Signal Process. 2014, 62, 1976–1986. [Google Scholar] [CrossRef]
  2. Merimaa, J.; Pulkki, V. Spatial impulse response I: Analysis and synthesis. J. Audio Eng. Soc. 2005, 53, 1115–1127. [Google Scholar]
  3. Dokmanic, I.; Parhizkar, R.; Walther, A.; Lu, Y.M.; Vetterli, M. Acoustic echoes reveal room shape. Proc. Natl. Acad. Sci. USA 2013, 110, 12186–12191. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Remaggi, L.; Jackson, P.; Coleman, P.; Wang, W. Acoustic reflector localization: Novel image source reversion and direct localization methods. IEEE Trans. Audio Speech Lang. Process. 2017, 25, 296–309. [Google Scholar] [CrossRef]
  5. Baba, Y.; Walther, A.; Habets, E. 3D room geometry inference based on room impulse response stacks. IEEE Trans. Audio Speech Lang. Process. 2018, 26, 857–872. [Google Scholar] [CrossRef]
  6. Goetze, S.; Xiong, F.; Jungmann, J.O.; Kallinger, M.; Kammeyer, K.; Mertins, A. System Identification of Equalized Room Impulse Responses by an Acoustic Echo Canceller using Proportionate LMS Algorithms. In Proceedings of the 130th AES Convention, London, UK, 13 May 2011; pp. 1–13. [Google Scholar]
  7. Yu, M.; Ma, W.; Xin, J.; Osher, S. Multi-channel l1 regularized convex speech enhancement model and fast computation by the split Bregman method. IEEE Trans. Audio Speech Lang. Process. 2012, 20, 661–675. [Google Scholar] [CrossRef]
  8. Lin, Y.; Chen, J.; Kim, Y.; Lee, D.D. Blind channel identification for speech dereverberation using l1-norm sparse learning. In Proceedings of the Twenty-First Annual Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 3–6 December 2007; pp. 921–928. [Google Scholar]
  9. Naylor, P.A.; Gaubitch, N.D. Speech Dereverberation; Springer: London, UK, 2010; pp. 219–270. [Google Scholar]
  10. Duttweiler, D.L. Proportionate normalized least-mean-squares adaptation in echo cancelers. IEEE Trans. Speech Audio Process. 2000, 8, 508–518. [Google Scholar] [CrossRef]
  11. Gu, Y.; Jin, J.; Mei, S. Norm Constraint LMS Algorithm for Sparse System Identification. IEEE Signal Process. Lett. 2009, 16, 774–777. [Google Scholar]
  12. Chen, Y.; Gu, Y.; Hero, A.O. Sparse LMS for system identification. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei, Taiwan, 19–24 April 2009; pp. 3125–3128. [Google Scholar]
  13. He, X.; Song, R.; Zhu, W.P. Optimal pilot pattern design for compressed sensing-based sparse channel estimation in OFDM systems. Circuits Syst. Signal Process. 2012, 31, 1379–1395. [Google Scholar] [CrossRef]
  14. Babadi, B.; Kalouptsidis, N.; Tarokh, V. SPARLS: The sparse RLS algorithm. IEEE Trans. Signal Process. 2010, 58, 4013–4025. [Google Scholar] [CrossRef]
  15. Angelosante, D.; Bazerque, J.A.; Giannakis, G.B. Online adaptive estimation of sparse signals: Where RLS meets the l1-norm. IEEE Trans. Signal Process. 2010, 58, 3436–3447. [Google Scholar] [CrossRef]
  16. Eksioglu, E.M. Sparsity regularised recursive least squares adaptive filtering. IET Signal Process. 2011, 5, 480–487. [Google Scholar] [CrossRef]
  17. Eksioglu, E.M.; Tanc, A.L. RLS algorithm with convex regularization. IEEE Signal Process. Lett. 2011, 18, 470–473. [Google Scholar] [CrossRef]
  18. Sun, D.; Liu, L.; Zhang, Y. Recursive regularisation parameter selection for sparse RLS algorithm. Electron. Lett. 2018, 54, 286–287. [Google Scholar] [CrossRef]
  19. Chen, Y.; Gui, G. Recursive least square-based fast sparse multipath channel estimation. Int. J. Commun. Syst. 2017, 30, e3278. [Google Scholar] [CrossRef]
  20. Kalouptsidis, N.; Mileounis, G.; Babadi, B.; Tarokh, V. Adaptive algorithms for sparse system identification. Signal Process. 2011, 91, 1910–1919. [Google Scholar] [CrossRef]
  21. Candes, E.J.; Wakin, M.; Boyd, S. Enhancing sparsity by reweighted l1 minimization. J. Fourier Anal. Appl. 2008, 14, 877–905. [Google Scholar] [CrossRef]
  22. Lamare, R.C.; Sampaio-Neto, R. Adaptive reduced-rank processing based on joint and iterative interpolation, decimation, and filtering. IEEE Trans. Signal Process. 2009, 57, 2503–2514. [Google Scholar] [CrossRef]
  23. Petraglia, M.R.; Haddad, D.B. New adaptive algorithms for identification of sparse impulse responses—Analysis and comparisons. In Proceedings of the Wireless Communication Systems, York, UK, 19–22 September 2010; pp. 384–388. [Google Scholar]
  24. Golub, G.H.; Van Loan, C.F. An analysis of the total least squares problem. SIAM J. Numer. Anal. 1980, 17, 883–893. [Google Scholar] [CrossRef]
  25. Dunne, B.E.; Williamson, G.A. Stable simplified gradient algorithms for total least squares filtering. In Proceedings of the 34th Annual Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 29 October–1 November 2000; pp. 1762–1766. [Google Scholar]
  26. Feng, D.Z.; Bao, Z.; Jiao, L.C. Total least mean squares algorithm. IEEE Trans. Signal Process. 1998, 46, 2122–2130. [Google Scholar] [CrossRef]
  27. Davila, C.E. An efficient recursive total least squares algorithm for FIR adaptive filtering. IEEE Trans. Signal Process. 1994, 42, 268–280. [Google Scholar] [CrossRef]
  28. Soijer, M.W. Sequential computation of total least-squares parameter estimates. J. Guid. Control Dyn. 2004, 27, 501–503. [Google Scholar] [CrossRef]
  29. Choi, N.; Lim, J.S.; Sung, K.M. An efficient recursive total least squares algorithm for training multilayer feedforward neural networks. Lect. Notes Comput. Sci. 2005, 3496, 558–565. [Google Scholar]
  30. Lim, J.S.; Pang, H.S. l1-regularized recursive total least squares based sparse system identification for the error-in-variables. SpringerPlus 2016, 5, 1460–1469. [Google Scholar] [CrossRef] [PubMed]
  31. Lehmann, E. Image-Source Method: MATLAB Code Implementation. Available online: http://www.eric-lehmann.com/ (accessed on 17 December 2018).
  32. Gay, S.L.; Benesty, J. Acoustic Signal Processing for Telecommunication; Kluwer Academic Publisher: Norwell, MA, USA, 2000; pp. 6–7. [Google Scholar]
  33. Swanson, D.C. Signal Processing for Intelligent Sensor Systems with MATLAB, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2012; p. 70. [Google Scholar]
  34. Kuttruff, H. Room Acoustics, 6th ed.; CRC Press: Boca Raton, FL, USA, 2017; p. 168. [Google Scholar]
  35. Bai, H.; Richard, G.; Daudet, L. Modeling early reflections of room impulse responses using a radiance transfer method. In Proceedings of the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, New Paltz, NY, USA, 20–23 October 2013; pp. 1–4. [Google Scholar]
Figure 1. Sparsity (χ) vs. the percentage of nonzero coefficients in the channel impulse response.
Figure 2. The model of a noisy input and noisy output system.
Figure 3. Steady-state MSD for S = 4, 8, 16, and 32 when applying the new ρ method to the regularization factor (-o-: l1-RLS with the true channel response, -×-: l1-RLS with the new ρ method, -*-: proposed l1-RTLS with the new ρ method, - -: oracle-RLS).
Table 1. ρ selection method in the sparsity regularization constant γk.

Step 1. Sparsity: $\chi = \frac{L}{L-\sqrt{L}}\left(1 - \frac{\|\hat{\mathbf{w}}\|_1}{\sqrt{L}\,\|\hat{\mathbf{w}}\|_2}\right)$ [20], where $L$ is the length of the impulse response.
Step 2. $\rho(k) = 0.99\,\rho(k-1) + 0.01\left(e^{\chi-1}\,\|\hat{\mathbf{w}}\|_1\right)$
Step 3. $\rho(k) = \begin{cases}\min\left(\rho(k),\,0.98\,\|\hat{\mathbf{w}}\|_1\right), & \text{if } \chi > 0.75\\ \min\left(\rho(k),\,0.999\,\|\hat{\mathbf{w}}\|_1\right), & \text{otherwise}\end{cases}$
Table 2. MSD (mean square deviation) comparison.

Sparsity (S)  Algorithm                                MSD
4             l1-RLS with the true channel             −40.6 dB
              l1-RLS with the new ρ method             −37.8 dB
              proposed l1-RTLS with the new ρ method   −38.5 dB
              Oracle-RLS                               −50.4 dB
8             l1-RLS with the true channel             −39.5 dB
              l1-RLS with the new ρ method             −28.4 dB
              proposed l1-RTLS with the new ρ method   −38.5 dB
              Oracle-RLS                               −46.9 dB
16            l1-RLS with the true channel             −38.4 dB
              l1-RLS with the new ρ method             −18.2 dB
              proposed l1-RTLS with the new ρ method   −37.6 dB
              Oracle-RLS                               −43.6 dB
32            l1-RLS with the true channel             −37.6 dB
              l1-RLS with the new ρ method             −9.1 dB
              proposed l1-RTLS with the new ρ method   −37.3 dB
              Oracle-RLS                               −40.6 dB
Table 3. MSD (mean square deviation) comparison in sparse RIR estimations.

Reverberation Time (T60)  Algorithm                                MSD
100 ms                    l1-RLS with the true channel             −38.5 dB
                          l1-RLS with the new ρ method             −34.7 dB
                          proposed l1-RTLS with the new ρ method   −35.4 dB
                          Oracle-RLS                               −45.3 dB
400 ms                    l1-RLS with the true channel             −32.1 dB
                          l1-RLS with the new ρ method             −20.9 dB
                          proposed l1-RTLS with the new ρ method   −30.1 dB
                          Oracle-RLS                               −36.0 dB

Citation: Lim, J.; Lee, S. Regularization Factor Selection Method for l1-Regularized RLS and Its Modification against Uncertainty in the Regularization Factor. Appl. Sci. 2019, 9, 202. https://doi.org/10.3390/app9010202
