Article

A Modified Recursive Regularization Factor Calculation for Sparse RLS Algorithm with l1-Norm

1 Department of Electrical Engineering, College of Electronics and Information Engineering, Sejong University, Gwangjin-gu, Seoul 05006, Korea
2 Department of Defense Systems Engineering, College of Engineering, Sejong University, Gwangjin-gu, Seoul 05006, Korea
3 School of Electronics Engineering, School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University, Daegu 41566, Korea
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(13), 1580; https://doi.org/10.3390/math9131580
Submission received: 14 June 2021 / Revised: 1 July 2021 / Accepted: 3 July 2021 / Published: 5 July 2021
(This article belongs to the Special Issue Advances in Computational and Applied Mathematics)

Abstract: In this paper, we propose a new calculation method for the regularization factor in sparse recursive least squares (SRLS) with an $l_1$-norm penalty. The proposed regularization factor requires no prior knowledge of the true system impulse response, and it reduces the computational complexity of the factor calculation by about half. In the simulations, we use the Mean Square Deviation (MSD) to evaluate the performance of SRLS with the proposed regularization factor. The simulation results demonstrate that SRLS using the proposed regularization factor differs by less than 2 dB in MSD from SRLS using the conventional regularization factor, which requires the true system impulse response. Therefore, the proposed method performs very similarly to the existing method at about half the computational complexity.

1. Introduction

A sparse channel is a channel whose impulse response, modeled by a tapped delay line, contains only a few non-zero taps. Such channels arise in terrestrial TV broadcasting [1], wide-band radio communication [2], underwater acoustic channels [3], and elsewhere [4]. There has been much recent work on adaptive estimation of sparse channels; various LMS-type algorithms [5,6], such as sparse LMF [7,8] and sparse LMS/F [9,10], have been proposed for sparse channel estimation. The RLS algorithm converges faster than LMS-type algorithms. Hence, RLS-based sparse adaptive filtering is considered one of the most promising fast approaches in many system estimation applications, such as channel estimation. Several algorithms are based on sparse RLS [11,12,13,14], as well as on Total Least Squares (TLS) [15,16]. Many sparse RLS algorithms are algorithmically comparable to the plain RLS algorithm; however, the update equations in most sparse RLS (SRLS) algorithms are not intrinsically recursive, as they are in the plain RLS algorithm. Eksioglu and Tanc [13] proposed a fully recursive SRLS algorithm comparable to the plain RLS algorithm. SRLS algorithms handle sparsity with the $l_1$-norm; therefore, it is essential to select the regularization factor for the $l_1$-norm properly, and many selection methods have been developed. The authors in [13] also proposed a regularization factor calculation method for the SRLS algorithm, and similar recursive selection methods were used in [15,16,17,18,19]. However, the algorithms in [15,16,17,18] are not practical: the regularization factors in [15,16,17] assume that the true system impulse response is known in advance, and [18] sets part of the true system response to an arbitrary constant. The selection method in [19] needs no true system impulse response, but its regularization factor is updated recursively, so errors in the update are likely to propagate and accumulate. In [20], Lim proposed a regularization factor based on an estimate of the sparsity of the estimated system; although it required no true system information, it added error to the system, and Lim applied it only to TLS-based system modeling. In [21], the $l_1$-IWF (iterative Wiener filter), a kind of steepest-descent method, was proposed; its regularization factor requires no a priori knowledge of the true system response, but it was not shown whether this factor converges to the optimal regularization factor.
In this paper, the major contribution is a new regularization factor for the SRLS algorithm in [13], together with a proof that it converges to a scaled version of the optimal regularization factor. The proposed regularization factor requires no a priori knowledge of the true system response. The minor contribution is that the proposed regularization factor requires less computational complexity than the one in [13].
The remainder of the paper is organized as follows. In Section 2, we reformulate the sparse RLS algorithm. In Section 3, a new regularization factor is proposed; there we show that the regularization factor in [13] requires a priori knowledge of the true system impulse response, and we analyze the computational complexity of the proposed factor. The simulation conditions and results are described in Section 4, and a discussion of the results follows in Section 5. We conclude in Section 6.

2. Problem Formulation

In this section, we reformulate the SRLS algorithm in [13]. Consider a sparse weight vector $\mathbf{w}_o \in \mathbb{R}^N$, which represents the channel by a delay line with $N$ taps. By sparsity, the number of significant coefficients in $\mathbf{w}_o$, denoted $S$, is much smaller than the total dimension (that is, $S \ll N$). The goal is to estimate the sparse vector $\mathbf{w}_o$ from the input signal vector $\mathbf{x}(n) \in \mathbb{R}^N$ and the received signal $y(n)$, which is assumed to be generated by the linear system (1):
$$ y(n) = \mathbf{w}_o^T \mathbf{x}(n) + \eta(n), \tag{1} $$
where $\eta(n)$ is the additive noise. Consider the following standard RLS optimization problem, subject to a sparsity constraint:
$$ \underset{\hat{\mathbf{w}}(n)}{\text{minimize}}\ \ \frac{1}{2}\,\zeta\big(\hat{\mathbf{w}}(n)\big) = \frac{1}{2}\sum_{m=0}^{n} \lambda^{n-m}\, e(m)^2, \quad \text{s.t.}\ \ \|\hat{\mathbf{w}}(n)\|_1 \le c, \tag{2} $$
where $e(m) = y(m) - \hat{\mathbf{w}}(n)^T \mathbf{x}(m)$, $\hat{\mathbf{w}}(n) = [\hat{w}_1(n), \ldots, \hat{w}_N(n)]^T$, $\mathbf{x}(m) = [x(m), x(m-1), \ldots, x(m-N+1)]^T$, $\lambda$ is the forgetting factor, and $\|\hat{\mathbf{w}}(n)\|_1 = \sum_{k=1}^{N} |\hat{w}_k(n)|$. Following the reformulation in [22], (2) becomes the following:
$$ J\big(\hat{\mathbf{w}}(n), \gamma(n)\big) = \frac{1}{2}\,\zeta\big(\hat{\mathbf{w}}(n)\big) + \gamma(n)\big(\|\hat{\mathbf{w}}(n)\|_1 - c\big), \tag{3} $$
where $\gamma(n)$ is a real-valued Lagrange multiplier. We can find the optimal vector by minimizing the regularized cost function (3). This cost function is convex but nondifferentiable; therefore, subgradient analysis is used instead of the ordinary gradient. Denoting a subgradient of $f$ at $\hat{\mathbf{w}}$ by $\nabla^{s} f(\hat{\mathbf{w}})$, the subgradient of $J(\hat{\mathbf{w}}(n), \gamma(n))$ with respect to $\hat{\mathbf{w}}(n)$ is as follows:
$$ \nabla^{s} J\big(\hat{\mathbf{w}}(n), \gamma(n)\big) = \frac{1}{2}\,\nabla \zeta\big(\hat{\mathbf{w}}(n)\big) + \gamma(n)\, \nabla^{s} \|\hat{\mathbf{w}}(n)\|_1, \tag{4} $$
where $\nabla^{s} \|\hat{\mathbf{w}}(n)\|_1 = \operatorname{sgn}\big(\hat{\mathbf{w}}(n)\big)$ [13]. Here, $\operatorname{sgn}(\cdot)$ denotes the element-wise sign function. To obtain the optimal $\hat{\mathbf{w}}(n)$ for $J(\hat{\mathbf{w}}(n), \gamma(n))$, we set (4) to zero at the optimal point. After evaluating the gradient $\nabla \zeta(\hat{\mathbf{w}}(n))$, (4) can be written as normal equations in matrix form as follows [13]:
$$ \boldsymbol{\Phi}(n)\, \hat{\mathbf{w}}(n) = \mathbf{r}(n) - \gamma(n)\, \nabla^{s} \|\hat{\mathbf{w}}(n)\|_1, \tag{5} $$
where $\boldsymbol{\Phi}(n) = \sum_{m=0}^{n} \lambda^{n-m} \mathbf{x}(m)\mathbf{x}(m)^T = \lambda \boldsymbol{\Phi}(n-1) + \mathbf{x}(n)\mathbf{x}(n)^T$ and $\mathbf{r}(n) = \sum_{m=0}^{n} \lambda^{n-m} y(m)\mathbf{x}(m) = \lambda \mathbf{r}(n-1) + y(n)\mathbf{x}(n)$. With $\mathbf{P}(n) = \boldsymbol{\Phi}^{-1}(n) = \big[\lambda \boldsymbol{\Phi}(n-1) + \mathbf{x}(n)\mathbf{x}(n)^T\big]^{-1}$ and using the matrix inversion lemma, we obtain the following:
$$ \mathbf{P}(n) = \lambda^{-1}\big[\mathbf{P}(n-1) - \mathbf{k}(n)\,\mathbf{x}(n)^T \mathbf{P}(n-1)\big], \tag{6} $$
where $\mathbf{k}(n) = \mathbf{P}(n-1)\mathbf{x}(n) \big/ \big(\lambda + \mathbf{x}(n)^T \mathbf{P}(n-1)\mathbf{x}(n)\big)$ is the gain vector. Assuming that $\gamma(n-1)$ and $\nabla^{s}\|\hat{\mathbf{w}}(n-1)\|_1$ do not change considerably over a single time step, the update equation for $\hat{\mathbf{w}}(n)$ can be written approximately as follows [13]:
$$ \begin{aligned} \hat{\mathbf{w}}(n) &\approx \hat{\mathbf{w}}(n-1) + \mathbf{k}(n)\,\xi(n) - (1-\lambda)\,\gamma(n-1)\,\mathbf{P}(n)\,\nabla^{s}\|\hat{\mathbf{w}}(n-1)\|_1 \\ &= \hat{\mathbf{w}}(n-1) + \mathbf{k}(n)\,\xi(n) + (1-\lambda)\,\hat{\gamma}(n-1)\,\mathbf{P}(n)\,\nabla^{s}\|\hat{\mathbf{w}}(n-1)\|_1, \end{aligned} \tag{7} $$
where $\hat{\gamma}(n) = -\gamma(n)$ and $\xi(n) = y(n) - \hat{\mathbf{w}}(n-1)^T \mathbf{x}(n)$.
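As a quick numerical sanity check (our illustration, not part of the paper), the following NumPy sketch verifies that the lemma-based recursion (6), with the gain vector defined above, tracks $\boldsymbol{\Phi}(n)^{-1}$ exactly up to floating-point precision; all constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, lam, steps = 4, 0.99, 200

# direct recursion for Phi(n) versus the lemma-based recursion (6) for P(n)
Phi = np.eye(N) * 1e-2                  # Phi(0) = eta * I, so P(0) = eta^{-1} * I
P = np.linalg.inv(Phi)
for _ in range(steps):
    x = rng.normal(size=N)
    Phi = lam * Phi + np.outer(x, x)    # Phi(n) = lam * Phi(n-1) + x(n) x(n)^T
    k = P @ x / (lam + x @ (P @ x))     # gain vector k(n)
    P = (P - np.outer(k, x @ P)) / lam  # recursion (6)

print(np.max(np.abs(P - np.linalg.inv(Phi))))   # ~ machine precision
```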
Algorithm 1 summarizes the conventional RLS algorithm, and Algorithm 2 summarizes the $l_1$-norm sparse RLS algorithm.
Algorithm 1 Conventional RLS algorithm
1: Initialize: $\lambda$, $\mathbf{x}(0)$, $y(0)$, $\mathbf{w}_{RLS}(0) = \mathbf{0}$, $\eta$, $\mathbf{P}(0) = \eta^{-1}\mathbf{I}$
2: for $n = 1, 2, \ldots$ do
3:   $\mathbf{k}(n) = \mathbf{P}(n-1)\mathbf{x}(n) \big/ \big(\lambda + \mathbf{x}(n)^T \mathbf{P}(n-1)\mathbf{x}(n)\big)$
4:   $e(n) = y(n) - \mathbf{w}_{RLS}(n-1)^T \mathbf{x}(n)$
5:   $\mathbf{P}(n) = \lambda^{-1}\big[\mathbf{P}(n-1) - \mathbf{k}(n)\mathbf{x}(n)^T \mathbf{P}(n-1)\big]$
6:   $\mathbf{w}_{RLS}(n) = \mathbf{w}_{RLS}(n-1) + \mathbf{k}(n)\, e(n)$
7: end for
Algorithm 2 $l_1$-norm sparse RLS algorithm
1: Initialize: $\lambda$, $\mathbf{x}(0)$, $y(0)$, $\hat{\mathbf{w}}(0) = \mathbf{0}$, $\eta$, $\mathbf{P}(0) = \eta^{-1}\mathbf{I}$
2: for $n = 1, 2, \ldots$ do
3:   $\mathbf{k}(n) = \mathbf{P}(n-1)\mathbf{x}(n) \big/ \big(\lambda + \mathbf{x}(n)^T \mathbf{P}(n-1)\mathbf{x}(n)\big)$
4:   $\xi(n) = y(n) - \hat{\mathbf{w}}(n-1)^T \mathbf{x}(n)$
5:   $\mathbf{P}(n) = \lambda^{-1}\big[\mathbf{P}(n-1) - \mathbf{k}(n)\mathbf{x}(n)^T \mathbf{P}(n-1)\big]$
6:   calculate $\hat{\gamma}(n-1)$
7:   $\hat{\mathbf{w}}(n) = \hat{\mathbf{w}}(n-1) + \mathbf{k}(n)\,\xi(n) + (1-\lambda)\,\hat{\gamma}(n-1)\,\mathbf{P}(n)\,\nabla^{s}\|\hat{\mathbf{w}}(n-1)\|_1$
8: end for
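To make the recursion concrete, the following is a minimal NumPy sketch of Algorithm 2 (our sketch, not the authors' code). It assumes the regressors $\mathbf{x}(n)$ are supplied as rows of a matrix, and it takes the regularization factor computation of line 6 as a pluggable function `gamma_fn`, since Section 3 compares several choices for it; all names are illustrative.

```python
import numpy as np

def sparse_rls(X, y, gamma_fn, lam=0.999, eta=1e-2):
    """l1-norm sparse RLS (Algorithm 2).

    X: (n_samples, N) array whose n-th row is the regressor x(n).
    y: (n_samples,) received signal.
    gamma_fn: computes the regularization factor gamma_hat(n-1)."""
    n_samples, N = X.shape
    w = np.zeros(N)                      # w_hat(0) = 0
    P = np.eye(N) / eta                  # P(0) = eta^{-1} I
    w_hist = np.zeros((n_samples, N))
    for n in range(n_samples):
        x = X[n]
        Px = P @ x
        k = Px / (lam + x @ Px)          # gain vector k(n), line 3
        xi = y[n] - w @ x                # a priori error xi(n), line 4
        P = (P - np.outer(k, x @ P)) / lam   # Riccati update, line 5
        g = gamma_fn(w, k, xi, P, lam)   # regularization factor, line 6
        s = np.sign(w)                   # subgradient of ||w_hat(n-1)||_1
        w = w + k * xi + (1.0 - lam) * g * (P @ s)   # update (7), line 7
        w_hist[n] = w
    return w, w_hist
```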

3. Proposed Recursive Regularization Factor for Sparse RLS Algorithm

In this section, we derive the regularization factor $\hat{\gamma}(n-1)$ that enforces $\|\hat{\mathbf{w}}(n)\|_1 = c$; that is, the $l_1$-norm of $\hat{\mathbf{w}}(n)$ is preserved over all time steps $n$. This property yields the time invariance of $\|\hat{\mathbf{w}}(n)\|_1$ [23]:
$$ \frac{d\|\hat{\mathbf{w}}(n)\|_1}{dn} = \left(\frac{\partial \|\hat{\mathbf{w}}(n)\|_1}{\partial \hat{\mathbf{w}}(n)}\right)^{T} \frac{d\hat{\mathbf{w}}(n)}{dn} = \nabla^{s}\|\hat{\mathbf{w}}(n)\|_1^{T}\, \frac{\hat{\mathbf{w}}(n) - \hat{\mathbf{w}}(n-T_s)}{T_s} \approx \nabla^{s}\|\hat{\mathbf{w}}(n-1)\|_1^{T}\, \frac{\hat{\mathbf{w}}(n) - \hat{\mathbf{w}}(n-T_s)}{T_s} = 0. \tag{8} $$
Referring to (7) and normalizing the sample time $T_s$ to 1 [24], (8) becomes the following:
$$ \nabla^{s}\|\hat{\mathbf{w}}(n-1)\|_1^{T} \Big[\mathbf{k}(n)\,\xi(n) + (1-\lambda)\,\hat{\gamma}(n-1)\,\mathbf{P}(n)\,\nabla^{s}\|\hat{\mathbf{w}}(n-1)\|_1\Big] = 0. \tag{9} $$
Hence, solving (9) for $\hat{\gamma}(n-1)$ gives the following:
$$ \hat{\gamma}(n-1) = -\,\frac{\nabla^{s}\|\hat{\mathbf{w}}(n-1)\|_1^{T}\, \mathbf{k}(n)\,\xi(n)}{(1-\lambda)\, \nabla^{s}\|\hat{\mathbf{w}}(n-1)\|_1^{T}\, \mathbf{P}(n)\, \nabla^{s}\|\hat{\mathbf{w}}(n-1)\|_1}. \tag{10} $$
Eksioglu derived the following upper bound $\bar{\gamma}(n)$ on the regularization parameter $\hat{\gamma}(n)$ in [13]:
$$ \bar{\gamma}(n) = \frac{2\, \nabla^{s}\|\hat{\mathbf{w}}(n)\|_1^{T}\, \mathbf{P}(n)\, \tilde{\boldsymbol{\epsilon}}(n)}{\big\|\mathbf{P}(n)\, \nabla^{s}\|\hat{\mathbf{w}}(n)\|_1\big\|_2^2}, \tag{11} $$
where $\tilde{\boldsymbol{\epsilon}}(n) = \tilde{\mathbf{w}}(n) - \mathbf{w}_o$ and $\tilde{\mathbf{w}}(n)$ is the solution of the conventional non-regularized normal equation, $\tilde{\mathbf{w}}(n) = \mathbf{P}(n)\mathbf{r}(n)$ [25]. In (11), the true system impulse response $\mathbf{w}_o$ must be known in advance to form $\tilde{\boldsymbol{\epsilon}}(n) = \tilde{\mathbf{w}}(n) - \mathbf{w}_o$.
If we assume that $\mathbf{x}(n)$ is a white noise process, its exponentially weighted autocorrelation matrix $\boldsymbol{\Phi}(n)$ asymptotically approaches $\sigma_x^2/(1-\lambda)\,\mathbf{I}$, so that $\mathbf{P}(n) = \boldsymbol{\Phi}(n)^{-1} \approx \sigma_x^{-2}(1-\lambda)\,\mathbf{I}$ [25]. Therefore, (11) asymptotically becomes the following:
$$ \bar{\gamma}(n) \approx \frac{2\sigma_x^2}{1-\lambda}\, \frac{\nabla^{s}\|\hat{\mathbf{w}}(n)\|_1^{T}\, \tilde{\boldsymbol{\epsilon}}(n)}{\big\|\nabla^{s}\|\hat{\mathbf{w}}(n)\|_1\big\|_2^2}. \tag{12} $$
Eksioglu also showed that, if the regularization parameter satisfies $0 \le \hat{\gamma}(n) \le \bar{\gamma}(n)$, the following inequality holds:
$$ \big\|\hat{\mathbf{w}}(n) - \mathbf{w}_o\big\|_2^2 \le \big\|\tilde{\mathbf{w}}(n) - \mathbf{w}_o\big\|_2^2. \tag{13} $$
This means that the estimation error with regularization is no larger than the estimation error without regularization.
To examine the asymptotic behavior of the proposed regularization factor in (10), we again assume that $\mathbf{x}(n)$ is a white noise process, so that $\boldsymbol{\Phi}(n)$ asymptotically approaches $\sigma_x^2/(1-\lambda)\,\mathbf{I}$ and $\mathbf{P}(n) \approx \sigma_x^{-2}(1-\lambda)\,\mathbf{I}$ [25]. We use the simplified gain vector $\mathbf{k}(n)$ from [25],
$$ \mathbf{k}(n) = \mathbf{P}(n)\,\mathbf{x}(n), \tag{14} $$
and approximate the error $\xi(n)$ at high SNR as
$$ \xi(n) = y(n) - \mathbf{x}^T(n)\,\hat{\mathbf{w}}(n-1) \approx \mathbf{x}^T(n)\,\mathbf{w}_o - \mathbf{x}^T(n)\,\hat{\mathbf{w}}(n-1) = -\mathbf{x}^T(n)\big(\hat{\mathbf{w}}(n-1) - \mathbf{w}_o\big) \approx -\mathbf{x}^T(n)\big(\tilde{\mathbf{w}}(n-1) - \mathbf{w}_o\big). \tag{15} $$
Substituting (14) and (15) into (10), the proposed regularization factor can be rewritten as follows:
$$ \hat{\gamma}(n-1) = \frac{\nabla^{s}\|\hat{\mathbf{w}}(n-1)\|_1^{T}\, \mathbf{P}(n)\,\mathbf{x}(n)\,\mathbf{x}^T(n)\, \big(\hat{\mathbf{w}}(n-1) - \mathbf{w}_o\big)}{(1-\lambda)\, \nabla^{s}\|\hat{\mathbf{w}}(n-1)\|_1^{T}\, \mathbf{P}(n)\, \nabla^{s}\|\hat{\mathbf{w}}(n-1)\|_1}. \tag{16} $$
In (16), $\mathbf{P}(n)\,\mathbf{x}(n)\,\mathbf{x}^T(n)$ can be asymptotically approximated by its expected value, given in (17) [26]:
$$ E\big[\mathbf{P}(n)\,\mathbf{x}(n)\,\mathbf{x}^T(n)\big] \approx \frac{1-\lambda}{1-\lambda^{n+1}}\,\mathbf{I} \approx (1-\lambda)\,\mathbf{I}. \tag{17} $$
Substituting (17) and $\mathbf{P}(n) \approx \sigma_x^{-2}(1-\lambda)\,\mathbf{I}$ into (16), the variable regularization factor in (16) becomes the following:
$$ \hat{\gamma}(n-1) \approx \frac{\sigma_x^2}{1-\lambda}\, \frac{\nabla^{s}\|\hat{\mathbf{w}}(n-1)\|_1^{T}\, \tilde{\boldsymbol{\epsilon}}(n-1)}{\big\|\nabla^{s}\|\hat{\mathbf{w}}(n-1)\|_1\big\|_2^2} = \frac{1}{2}\,\bar{\gamma}(n-1), \tag{18} $$
$$ \hat{\gamma}(n-1) = \frac{1}{2}\,\bar{\gamma}(n-1). \tag{19} $$
According to (19), the proposed regularization parameter converges approximately to a scaled version of the optimal regularization parameter in [13]. The newly derived regularization parameter $\hat{\gamma}(n)$ therefore satisfies $0 \le \hat{\gamma}(n) \le \bar{\gamma}(n)$, so it can be used in place of the optimal regularization parameter. In addition, the proposed regularization parameter needs no true system parameter $\mathbf{w}_o$.
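As a sanity check on the two asymptotic approximations used above, the following short NumPy sketch (our illustration, not from the paper) runs the $\mathbf{P}(n)$ recursion on white Gaussian input, drawing i.i.d. regressor vectors as a simplification of the tapped delay line, and compares the time averages of $\mathbf{P}(n)\mathbf{x}(n)\mathbf{x}^T(n)$ and $\mathbf{P}(n)$ against $(1-\lambda)\mathbf{I}$ and $\sigma_x^{-2}(1-\lambda)\mathbf{I}$; all constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, lam, sigma_x, steps = 8, 0.995, 1.0, 20000

P = np.eye(N) * 1e2                  # P(0) = eta^{-1} I with eta = 0.01
acc = np.zeros((N, N))
count = 0
for n in range(steps):
    x = rng.normal(0.0, sigma_x, N)  # i.i.d. white regressor (simplified)
    Px = P @ x
    k = Px / (lam + x @ Px)
    P = (P - np.outer(k, x @ P)) / lam
    if n > steps // 2:               # average after transients die out
        acc += P @ np.outer(x, x)
        count += 1

print(np.diag(acc / count).mean())   # ~ (1 - lam) = 0.005, cf. (17)
print(np.diag(P).mean())             # ~ (1 - lam)/sigma_x**2 = 0.005
```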
In terms of computational complexity, the difference between the SRLS in [13] and the proposed algorithm lies only in the computation of the regularization factor. Therefore, we compare the computational complexity of the proposed regularization factor in (10) with that of the regularization factor in (11) from [13]. In practice, the regularization factor used in [13] is (20), an approximation of (11):
$$ \begin{aligned} \hat{\gamma}(n-1) &= 2\,\frac{\operatorname{tr}\!\big(\mathbf{P}(n-1)\big)}{N}\, \frac{f\big(\hat{\mathbf{w}}(n-1)\big) - f(\mathbf{w}_o)}{\big\|\mathbf{P}(n-1)\,\nabla^{s} f\big(\hat{\mathbf{w}}(n-1)\big)\big\|_2^2} + \frac{\nabla^{s} f\big(\hat{\mathbf{w}}(n-1)\big)^{T}\,\mathbf{P}(n-1)\,\big(\hat{\mathbf{w}}(n) - \hat{\mathbf{w}}_{RLS}(n-1)\big)}{\big\|\mathbf{P}(n-1)\,\nabla^{s} f\big(\hat{\mathbf{w}}(n-1)\big)\big\|_2^2} \\ &= 2\,\frac{\operatorname{tr}\!\big(\mathbf{P}(n-1)\big)}{N}\, \frac{\|\hat{\mathbf{w}}(n-1)\|_1 - \|\mathbf{w}_o\|_1}{\big\|\mathbf{P}(n-1)\,\operatorname{sgn}\!\big(\hat{\mathbf{w}}(n-1)\big)\big\|_2^2} + \frac{\operatorname{sgn}\!\big(\hat{\mathbf{w}}(n-1)\big)^{T}\,\mathbf{P}(n-1)\,\big(\hat{\mathbf{w}}(n) - \hat{\mathbf{w}}_{RLS}(n-1)\big)}{\big\|\mathbf{P}(n-1)\,\operatorname{sgn}\!\big(\hat{\mathbf{w}}(n-1)\big)\big\|_2^2}, \end{aligned} \tag{20} $$
where $\mathbf{w}_o$ is the true system response, $f\big(\hat{\mathbf{w}}(n)\big) = \|\hat{\mathbf{w}}(n)\|_1$, $\nabla^{s}\|\hat{\mathbf{w}}\|_1 = \operatorname{sgn}(\hat{\mathbf{w}})$, and $\hat{\mathbf{w}}_{RLS}(n)$ is the solution of the conventional RLS algorithm [13]. Applying $\nabla^{s}\|\hat{\mathbf{w}}\|_1 = \operatorname{sgn}(\hat{\mathbf{w}})$ to the proposed regularization factor (10) gives the following:
$$ \begin{aligned} \hat{\gamma}(n-1) &= -\,\frac{\nabla^{s} f\big(\hat{\mathbf{w}}(n-1)\big)^{T}\, \mathbf{k}(n)\,\xi(n)}{(1-\lambda)\, \nabla^{s} f\big(\hat{\mathbf{w}}(n-1)\big)^{T}\, \mathbf{P}(n)\, \nabla^{s} f\big(\hat{\mathbf{w}}(n-1)\big)} \\ &= -\,\frac{\operatorname{sgn}\!\big(\hat{\mathbf{w}}(n-1)\big)^{T}\, \mathbf{k}(n)\,\xi(n)}{(1-\lambda)\, \operatorname{sgn}\!\big(\hat{\mathbf{w}}(n-1)\big)^{T}\, \mathbf{P}(n)\, \operatorname{sgn}\!\big(\hat{\mathbf{w}}(n-1)\big)}. \end{aligned} \tag{21} $$
The regularization factor is calculated in line 6 of Algorithm 2. When counting its complexity, it should be taken into account that the regularization factor reuses quantities already computed before line 6, such as $\mathbf{k}(n)$, $\xi(n)$, and $\mathbf{P}(n)$. Taking this into account, (20) requires $2N + 3$ multiplications, whereas (21) requires $N + 1$ multiplications; for example, for $N = 64$, this is 131 versus 65 multiplications, roughly half.
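As an illustration, (21) reduces to a few NumPy operations. This sketch (the names are ours, not the authors') plugs into the `sparse_rls` function given after Algorithm 2:

```python
import numpy as np

def gamma_proposed(w_prev, k, xi, P, lam):
    """Proposed regularization factor (21); needs no knowledge of w_o.

    w_prev is w_hat(n-1); k, xi, and P are k(n), xi(n), and P(n)
    from the current step of Algorithm 2."""
    s = np.sign(w_prev)                  # subgradient of ||w_hat(n-1)||_1
    denom = (1.0 - lam) * (s @ (P @ s))
    if denom == 0.0:                     # all-zero estimate at start-up
        return 0.0
    return -(s @ k) * xi / denom
```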

4. Simulation Results

For the simulations in this section, we use the same experimental conditions as in [13] (code is available at https://web.itu.edu.tr/eksioglue/pubs.htm, accessed on 16 March 2021). We consider two system parameter vectors $\mathbf{w}_o$, with $N = 64$ and $N = 256$ taps, respectively. Out of the $N$ coefficients, only $S$ are non-zero. We draw the values of the non-zero coefficients from an $N(0, 1/S)$ distribution and place them at random positions.
In the simulations, we show the system estimation results using the proposed regularization factor, which requires no information about the true system response. We compare the $l_1$-RLS using the true system response [13], the $l_1$-RLS in [19], the $l_1$-IWF in [21], and the $l_1$-RLS using the proposed regularization factor selection method, together with the conventional RLS. We simulate these algorithms on sparse impulse responses with S = 2, 4, 8, and 16 for the performance evaluation. Figure 1 shows the Mean Square Deviation (MSD) comparison for order $N = 64$ at SNR = 20 dB, where $\mathrm{MSD}\big(\hat{\mathbf{w}}(n)\big) = E\|\mathbf{w}_o - \hat{\mathbf{w}}(n)\|^2$. Figure 2 shows the corresponding results for order $N = 256$ at SNR = 20 dB. In addition, the MSD results at SNR = 20 dB, 10 dB, and 0 dB are summarized in Table 1.
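For reference, a minimal end-to-end run of this setup might look as follows. This is our sketch, not the paper's code; it reuses the `sparse_rls` and `gamma_proposed` sketches above, the constants are illustrative, and a faithful reproduction of the MSD curves would average the squared deviation over many independent runs.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(1)
N, S, SNR_dB, steps, lam = 64, 4, 20.0, 3000, 0.999

# sparse system: S non-zero taps ~ N(0, 1/S) at random positions
w_o = np.zeros(N)
w_o[rng.choice(N, size=S, replace=False)] = rng.normal(0.0, np.sqrt(1.0 / S), S)

# white unit-variance input through a tapped delay line
x_seq = rng.normal(0.0, 1.0, steps + N - 1)
X = sliding_window_view(x_seq, N)[:, ::-1]   # row n is [x(n), ..., x(n-N+1)]

# additive noise scaled to the target SNR (signal power = ||w_o||^2)
noise_var = np.sum(w_o**2) / 10.0 ** (SNR_dB / 10.0)
y = X @ w_o + rng.normal(0.0, np.sqrt(noise_var), steps)

w, w_hist = sparse_rls(X, y, gamma_proposed, lam=lam)
msd_db = 10.0 * np.log10(np.sum((w_hist - w_o) ** 2, axis=1))
print(f"final squared deviation: {msd_db[-1]:.1f} dB (single run)")
```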

5. Discussion

The MSD comparison results in Figure 1 show that the estimation performance of the $l_1$-RLS using the proposed regularization factor is almost the same as that of the $l_1$-RLS using the regularization factor computed with the true system impulse response. The $l_1$-IWF also performs almost as well as the $l_1$-RLS with the true system information, consistent with the results in [21]. The proposed algorithm even outperforms the $l_1$-RLS in [19]. Predictably, the conventional RLS has the worst MSD in all cases. Figure 1 thus confirms that, despite having no prior knowledge of the true system response, the proposed regularization factor is comparable to the conventional regularization factor computed with the true system impulse response.
Figure 2 shows the MSD results for a longer system, $N = 256$. The results are very similar to those in Figure 1, confirming that the proposed regularization factor calculation method works well regardless of the system dimension.
Table 1 compares sparse RLS using the proposed regularization factor calculation method with the other algorithms at various SNRs. The performance of the $l_1$-IWF is similar to that of the proposed algorithm, but the proposed algorithm is better at low SNR. The results in Table 1 show that, although the SRLS with the proposed regularization factor does not use the impulse response of the true system, its MSD differs by less than 2 dB from that of the SRLS whose regularization factor uses the actual impulse response of the target system. Therefore, the performance of the two SRLS variants is very similar. In addition, as mentioned in Section 3, it is also remarkable that the computational complexity of the proposed regularization factor is about half that of the conventional one.

6. Conclusions

In this paper, we proposed a new calculation method for the regularization factor in $l_1$-RLS that requires no prior knowledge of the true system response. We also showed that the proposed regularization factor converges to a scaled version of the optimal regularization factor and can therefore be used in place of it. The simulation results confirmed that the proposed regularization factor behaves almost the same as the conventional regularization factor computed with the true system response.

Author Contributions

Conceptualization, J.L.; methodology, J.L.; validation, J.L., K.L. and S.L.; formal analysis, J.L.; investigation, J.L., K.L. and S.L.; writing—original draft preparation, J.L.; writing—review and editing, K.L. and S.L.; visualization, J.L.; project administration, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This research was supported by the Agency for Defense Development (ADD) in Korea (UD190005DD).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schreiber, W.F. Advanced television systems for terrestrial broadcasting: Some problems and some proposed solutions. Proc. IEEE 1995, 83, 958–981. [Google Scholar] [CrossRef]
  2. Ariyavisitakul, S.; Sollenberger, N.R.; Greenstein, L.J. Tap selectable decision-feedback equalization. IEEE Trans. Commun. 1997, 45, 1497–1500. [Google Scholar] [CrossRef]
  3. Li, W.; Preisig, J.C. Estimation of rapidly time-varying sparse channels. IEEE J. Ocean. Eng. 2007, 32, 927–939. [Google Scholar] [CrossRef]
  4. Bruckstein, A.M.; Donoho, D.L.; Elad, M. From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. 2009, 51, 34–81. [Google Scholar] [CrossRef] [Green Version]
  5. Chen, Y.; Gu, Y.; Hero, A. Sparse LMS for system identification. In Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2009), Taipei, Taiwan, 19–24 April 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 3125–3128. [Google Scholar]
  6. Taheri, O.; Vorobyov, S. Sparse channel estimation with lp-norm and reweighted l1-norm penalized least mean squares. In Proceedings of the 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2011), Prague, Czech Republic, 22–27 May 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 2864–2867. [Google Scholar]
  7. Gui, G.; Xu, L.; Adachi, F. RZA-NLMF algorithm-based adaptive sparse sensing for realizing compressive sensing. EURASIP J. Adv. Signal Process. 2014, 2014, 125–134. [Google Scholar] [CrossRef] [Green Version]
  8. Amresh, K.; Ikhlaq, H.; Bhim, S. Double-stage three-phase grid-integrated solar PV system with fast zero attracting normalized least mean fourth based adaptive control. IEEE Trans. Ind. Electron. 2018, 65, 3921–3931. [Google Scholar]
  9. Gui, G.; Abolfazl, M.; Adachi, F. Sparse LMS/F algorithms with application to adaptive system identification. Wirel. Commun. Mob. Comput. 2015, 15, 1649–1658. [Google Scholar] [CrossRef]
  10. Gui, G.; Xu, L.; Adachi, F. Extra gain: Improved sparse channel estimation using reweighted l1-norm penalized LMS/F algorithm. In Proceedings of the 2014 IEEE/CIC International Conference on Communications in China (ICCC 2014), Shanghai, China, 13–15 October 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 13–15. [Google Scholar]
  11. Babadi, B.; Kalouptsidis, N.; Tarokh, V. SPARLS: The sparse RLS algorithm. IEEE Trans. Signal Process. 2010, 58, 4013–4025. [Google Scholar] [CrossRef] [Green Version]
  12. Angelosante, D.; Bazerque, J.; Giannakis, G. Online adaptive estimation of sparse signals: Where RLS meets the l1-norm. IEEE Trans. Signal Process. 2010, 58, 3436–3477. [Google Scholar] [CrossRef]
  13. Eksioglu, E. RLS algorithm with convex regularization. IEEE Signal Process. Lett. 2011, 18, 470–473. [Google Scholar] [CrossRef]
  14. Zakharov, Y.; Nascimento, V. DCD-RLS adaptive filters with penalties for sparse identification. IEEE Trans. Signal Process. 2013, 61, 3198–3213. [Google Scholar] [CrossRef]
  15. Lim, J.; Pang, H. Mixed norm regularized recursive total least squares for group sparse system identification. Int. J. Adapt. Control Signal Process. 2016, 30, 664–673. [Google Scholar] [CrossRef]
  16. Lim, J.; Pang, H. l1 regularized recursive total least squares based sparse system identification for the error-in-variables. SpringerPlus 2016, 5, 1460. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Chen, Y.; Gui, G. Recursive least square-based fast sparse multipath channel estimation. Int. J. Commun. Syst. 2017, 30, e3278. [Google Scholar] [CrossRef]
  18. Yang, J.; Xu, Y.; Rong, H.; Du, S.; Chen, B. Sparse recursive least mean p-power extreme learning machine for regression. IEEE Access 2018, 6, 16022–16034. [Google Scholar] [CrossRef]
  19. Yang, C.; Qiao, J.; Ahmad, Z.; Nie, K.; Wang, L. Online sequential echo state network with sparse RLS algorithm for time series prediction. Neural Netw. 2019, 118, 32–42. [Google Scholar] [CrossRef] [PubMed]
  20. Lim, J.; Lee, S. Regularization factor selection method for l1-Regularized RLS and its modification against uncertainty in the regularization factor. Appl. Sci. 2019, 9, 202. [Google Scholar] [CrossRef] [Green Version]
  21. Lim, J. l1-norm iterative Wiener filter for sparse channel estimation. Circuits Syst. Signal Process. CSSP 2020, 39, 6386–6393. [Google Scholar] [CrossRef]
  22. Stankovic, L. Digital Signal Processing with Selected Topics: Adaptive Systems, Time-Frequency Analysis, Sparse Signal Processing; CreateSpace Independent Publishing Platform: Charleston, SC, USA, 2015; p. 766. [Google Scholar]
  23. Khalid, S.; Abrar, S. Blind adaptive algorithm for sparse channel equalization using projections onto lp-ball. Electron. Lett. 2015, 51, 1422–1424. [Google Scholar] [CrossRef]
  24. Oppenheim, A.; Schafer, R. Digital Signal Processing; Prentice-Hall: Hoboken, NJ, USA, 1975; p. 204. [Google Scholar]
  25. Farhang-Boroujeny, B. Adaptive Filters: Theory and Applications; John Wiley & Sons: Hoboken, NJ, USA, 1999; pp. 419–427. [Google Scholar]
  26. Diniz, P. Adaptive Filtering Algorithms and Practical Implementation, 3rd ed.; Springer: New York, NY, USA, 2008; p. 208. [Google Scholar]
Figure 1. MSD comparison for N = 64 with S = 2, 4, 8, and 16 when applying the proposed regularization factor to the $l_1$-RLS: (a) MSD at S = 2; (b) MSD at S = 4; (c) MSD at S = 8; (d) MSD at S = 16. (-▹-: $l_1$-RLS using the proposed regularization factor without the true system impulse response; -×-: $l_1$-RLS using the conventional regularization factor with the true system impulse response; -∘-: $l_1$-RLS from [19]; -⋄-: conventional RLS without considering sparsity; -▿-: $l_1$-IWF from [21].)
Figure 2. MSD comparison for N = 256 with S = 2, 4, 8, and 16 when applying the proposed regularization factor to the $l_1$-RLS: (a) MSD at S = 2; (b) MSD at S = 4; (c) MSD at S = 8; (d) MSD at S = 16. (-▹-: $l_1$-RLS using the proposed regularization factor without the true system impulse response; -×-: $l_1$-RLS using the conventional regularization factor with the true system impulse response; -∘-: $l_1$-RLS from [19]; -⋄-: conventional RLS without considering sparsity; -▿-: $l_1$-IWF from [21].)
Table 1. MSD comparison (dB) at various SNRs.

| Channel Length | Non-Zero Coefficients | Algorithm | 20 dB | 10 dB | 0 dB |
|---|---|---|---|---|---|
| N = 64 | S = 4 | proposed method | −30.0 | −20.2 | −10.2 |
| | | $l_1$-RLS with true impulse response | −30.8 | −20.6 | −11.0 |
| | | $l_1$-RLS from [19] | −28.7 | −18.7 | −8.7 |
| | | $l_1$-IWF from [21] | −29.5 | −19.7 | −9.0 |
| | | conventional RLS | −27.7 | −17.5 | −7.8 |
| | S = 16 | proposed method | −28.4 | −18.6 | −9.1 |
| | | $l_1$-RLS with true impulse response | −28.5 | −18.5 | −9.3 |
| | | $l_1$-RLS from [19] | −28.5 | −18.4 | −8.6 |
| | | $l_1$-IWF from [21] | −29.0 | −18.8 | −9.2 |
| | | conventional RLS | −27.6 | −17.7 | −7.8 |
| N = 256 | S = 4 | proposed method | −24.4 | −14.3 | −4.2 |
| | | $l_1$-RLS with true impulse response | −25.3 | −15.5 | −5.5 |
| | | $l_1$-RLS from [19] | −22.7 | −12.7 | −2.6 |
| | | $l_1$-IWF from [21] | −25.2 | −15.2 | −3.6 |
| | | conventional RLS | −21.2 | −11.2 | −1.4 |
| | S = 16 | proposed method | −24.2 | −14.3 | −4.4 |
| | | $l_1$-RLS with true impulse response | −24.6 | −14.6 | −5.0 |
| | | $l_1$-RLS from [19] | −22.7 | −12.5 | −2.6 |
| | | $l_1$-IWF from [21] | −24.8 | −14.8 | −3.4 |
| | | conventional RLS | −21.1 | −11.1 | −1.4 |
