Article

Group-Constrained Maximum Correntropy Criterion Algorithms for Estimating Sparse Mix-Noised Channels

1 College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China
2 National Space Science Center, Chinese Academy of Sciences, Beijing 100190, China
3 Department of Electronics, Valahia University of Targoviste, Targoviste 130082, Romania
4 College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
* Author to whom correspondence should be addressed.
Entropy 2017, 19(8), 432; https://doi.org/10.3390/e19080432
Submission received: 6 July 2017 / Revised: 16 August 2017 / Accepted: 18 August 2017 / Published: 22 August 2017

Abstract
A group-constrained maximum correntropy criterion (GC-MCC) algorithm is proposed on the basis of the compressive sensing (CS) concept and zero attracting (ZA) techniques, and its estimation behavior is verified over sparse multi-path channels. The proposed algorithm is implemented by exerting different norm penalties on the two grouped channel coefficients to improve the channel estimation performance in a mixed noise environment. As a result, a zero attraction term is obtained from the expected $l_0$- and $l_1$-norm penalty techniques. Furthermore, a reweighting factor is adopted and incorporated into the zero-attraction term of the GC-MCC algorithm, which is then denoted as the reweighted GC-MCC (RGC-MCC) algorithm, to enhance the estimation performance. Both the GC-MCC and RGC-MCC algorithms are developed to fully exploit the inherent sparseness properties of sparse multi-path channels by means of the expected zero-attraction terms in their iterations. The channel estimation behaviors are discussed and analyzed over sparse channels in mixed Gaussian noise environments. The computer simulation results show that the estimated steady-state error is smaller and the convergence is faster than those of the previously reported MCC and sparse MCC algorithms.

1. Introduction

With the rapid rise of various wireless technologies, wireless transmission has been widely developed in fields such as mobile communications and satellite communication systems [1,2]. However, the signal might be distorted by diffraction, refraction, reflection or deviation caused by obstacles such as buildings and mountains, resulting in transmission delays and other adverse effects. In wireless communication channels, frequency-selective fading leads to delays, which is also known as the multi-path effect. In fact, the multi-path channel is usually sparse [3,4,5], which means that most of the channel impulse response (CIR) coefficients are small while only a few of them are large in magnitude [6]. Since the CIR is sparse, many channel estimation algorithms have been presented to exploit this characteristic and improve the communication quality [2,7]. It is known that adaptive filtering (AF) algorithms can be used for implementing channel estimation, and various AF algorithms have thus been reported for this purpose. Among them, the most typical is the least mean square (LMS) algorithm, which was invented by B. Widrow. The LMS algorithm has been extensively investigated in channel estimation and noise cancellation owing to its simple implementation, high stability and fast convergence speed [4,8]. However, its performance is not satisfactory for sparse channel estimation with a low signal-to-noise ratio (SNR).
Recently, the compressive sampling (CS) concept has been introduced into AF algorithms to handle sparse signals [9,10]. After that, Y. Chen et al. put forward the zero attracting LMS (ZA-LMS) and reweighted ZA-LMS (RZA-LMS) algorithms [11], which are implemented by integrating the $l_1$-norm and the reweighted $l_1$-norm, respectively, into the LMS's cost function. These two algorithms achieve a lower steady-state error and a faster convergence speed than the basic LMS algorithm for handling sparse signals, owing to their constructed zero attractors. Moreover, $l_0$-norm and $l_p$-norm penalties have also been introduced into the LMS's cost function to improve the performance of the ZA- and RZA-LMS algorithms in the sparse signal processing area [12,13,14,15,16]. All of these norm-constrained LMS algorithms can effectively exploit the sparse characteristics of naturally sparse channels. However, they share a common drawback, namely their sensitivity to input signal scaling (ISS) and noise interference. In order to reduce the effects of the ISS, several improved AF algorithms have been presented that use high-order error criteria or mixed error norms, such as the least mean fourth (LMF) and least mean squares-fourth (LMS/F) algorithms [17,18,19,20]. Similarly, their sparse forms have also been developed based on the above-mentioned norm penalties [17,21,22,23,24,25,26,27]. However, those AF algorithms and their sparse forms are not good enough for dealing with sparse channels under non-Gaussian or mixed noise environments.
In recent years, information-theoretic quantities have been used for implementing cost functions in adaptive systems. Effective entropy-based AF algorithms include the maximum correntropy criterion (MCC) and the minimum error entropy (MEE) [28,29,30,31,32]. In [28], it is shown that the MEE is more complex than the MCC algorithm in terms of computational overhead. Therefore, the MCC algorithm has been extensively developed for non-Gaussian environments [29,31,32]. Furthermore, the MCC has a low complexity, nearly the same as that of LMS-like algorithms. However, the performance of the MCC algorithm may be degraded for sparse signal processing. In order to enhance the MCC algorithm for handling sparse signals and sparse system identification, $l_1$-norm and reweighted $l_1$-norm constraints have been exerted on the channel coefficient vector and integrated into the MCC's cost function. Similar to the ZA-LMS and RZA-LMS algorithms, the zero attracting MCC (ZA-MCC) and reweighted ZA-MCC (RZA-MCC) algorithms [33] were obtained within the zero attracting framework. Then, the normalized MCC (NMCC) algorithm was presented [34,35] by referring to the normalized least mean square (NLMS) algorithm. Recently, W. Ma proposed a correntropy-induced metric (CIM) penalized MCC algorithm in [33], and Y. Li presented a soft parameter function (SPF) constrained MCC algorithm [34]. The CIM and SPF are also $l_0$-norm approximations used to form sparse MCC algorithms, and the SPF-MCC is given in the Appendix. In these improved MCC algorithms, the $l_0$-norm, CIM and SPF penalties are incorporated into the MCC's cost function to devise the desired zero attractors. However, the zero attractor of the ZA-MCC algorithm penalizes all the channel coefficients uniformly, while the $l_0$-norm-approximation MCC algorithms increase the computational complexity.
In this paper, a group-constrained maximum correntropy criterion (GC-MCC) algorithm based on the CS concept and zero attracting (ZA) techniques is proposed in order to fully exploit the sparseness characteristics of multi-path channels. The GC-MCC algorithm is derived by incorporating a non-uniform norm into the MCC's cost function, where the non-uniform norm is split into two groups according to the mean of the magnitudes of the channel coefficients. For the group of large channel coefficients, the $l_0$-norm penalty is used, while the $l_1$-norm penalty is applied to the group of small channel coefficients. Then, a reweighting technique is utilized in the GC-MCC algorithm to develop the reweighted GC-MCC (RGC-MCC) algorithm. The performance of the GC- and RGC-MCC algorithms is evaluated and discussed for estimating mix-noised sparse channels. The GC- and RGC-MCC algorithms achieve superior performance in both steady-state error and convergence for sparse channel estimation at different sparsity levels. Simulation results show that the GC- and RGC-MCC algorithms can effectively enhance sparse channel estimation by using the proposed group constraints, providing smaller steady-state errors and faster convergence in a mixed noise environment.
The structure of this paper is as follows. In Section 2, the MCC and its related sparse algorithms are briefly reviewed. Section 3 introduces the GC-MCC and RGC-MCC algorithms and derives them mathematically. In Section 4, simulations that show the effectiveness of the proposed GC-MCC and RGC-MCC algorithms are presented. Finally, our work is summarized in Section 5.

2. Review of the MCC and Its Related Sparse Algorithms

We consider a typical channel estimation system based on the MCC algorithm, which is shown in Figure 1. Here, $\mathbf{x}(n)$ is the input training signal vector with a length of M, which is transmitted over an unknown sparse channel whose vector form is $\mathbf{g}=\left[g_{1},g_{2},\ldots,g_{M}\right]^{T}$. As a sparse channel, most of the channel coefficients in $\mathbf{g}$ are zero or near-zero. Herein, we use K to denote the number of non-zero coefficients in the unknown sparse channel $\mathbf{g}$. Then, the desired signal can be written as
$$d(n)=\mathbf{g}^{T}\mathbf{x}(n)+r(n), \tag{1}$$
where $r(n)$ is a mixed Gaussian noise which is independent of the input training signal $\mathbf{x}(n)$. Moreover, the instantaneous estimation error at the n-th iteration is defined as
$$e(n)=d(n)-\hat{\mathbf{g}}^{T}(n)\mathbf{x}(n), \tag{2}$$
where $\hat{\mathbf{g}}(n)$ denotes the estimated channel vector. MCC-based channel estimation minimizes the instantaneous estimation error $e(n)$ iteratively, and hence the unknown sparse channel $\mathbf{g}$ can be estimated.
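As a small illustration of the channel model in (1), the following sketch (our own, with hypothetical helper names such as `make_sparse_channel`; M = 16 and K = 2 are illustrative choices) generates a sparse channel and one training pair:

```python
import numpy as np

def make_sparse_channel(M, K, rng):
    """Length-M channel with K randomly placed non-zero taps (sparse CIR)."""
    g = np.zeros(M)
    idx = rng.choice(M, size=K, replace=False)
    g[idx] = rng.standard_normal(K)
    return g

def desired_signal(g, x, r):
    """Desired signal d(n) = g^T x(n) + r(n), Eq. (1)."""
    return g @ x + r

rng = np.random.default_rng(0)
M, K = 16, 2
g = make_sparse_channel(M, K, rng)   # most taps are exactly zero
x = rng.standard_normal(M)           # input training vector x(n)
d = desired_signal(g, x, 0.0)        # noiseless here; the mixed noise r(n) is added later
```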
The MCC algorithm uses localized similarity to solve the following problem:
$$\min\ \tfrac{1}{2}\left\|\hat{\mathbf{g}}(n+1)-\hat{\mathbf{g}}(n)\right\|^{2} \quad \text{subject to} \quad \hat{e}(n)=\left[1-\alpha\exp\!\left(-\frac{e^{2}(n)}{2\sigma^{2}}\right)\right]e(n), \tag{3}$$
where $\hat{e}(n)=d(n)-\hat{\mathbf{g}}^{T}(n+1)\mathbf{x}(n)$, and $\sigma>0$ is a trade-off parameter. Additionally, $\left\|\cdot\right\|_{2}$ represents the Euclidean norm [36], and $\alpha=\beta_{\mathrm{MCC}}\left\|\mathbf{x}(n)\right\|^{2}$, where $\beta_{\mathrm{MCC}}$ is the step size of the MCC algorithm. Based on Equation (3), we can write the MCC's cost function as
$$J_{0}(n)=\tfrac{1}{2}\left\|\hat{\mathbf{g}}(n+1)-\hat{\mathbf{g}}(n)\right\|^{2}+\lambda_{\mathrm{MCC}}\left\{\hat{e}(n)-\left[1-\alpha\exp\!\left(-\frac{e^{2}(n)}{2\sigma^{2}}\right)\right]e(n)\right\}, \tag{4}$$
where $\lambda_{\mathrm{MCC}}$ is the multiplier parameter. By utilizing the Lagrange multiplier method (LMM), setting the partial derivatives with respect to $\hat{\mathbf{g}}(n+1)$ and $\lambda_{\mathrm{MCC}}$ to zero gives Equations (5) and (6):
$$\hat{\mathbf{g}}(n+1)-\hat{\mathbf{g}}(n)-\lambda_{\mathrm{MCC}}\mathbf{x}(n)=\mathbf{0}, \tag{5}$$
$$\hat{e}(n)-\left[1-\alpha\exp\!\left(-\frac{e^{2}(n)}{2\sigma^{2}}\right)\right]e(n)=0. \tag{6}$$
Then, we get $\lambda_{\mathrm{MCC}}$ as
$$\lambda_{\mathrm{MCC}}=\frac{\alpha\exp\!\left(-\frac{e^{2}(n)}{2\sigma^{2}}\right)e(n)}{\left\|\mathbf{x}(n)\right\|^{2}}. \tag{7}$$
Substituting (7) into (5), the vector-form updating equation of the MCC algorithm is
$$\hat{\mathbf{g}}(n+1)=\hat{\mathbf{g}}(n)+\beta_{\mathrm{MCC}}\exp\!\left(-\frac{e^{2}(n)}{2\sigma^{2}}\right)e(n)\mathbf{x}(n). \tag{8}$$
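The MCC recursion (8) can be sketched as a short identification loop (a minimal illustration under our own toy settings, not the paper's simulation parameters):

```python
import numpy as np

def mcc_update(g_hat, x, d, beta, sigma):
    """One MCC iteration, Eq. (8): g^(n+1) = g^(n) + beta*exp(-e^2/(2 sigma^2))*e*x."""
    e = d - g_hat @ x                            # instantaneous error, Eq. (2)
    kernel = np.exp(-e ** 2 / (2 * sigma ** 2))  # correntropy kernel weight
    return g_hat + beta * kernel * e * x

rng = np.random.default_rng(1)
M = 16
g = np.zeros(M)
g[3] = 1.0                                       # sparse channel with K = 1
g_hat = np.zeros(M)
for _ in range(2000):
    x = rng.standard_normal(M)
    g_hat = mcc_update(g_hat, x, g @ x, beta=0.01, sigma=5.0)
mismatch = np.linalg.norm(g - g_hat)             # small after convergence
```

Large errors are down-weighted by the exponential kernel, which is what makes the MCC robust to impulsive noise.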
It can be seen that the recursion of the MCC algorithm is similar to that of the LMS algorithm, except for the exponential term. Like the LMS algorithm, however, the MCC algorithm does not utilize the intrinsic sparsity of sparse channels. Motivated by the ZA-LMS and RZA-LMS algorithms, sparse MCC algorithms have been presented, named the zero attracting MCC (ZA-MCC) and reweighted ZA-MCC (RZA-MCC) algorithms [33]. The ZA-MCC algorithm is implemented by integrating an $l_1$-norm into the MCC's cost function and solving [33]
$$\min\ \tfrac{1}{2}\left\|\hat{\mathbf{g}}(n+1)-\hat{\mathbf{g}}(n)\right\|^{2}+\theta_{\mathrm{ZA}}\left\|\hat{\mathbf{g}}(n+1)\right\|_{1} \quad \text{subject to} \quad \hat{e}(n)=\left[1-\alpha\exp\!\left(-\frac{e^{2}(n)}{2\sigma^{2}}\right)\right]e(n), \tag{9}$$
where $\left\|\cdot\right\|_{1}$ is the $l_1$-norm, and the regularization parameter $\theta_{\mathrm{ZA}}$ controls its strength. Based on the LMM, the ZA-MCC's cost function is [33]
$$J_{1}(n)=\tfrac{1}{2}\left\|\hat{\mathbf{g}}(n+1)-\hat{\mathbf{g}}(n)\right\|^{2}+\theta_{\mathrm{ZA}}\left\|\hat{\mathbf{g}}(n+1)\right\|_{1}+\lambda_{\mathrm{ZA}}\left\{\hat{e}(n)-\left[1-\alpha\exp\!\left(-\frac{e^{2}(n)}{2\sigma^{2}}\right)\right]e(n)\right\}, \tag{10}$$
where $\lambda_{\mathrm{ZA}}$ is the multiplier parameter of the ZA-MCC algorithm. By using the LMM, the updating equation of the ZA-MCC algorithm is [33]
$$\hat{\mathbf{g}}(n+1)=\hat{\mathbf{g}}(n)+\beta_{\mathrm{ZA}}\exp\!\left(-\frac{e^{2}(n)}{2\sigma^{2}}\right)e(n)\mathbf{x}(n)-\theta_{\mathrm{ZA}}\,\mathrm{sgn}\!\left(\hat{\mathbf{g}}(n)\right), \tag{11}$$
where $\theta_{\mathrm{ZA}}$ is a zero-attracting controlling parameter that trades off the estimation error against the $l_1$-norm constraint, and $\beta_{\mathrm{ZA}}$ is the step size of the ZA-MCC algorithm. However, we notice that the zero attraction term $\theta_{\mathrm{ZA}}\,\mathrm{sgn}\left(\hat{\mathbf{g}}(n)\right)$ uniformly attracts all the channel coefficients to zero. Therefore, its performance might be degraded when it deals with less sparse channels. To address this, a reweighting factor is introduced into the zero attraction term, resulting in the RZA-MCC algorithm, whose updating equation is [33]
$$\hat{\mathbf{g}}(n+1)=\hat{\mathbf{g}}(n)+\beta_{\mathrm{RZA}}\exp\!\left(-\frac{e^{2}(n)}{2\sigma^{2}}\right)e(n)\mathbf{x}(n)-\theta_{\mathrm{RZA}}\frac{\mathrm{sgn}\!\left(\hat{\mathbf{g}}(n)\right)}{1+\varepsilon\left|\hat{\mathbf{g}}(n)\right|}, \tag{12}$$
where $\varepsilon$, $\theta_{\mathrm{RZA}}$ and $\beta_{\mathrm{RZA}}$ are the reweighting controlling factor, the zero-attraction controlling parameter and the RZA-MCC's step size, respectively. It can be seen from the updating equation that $\theta_{\mathrm{RZA}}\frac{\mathrm{sgn}\left(\hat{\mathbf{g}}(n)\right)}{1+\varepsilon\left|\hat{\mathbf{g}}(n)\right|}$ acts as a zero attractor which exerts a different zero attraction on each channel coefficient depending on its magnitude.
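The difference between the two attractors in (11) and (12) is easy to see numerically (an illustrative sketch with arbitrary parameter values of our choosing):

```python
import numpy as np

def za_attractor(g_hat, theta):
    """Uniform zero attraction of Eq. (11): theta * sgn(g^(n))."""
    return theta * np.sign(g_hat)

def rza_attractor(g_hat, theta, eps):
    """Reweighted zero attraction of Eq. (12): theta * sgn(g^(n)) / (1 + eps |g^(n)|)."""
    return theta * np.sign(g_hat) / (1 + eps * np.abs(g_hat))

g_hat = np.array([0.9, 0.01, -0.02, 0.0])
za = za_attractor(g_hat, theta=1e-3)
rza = rza_attractor(g_hat, theta=1e-3, eps=100.0)
# ZA shrinks the large tap (0.9) and the small taps by the same amount,
# whereas RZA shrinks the large tap far less than the small ones.
```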

3. The Proposed Group-Constrained Sparse MCC Algorithms

From the above discussion, we find that the zero attractor $\theta_{\mathrm{ZA}}\,\mathrm{sgn}\left(\hat{\mathbf{g}}(n)\right)$ is realized by incorporating the $l_1$-norm penalty into the MCC's cost function, and it can speed up the convergence and reduce the channel estimation error. Moreover, the $l_0$-norm-constrained MCC algorithm can further improve the performance, since the $l_0$-norm counts the number of non-zero channel coefficients. However, its complexity is significantly increased due to the calculation of the $l_0$-norm approximation and its constraint. In order to fully exploit the sparsity of the multi-path channel, we propose a group-constrained MCC algorithm that exerts the $l_0$-norm penalty on the group of large channel coefficients and imposes the $l_1$-norm penalty on the group of small channel coefficients. Herein, a non-uniform norm is used to split the channel coefficients into a large group and a small group, and the non-uniform norm is defined as [37,38]
$$\left\|\hat{\mathbf{g}}(n)\right\|_{p}^{p}=\sum_{i=1}^{M}\left|\hat{g}_{i}(n)\right|^{p}, \quad 0\le p\le 1, \tag{13}$$
which is an $l_0$-norm when $p\to 0$:
$$\lim_{p\to 0}\left\|\hat{\mathbf{g}}(n)\right\|_{p}^{p}=\left\|\hat{\mathbf{g}}(n)\right\|_{0}, \tag{14}$$
and an $l_1$-norm when $p$ is infinitely close to 1:
$$\lim_{p\to 1}\left\|\hat{\mathbf{g}}(n)\right\|_{p}^{p}=\left\|\hat{\mathbf{g}}(n)\right\|_{1}. \tag{15}$$
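The limits (14) and (15) can be checked numerically (a small sketch with an arbitrary test vector):

```python
import numpy as np

def p_norm_p(g, p):
    """Non-uniform norm of Eq. (13): ||g||_p^p = sum_i |g_i|^p, 0 <= p <= 1."""
    return np.sum(np.abs(g) ** p)

g = np.array([0.5, 0.0, -2.0, 0.0])
l0 = np.count_nonzero(g)       # ||g||_0 = 2
l1 = np.sum(np.abs(g))         # ||g||_1 = 2.5
near_l0 = p_norm_p(g, 1e-6)    # approaches ||g||_0 as p -> 0
exact_l1 = p_norm_p(g, 1.0)    # equals ||g||_1 at p = 1
```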
Herein, the non-uniform norm given in Equation (13) is a variable norm controlled by the parameter p. When p is very close to zero, the norm in Equation (13) can be regarded as an $l_0$-norm, while for p = 1 it is the $l_1$-norm. The constructed non-uniform norm in Equation (13) is then introduced into the MCC's cost function to devise our proposed GC-MCC algorithm, which is to solve
$$\min\ \tfrac{1}{2}\left\|\hat{\mathbf{g}}(n+1)-\hat{\mathbf{g}}(n)\right\|^{2}+\theta_{\mathrm{GC}}\left\|\hat{\mathbf{g}}(n+1)\right\|_{p,M}^{p} \quad \text{subject to} \quad \hat{e}(n)=\left[1-\alpha\exp\!\left(-\frac{e^{2}(n)}{2\sigma^{2}}\right)\right]e(n), \tag{16}$$
where $\left\|\hat{\mathbf{g}}(n+1)\right\|_{p,M}^{p}$ is a variant of $\left\|\hat{\mathbf{g}}(n+1)\right\|_{p}^{p}$ which uses a different value of p for the channel coefficient at each of the M positions in the sparse channel, with $\mathbf{p}=\left[p_{1},p_{2},\ldots,p_{M}\right]$, and $\theta_{\mathrm{GC}}$ is a regularization parameter. Then, the cost function of the GC-MCC algorithm is
$$J_{\mathrm{GC}}(n)=\tfrac{1}{2}\left\|\hat{\mathbf{g}}(n+1)-\hat{\mathbf{g}}(n)\right\|^{2}+\theta_{\mathrm{GC}}\left\|\hat{\mathbf{g}}(n+1)\right\|_{p,M}^{p}+\lambda_{\mathrm{GC}}\left\{\hat{e}(n)-\left[1-\alpha\exp\!\left(-\frac{e^{2}(n)}{2\sigma^{2}}\right)\right]e(n)\right\}. \tag{17}$$
In Equation (17), $\lambda_{\mathrm{GC}}$ is a multiplier. Based on the LMM, the gradients of $J_{\mathrm{GC}}(n)$ with respect to $\hat{\mathbf{g}}(n+1)$ and $\lambda_{\mathrm{GC}}$ are set to zero:
$$\frac{\partial J_{\mathrm{GC}}(n)}{\partial \hat{\mathbf{g}}(n+1)}=0, \tag{18}$$
and
$$\frac{\partial J_{\mathrm{GC}}(n)}{\partial \lambda_{\mathrm{GC}}}=0. \tag{19}$$
Then, we can get
$$\hat{g}_{i}(n+1)=\hat{g}_{i}(n)-\theta_{\mathrm{GC}}\,p_{i}\frac{\mathrm{sgn}\!\left(\hat{g}_{i}(n+1)\right)}{\left|\hat{g}_{i}(n+1)\right|^{1-p_{i}}}+\lambda_{\mathrm{GC}}x_{i}(n), \tag{20}$$
and
$$\hat{e}(n)=\left[1-\alpha\exp\!\left(-\frac{e^{2}(n)}{2\sigma^{2}}\right)\right]e(n), \tag{21}$$
where $p_{i}$ is the i-th element of the vector $\mathbf{p}$. From Equations (20) and (21), we can obtain $\lambda_{\mathrm{GC}}$:
$$\lambda_{\mathrm{GC}}=\frac{\alpha\exp\!\left(-\frac{e^{2}(n)}{2\sigma^{2}}\right)e(n)+\theta_{\mathrm{GC}}\,p_{i}\frac{\mathrm{sgn}\left(\hat{g}_{i}(n+1)\right)}{\left|\hat{g}_{i}(n+1)\right|^{1-p_{i}}}\mathbf{x}_{i}^{T}(n)}{\left\|\mathbf{x}(n)\right\|^{2}}. \tag{22}$$
Substituting $\lambda_{\mathrm{GC}}$ into Equation (20), the updating equation of the proposed GC-MCC algorithm is
$$\hat{g}_{i}(n+1)=\hat{g}_{i}(n)+\beta_{\mathrm{GC}}\exp\!\left(-\frac{e^{2}(n)}{2\sigma^{2}}\right)e(n)x_{i}(n)-\theta_{\mathrm{GC}}\,p_{i}\frac{\mathrm{sgn}\!\left(\hat{g}_{i}(n+1)\right)}{\left|\hat{g}_{i}(n+1)\right|^{1-p_{i}}}\left(1-\frac{x_{i}(n)\mathbf{x}_{i}^{T}(n)}{\left\|\mathbf{x}(n)\right\|^{2}}\right), \tag{23}$$
where $\beta_{\mathrm{GC}}$ is the step size of the GC-MCC algorithm. It can be seen from Equation (23) that the vector $\mathbf{p}=\left[p_{1},p_{2},\ldots,p_{M}\right]$ can assign a different $p_{i}$ to each channel coefficient. To better apply the $p_{i}$ to the channel coefficients, the coefficients are classified according to their magnitudes. From measurements and previous investigations of sparse channels [2,6,7,10,11,12,13,14,15,16,21,22,23,24,26,27], we find that few channel coefficients are active non-zero ones, while most of them are inactive zero or near-zero ones. Thus, we propose a threshold to categorize the channel coefficients into two groups. Herein, the classification criterion used as the threshold is based on the expectation of the absolute values of $\hat{\mathbf{g}}(n)$ and is defined as
$$y(n)=E\!\left[\left|\hat{g}_{i}(n)\right|\right], \quad 1\le i\le M. \tag{24}$$
Then, we categorize the channel coefficients into two groups in terms of the criterion in (24). When $\left|\hat{g}_{i}(n)\right|>y(n)$, the channel coefficient belongs to the "large" group, while for $\left|\hat{g}_{i}(n)\right|<y(n)$ it is located in the "small" group. In other words, the threshold, implemented as the mean of the magnitudes of the estimated channel coefficients, splits the variable norm into the "large" group and the "small" group. For the "large" group, the $l_0$-norm penalty is used to count the number of active channel coefficients, while for the "small" group the $l_1$-norm penalty is adopted to uniformly attract the inactive coefficients to zero. To effectively integrate these two groups into (23), we define [37]
$$f_{i}=\frac{\mathrm{sgn}\!\left(y(n)-\left|\hat{g}_{i}(n)\right|\right)+1}{2}, \quad 1\le i\le M. \tag{25}$$
Therefore, $f_{i}$ is 0 when $\left|\hat{g}_{i}(n)\right|>y(n)$, while $f_{i}$ is 1 for $\left|\hat{g}_{i}(n)\right|<y(n)$. Finally, the updating equation of the GC-MCC is
$$\hat{g}_{i}(n+1)=\hat{g}_{i}(n)+\beta_{\mathrm{GC}}\exp\!\left(-\frac{e^{2}(n)}{2\sigma^{2}}\right)e(n)x_{i}(n)-\theta_{\mathrm{GC}}f_{i}\,\mathrm{sgn}\!\left(\hat{g}_{i}(n+1)\right)\left(1-\frac{x_{i}(n)\mathbf{x}_{i}^{T}(n)}{\left\|\mathbf{x}(n)\right\|^{2}}\right). \tag{26}$$
In Equation (26), $\frac{x_{i}(n)\mathbf{x}_{i}^{T}(n)}{\left\|\mathbf{x}(n)\right\|^{2}}$ is far less than 1, so it can be neglected. Thus, the updating recursion of the GC-MCC algorithm is rewritten as
$$\hat{\mathbf{g}}(n+1)=\hat{\mathbf{g}}(n)+\beta_{\mathrm{GC}}\exp\!\left(-\frac{e^{2}(n)}{2\sigma^{2}}\right)e(n)\mathbf{x}(n)-\theta_{\mathrm{GC}}\mathbf{F}\,\mathrm{sgn}\!\left(\hat{\mathbf{g}}(n)\right). \tag{27}$$
The last term $\theta_{\mathrm{GC}}\mathbf{F}\,\mathrm{sgn}\left(\hat{\mathbf{g}}(n)\right)$, where $\mathbf{F}=\mathrm{diag}\left(f_{1},f_{2},\ldots,f_{M}\right)$, is the proposed zero-attraction term, which exerts a different zero attraction on each of the two groups of channel coefficients. We can see that both the $l_0$-norm and $l_1$-norm constraints are implemented on the channel coefficients in the GC-MCC algorithm, which is different from the ZA-MCC and $l_0$-MCC algorithms. For GC-MCC channel estimation, the proposed zero attractor can distinguish the values of the coefficients and categorize them into two groups. The proposed GC-MCC algorithm can achieve a small steady-state error and fast convergence due to the term $\theta_{\mathrm{GC}}\mathbf{F}\,\mathrm{sgn}\left(\hat{\mathbf{g}}(n)\right)$.
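One GC-MCC iteration, combining the threshold (24), the group indicator (25) and the recursion (27), can be sketched as follows (our own illustrative implementation with arbitrary parameter values, not the paper's simulation settings):

```python
import numpy as np

def gc_mcc_update(g_hat, x, d, beta, theta, sigma):
    """One GC-MCC iteration, Eq. (27), with group-selective zero attraction."""
    e = d - g_hat @ x
    kernel = np.exp(-e ** 2 / (2 * sigma ** 2))
    y = np.mean(np.abs(g_hat))                    # threshold y(n), Eq. (24)
    f = (np.sign(y - np.abs(g_hat)) + 1) / 2      # group indicator f_i, Eq. (25)
    return g_hat + beta * kernel * e * x - theta * f * np.sign(g_hat)

# The "small" group (|g_i| < y) gets f_i = 1 and is attracted to zero,
# while the "large" group gets f_i = 0 and is left unshrunk.
g_hat = np.array([0.8, 0.01, -0.01, 0.02])
y = np.mean(np.abs(g_hat))                        # 0.21 for this vector
f = (np.sign(y - np.abs(g_hat)) + 1) / 2          # [0, 1, 1, 1]
g_next = gc_mcc_update(g_hat, np.ones(4), 1.0, beta=0.01, theta=1e-3, sigma=5.0)
```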
To further enhance the performance of the proposed GC-MCC algorithm, a reweighting factor is introduced into its update equation to implement the RGC-MCC algorithm, in a way similar to the RZA-LMS algorithm. Thus, the RGC-MCC's updating equation is
$$\hat{\mathbf{g}}(n+1)=\hat{\mathbf{g}}(n)+\beta_{\mathrm{RGC}}\exp\!\left(-\frac{e^{2}(n)}{2\sigma^{2}}\right)e(n)\mathbf{x}(n)-\theta_{\mathrm{RGC}}\frac{\mathbf{F}\,\mathrm{sgn}\!\left(\hat{\mathbf{g}}(n)\right)}{1+\varepsilon_{1}\left|\hat{\mathbf{g}}(n)\right|}, \tag{28}$$
where $\theta_{\mathrm{RGC}}$ is the zero-attraction controlling parameter, $\varepsilon_{1}$ is the reweighting controlling parameter and $\beta_{\mathrm{RGC}}$ is the step size of the RGC-MCC algorithm.
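Recursion (28) differs from (27) only in the reweighting denominator; a toy identification run under our own illustrative settings (lightly noised, not the paper's mixed-noise setup) shows it recovering a sparse channel:

```python
import numpy as np

def rgc_mcc_update(g_hat, x, d, beta, theta, sigma, eps1):
    """One RGC-MCC iteration, Eq. (28): reweighted, group-selective zero attraction."""
    e = d - g_hat @ x
    kernel = np.exp(-e ** 2 / (2 * sigma ** 2))
    y = np.mean(np.abs(g_hat))                    # threshold, Eq. (24)
    f = (np.sign(y - np.abs(g_hat)) + 1) / 2      # group indicator, Eq. (25)
    attractor = theta * f * np.sign(g_hat) / (1 + eps1 * np.abs(g_hat))
    return g_hat + beta * kernel * e * x - attractor

rng = np.random.default_rng(2)
M = 16
g = np.zeros(M)
g[[2, 9]] = [1.0, -0.7]                           # K = 2 sparse channel
g_hat = np.zeros(M)
for _ in range(3000):
    x = rng.standard_normal(M)
    d = g @ x + 0.01 * rng.standard_normal()      # lightly noised training pair
    g_hat = rgc_mcc_update(g_hat, x, d, beta=0.02, theta=1e-4, sigma=5.0, eps1=5.0)
mismatch = np.linalg.norm(g - g_hat)              # small: the sparse taps are recovered
```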
With the help of $\theta_{\mathrm{RGC}}\frac{\mathbf{F}\,\mathrm{sgn}\left(\hat{\mathbf{g}}(n)\right)}{1+\varepsilon_{1}\left|\hat{\mathbf{g}}(n)\right|}$, our proposed RGC-MCC algorithm can properly assign different values $p_{i}$ to the channel coefficients according to their magnitudes. Moreover, we can properly select the value of $\varepsilon_{1}$ to obtain a better channel estimation performance. In fact, the proposed GC-MCC and RGC-MCC algorithms are extensions of the ZA-MCC and RZA-MCC algorithms; however, they differ from the existing ZA-MCC, RZA-MCC and $l_0$-norm penalized MCC algorithms. Our contributions are summarized as follows. The GC-MCC algorithm is realized by incorporating the variable norm in Equation (13) into the cost function of the traditional MCC algorithm, where the variable norm is controlled by the parameter p. To distinguish the large channel coefficients from the small ones, the mean of the magnitudes of the estimated channel coefficients, given in Equation (24), is proposed as a threshold. Then, the variable norm is split into two groups by comparing the channel coefficients with this threshold: a large group when $\left|\hat{g}_{i}(n)\right|>y(n)$ and a small group when $\left|\hat{g}_{i}(n)\right|<y(n)$. For the channel coefficients in the large group, we use the $l_0$-norm to count the number of non-zero channel coefficients, while for the small group the $l_1$-norm penalty is used to quickly attract the zero or near-zero channel coefficients to zero. Notably, the norm penalty matrix with the values of Equation (25) on its diagonal implements the $l_0$- and $l_1$-norm penalties in the GC-MCC algorithm, which also differs from the conventional $l_0$- and $l_1$-norm constraints. Finally, a reweighting factor is used to enhance the zero-attracting ability of the GC-MCC algorithm, generating the RGC-MCC algorithm.
Both the GC-MCC and RGC-MCC algorithms are developed to fully exploit the inherent sparseness of multi-path channels due to the expected zero-attraction terms in their iterations. Their channel estimation behaviors are discussed over sparse channels in mixed Gaussian noise environments in the next section.

4. Computational Simulations and Discussion of Results

In this section, we construct several experiments to verify the channel estimation performance of the GC- and RGC-MCC algorithms. The steady-state channel estimation error and the convergence are considered to evaluate the proposed GC- and RGC-MCC algorithms, and the results are compared with those of the MCC, NMCC, ZA-, RZA- and SPF-MCC algorithms. From the discussion above, we know that the CIM and SPF are $l_0$-norm approximations. Thus, we first set up an experiment to discuss the zero-attraction ability of the CIM, SPF and $l_0$-norm approximations, which is shown in Figure 2. The related parameters in these $l_0$-norm approximations are $\tau_{1}$, $\sigma_{1}$ and $\tau$. It can be seen that the zero-attraction ability of the SPF is close to the $l_0$-norm for larger $\tau_{1}$, while the zero-attraction ability of the CIM approximates the $l_0$-norm when $\sigma_{1}$ is small. From these results, we found that the SPF gives an accurate approximation and provides a better zero attraction when $\tau_{1}=\tau=5$ and $\sigma_{1}=0.05$. For $\tau_{1}=\tau=1$ and $\sigma_{1}=0.08$, the zero-attraction ability of the SPF is weak, but it can adapt to a wider range of sparse channel estimation applications. It can be concluded that the zero-attraction ability of the zero attractor produced by the SPF is superior. Thus, we choose the SPF-MCC for comparison with our proposed GC-MCC and RGC-MCC algorithms in terms of steady-state error and convergence.
All of the experiments are constructed in a mixed-Gaussian noise environment; this mixed noise model has been used in previous studies and is a better description of the real wireless communication environment. The mixed noise model is given by [34,36]
$$\left(1-\chi\right)N\!\left(\mu_{1},\nu_{1}^{2}\right)+\chi N\!\left(\mu_{2},\nu_{2}^{2}\right), \tag{29}$$
where $N\left(\mu_{i},\nu_{i}^{2}\right)$, $i=1,2$, represent Gaussian distributions, and the parameters $\mu_{i}$, $\nu_{i}^{2}$ and $\chi$ are the means, the variances and the mixing parameter, respectively. In all the experiments, we set $\left(\mu_{1},\mu_{2},\nu_{1}^{2},\nu_{2}^{2},\chi\right)=\left(0,0,0.05,20,0.05\right)$. The estimation performance of our proposed GC-MCC and RGC-MCC algorithms is measured by the mean square deviation (MSD), which is defined as
$$\mathrm{MSD}\!\left(\hat{\mathbf{g}}(n)\right)=E\!\left[\left\|\mathbf{g}-\hat{\mathbf{g}}(n)\right\|^{2}\right]. \tag{30}$$
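The mixed-Gaussian noise (29) and the MSD (30) can be sketched as follows (the sampler name `mixed_gaussian_noise` is ours; the parameter values are the paper's $(0, 0, 0.05, 20, 0.05)$ setting):

```python
import numpy as np

def mixed_gaussian_noise(size, mu1, mu2, v1sq, v2sq, chi, rng):
    """Draw samples from (1 - chi) N(mu1, v1^2) + chi N(mu2, v2^2), Eq. (29)."""
    pick = rng.random(size) < chi                 # choose the impulsive component
    background = rng.normal(mu1, np.sqrt(v1sq), size)
    impulses = rng.normal(mu2, np.sqrt(v2sq), size)
    return np.where(pick, impulses, background)

def msd(g, g_hat):
    """Single-realization form of MSD(g^(n)) = E[||g - g^(n)||^2], Eq. (30)."""
    return np.sum((g - g_hat) ** 2)

rng = np.random.default_rng(3)
r = mixed_gaussian_noise(100_000, 0, 0, 0.05, 20, 0.05, rng)
# The sample variance should approach (1 - chi) v1^2 + chi v2^2 = 1.0475.
sample_var = np.var(r)
```

The rare high-variance component models the impulsive interference that motivates the use of the correntropy criterion.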
For all the simulations, 300 Monte Carlo runs are performed to obtain each point for all the mentioned algorithms. Herein, the total length of the unknown channel is M = 16 and the number of non-zero channel coefficients is K. The signal-to-noise ratio (SNR) is 30 dB, and the non-zero channel coefficients are randomly distributed within the length of the channel. Since the regularization parameters have an important effect on the performance of the proposed GC-MCC and RGC-MCC algorithms, the effects of the parameters $\theta_{\mathrm{GC}}$, $\theta_{\mathrm{RGC}}$ and $\beta_{\mathrm{GC/RGC}}$ are investigated in Figure 3, Figure 4 and Figure 5.
In this experiment, the step size of the GC-MCC and RGC-MCC algorithms is 0.026, $\varepsilon_{1}=5$ and K = 1. The influence of $\theta_{\mathrm{GC}}$ on the MSD of the GC-MCC algorithm is shown in Figure 3, while the influence of $\theta_{\mathrm{RGC}}$ on the MSD of the RGC-MCC algorithm is shown in Figure 4. From Figure 3 and Figure 4, the steady-state error of the GC-MCC in terms of the MSD is reduced as $\theta_{\mathrm{GC}}$ decreases from $3\times10^{-3}$ to $3\times10^{-4}$, and becomes worse when $\theta_{\mathrm{GC}}$ is reduced further; for $\theta_{\mathrm{GC}}=3\times10^{-4}$, the GC-MCC algorithm achieves the smallest steady-state MSD. As for the RGC-MCC algorithm, the MSD is reduced as $\theta_{\mathrm{RGC}}$ ranges from $7\times10^{-3}$ to $7\times10^{-4}$ and increases with a continued decrease of $\theta_{\mathrm{RGC}}$; the RGC-MCC algorithm achieves the smallest steady-state MSD for $\theta_{\mathrm{RGC}}=7\times10^{-4}$. Therefore, we adopt $\theta_{\mathrm{GC}}=3\times10^{-4}$ and $\theta_{\mathrm{RGC}}=7\times10^{-4}$ to ensure the best channel estimation performance of the proposed GC-MCC and RGC-MCC algorithms. Next, the effect of the step size $\beta$, which includes $\beta_{\mathrm{GC}}$ and $\beta_{\mathrm{RGC}}$, on the proposed GC-MCC and RGC-MCC algorithms is given in Figure 5. It can be seen that the MSD of the RGC-MCC is worse than that of the GC-MCC when the step size is less than 0.013, while the RGC-MCC is better than the GC-MCC when $\beta>0.013$. It is noted that the steady-state errors of the GC-MCC and RGC-MCC algorithms become worse as the parameter $\beta$ increases. Therefore, the step size and the zero-attraction controlling parameters should be chosen carefully to achieve a good estimation performance for sparse channel estimation. The effects of the reweighting controlling parameter are given in Figure 6. It can be seen that the reweighting controlling parameter mainly affects the small channel coefficients when $\varepsilon_{1}$ increases from 4 to 25.
This means that the reweighted zero attractor mainly exerts its effect on magnitudes comparable to $1/\varepsilon_{1}$, while little shrinkage is applied to channel coefficients far greater than $1/\varepsilon_{1}$. In the RGC-MCC algorithm, the reweighting controlling parameter exerts a strong zero attraction on the small group to provide fast convergence.
As we know, adaptive filters have been extensively investigated and applied for channel estimation, including in real-time systems, and adaptive filtering algorithms have been further developed for sparse channel estimation. Similar to previous investigations [2,6,7,10,11,12,13,14,15,16,21,22,23,24,26,27], the proposed GC-MCC and RGC-MCC algorithms are investigated and their performance is compared with that of the SPF-MCC algorithm. In the following experiment, the proposed GC-MCC and RGC-MCC algorithms are investigated in different SNR environments, with a step size of 0.026, $\theta_{\mathrm{GC}}=3\times10^{-4}$ and $\theta_{\mathrm{RGC}}=7\times10^{-4}$. The MSD performance at different SNRs is shown in Figure 7. It can be seen that the performance of the proposed GC-MCC and RGC-MCC algorithms improves as the SNR increases. It is worth noting that the performance of the RGC-MCC is always better than that of the GC-MCC at the same SNR, because the reweighting factor provides a selective zero attraction in the RGC-MCC algorithm.
Next, the convergence of the GC-MCC and RGC-MCC algorithms is studied and compared with that of the MCC, NMCC, ZA-MCC, RZA-MCC and SPF-MCC algorithms at SNR = 30 dB. The corresponding simulation parameters are $\beta_{\mathrm{MCC}}=0.0052$, $\beta_{\mathrm{NMCC}}=0.085$, $\beta_{\mathrm{ZA}}=0.01$, $\theta_{\mathrm{ZA}}=3\times10^{-5}$, $\beta_{\mathrm{RZA}}=0.015$, $\theta_{\mathrm{RZA}}=7\times10^{-5}$, $\beta_{\mathrm{SPF}}=0.016$, $\theta_{\mathrm{SPF}}=2.7\times10^{-5}$, $\tau_{1}=100$, $\beta_{\mathrm{GC}}=0.026$, $\theta_{\mathrm{GC}}=3\times10^{-4}$, $\beta_{\mathrm{RGC}}=0.032$, $\theta_{\mathrm{RGC}}=7\times10^{-4}$, $\varepsilon_{1}=5$ and $\sigma=1000$. Herein, $\beta_{\mathrm{NMCC}}$ is the step size of the NMCC algorithm. The convergence of the proposed GC-MCC and RGC-MCC algorithms is given in Figure 8. It can be seen that the convergence of the proposed GC-MCC and RGC-MCC algorithms is better than that of the MCC, NMCC, ZA-MCC, RZA-MCC and SPF-MCC algorithms at the same MSD level. Moreover, our RGC-MCC has the fastest convergence speed.
Next, the channel estimation performance of the presented GC-MCC and RGC-MCC algorithms is analyzed for different values of the sparsity level K, which is the number of non-zero channel coefficients of the sparse channel. Firstly, only one non-zero channel coefficient is randomly placed within the unknown sparse channel, i.e., the sparsity level is K = 1. In this experiment, the related parameters are $\beta_{\mathrm{MCC}}=0.03$, $\beta_{\mathrm{NMCC}}=0.4$, $\beta_{\mathrm{ZA}}=\beta_{\mathrm{RZA}}=0.03$, $\theta_{\mathrm{ZA}}=8\times10^{-5}$, $\theta_{\mathrm{RZA}}=2\times10^{-4}$, $\beta_{\mathrm{SPF}}=0.03$, $\theta_{\mathrm{SPF}}=3\times10^{-5}$, $\tau_{1}=100$, $\beta_{\mathrm{GC}}=\beta_{\mathrm{RGC}}=0.026$, $\theta_{\mathrm{GC}}=3\times10^{-4}$, $\theta_{\mathrm{RGC}}=7\times10^{-4}$ and $\varepsilon_{1}=5$. The MSD performance of the GC-MCC and RGC-MCC algorithms for K = 1 is shown in Figure 9. It can be seen that the steady-state MSD of the presented GC-MCC and RGC-MCC algorithms is lower than that of the mentioned MCC algorithms at the same convergence speed. The channel estimation results in terms of the MSD for K = 3 and K = 5 are given in Figure 10 and Figure 11, respectively. When the sparsity level increases from K = 3 to K = 5, the MSD floor rises in comparison with K = 1 because the sparseness is reduced. However, it is worth noting that our proposed GC-MCC and RGC-MCC algorithms still outperform the existing MCC algorithm and its sparse forms. In addition, the RGC-MCC always achieves the lowest MSD for different K.
Then, a realistic IEEE 802.15.4a channel model, which can be downloaded from [39] and operates in CM1 mode, is employed to verify the effectiveness of the proposed GC-MCC and RGC-MCC algorithms. The simulation parameters are $\beta_{\mathrm{MCC}}=0.002$, $\beta_{\mathrm{NMCC}}=0.8$, $\beta_{\mathrm{ZA}}=0.0011$, $\beta_{\mathrm{RZA}}=0.001$, $\theta_{\mathrm{ZA}}=6\times10^{-6}$, $\theta_{\mathrm{RZA}}=9\times10^{-5}$, $\theta_{\mathrm{SPF}}=8\times10^{-6}$, $\tau_{1}=100$, $\beta_{\mathrm{SPF}}=\beta_{\mathrm{GC}}=\beta_{\mathrm{RGC}}=0.001$, $\theta_{\mathrm{GC}}=5\times10^{-4}$, $\theta_{\mathrm{RGC}}=3\times10^{-4}$ and $\varepsilon_{1}=5$. The simulation result is given in Figure 12. It is found that the proposed GC-MCC and RGC-MCC algorithms achieve better performance in terms of the MSD, i.e., they have smaller MSDs.
Finally, our GC-MCC and RGC-MCC algorithms are used for estimating an echo channel to further examine their channel estimation performance. The sparseness measure of the echo channel is defined as $\vartheta_{12}=\frac{M}{M-\sqrt{M}}\left(1-\frac{\left\|\mathbf{g}\right\|_{1}}{\sqrt{M}\left\|\mathbf{g}\right\|_{2}}\right)$. In this experiment, $\vartheta_{12}=0.8222$ is used for the first 8000 iterations and $\vartheta_{12}=0.7362$ thereafter. The echo channel has a total length of 256 with 16 non-zero coefficients. The simulation parameters are $\beta_{\mathrm{MCC}}=0.0055$, $\beta_{\mathrm{NMCC}}=1.3$, $\beta_{\mathrm{ZA}}=\beta_{\mathrm{RZA}}=0.0055$, $\theta_{\mathrm{ZA}}=4\times10^{-6}$, $\theta_{\mathrm{RZA}}=1\times10^{-5}$, $\beta_{\mathrm{SPF}}=0.0055$, $\theta_{\mathrm{SPF}}=1\times10^{-5}$, $\tau_{1}=5$, $\beta_{\mathrm{GC}}=0.0045$, $\beta_{\mathrm{RGC}}=0.004$, $\theta_{\mathrm{GC}}=3\times10^{-5}$, $\theta_{\mathrm{RGC}}=3\times10^{-5}$ and $\varepsilon_{1}=5$. The estimation behavior of the proposed GC-MCC and RGC-MCC algorithms for the echo channel is depicted in Figure 13, from which we find that our GC-MCC and RGC-MCC algorithms outperform the MCC, NMCC and sparse MCC algorithms in terms of both the steady-state MSD and the convergence. Although the sparseness reduces from $\vartheta_{12}=0.8222$ to $\vartheta_{12}=0.7362$, our GC-MCC and RGC-MCC algorithms remain superior to the other MCC algorithms, which means that their performance is little affected by the sparsity level under the mixture-noised sparse channel.
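The sparseness measure $\vartheta_{12}$ can be computed directly (a sketch; the function name and the test vectors are ours):

```python
import numpy as np

def sparseness(g):
    """vartheta_12 = M/(M - sqrt(M)) * (1 - ||g||_1 / (sqrt(M) ||g||_2))."""
    M = len(g)
    l1 = np.sum(np.abs(g))
    l2 = np.linalg.norm(g)
    return M / (M - np.sqrt(M)) * (1 - l1 / (np.sqrt(M) * l2))

g_sparse = np.zeros(256)
g_sparse[0] = 1.0                 # one active tap: measure close to 1
g_dense = np.ones(256)            # all taps equal: measure close to 0
s_sparse = sparseness(g_sparse)
s_dense = sparseness(g_dense)
```

The measure is 1 for a maximally sparse channel (a single non-zero tap) and 0 for a channel whose taps all have equal magnitude, so the drop from 0.8222 to 0.7362 corresponds to a less sparse echo path.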
Based on the parameter analysis and performance investigation of our proposed GC-MCC and RGC-MCC algorithms, we can summarize that the proposed RGC-MCC algorithm provides the fastest convergence speed in comparison with all the mentioned algorithms when they converge to the same steady-state MSD level, and the lowest steady-state MSD when all the MCC algorithms have the same convergence speed. In addition, the proposed GC- and RGC-MCC algorithms outperform the MCC, NMCC and related sparse MCC algorithms. This is because the proposed GC- and RGC-MCC algorithms exert the $l_0$-norm penalty on the large group to seek the non-zero channel coefficients quickly, and impose the $l_1$-norm constraint on the small group to attract the zero or near-zero channel coefficients to zero quickly. Thus, both the GC- and RGC-MCC algorithms provide faster convergence and a lower steady-state MSD. However, both increase the computational complexity by calculating the group matrix, and the complexity of the RGC-MCC algorithm also comes from the computation of the reweighting factor. Nevertheless, the complexity of the GC-MCC and RGC-MCC algorithms is comparable with that of the previously reported sparse MCC algorithms. According to the early published articles, we believe that the proposed GC-MCC and RGC-MCC algorithms can be used in a real system.

5. Conclusions

In this paper, the GC-MCC and RGC-MCC algorithms have been proposed and derived mathematically in detail. The two algorithms were implemented by exerting a non-uniform norm penalty on the MCC cost function, with the non-uniform norm then split into an $l_0$-norm and an $l_1$-norm to penalize the large and small groups, respectively. The channel estimation behaviors of both algorithms were investigated over a sparse channel and an echo channel in a mixture noise environment. The simulation results for these two channels demonstrated that the proposed GC-MCC and RGC-MCC algorithms provide faster convergence and smaller MSD for different sparsity levels, with the RGC-MCC algorithm achieving the fastest convergence and the smallest MSD.

Acknowledgments

This work was partially supported by the PhD Student Research and Innovation Fund of the Fundamental Research Funds for the Central Universities (HEUGIP201707), National Key Research and Development Program of China-Government Corporation Special Program (2016YFE0111100), National Science Foundation of China (61571149), the Science and Technology innovative Talents Foundation of Harbin (2016RAXXJ044), and Projects for the Selected Returned Overseas Chinese Scholars of Heilongjiang Province and MOHRSS of China.

Author Contributions

Yanyan Wang wrote the code, performed the simulation experiments, and wrote the draft of the paper. Yingsong Li helped check the code and simulations, and also put forward the idea of the developed GC- and RGC-MCC algorithms. Felix Albu helped modify the paper and checked the grammar; he also contributed to the discussion of the channel estimation results. Rui Yang provided analysis of the performance of the GC-MCC and RGC-MCC algorithms. All the authors wrote this paper together and have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The soft parameter function (SPF)-constrained MCC (SPF-MCC) algorithm.
In [34], an SPF was given by
$$ S\big(\hat{\mathbf{g}}(n+1)\big) = \left(1 + \tau_1^{-1}\right)\left(1 - e^{-\tau_1 \left|\hat{\mathbf{g}}(n+1)\right|}\right), $$
where $\tau_1 > 0$. The behavior of the SPF is shown in Figure A1. It is noted that the SPF approximates the $l_0$-norm when $\tau_1$ is large. In the SPF-MCC algorithm, the SPF is integrated into the MCC cost function to exploit the sparsity of the channels. The cost function of the SPF-MCC is
$$ J_4(n) = \frac{1}{2}\left\|\hat{\mathbf{g}}(n+1) - \hat{\mathbf{g}}(n)\right\|^2 + \theta_{\mathrm{SPF}}\, S\big(\hat{\mathbf{g}}(n+1)\big) + \lambda_{\mathrm{SPF}}\left[\hat{e}(n) - \left(1 - \alpha \exp\!\left(-\frac{e^2(n)}{2\sigma^2}\right)\right) e(n)\right]. $$
By using the Lagrange multiplier method (LMM), the updating equation of the SPF-MCC is obtained as
$$ \hat{\mathbf{g}}(n+1) = \hat{\mathbf{g}}(n) + \beta_{\mathrm{SPF}} \exp\!\left(-\frac{e^2(n)}{2\sigma^2}\right) e(n)\, \mathbf{x}(n) - \theta_{\mathrm{SPF}} \left(1 + \tau_1\right) e^{-\tau_1 \left|\hat{\mathbf{g}}(n)\right|} \operatorname{sgn}\big(\hat{\mathbf{g}}(n)\big). $$
Figure A1. Features of the soft parameter function with different $\tau_1$.
Herein, $\beta_{\mathrm{SPF}}$ is the step-size of the SPF-MCC algorithm, and $\theta_{\mathrm{SPF}}(1+\tau_1)\, e^{-\tau_1 |\hat{\mathbf{g}}(n)|} \operatorname{sgn}(\hat{\mathbf{g}}(n))$ is the desired zero-attraction term, which can be tuned through $\tau_1$ to exploit the sparsity of sparse channels.
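Under the assumption that the zero-attraction term is applied elementwise to the tap vector, one iteration of the SPF-MCC update above can be sketched as follows (variable names are illustrative):

```python
import numpy as np

def spf_mcc_update(g_hat, x, e, beta, theta, sigma, tau1=5.0):
    """One SPF-MCC-style iteration: an MCC gradient step plus the SPF zero
    attractor  theta * (1 + tau1) * exp(-tau1 * |g|) * sign(g), applied per tap."""
    kernel = np.exp(-e**2 / (2 * sigma**2))  # Gaussian correntropy weight
    za = theta * (1 + tau1) * np.exp(-tau1 * np.abs(g_hat)) * np.sign(g_hat)
    return g_hat + beta * kernel * e * x - za
```

Note that the attractor is strongest near zero, where $e^{-\tau_1|g|} \approx 1$, and decays for large taps, so for large $\tau_1$ it behaves like an $l_0$-norm zero attractor, consistent with Figure A1.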

References

  1. Chong, C.C.; Watanabe, F.; Inamura, H. Potential of UWB technology for the next generation wireless communications. In Proceedings of the 2006 IEEE Ninth International Symposium on Spread Spectrum Techniques and Applications, Manaus, Brazil, 28–31 August 2006; pp. 422–429. [Google Scholar]
  2. Cotter, S.F.; Rao, B.D. Sparse channel estimation via matching pursuit with application to equalization. IEEE Trans. Commun. 2002, 50, 374–377. [Google Scholar] [CrossRef]
  3. Sailes, K.S. Fundamentals of statistical signal processing: Estimation theory. Technometrics 1995, 2, 465–466. [Google Scholar]
  4. Widrow, B.; Stearns, S.D. Adaptive Signal Processing; Prentice Hall: Upper Saddle River, NJ, USA, 1985. [Google Scholar]
  5. Marzetta, T.L.; Hochwald, B.M. Fast transfer of channel state information in wireless systems. IEEE Trans. Signal Process. 2006, 54, 1268–1278. [Google Scholar] [CrossRef]
  6. Trussell, H.J.; Rouse, D.M. Reducing non-zero coefficients in FIR filter design using POCS. In Proceedings of the 13th European Signal Processing Conference, Antalya, Turkey, 4–8 September 2005; pp. 1–4. [Google Scholar]
  7. Tauboeck, G.; Hlawatsch, F.; Rauhut, H. Compressive estimation of doubly selective channels: Exploiting channel sparsity to improve spectral efficiency in multicarrier transmissions. IEEE J. Sel. Top. Signal Process. 2009, 4, 255–271. [Google Scholar] [CrossRef]
  8. Haykin, S.S.; Widrow, B. Least-Mean-Square Adaptive Filters; Wiley: Toronto, ON, Canada, 2005; pp. 335–443. [Google Scholar]
  9. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  10. Kalouptsidis, N.; Mileounis, G.; Babadi, B.; Tarkh, V. Adaptive algorithms for sparse system identification. Signal Process. 2011, 91, 1910–1919. [Google Scholar] [CrossRef]
  11. Chen, Y.; Gu, Y.; Hero, A.O. Sparse LMS for system identification. In Proceedings of the IEEE International Conference on Acoustic Speech and Signal Processing, Taipei, Taiwan, 19–24 April 2009; pp. 3125–3128. [Google Scholar]
  12. Gu, Y.; Jin, J.; Mei, S. L0 norm constraint LMS algorithms for sparse system identification. IEEE Signal Process. Lett. 2009, 16, 774–777. [Google Scholar]
  13. Gui, G.; Peng, W.; Adachi, F. Improved adaptive sparse channel estimation based on the least mean square algorithm. In Proceedings of the 2013 IEEE Wireless Communications and Networking Conference (WCNC), Shanghai, China, 7–10 April 2013; pp. 3105–3109. [Google Scholar]
  14. Taheri, O.; Vorobyov, S.A. Sparse channel estimation with lp-norm and reweighted l1-norm penalized least mean squares. In Proceedings of the IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP), Prague, Czech Republic, 22–27 May 2011; pp. 2864–2867. [Google Scholar]
  15. Li, Y.; Wang, Y.; Jiang, T. Sparse channel estimation based on a p-norm-like constrained least mean fourth algorithm. In Proceedings of the 2015 International Conference on Wireless Communications and Signal Processing (WCSP), Nanjing, China, 15–17 October 2015. [Google Scholar]
  16. Gui, G.; Adachi, F. Improved least mean square algorithm with application to adaptive sparse channel estimation. EURASIP J. Wirel. Commun. Netw. 2013, 2013. [Google Scholar] [CrossRef]
  17. Walach, E.; Widrow, B. The least mean fourth (LMF) adaptive algorithm and its family. IEEE Trans. Inf. Theory 1984, 30, 275–283. [Google Scholar] [CrossRef]
  18. Lim, S.J.; Harris, J.G. Combined LMS/F algorithm. Electron. Lett. 1997, 33, 467–468. [Google Scholar] [CrossRef]
  19. Chambers, J.A.; Tanrikulu, O.; Constantinides, A.G. Least mean mixed-norm adaptive filtering. Electron. Lett. 1994, 30, 1574–1575. [Google Scholar] [CrossRef]
  20. Zidouri, A. Convergence analysis of a mixed controlled L2-LP adaptive algorithm. EURASIP J. Adv. Signal Process. 2010, 2010. [Google Scholar] [CrossRef]
  21. Gui, G.; Adachi, F. Sparse least mean fourth algorithm for adaptive channel estimation in low signal-to-noise ratio region. Int. J. Commun. Syst. 2014, 27, 3147–3157. [Google Scholar] [CrossRef]
  22. Gui, G.; Mehbodniya, A.; Adachi, F. Sparse LMS/F algorithms with application to adaptive system identification. Wirel. Commun. Mob. Comput. 2015, 15, 1649–1658. [Google Scholar] [CrossRef]
  23. Li, Y.; Wang, Y.; Jiang, T. Norm-adaption penalized least mean square/fourth algorithm for sparse channel estimation. Signal Process. 2016, 128, 243–251. [Google Scholar] [CrossRef]
  24. Li, Y.; Wang, Y.; Albu, F. Sparse channel estimation based on a reweighted least-mean mixed-norm adaptive filter algorithm. In Proceedings of the 2016 24th European Signal Processing Conference (EUSIPCO), Budapest, Hungary, 29 August–2 September 2016; pp. 2380–2384. [Google Scholar]
  25. Tanrikulu, O.; Chambers, J.A. Convergence and steady-state properties of the least-mean mixed-norm (LMMN) adaptive algorithm. IEE Proc. Vis. Image Signal Process. 1996, 143, 137–142. [Google Scholar] [CrossRef]
  26. Li, Y.; Jin, Z.; Wang, Y.; Yang, R. A robust sparse adaptive filtering algorithm with a correntropy induced metric constraint for broadband multi-path channel estimation. Entropy 2016, 18, 10. [Google Scholar] [CrossRef]
  27. Wang, Y.; Li, Y.; Yang, R. Sparse adaptive channel estimation based on mixed controlled l2 and lp-norm error criterion. J. Frankl. Inst. 2017, in press. [Google Scholar] [CrossRef]
  28. Liu, W.; Pokharel, P.; Principe, J.C. Correntropy: Properties and applications in non-Gaussian signal processing. IEEE Trans. Signal Process. 2007, 55, 5286–5298. [Google Scholar] [CrossRef]
  29. Singh, A.; Principe, J.C. Using correntropy as cost function in adaptive filters. In Proceedings of the 2009 International Joint Conference on Neural Networks, Atlanta, GA, USA, 14–19 June 2009; pp. 2950–2955. [Google Scholar]
  30. Erdogmus, D.; Principe, J. Generalized information potential criterion for adaptive system training. IEEE Trans. Neural Netw. 2002, 13, 1035–1044. [Google Scholar] [CrossRef] [PubMed]
  31. Chen, B.; Xing, L.; Zhao, H.; Zheng, N.; Principe, J.C. Generalized correntropy for robust adaptive filtering. IEEE Trans. Signal Process. 2016, 64, 3376–3387. [Google Scholar] [CrossRef]
  32. Chen, B.; Wang, J.; Zhao, H.; Zheng, N.; Principe, J.C. Convergence of a fixed-point algorithm under maximum correntropy criterion. IEEE Signal Process. Lett. 2015, 22, 1723–1727. [Google Scholar] [CrossRef]
  33. Ma, W.; Qu, H.; Gui, G.; Xu, L.; Zhao, J.; Chen, B. Maximum correntropy criterion based sparse adaptive filtering algorithms for robust channel estimation under non-Gaussian environments. J. Frankl. Inst. 2015, 352, 2708–2727. [Google Scholar] [CrossRef]
  34. Li, Y.; Wang, Y.; Yang, R.; Albu, F. A soft parameter function penalized normalized maximum correntropy criterion algorithm for sparse system identification. Entropy 2017, 19, 45. [Google Scholar] [CrossRef]
  35. Haddad, D.B.; Petraglia, M.R.; Petraglia, A. A unified approach for sparsity-aware and maximum correntropy adaptive filters. In Proceedings of the 24th European Signal Processing Conference (EUSIPCO’16), Budapest, Hungary, 29 August–2 September 2016; pp. 170–174. [Google Scholar]
  36. Everett, H. Generalized lagrange multiplier method for solving problems of optimum allocation of resources. Oper. Res. 1963, 11, 399–417. [Google Scholar] [CrossRef]
  37. Wu, F.Y.; Tong, F. Non-uniform norm constraint LMS algorithm for sparse system identification. IEEE Commun. Lett. 2013, 17, 385–388. [Google Scholar] [CrossRef]
  38. Wang, Y.; Li, Y. Sparse multipath channel estimation using norm combination constrained set-membership NLMS algorithms. Wirel. Commun. Mob. Comput. 2017, 2017, 8140702. [Google Scholar] [CrossRef]
  39. IEEE 802.15.4a Channel Model. Available online: https://heim.ifi.uio.no/haakoh/uwb/links/ (accessed on 22 August 2017).
Figure 1. Structure diagram of sparse channel estimation.
Figure 2. Zero attraction ability of the zero attracting term produced by soft parameter function (SPF), l 0 -norm and correntropy-induced metric (CIM) penalties.
Figure 3. Effects of θ GC on the mean square deviation (MSD) of the proposed GC-MCC algorithm.
Figure 4. Effects of θ RGC on the MSD of the proposed RGC-MCC algorithm.
Figure 5. MSD performance of the proposed GC-MCC and RGC-MCC algorithms with different β .
Figure 6. Effects of ε 1 on the MSD of the proposed GC-MCC and RGC-MCC algorithms.
Figure 7. Estimation behavior of the proposed GC-MCC and RGC-MCC algorithms with different SNR.
Figure 8. Convergence of the proposed GC-MCC and RGC-MCC algorithms.
Figure 9. MSD performance of the proposed GC-MCC and RGC-MCC algorithms for K = 1.
Figure 10. MSD performance of the proposed GC-MCC and RGC-MCC algorithms for K = 3.
Figure 11. MSD performance of the proposed GC-MCC and RGC-MCC algorithms for K = 5.
Figure 12. Estimation behavior of the proposed GC-MCC and RGC-MCC algorithms for estimating an IEEE 802.15.4a channel in CM1 mode.
Figure 13. Estimation behavior of the proposed GC-MCC and RGC-MCC algorithms for an echo channel.
