Convergence of a Fixed-Point Minimum Error Entropy Algorithm

1 School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027, China
2 School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an 710049, China
3 Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA
* Author to whom correspondence should be addressed.
Entropy 2015, 17(8), 5549-5560; https://doi.org/10.3390/e17085549
Submission received: 3 May 2015 / Revised: 17 July 2015 / Accepted: 28 July 2015 / Published: 3 August 2015

Abstract

The minimum error entropy (MEE) criterion is an important learning criterion in information theoretic learning (ITL). However, the MEE solution cannot be obtained in closed form even for a simple linear regression problem, and one usually has to search for it in an iterative manner. Fixed-point iteration is an efficient way to obtain the MEE solution. In this work, we study a fixed-point MEE algorithm for linear regression, focusing mainly on the convergence issue. We provide a sufficient condition (although a little loose) that guarantees the convergence of the fixed-point MEE algorithm. An illustrative example is also presented.

1. Introduction

In recent years, information theoretic measures, such as entropy and mutual information, have been widely applied in machine learning (so-called information theoretic learning (ITL) [1]) and signal processing [1,2]. A main reason for the success of ITL is that information theoretic quantities can capture higher-order statistics of the data and thus offer potentially significant performance improvements in machine learning applications [1]. Based on the Parzen window method [3], smooth and nonparametric information theoretic estimators can be applied directly to the data without imposing any a priori assumptions (say, the Gaussian assumption) on the underlying probability density functions (PDFs). In particular, Renyi's quadratic entropy estimator can be easily calculated by a double sum over samples [4,5,6,7]. In supervised learning, the entropy of the error serves as a measure of similarity and fits into a framework similar to that of the well-known mean square error (MSE) [1,2]. An adaptive system can be trained by minimizing the entropy of the error over the training dataset [4]. This learning criterion is called the minimum error entropy (MEE) criterion [1,2,8,9,10]. MEE may achieve much better performance than MSE, especially when the data are heavy-tailed or multimodal non-Gaussian [1,2,10].
However, the MEE solution cannot be obtained in closed form even when the system is a simple linear model such as a finite impulse response (FIR) filter. A practical approach is to search for the solution over the performance surface with an iterative algorithm. Usually, a simple gradient-based search algorithm is adopted. With a gradient-based learning algorithm, however, one has to select a proper learning rate (or step size) to ensure stability and to achieve a good tradeoff between misadjustment and convergence speed [4,5,6,7]. A more promising alternative is the fixed-point iterative algorithm, which is step-size free and often much faster than gradient-based methods [11]. Fixed-point algorithms have received considerable attention in machine learning and signal processing due to their low computational requirements and fast convergence [12,13,14,15,16,17].
Convergence is a key issue for any iterative learning algorithm. For gradient-based MEE algorithms, the convergence problem has already been studied and some theoretical results have been obtained [6,7]. For fixed-point MEE algorithms, however, there has so far been no study of convergence. The goal of this paper is to study the convergence of a fixed-point MEE algorithm and to provide a sufficient condition that ensures convergence to a unique solution (the fixed point). It is worth noting that the convergence of a fixed-point maximum correntropy criterion (MCC) algorithm has been studied in [18]. The remainder of the paper is organized as follows. In Section 2, we derive a fixed-point MEE algorithm. In Section 3, we prove a sufficient condition that guarantees its convergence. In Section 4, we present an illustrative example. Finally, Section 5 concludes the paper.

2. Fixed-Point MEE Algorithm

Consider a simple linear regression (filtering) case where the error signal is
$$e(i) = d(i) - y(i) = d(i) - W^T X(i)$$
with $d(i)$ being the desired value at time $i$, $y(i) = W^T X(i)$ the output of the linear model, $W = [w_1, w_2, \ldots, w_m]^T \in \mathbb{R}^m$ the weight vector, and $X(i) = [x_1(i), x_2(i), \ldots, x_m(i)]^T \in \mathbb{R}^m$ the input vector (i.e., the regressor). The goal is to find a weight vector such that the error signal is as small as possible. Under the MEE criterion, the optimal weight vector is obtained by minimizing the error entropy [1,2]. With Renyi's quadratic entropy, the MEE solution can be expressed as
$$W = \arg\min_{W \in \mathbb{R}^m}\left(-\log\int p_e^2(x)\,dx\right) = \arg\max_{W \in \mathbb{R}^m}\int p_e^2(x)\,dx$$
where $p_e(\cdot)$ denotes the PDF of the error signal. In ITL the quantity $\int p_e^2(x)\,dx$ is also called the quadratic information potential (QIP) [1]. In a practical situation, however, the error distribution is usually unknown, and one has to estimate it from the error samples $\{e(1), e(2), \ldots, e(N)\}$, where $N$ denotes the sample number. Based on the Parzen window approach [3], the estimated PDF takes the form
$$\hat{p}_e(x) = \frac{1}{N}\sum_{i=1}^{N}\kappa\bigl(x - e(i)\bigr)$$
where $\kappa(\cdot)$ stands for a kernel function (not necessarily a Mercer kernel), satisfying $\kappa(x) \ge 0$ and $\int \kappa(x)\,dx = 1$. Unless otherwise mentioned, the kernel function is selected as the Gaussian kernel, given by
$$\kappa_\sigma(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{x^2}{2\sigma^2}\right)$$
where $\sigma$ denotes the kernel bandwidth. With the Gaussian kernel, the QIP can be simply estimated as [1]
$$\int \hat{p}_e^2(x)\,dx = \int\left(\frac{1}{N}\sum_{i=1}^{N}\kappa_\sigma\bigl(x - e(i)\bigr)\right)^2 dx = \frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\kappa_{\sqrt{2}\sigma}\bigl(e(i) - e(j)\bigr)$$
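For concreteness, the following minimal NumPy sketch (our own illustration, not code from the paper) evaluates this double-sum QIP estimate for a vector of error samples; the names err and sigma are hypothetical.

import numpy as np

def gaussian_kernel(u, bw):
    # Gaussian kernel with bandwidth bw
    return np.exp(-u**2 / (2.0 * bw**2)) / (bw * np.sqrt(2.0 * np.pi))

def qip_estimate(err, sigma):
    # (1/N^2) * sum_i sum_j kappa_{sqrt(2)*sigma}(e(i) - e(j)),
    # i.e., the double sum over all error pairs
    diff = err[:, None] - err[None, :]
    return gaussian_kernel(diff, np.sqrt(2.0) * sigma).mean()

# Example usage (hypothetical data): err = d - X @ W for the current weights W
# qip = qip_estimate(err, sigma=1.0)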
Therefore, in practical situations, the MEE solution of (2) becomes
$$W = \arg\max_{W\in\mathbb{R}^m}\;\frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)$$
Unfortunately, there is no closed-form solution of (6). One can apply a gradient-based iterative algorithm to search for the solution, starting from an initial point. Below we derive a fixed-point iterative algorithm, which is, in general, much faster than a gradient-based method (although a gradient method can be viewed as a special case of fixed-point methods, it involves a step-size parameter). Let us take the following first-order derivative:
$$\begin{aligned}
\frac{\partial}{\partial W}\left[\frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\right]
&= \frac{1}{2\sigma^2 N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\bigl(e(i)-e(j)\bigr)\bigl[X(i)-X(j)\bigr]\\
&= \frac{1}{2\sigma^2 N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\bigl(d(i)-d(j)\bigr)\bigl[X(i)-X(j)\bigr]\\
&\quad - \frac{1}{2\sigma^2 N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\bigl[X(i)-X(j)\bigr]\bigl[X(i)-X(j)\bigr]^T W\\
&= \frac{1}{2\sigma^2}\left\{P_{dX}^{MEE} - R_{XX}^{MEE} W\right\}
\end{aligned}$$
where
$$\begin{cases}
R_{XX}^{MEE} = \dfrac{1}{N^2}\displaystyle\sum_{i=1}^{N}\sum_{j=1}^{N}\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\bigl[X(i)-X(j)\bigr]\bigl[X(i)-X(j)\bigr]^T\\[2ex]
P_{dX}^{MEE} = \dfrac{1}{N^2}\displaystyle\sum_{i=1}^{N}\sum_{j=1}^{N}\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\bigl(d(i)-d(j)\bigr)\bigl[X(i)-X(j)\bigr]
\end{cases}$$
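As an illustration of how these two quantities can be formed in practice, here is a minimal NumPy sketch (our own, with hypothetical names X, d, W); it builds all pairwise differences explicitly, which costs O(N^2 m^2) time and memory.

import numpy as np

def mee_statistics(X, d, W, sigma):
    # X: (N, m) inputs, d: (N,) desired responses, W: (m,) current weights.
    # Returns the weighted pair matrix R_XX^MEE and vector P_dX^MEE.
    e = d - X @ W                                  # error samples e(i)
    de = e[:, None] - e[None, :]                   # e(i) - e(j)
    dd = d[:, None] - d[None, :]                   # d(i) - d(j)
    dX = X[:, None, :] - X[None, :, :]             # X(i) - X(j), shape (N, N, m)
    bw = np.sqrt(2.0) * sigma                      # Gaussian kernel bandwidth sqrt(2)*sigma
    k = np.exp(-de**2 / (2.0 * bw**2)) / (bw * np.sqrt(2.0 * np.pi))
    N = len(d)
    R = np.einsum('ij,ijk,ijl->kl', k, dX, dX) / N**2
    P = np.einsum('ij,ij,ijk->k', k, dd, dX) / N**2
    return R, P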
Let $\frac{\partial}{\partial W}\left[\frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\right] = 0$, and assume that the matrix $R_{XX}^{MEE}$ is invertible. Then, we obtain the following solution [15]:
$$W = \left(R_{XX}^{MEE}\right)^{-1} P_{dX}^{MEE}$$
The above solution is, in form, very similar to the well-known Wiener solution [19]. However, it is not a closed-form solution, since both the matrix $R_{XX}^{MEE}$ and the vector $P_{dX}^{MEE}$ depend on the weight vector $W$ (note that $e(i)$ depends on $W$). Therefore, (9) is actually a fixed-point equation, which can also be expressed as $W = f(W)$, where
$$f(W) = \left(R_{XX}^{MEE}\right)^{-1} P_{dX}^{MEE}$$
The solution (fixed point) of the equation $W = f(W)$ can be found by the following iterative fixed-point algorithm:
$$W_{k+1} = f(W_k)$$
where $W_k$ denotes the estimated weight vector at iteration $k$. This algorithm is called the fixed-point MEE algorithm [15]. An online fixed-point MEE algorithm was also derived in [15]. In the next section, we prove a sufficient condition under which algorithm (11) is guaranteed to converge to a unique fixed point.
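A minimal sketch of this iteration is given below (our own illustration, reusing the mee_statistics helper from the previous sketch; the tolerance and iteration cap are our choices, not values from the paper).

import numpy as np

def fixed_point_mee(X, d, W0, sigma, tol=1e-6, max_iter=500):
    # Iterate W_{k+1} = (R_XX^MEE)^{-1} P_dX^MEE until the relative change
    # in the weights falls below tol, or max_iter is reached.
    W = np.atleast_1d(np.asarray(W0, dtype=float))
    for _ in range(max_iter):
        R, P = mee_statistics(X, d, W, sigma)      # defined in the sketch above
        W_new = np.linalg.solve(R, P)              # f(W) = (R_XX^MEE)^{-1} P_dX^MEE
        if np.linalg.norm(W_new - W, 1) <= tol * np.linalg.norm(W, 1):
            return W_new
        W = W_new
    return W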

3. Convergence of the Fixed-Point MEE

The convergence of a fixed-point algorithm can be proved by the well-known contraction mapping theorem (also known as the Banach fixed-point theorem) [11]. According to the contraction mapping theorem, the convergence of the fixed-point MEE algorithm (11) is guaranteed if there exist $\beta > 0$ and $0 < \alpha < 1$ such that the initial weight vector satisfies $\|W_0\|_p \le \beta$ and, for all $W \in \{W \in \mathbb{R}^m : \|W\|_p \le \beta\}$, it holds that
$$\begin{cases}\|f(W)\|_p \le \beta\\[1ex] \|\nabla_W f(W)\|_p = \left\|\dfrac{\partial}{\partial W^T} f(W)\right\|_p \le \alpha\end{cases}$$
where $\|\cdot\|_p$ denotes the $l_p$-norm of a vector or the corresponding induced norm of a matrix, defined by $\|A\|_p = \max_{\|X\|_p \ne 0} \|AX\|_p / \|X\|_p$ with $p \ge 1$, $A \in \mathbb{R}^{m \times m}$, $X \in \mathbb{R}^{m \times 1}$, and $\nabla_W f(W)$ denotes the $m \times m$ Jacobian matrix of $f(W)$ with respect to $W$, given by
$$\nabla_W f(W) = \left[\frac{\partial}{\partial w_1} f(W)\;\;\; \frac{\partial}{\partial w_2} f(W)\;\;\; \cdots\;\;\; \frac{\partial}{\partial w_m} f(W)\right]$$
where
$$\begin{aligned}
\frac{\partial}{\partial w_s} f(W) &= \frac{\partial}{\partial w_s}\left(\left[R_{XX}^{MEE}\right]^{-1} P_{dX}^{MEE}\right)\\
&= -\left[R_{XX}^{MEE}\right]^{-1}\left(\frac{\partial}{\partial w_s} R_{XX}^{MEE}\right)\left[R_{XX}^{MEE}\right]^{-1} P_{dX}^{MEE} + \left[R_{XX}^{MEE}\right]^{-1}\left(\frac{\partial}{\partial w_s} P_{dX}^{MEE}\right)\\
&= -\left[R_{XX}^{MEE}\right]^{-1}\left(\frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{\partial}{\partial w_s}\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\bigl[X(i)-X(j)\bigr]\bigl[X(i)-X(j)\bigr]^T\right) f(W)\\
&\quad + \left[R_{XX}^{MEE}\right]^{-1}\left(\frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{\partial}{\partial w_s}\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\bigl[d(i)-d(j)\bigr]\bigl[X(i)-X(j)\bigr]\right)\\
&= -\left[R_{XX}^{MEE}\right]^{-1}\left(\frac{1}{2N^2\sigma^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\bigl(e(i)-e(j)\bigr)\bigl(x_s(i)-x_s(j)\bigr)\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\bigl[X(i)-X(j)\bigr]\bigl[X(i)-X(j)\bigr]^T\right) f(W)\\
&\quad + \left[R_{XX}^{MEE}\right]^{-1}\left(\frac{1}{2N^2\sigma^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\bigl(e(i)-e(j)\bigr)\bigl(x_s(i)-x_s(j)\bigr)\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\bigl[d(i)-d(j)\bigr]\bigl[X(i)-X(j)\bigr]\right)
\end{aligned}$$
To obtain a sufficient condition to guarantee the convergence of the fixed-point MEE algorithm (11), we prove two theorems below.
Theorem 1. If
$$\beta > \xi = \frac{\sqrt{m}\displaystyle\sum_{i=1}^{N}\sum_{j=1}^{N}|d(i)-d(j)|\times\|X(i)-X(j)\|_1}{\lambda_{\min}\left[\displaystyle\sum_{i=1}^{N}\sum_{j=1}^{N}\bigl[X(i)-X(j)\bigr]\bigl[X(i)-X(j)\bigr]^T\right]}$$
and $\sigma \ge \sigma^*$, where $\sigma^*$ is the solution of the equation $\varphi(\sigma) = \beta$, with
$$\varphi(\sigma) = \frac{\sqrt{m}\displaystyle\sum_{i=1}^{N}\sum_{j=1}^{N}|d(i)-d(j)|\times\|X(i)-X(j)\|_1}{\lambda_{\min}\left[\displaystyle\sum_{i=1}^{N}\sum_{j=1}^{N}\exp\left(-\dfrac{\bigl(\beta\|X(i)-X(j)\|_1+|d(i)-d(j)|\bigr)^2}{4\sigma^2}\right)\bigl[X(i)-X(j)\bigr]\bigl[X(i)-X(j)\bigr]^T\right]},\qquad \sigma\in(0,\infty)$$
then $\|f(W)\|_1 \le \beta$ for all $W \in \{W \in \mathbb{R}^m : \|W\|_1 \le \beta\}$.
Proof. The induced matrix norm is compatible with the corresponding vector $l_p$-norm, hence
$$\|f(W)\|_1 = \left\|\left[R_{XX}^{MEE}\right]^{-1} P_{dX}^{MEE}\right\|_1 \le \left\|\left[R_{XX}^{MEE}\right]^{-1}\right\|_1 \left\|P_{dX}^{MEE}\right\|_1$$
where $\left\|\left[R_{XX}^{MEE}\right]^{-1}\right\|_1$ is the 1-norm (also referred to as the column-sum norm) of the inverse matrix $\left[R_{XX}^{MEE}\right]^{-1}$, i.e., the maximum absolute column sum of that matrix. By matrix theory, the following inequality holds:
$$\left\|\left[R_{XX}^{MEE}\right]^{-1}\right\|_1 \le \sqrt{m}\left\|\left[R_{XX}^{MEE}\right]^{-1}\right\|_2 = \sqrt{m}\,\lambda_{\max}\left[\left[R_{XX}^{MEE}\right]^{-1}\right]$$
where $\left\|\left[R_{XX}^{MEE}\right]^{-1}\right\|_2$ is the 2-norm (also referred to as the spectral norm) of $\left[R_{XX}^{MEE}\right]^{-1}$, which, since the matrix is symmetric and positive definite, equals its maximum eigenvalue. Further, we have
$$\begin{aligned}
\lambda_{\max}\left[\left(R_{XX}^{MEE}\right)^{-1}\right] &= \frac{1}{\lambda_{\min}\left[R_{XX}^{MEE}\right]} = \frac{N^2}{\lambda_{\min}\left[\displaystyle\sum_{i=1}^{N}\sum_{j=1}^{N}\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\bigl[X(i)-X(j)\bigr]\bigl[X(i)-X(j)\bigr]^T\right]}\\
&\overset{(a)}{\le} \frac{N^2}{\lambda_{\min}\left[\displaystyle\sum_{i=1}^{N}\sum_{j=1}^{N}\kappa_{\sqrt{2}\sigma}\bigl(\beta\|X(i)-X(j)\|_1+|d(i)-d(j)|\bigr)\bigl[X(i)-X(j)\bigr]\bigl[X(i)-X(j)\bigr]^T\right]}
\end{aligned}$$
where (a) comes from
$$\begin{aligned}
|e(i)-e(j)| &= \left|d(i)-d(j) - W^T\bigl(X(i)-X(j)\bigr)\right|\\
&\le \|W\|_1\,\|X(i)-X(j)\|_1 + |d(i)-d(j)|\\
&\le \beta\,\|X(i)-X(j)\|_1 + |d(i)-d(j)|
\end{aligned}$$
In addition, it holds that
$$\begin{aligned}
\left\|P_{dX}^{MEE}\right\|_1 &= \frac{1}{N^2}\left\|\sum_{i=1}^{N}\sum_{j=1}^{N}\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\bigl[d(i)-d(j)\bigr]\bigl[X(i)-X(j)\bigr]\right\|_1\\
&\overset{(b)}{\le} \frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\left\|\bigl[d(i)-d(j)\bigr]\bigl[X(i)-X(j)\bigr]\right\|_1\\
&\overset{(c)}{\le} \frac{1}{2\sigma N^2\sqrt{\pi}}\sum_{i=1}^{N}\sum_{j=1}^{N}|d(i)-d(j)|\times\|X(i)-X(j)\|_1
\end{aligned}$$
where (b) follows from the convexity of the vector $l_1$-norm, and (c) is because $\kappa_{\sqrt{2}\sigma}(x) \le \frac{1}{2\sigma\sqrt{\pi}}$ for any $x$. Combining (16)–(18) and (20), we derive
$$\begin{aligned}
\|f(W)\|_1 &\le \frac{\dfrac{1}{2\sigma}\sqrt{\dfrac{m}{\pi}}\displaystyle\sum_{i=1}^{N}\sum_{j=1}^{N}|d(i)-d(j)|\times\|X(i)-X(j)\|_1}{\lambda_{\min}\left[\displaystyle\sum_{i=1}^{N}\sum_{j=1}^{N}\kappa_{\sqrt{2}\sigma}\bigl(\beta\|X(i)-X(j)\|_1+|d(i)-d(j)|\bigr)\bigl[X(i)-X(j)\bigr]\bigl[X(i)-X(j)\bigr]^T\right]}\\
&= \frac{\sqrt{m}\displaystyle\sum_{i=1}^{N}\sum_{j=1}^{N}|d(i)-d(j)|\times\|X(i)-X(j)\|_1}{\lambda_{\min}\left[\displaystyle\sum_{i=1}^{N}\sum_{j=1}^{N}\exp\left(-\dfrac{\bigl(\beta\|X(i)-X(j)\|_1+|d(i)-d(j)|\bigr)^2}{4\sigma^2}\right)\bigl[X(i)-X(j)\bigr]\bigl[X(i)-X(j)\bigr]^T\right]} = \varphi(\sigma)
\end{aligned}$$
Clearly, the function $\varphi(\sigma)$ is continuous and monotonically decreasing in $\sigma$ over $(0, \infty)$, satisfying $\lim_{\sigma \to 0^+} \varphi(\sigma) = \infty$ and $\lim_{\sigma \to \infty} \varphi(\sigma) = \xi$. Therefore, if $\beta > \xi$, the equation $\varphi(\sigma) = \beta$ has a unique solution $\sigma^*$ over $(0, \infty)$, and if $\sigma \ge \sigma^*$, we have $\varphi(\sigma) \le \beta$, which completes the proof. □
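Theorem 1 can also be checked numerically. The sketch below (our own illustration; function and variable names are hypothetical) computes $\xi$ and $\varphi(\sigma)$ for a given data set and bound $\beta$, and solves $\varphi(\sigma) = \beta$ by simple bisection, exploiting the fact that $\varphi$ is continuous and monotonically decreasing; the bracket [1e-3, 1e3] is an arbitrary choice and may need widening.

import numpy as np

def xi_and_phi(X, d, beta):
    # xi and phi(sigma) of Theorem 1 for data X: (N, m), d: (N,).
    N, m = X.shape
    dd = np.abs(d[:, None] - d[None, :])                 # |d(i) - d(j)|
    dX = X[:, None, :] - X[None, :, :]                   # X(i) - X(j)
    l1 = np.abs(dX).sum(axis=2)                          # ||X(i) - X(j)||_1
    outer = np.einsum('ijk,ijl->ijkl', dX, dX)           # [X(i)-X(j)][X(i)-X(j)]^T
    numer = np.sqrt(m) * (dd * l1).sum()
    xi = numer / np.linalg.eigvalsh(outer.sum(axis=(0, 1)))[0]   # smallest eigenvalue
    def phi(sigma):
        w = np.exp(-(beta * l1 + dd)**2 / (4.0 * sigma**2))
        lam_min = np.linalg.eigvalsh(np.einsum('ij,ijkl->kl', w, outer))[0]
        return numer / lam_min
    return xi, phi

def bisect_decreasing(f, target, lo=1e-3, hi=1e3, iters=200):
    # Bisection for a continuous, strictly decreasing f: returns sigma with f(sigma) ~ target.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > target else (lo, mid)
    return 0.5 * (lo + hi)

# Example usage (hypothetical data X, d):
# xi, phi = xi_and_phi(X, d, beta=3.0)
# sigma_star = bisect_decreasing(phi, target=3.0)   # solves phi(sigma) = beta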
Theorem 2. If
$$\beta > \xi = \frac{\sqrt{m}\displaystyle\sum_{i=1}^{N}\sum_{j=1}^{N}|d(i)-d(j)|\times\|X(i)-X(j)\|_1}{\lambda_{\min}\left[\displaystyle\sum_{i=1}^{N}\sum_{j=1}^{N}\bigl[X(i)-X(j)\bigr]\bigl[X(i)-X(j)\bigr]^T\right]},$$
and $\sigma \ge \max\{\sigma^*, \sigma^\dagger\}$, where $\sigma^*$ is the solution of the equation $\varphi(\sigma) = \beta$, and $\sigma^\dagger$ is the solution of the equation $\psi(\sigma) = \alpha$ ($0 < \alpha < 1$), where
$$\psi(\sigma) = \frac{\gamma\sqrt{m}}{2\sigma^2\,\lambda_{\min}\left[\displaystyle\sum_{i=1}^{N}\sum_{j=1}^{N}\exp\left(-\dfrac{\bigl(\beta\|X(i)-X(j)\|_1+|d(i)-d(j)|\bigr)^2}{4\sigma^2}\right)\bigl[X(i)-X(j)\bigr]\bigl[X(i)-X(j)\bigr]^T\right]},\qquad \sigma\in(0,\infty)$$
in which
$$\gamma = \sum_{i=1}^{N}\sum_{j=1}^{N}\bigl(\beta\|X(i)-X(j)\|_1+|d(i)-d(j)|\bigr)\|X(i)-X(j)\|_1\Bigl(\beta\left\|\bigl[X(i)-X(j)\bigr]\bigl[X(i)-X(j)\bigr]^T\right\|_1 + |d(i)-d(j)|\times\|X(i)-X(j)\|_1\Bigr)$$
then it holds that $\|f(W)\|_1 \le \beta$ and $\|\nabla_W f(W)\|_1 \le \alpha$ for all $W \in \{W \in \mathbb{R}^m : \|W\|_1 \le \beta\}$.
Proof. By Theorem 1, we have $\|f(W)\|_1 \le \beta$. To prove $\|\nabla_W f(W)\|_1 \le \alpha$, it suffices to show that $\left\|\frac{\partial}{\partial w_s} f(W)\right\|_1 \le \alpha$ for every $s$, since the induced 1-norm of a matrix equals its maximum absolute column sum. By (14), we have
$$\begin{aligned}
\left\|\frac{\partial}{\partial w_s} f(W)\right\|_1 &= \left\| -\left[R_{XX}^{MEE}\right]^{-1}\left(\frac{1}{2N^2\sigma^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\bigl(e(i)-e(j)\bigr)\bigl(x_s(i)-x_s(j)\bigr)\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\bigl[X(i)-X(j)\bigr]\bigl[X(i)-X(j)\bigr]^T\right) f(W) \right.\\
&\qquad \left. + \left[R_{XX}^{MEE}\right]^{-1}\left(\frac{1}{2N^2\sigma^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\bigl(e(i)-e(j)\bigr)\bigl(x_s(i)-x_s(j)\bigr)\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\bigl[d(i)-d(j)\bigr]\bigl[X(i)-X(j)\bigr]\right)\right\|_1\\
&\le \left\|\left[R_{XX}^{MEE}\right]^{-1}\left(\frac{1}{2N^2\sigma^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\bigl(e(i)-e(j)\bigr)\bigl(x_s(i)-x_s(j)\bigr)\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\bigl[X(i)-X(j)\bigr]\bigl[X(i)-X(j)\bigr]^T\right) f(W)\right\|_1\\
&\quad + \left\|\left[R_{XX}^{MEE}\right]^{-1}\left(\frac{1}{2N^2\sigma^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\bigl(e(i)-e(j)\bigr)\bigl(x_s(i)-x_s(j)\bigr)\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\bigl[d(i)-d(j)\bigr]\bigl[X(i)-X(j)\bigr]\right)\right\|_1
\end{aligned}$$
It is easy to derive
$$\begin{aligned}
&\left\|\left[R_{XX}^{MEE}\right]^{-1}\left(\frac{1}{2N^2\sigma^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\bigl(e(i)-e(j)\bigr)\bigl(x_s(i)-x_s(j)\bigr)\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\bigl[X(i)-X(j)\bigr]\bigl[X(i)-X(j)\bigr]^T\right) f(W)\right\|_1\\
&\quad\le \frac{1}{2N^2\sigma^2}\left\|\left[R_{XX}^{MEE}\right]^{-1}\right\|_1\left\|\sum_{i=1}^{N}\sum_{j=1}^{N}\bigl(e(i)-e(j)\bigr)\bigl(x_s(i)-x_s(j)\bigr)\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\bigl[X(i)-X(j)\bigr]\bigl[X(i)-X(j)\bigr]^T\right\|_1 \bigl\|f(W)\bigr\|_1\\
&\quad\overset{(d)}{\le} \frac{\beta}{2N^2\sigma^2}\left\|\left[R_{XX}^{MEE}\right]^{-1}\right\|_1\left\{\sum_{i=1}^{N}\sum_{j=1}^{N}\bigl|\bigl(e(i)-e(j)\bigr)\bigl(x_s(i)-x_s(j)\bigr)\bigr|\,\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\left\|\bigl[X(i)-X(j)\bigr]\bigl[X(i)-X(j)\bigr]^T\right\|_1\right\}\\
&\quad\overset{(e)}{\le} \frac{\beta}{4N^2\sigma^3\sqrt{\pi}}\left\|\left[R_{XX}^{MEE}\right]^{-1}\right\|_1\left\{\sum_{i=1}^{N}\sum_{j=1}^{N}\bigl(\beta\|X(i)-X(j)\|_1+|d(i)-d(j)|\bigr)\|X(i)-X(j)\|_1\left\|\bigl[X(i)-X(j)\bigr]\bigl[X(i)-X(j)\bigr]^T\right\|_1\right\}
\end{aligned}$$
where (d) follows from the convexity of the vector $l_1$-norm and $\|f(W)\|_1 \le \beta$, and (e) is due to the facts that $\left|\bigl(e(i)-e(j)\bigr)\bigl(x_s(i)-x_s(j)\bigr)\right| \le \bigl(\beta\|X(i)-X(j)\|_1 + |d(i)-d(j)|\bigr)\|X(i)-X(j)\|_1$ and that $\kappa_{\sqrt{2}\sigma}(x) \le \frac{1}{2\sigma\sqrt{\pi}}$ for any $x$. In a similar way, one can derive
$$\begin{aligned}
&\left\|\left[R_{XX}^{MEE}\right]^{-1}\left(\frac{1}{2N^2\sigma^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\bigl(e(i)-e(j)\bigr)\bigl(x_s(i)-x_s(j)\bigr)\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\bigl[d(i)-d(j)\bigr]\bigl[X(i)-X(j)\bigr]\right)\right\|_1\\
&\quad\le \frac{1}{2N^2\sigma^2}\left\|\left[R_{XX}^{MEE}\right]^{-1}\right\|_1\left\{\sum_{i=1}^{N}\sum_{j=1}^{N}\bigl|\bigl(e(i)-e(j)\bigr)\bigl(x_s(i)-x_s(j)\bigr)\bigr|\,\kappa_{\sqrt{2}\sigma}\bigl(e(i)-e(j)\bigr)\bigl|d(i)-d(j)\bigr|\,\bigl\|X(i)-X(j)\bigr\|_1\right\}\\
&\quad\le \frac{1}{4N^2\sigma^3\sqrt{\pi}}\left\|\left[R_{XX}^{MEE}\right]^{-1}\right\|_1\left\{\sum_{i=1}^{N}\sum_{j=1}^{N}\bigl(\beta\|X(i)-X(j)\|_1+|d(i)-d(j)|\bigr)\times|d(i)-d(j)|\times\|X(i)-X(j)\|_1^2\right\}
\end{aligned}$$
Then, combining (24)–(26), (17) and (18), we have
$$\begin{aligned}
\left\|\frac{\partial}{\partial w_s} f(W)\right\|_1 &\le \frac{\beta}{4N^2\sigma^3\sqrt{\pi}}\left\|\left[R_{XX}^{MEE}\right]^{-1}\right\|_1\left\{\sum_{i=1}^{N}\sum_{j=1}^{N}\bigl(\beta\|X(i)-X(j)\|_1+|d(i)-d(j)|\bigr)\|X(i)-X(j)\|_1\left\|\bigl[X(i)-X(j)\bigr]\bigl[X(i)-X(j)\bigr]^T\right\|_1\right\}\\
&\quad + \frac{1}{4N^2\sigma^3\sqrt{\pi}}\left\|\left[R_{XX}^{MEE}\right]^{-1}\right\|_1\left\{\sum_{i=1}^{N}\sum_{j=1}^{N}\bigl(\beta\|X(i)-X(j)\|_1+|d(i)-d(j)|\bigr)\times|d(i)-d(j)|\times\|X(i)-X(j)\|_1^2\right\}\\
&\le \frac{\gamma\sqrt{m/\pi}}{4\sigma^3\,\lambda_{\min}\left[\displaystyle\sum_{i=1}^{N}\sum_{j=1}^{N}\kappa_{\sqrt{2}\sigma}\bigl(\beta\|X(i)-X(j)\|_1+|d(i)-d(j)|\bigr)\bigl[X(i)-X(j)\bigr]\bigl[X(i)-X(j)\bigr]^T\right]}\\
&= \frac{\gamma\sqrt{m}}{2\sigma^2\,\lambda_{\min}\left[\displaystyle\sum_{i=1}^{N}\sum_{j=1}^{N}\exp\left(-\dfrac{\bigl(\beta\|X(i)-X(j)\|_1+|d(i)-d(j)|\bigr)^2}{4\sigma^2}\right)\bigl[X(i)-X(j)\bigr]\bigl[X(i)-X(j)\bigr]^T\right]} = \psi(\sigma)
\end{aligned}$$
Obviously, $\psi(\sigma)$ is also a continuous and monotonically decreasing function of $\sigma$ over $(0, \infty)$, satisfying $\lim_{\sigma \to 0^+} \psi(\sigma) = \infty$ and $\lim_{\sigma \to \infty} \psi(\sigma) = 0$. Therefore, given $0 < \alpha < 1$, the equation $\psi(\sigma) = \alpha$ has a unique solution $\sigma^\dagger$ over $(0, \infty)$, and if $\sigma \ge \sigma^\dagger$, we have $\psi(\sigma) \le \alpha$. This completes the proof. □
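Similarly, $\gamma$ and $\psi(\sigma)$ of Theorem 2 can be evaluated numerically. The sketch below is our own illustration (hypothetical names, same conventions as the previous sketch); $\sigma^\dagger$ can then be found with the same bisection helper, after which the bandwidth is chosen as $\sigma \ge \max\{\sigma^*, \sigma^\dagger\}$.

import numpy as np

def gamma_and_psi(X, d, beta):
    # gamma and psi(sigma) of Theorem 2 for data X: (N, m), d: (N,).
    N, m = X.shape
    dd = np.abs(d[:, None] - d[None, :])                 # |d(i) - d(j)|
    dX = X[:, None, :] - X[None, :, :]                   # X(i) - X(j)
    l1 = np.abs(dX).sum(axis=2)                          # ||X(i) - X(j)||_1
    outer = np.einsum('ijk,ijl->ijkl', dX, dX)           # [X(i)-X(j)][X(i)-X(j)]^T
    outer_l1 = np.abs(outer).sum(axis=2).max(axis=2)     # induced 1-norm of each outer product
    gam = ((beta * l1 + dd) * l1 * (beta * outer_l1 + dd * l1)).sum()
    def psi(sigma):
        w = np.exp(-(beta * l1 + dd)**2 / (4.0 * sigma**2))
        lam_min = np.linalg.eigvalsh(np.einsum('ij,ijkl->kl', w, outer))[0]
        return gam * np.sqrt(m) / (2.0 * sigma**2 * lam_min)
    return gam, psi

# Example usage (hypothetical data X, d and a chosen alpha):
# gam, psi = gamma_and_psi(X, d, beta=3.0)
# sigma_dagger = bisect_decreasing(psi, target=alpha)   # solves psi(sigma) = alpha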
According to Theorem 2 and the Banach fixed-point theorem [11], given an initial weight vector satisfying $\|W_0\|_1 \le \beta$, the fixed-point MEE algorithm (11) will surely converge to a unique fixed point in the range $W \in \{W \in \mathbb{R}^m : \|W\|_1 \le \beta\}$, provided that the kernel bandwidth $\sigma$ is larger than a certain value. Moreover, the value of $\alpha$ ($0 < \alpha < 1$) controls the convergence speed: the smaller the contraction factor, the faster the convergence. It is worth noting that the derived sufficient condition is certainly somewhat loose, owing to the repeated bounding (zooming out) in the proof.

4. Illustrative Example

In the following, we give an illustrative example to verify the derived sufficient condition that guarantees the convergence of the fixed-point MEE algorithm. Let us consider a simple linear model:
$$d(i) = 2X(i) + v(i)$$
where $X(i)$ is a scalar input and $v(i)$ is an additive noise. Assume that $X(i)$ is uniformly distributed over $[-3, 3]$ and $v(i)$ is zero-mean Gaussian with variance $0.01$. A set of 100 training samples $\{X(i), d(i)\}_{i=1}^{100}$ is generated from the system (28). Based on these data we calculate
$$\xi = \frac{\displaystyle\sum_{i=1}^{100}\sum_{j=1}^{100}|d(i)-d(j)|\times|X(i)-X(j)|}{\displaystyle\sum_{i=1}^{100}\sum_{j=1}^{100}|X(i)-X(j)|^2} = 1.9714$$
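This calculation can be reproduced approximately with the following sketch (our own; the random seed and draws are arbitrary, so the computed value will differ slightly from 1.9714).

import numpy as np

rng = np.random.default_rng(0)                       # arbitrary seed
N = 100
x = rng.uniform(-3.0, 3.0, size=N)                   # scalar input, uniform on [-3, 3]
d = 2.0 * x + rng.normal(0.0, 0.1, size=N)           # noise variance 0.01 -> std 0.1

dd = np.abs(d[:, None] - d[None, :])                 # |d(i) - d(j)|
dx = np.abs(x[:, None] - x[None, :])                 # |X(i) - X(j)|
xi = (dd * dx).sum() / (dx**2).sum()                 # m = 1, so sqrt(m) = 1 and the
print(xi)                                            # lambda_min is just the scalar sum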
We choose $\beta = 3 > \xi$ and $\alpha = 0.9938 < 1$. Then, by solving the equations $\varphi(\sigma) = \beta$ and $\psi(\sigma) = \alpha$, we obtain $\sigma^* = 2.38$ and $\sigma^\dagger = 2.68$. Therefore, by Theorem 2, if $\sigma \ge 2.68$ the fixed-point MEE algorithm will converge to a unique solution in the range $-3 \le W \le 3$. Figure 1, Figure 2 and Figure 3 plot the functions $W$, $f(W) = \left(R_{XX}^{MEE}\right)^{-1} P_{dX}^{MEE}$, and $\left|\frac{df(W)}{dW}\right|$ for $\sigma = 3.0$, $0.1$, and $0.01$, respectively. From the figures we observe: (i) when $\sigma = 3.0 > 2.68$, we have $|f(W)| < 3$ and $\left|\frac{df(W)}{dW}\right| < \alpha$ for $-3 \le W \le 3$; (ii) when $\sigma = 0.1 < 2.68$, we still have $|f(W)| < 3$ and $\left|\frac{df(W)}{dW}\right| < \alpha$ for $-3 \le W \le 3$, so the algorithm still converges to a unique solution in this range. This confirms that the derived sufficient condition is rather loose (i.e., far from necessary), mainly because of the repeated bounding in the derivation; (iii) however, when $\sigma$ is too small, say $\sigma = 0.01$, the condition $\left|\frac{df(W)}{dW}\right| < \alpha$ fails for some $W \in \{-3 \le W \le 3\}$, and the algorithm may diverge.
Figure 1. Plots of the functions $W$, $f(W)$ and $\left|\frac{df(W)}{dW}\right|$ when $\sigma = 3.0$.
Figure 2. Plots of the functions $W$, $f(W)$ and $\left|\frac{df(W)}{dW}\right|$ when $\sigma = 0.1$.
Figure 3. Plots of the functions $W$, $f(W)$ and $\left|\frac{df(W)}{dW}\right|$ when $\sigma = 0.01$.
Table 1 shows the number of iterations required for convergence with different kernel bandwidths (3.0, 1.0, 0.1, 0.05). The initial weight is set to $W_0 = 0.1$, and the stopping condition for convergence is
$$\left|\frac{W_k - W_{k-1}}{W_{k-1}}\right| < 10^{-6}$$
As one can see, when $\sigma = 3.0 \ge \max\{\sigma^*, \sigma^\dagger\}$, the fixed-point MEE algorithm converges to a solution within a few iterations. When $\sigma$ becomes smaller, the algorithm may still converge, but the convergence speed becomes much slower. Note that when $\sigma$ is too small (e.g., $\sigma = 0.01$), the algorithm diverges (the corresponding results are not shown in Table 1).
Table 1. Number of iterations for convergence with different kernel bandwidths $\sigma$.

$\sigma$        3.0    1.0    0.1    0.05
Iterations      3      4      16     43
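The experiment behind Table 1 can be sketched as follows (our own illustration, reusing mee_statistics from the Section 2 sketch and the data x, d generated in the previous sketch, reshaped to a column); since the data are random, the iteration counts will not match the table exactly.

import numpy as np

def iterations_to_converge(X, d, W0, sigma, tol=1e-6, max_iter=1000):
    # Count fixed-point iterations until |W_k - W_{k-1}| / |W_{k-1}| < tol.
    W = np.atleast_1d(np.asarray(W0, dtype=float))
    for k in range(1, max_iter + 1):
        R, P = mee_statistics(X, d, W, sigma)        # from the Section 2 sketch
        W_new = np.linalg.solve(R, P)
        if np.abs((W_new - W) / W).max() < tol:
            return k
        W = W_new
    return max_iter                                  # treated as "did not converge"

X = x.reshape(-1, 1)                                 # scalar input as an (N, 1) matrix
for sigma in (3.0, 1.0, 0.1, 0.05):
    print(sigma, iterations_to_converge(X, d, W0=0.1, sigma=sigma))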

5. Conclusion

The MEE criterion has received increasing attention in signal processing and machine learning due to its desirable performance in adaptive system training, especially with non-Gaussian data. Many iterative optimization methods have been developed to minimize the error entropy for practical use. However, fixed-point algorithms have seldom been studied, and in particular, little attention has been paid to the convergence of fixed-point MEE algorithms. This paper presented a theoretical study of this problem and proved a sufficient condition that guarantees the convergence of a fixed-point MEE algorithm. The results of this study may provide a possible range for choosing the kernel bandwidth in MEE learning. However, the derived sufficient condition may yield a much larger kernel bandwidth than desired, due to the repeated bounding in the derivation. In future work, we will try to derive a tighter sufficient condition that ensures the convergence of the fixed-point MEE algorithm.

Acknowledgments

This work was supported by the 973 Program (No. 2015CB351703) and the National Natural Science Foundation of China (No. 61372152).

Author Contributions

Yu Zhang and Badong Chen proved the main theorems in this paper, Xi Liu presented the illustrative example, and Zejian Yuan and Jose C. Principe polished the language and were in charge of technical checking. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Principe, J.C. Information Theoretic Learning: Renyi's Entropy and Kernel Perspectives; Springer: New York, NY, USA, 2010.
2. Chen, B.; Zhu, Y.; Hu, J.C.; Principe, J.C. System Parameter Identification: Information Criteria and Algorithms; Elsevier: Amsterdam, The Netherlands, 2013.
3. Silverman, B.W. Density Estimation for Statistics and Data Analysis; Chapman & Hall: New York, NY, USA, 1986.
4. Erdogmus, D.; Principe, J.C. An error-entropy minimization algorithm for supervised training of nonlinear adaptive systems. IEEE Trans. Signal Process. 2002, 50, 1780–1786.
5. Erdogmus, D.; Principe, J.C. Generalized information potential criterion for adaptive system training. IEEE Trans. Neural Netw. 2002, 13, 1035–1044.
6. Erdogmus, D.; Principe, J.C. Convergence properties and data efficiency of the minimum error entropy criterion in ADALINE training. IEEE Trans. Signal Process. 2003, 51, 1966–1978.
7. Chen, B.; Zhu, Y.; Hu, J. Mean-square convergence analysis of ADALINE training with minimum error entropy criterion. IEEE Trans. Neural Netw. 2010, 21, 1168–1179.
8. Chen, B.; Principe, J.C. Some further results on the minimum error entropy estimation. Entropy 2012, 14, 966–977.
9. Chen, B.; Principe, J.C. On the smoothed minimum error entropy criterion. Entropy 2012, 14, 2311–2323.
10. Marques de Sá, J.P.; Silva, L.M.A.; Santos, J.M.F.; Alexandre, L.A. Minimum Error Entropy Classification; Springer: London, UK, 2013.
11. Agarwal, R.P.; Meehan, M.; O'Regan, D. Fixed Point Theory and Applications; Cambridge University Press: Cambridge, UK, 2001.
12. Cichocki, A.; Amari, S. Adaptive Blind Signal and Image Processing: Learning Algorithms and Applications; Wiley: New York, NY, USA, 2002.
13. Regalia, P.A.; Kofidis, E. Monotonic convergence of fixed-point algorithms for ICA. IEEE Trans. Neural Netw. 2003, 14, 943–949.
14. Fiori, S. Fast fixed-point neural blind-deconvolution algorithm. IEEE Trans. Neural Netw. 2004, 15, 455–459.
15. Han, S.; Principe, J.C. A fixed-point minimum error entropy algorithm. In Proceedings of the 16th IEEE Signal Processing Society Workshop on Machine Learning for Signal Processing, Arlington, VA, USA, 6–8 September 2006; pp. 167–172.
16. Chen, J.; Richard, C.; Bermudez, J.C.M.; Honeine, P. Non-negative least-mean-square algorithm. IEEE Trans. Signal Process. 2011, 59, 5225–5235.
17. Chen, J.; Richard, C.; Bermudez, J.C.M.; Honeine, P. Variants of non-negative least-mean-square algorithm and convergence analysis. IEEE Trans. Signal Process. 2014, 62, 3990–4005.
18. Chen, B.; Wang, J.; Zhao, H.; Zheng, N.; Principe, J.C. Convergence of a fixed-point algorithm under maximum correntropy criterion. IEEE Signal Process. Lett. 2015, 22, 1723–1727.
19. Kailath, T.; Sayed, A.H.; Hassibi, B. Linear Estimation; Prentice Hall: Upper Saddle River, NJ, USA, 2000.

