Article

The Residual ISI for Which the Convolutional Noise Probability Density Function Associated with the Blind Adaptive Deconvolution Problem Turns Approximately Gaussian

Department of Electrical and Electronic Engineering, Ariel University, Ariel 40700, Israel
Entropy 2022, 24(7), 989; https://doi.org/10.3390/e24070989
Submission received: 19 June 2022 / Revised: 12 July 2022 / Accepted: 14 July 2022 / Published: 17 July 2022
(This article belongs to the Special Issue Applications of Information Theory in Statistics)

Abstract
In a blind adaptive deconvolution problem, the convolutional noise observed at the output of the deconvolution process, in addition to the required source signal, is assumed in the literature to be a Gaussian process when the deconvolution process (the blind adaptive equalizer) is deep in its convergence state, namely, when the convolutional noise sequence or, equivalently, the residual inter-symbol interference (ISI) is considered small. Up to now, no closed-form approximated expression has been given for the residual ISI at which the Gaussian model can be used to describe the convolutional noise probability density function (pdf). In this paper, we use the maximum entropy density technique, Laplace's integral method, and the quasi-moment truncation technique to obtain an approximated closed-form equation for the residual ISI at which the Gaussian model can be used to approximately describe the convolutional noise pdf. We show, based on this approximated closed-form equation for the residual ISI, that the Gaussian model can be used to approximately describe the convolutional noise pdf just before the equalizer has converged, even at a residual ISI level where the "eye diagram" is still very closed, namely, where the residual ISI cannot be considered small.

1. Introduction

The convolutional noise brought on by a blind adaptive deconvolution (blind adaptive equalization) system is the subject of this research. In a blind adaptive deconvolution system, all that is available is the output sequence of an unidentified linear system (channel), and the goal is to recover the input sequence of that system without the aid of a training sequence [1,2,3,4,5,6,7]. Numerous fields, including seismology, underwater acoustics, image restoration, and digital communication, use blind adaptive deconvolution systems [7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39]. Consider, for a moment, the scenario of digital communication, where the source signal is distorted during transmission by the convolution of its symbols with the channel impulse response. This distortion, known as ISI, produces detrimental effects that significantly hamper the recovery process [39]. A blind adaptive equalizer is used to solve the ISI problem. The coefficients of the ideal blind adaptive equalizer are unknown because the channel coefficients are unknown. In a blind adaptive deconvolution system, the equalizer's coefficients are only approximations to the optimal ones, resulting in the addition of an error signal to the source signal at the output of the deconvolution process. This error signal is defined as the convolutional noise and is closely related to the residual ISI. If the system's residual ISI level is relatively low after blind adaptive equalization, the convolutional noise is considered very small. Until recently, the Gaussian pdf was frequently used [1,2,6,34,40,41,42,43] to approximate the convolutional noise pdf throughout the iterative deconvolution process.
However, according to [41], the convolutional noise pdf tends approximately to a Gaussian pdf only at the end of the iterative deconvolution process, when the equalizer has converged to a relatively low residual ISI (where the convolutional noise is relatively low). The input sequence and the convolutional noise sequence are strongly correlated in the early stages of the iterative deconvolution process, because the ISI is often high, and the convolutional noise pdf is more uniform than Gaussian [41,44]. It should be noted that even though the Gaussian model was utilized for the convolutional noise sequence throughout the entire deconvolution procedure in [1,2,6,34,40,41,42,43], satisfactory equalization performance was obtained. Recently, attempts were made to approximate the convolutional noise pdf with models other than the Gaussian one in order to obtain improved equalization performance. In [4], the maximum entropy density approximation method [1,2,45,46] and Lagrange multipliers up to order four were used to approximate the convolutional noise pdf. In [5], the convolutional noise pdf was approximated with the Edgeworth Expansion series [47,48] up to order six. In [3], the Generalized Gaussian Density (GGD) [49,50] function and the Edgeworth Expansion [47,48] were applied to approximate the convolutional noise pdf. The GGD [49,50] is based on a shape parameter that modifies the pdf, yielding a Laplacian (double exponential) distribution, a Gaussian distribution, or a uniform distribution for shape parameters equal to one, two, and infinity, respectively. Even though equalization performance was enhanced with these new approximation techniques for the convolutional noise pdf compared with the Gaussian case, a much higher equalization performance improvement was expected but not achieved. This raises the question of whether the Gaussian model for the convolutional noise pdf is approximately correct even when the residual ISI is not so small.
There is currently no closed-form approximated expression for the residual ISI at which the Gaussian model approximately describes the convolutional noise pdf. It is well known that the equalizer can converge at a residual ISI level that might not be very low, or might even be high, depending on the step-size parameter applied in the equalizer's coefficients update mechanism. Furthermore, the equalization performance from the residual ISI point of view depends on the chosen equalization algorithm, the equalizer's tap length, the input signal statistics, the channel characteristics, and the step-size parameter. Thus, any closed-form approximated expression for the residual ISI at which the convolutional noise pdf can be considered approximately Gaussian must be a function of all the abovementioned parameters playing a role in the equalization performance from the residual ISI point of view. In this study, we address the noiseless, 16 Quadrature Amplitude Modulation (16QAM) case. We use the maximum entropy density approximation technique [1,2,45,46] with Lagrange multipliers up to order six to approximate the pdf of the real part of the convolutional noise. Then, we use Laplace's integral method [51] and the quasi-moment truncation technique [48] in order to obtain an approximated closed-form expression for the residual ISI for which the pdf of the real part of the convolutional noise can be approximately considered Gaussian. This closed-form approximated expression is a function of the channel power, input sequence statistics, equalizer's tap length and properties of the chosen equalizer. It applies to blind adaptive equalizers in which the error fed into the adaptive mechanism that updates the equalizer's taps can be described as a polynomial function of the equalized output of order up to three. It should be pointed out that Godard's algorithm [52], for example, belongs to the mentioned type of blind equalizers.
Based on this closed-form approximated expression for the residual ISI, we are able to show via simulation results that the Gaussian assumption for the convolutional noise pdf can be approximately made just before the equalizer has converged, even at a residual ISI level where the "eye diagram" is still very closed, namely, where the residual ISI cannot be considered very small. At that level of residual ISI, the fourth Lagrange multiplier in the approximated pdf of the real part of the convolutional noise is approximately zero, while the sixth Lagrange multiplier is very small and tends approximately to zero. Please note that since we deal with a two-independent quadrature carrier input case, all the odd Lagrange multipliers in the convolutional noise pdf approximation are zero.

2. System Description

Figure 1 provides a description of the system under study. Please note that the described system is recalled from [53,54]. It should be noted here that the described system in Figure 1 is not unique but is a general description of a system using a blind adaptive equalizer to recover the input sequence from an unknown linear channel, as is the case in [52,55,56] and in other works.
In this paper, we make the following assumptions:
  • The input sequence $x[n]$ is a 16QAM source, which can be expressed as $x[n] = x_1[n] + jx_2[n]$, where $x_1[n]$ and $x_2[n]$ are $x[n]$'s real and imaginary parts, respectively. 16QAM is a modulation that uses $\pm\{1,3\}$ levels for the in-phase and quadrature components. $E[x[n]] = 0$, where $E[(\cdot)]$ denotes the expectation of $(\cdot)$. The real and imaginary parts of $x[n]$ are independent.
  • The unidentified channel $h[n]$ is a linear time-invariant filter that may not have a minimum phase and whose transfer function lacks "deep zeros," or zeros that are sufficiently removed from the unit circle. The channel's tap length is $R$.
  • The filter $c[n]$ is a tap-delay line.
  • The channel noise $w[n]$ is an additive Gaussian white noise.
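As a quick sanity check of the first assumption, the 16QAM source can be sketched as follows (a minimal illustration; the helper name `qam16` is ours, not from the paper):

```python
import numpy as np

def qam16(n, rng):
    """Draw n 16QAM symbols x[n] = x1[n] + j*x2[n], where the independent
    real and imaginary parts take the levels {-3, -1, 1, 3} uniformly."""
    levels = np.array([-3, -1, 1, 3])
    return rng.choice(levels, size=n) + 1j * rng.choice(levels, size=n)

rng = np.random.default_rng(0)
x = qam16(200_000, rng)
print(abs(np.mean(x)))        # close to 0, i.e., E[x[n]] = 0
print(np.mean(x.real ** 2))   # close to 5, i.e., m2 = E[x1^2[n]] = 5
```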
The sequence $x[n]$ is sent through the channel $h[n]$, where the output sequence from the channel is corrupted with the noise $w[n]$. The input sequence to the blind adaptive equalizer is denoted as $y[n]$ and expressed by:

$y[n] = x[n] \ast h[n] + w[n]$    (1)

where the convolution operation is indicated by the symbol $\ast$. $z[n]$ is the equalized output sequence, given by:

$z[n] = c[n] \ast y[n] = x[n] \ast s[n] + c[n] \ast w[n] = x[n] + p[n] + c[n] \ast w[n]$, where $s[n] = h[n] \ast c[n]$; $p[n] = x[n] \ast \left(s[n] - \delta[n]\right)$    (2)
where $p[n]$ is the convolutional noise arising from not having the optimal values for the equalizer's coefficients. The real and imaginary components of $p[n]$ are denoted in the following as $p_1[n]$ and $p_2[n]$, respectively. Since we deal with a two-independent quadrature carrier constellation input, $E[x_1^v[n]] = E[x_2^v[n]]$ and $E[p_1^v[n]] = E[p_2^v[n]]$ for $v = 1, 2, \ldots, V$. The ISI expression is used to evaluate the equalizer's performance:
$ISI = \dfrac{\sum_m |s_m[n]|^2 - |s_m[n]|^2_{max}}{|s_m[n]|^2_{max}}$    (3)

where $|\cdot|$ denotes the absolute value and $|s_m[n]|_{max}$ is the component of $s[n]$ in (2) with the highest absolute value. For the noiseless case, when $|s_m[n]|^2_{max} = 1$ and the equalizer has entered its convergence state, we may write according to [1]:

$ISI \cong 10\log_{10}\left(\dfrac{E[|p[n]|^2]}{E[|x[n]|^2]}\right)$ for $|s_m[n]|^2_{max} = 1$    (4)

Since we are dealing with the 16QAM constellation input situation of two independent quadrature carriers, we may express (4) as:

$ISI \cong 10\log_{10}\left(\dfrac{E[p_1^2[n]]}{E[x_1^2[n]]}\right)$ for $|s_m[n]|^2_{max} = 1$    (5)
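The ISI measure (3) can be computed directly from the combined channel-equalizer impulse response; a minimal sketch (the helper name `isi_db` is ours):

```python
import numpy as np

def isi_db(s):
    """ISI in dB per (3): the sum of |s_m|^2 over all taps except the one
    with the highest absolute value, divided by that highest |s_m|^2."""
    power = np.abs(s) ** 2
    peak = power.max()
    return 10 * np.log10((power.sum() - peak) / peak)

# a nearly converged combined response: one dominant tap plus small residues
s = np.array([0.01, 1.0, 0.02])
print(isi_db(s))   # 10*log10(0.0001 + 0.0004) = -33.01 dB
```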
The equalizer’s update mechanism can be described by:
c ̲ [ n + 1 ] = c ̲ [ n ] μ F [ n ] z [ n ] y ̲ [ n ]
where the conjugate operation is ()*, the step-size parameter is μ , the cost function is F [ n ] , F [ n ] z [ n ] is the cost function’s derivation from the equalized output sequence and c ̲ [ n ] is the equalizer vector where y ̲ [ n ] = [ y [ n ] y [ n N + 1 ] ] T is the input vector. The equalizer’s tap length is N, and the operator ()T stands for the transpose of the function (). In this paper, the MMA algorithm [55,56] and Godard’s algorithm [52] are used. The equalizer’s coefficients are updated according to the MMA algorithm ([55,56]) by:
$\underline{c}[n+1] = \underline{c}[n] - \mu_{MMA}\left(Re\left(z[n]\right)\left(\left|Re\left(z[n]\right)\right|^2 - \dfrac{E\left[|x_1[n]|^4\right]}{E\left[|x_1[n]|^2\right]}\right) + j\,Im\left(z[n]\right)\left(\left|Im\left(z[n]\right)\right|^2 - \dfrac{E\left[|x_2[n]|^4\right]}{E\left[|x_2[n]|^2\right]}\right)\right)\underline{y}^*[n]$    (7)

where $\mu_{MMA} > 0$, and $Re(\cdot)$ and $Im(\cdot)$ are the real and imaginary parts of $(\cdot)$, respectively. The equalizer's coefficients are updated according to Godard's algorithm [52] by:
$\underline{c}[n+1] = \underline{c}[n] - \mu_{G}\left(\left|z[n]\right|^2 - \dfrac{E\left[|x[n]|^4\right]}{E\left[|x[n]|^2\right]}\right) z[n]\,\underline{y}^*[n]$    (8)

where $\mu_G > 0$. It should be stated that, according to [57], Godard's algorithm is one of the most widely used blind equalization algorithms and has become the workhorse of blind equalization. According to [57], Godard's algorithm is carrier-phase independent; therefore, carrier synchronization is not necessary prior to blind equalization. However, an arbitrary phase rotation is present in the constellation visible at the equalized output sequence [57]. As a result, a phase rotator is necessary at the equalizer's convergence state in order to spin the constellation back into place [57]. The MMA algorithm [55,56] avoids the need for a phase rotator, in accordance with [57], because it calculates the error separately for the real and imaginary parts of the received signal. In this paper, we assume that $\frac{\partial F[n]}{\partial z[n]}$ can be expressed as a polynomial function of the equalized output, namely as:
$P\left[z[n]\right] = \dfrac{\partial F[n]}{\partial z[n]}$    (9)

Thus, based on (9) and [58], the real part of the polynomial function $P[z[n]]$ of order up to three can be expressed by:

$Re\left(P\left[z[n]\right]\right) = a_1 z_1[n] + a_3 z_1^3[n] + a_{12} z_1[n] z_2^2[n]$    (10)

where the real and imaginary components of the equalized output $z[n]$ are denoted as $z_1[n]$ and $z_2[n]$, respectively. Please note that for the 16QAM constellation input case, $a_1 = -\dfrac{E\left[|x[n]|^4\right]}{E\left[|x[n]|^2\right]}$, $a_3 = 1$ and $a_{12} = 1$ for Godard's algorithm [52], while for the MMA algorithm [55,56] we have $a_1 = -\dfrac{E\left[|x_1[n]|^4\right]}{E\left[|x_1[n]|^2\right]}$, $a_3 = 1$ and $a_{12} = 0$.
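For concreteness, the error term (10) and the two parameter choices can be sketched as follows (a hedged illustration, assuming the $(|z|^2 - R)z$ form of Godard's error from (8), which gives a negative $a_1$; the helper name `re_P` is ours):

```python
import numpy as np

def re_P(z, a1, a3, a12):
    """Real part of the error polynomial (10):
    Re(P[z]) = a1*z1 + a3*z1^3 + a12*z1*z2^2."""
    z1, z2 = z.real, z.imag
    return a1 * z1 + a3 * z1 ** 3 + a12 * z1 * z2 ** 2

# 16QAM moments: E[x1^2] = 5, E[x1^4] = 41,
# E[|x|^2] = 10, E[|x|^4] = 41 + 41 + 2*25 = 132
a_godard = (-132 / 10, 1, 1)   # (a1, a3, a12) for Godard's algorithm
a_mma = (-41 / 5, 1, 0)        # (a1, a3, a12) for the MMA algorithm

# for Godard, (10) should reproduce Re((|z|^2 - 13.2) * z)
z = 3.1 - 0.9j
print(re_P(z, *a_godard), ((abs(z) ** 2 - 13.2) * z).real)
```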

3. The Residual ISI That Leads Approximately to a Gaussian pdf for the Convolutional Noise

In this section, we present a closed-form approximated expression for the residual ISI as a function of the system's parameters (step-size parameter, input constellation statistics, equalizer's tap length, channel power, and properties of the chosen equalizer via $a_1$, $a_3$ and $a_{12}$) for which the convolutional noise pdf can be approximately considered a Gaussian pdf.
Theorem 1.
The residual ISI ($ISI_{res}$) for which the convolutional noise pdf associated with the blind adaptive equalization problem is approximately a Gaussian pdf can be expressed for the noiseless case by:

$ISI_{res} \cong 10\log_{10}\left(m_p\right) - 10\log_{10}\left(m_2\right)$; $\quad m_2 = E\left[x_1^2[n]\right]$; $\quad m_p = E\left[p_1^2[n]\right]$    (11)
where $m_p$ is the solution of the following equation:

$A_1 m_p^3 + A_2 m_p^2 + A_3 m_p + A_4 = 0$    (12)

where

$A_1 = 105T B^2\left[3a_{12}^2\left(1+\tfrac{3}{T}\right) + a_3^2\left(15+\tfrac{135}{T}\right) + 6a_3a_{12}\left(1+\tfrac{3}{T}\right)\right] - 30B^2\left[a_3^2\left(945+\tfrac{28350}{T}\right) + 3a_{12}^2\left(1+\tfrac{3}{T}\right)\left(15+\tfrac{135}{T}\right) + 2a_3a_{12}\left(105+\tfrac{1890}{T}\right)\right]$

$A_2 = 105T\left[-2B\left(a_{12} + 3a_3\left(1+\tfrac{3}{T}\right)\right) + B^2\left(2a_1a_{12} + 6m_2a_{12}^2 + 12a_3m_2a_{12} + 6a_1a_3\left(1+\tfrac{3}{T}\right) + 45a_3^2m_2\left(1+\tfrac{3}{T}\right) + 3m_2a_{12}^2\left(1+\tfrac{3}{T}\right) + 6a_3m_2a_{12}\left(1+\tfrac{3}{T}\right)\right)\right] + 12B\left[a_3\left(105+\tfrac{1890}{T}\right) + a_{12}\left(15+\tfrac{135}{T}\right)\right] - 30B^2\left[6m_2a_{12}^2\left(15+\tfrac{135}{T}\right) + 15a_3^2m_2\left(105+\tfrac{1890}{T}\right) + 9m_2a_{12}^2\left(1+\tfrac{3}{T}\right)^2 + 2a_1a_{12}\left(15+\tfrac{135}{T}\right) + 2a_1a_3\left(105+\tfrac{1890}{T}\right) + 12a_3m_2a_{12}\left(15+\tfrac{135}{T}\right) + 2a_3m_2a_{12}\left(105+\tfrac{1890}{T}\right)\right]$

$A_3 = 105T\left[-2B\left(a_1 + 3a_3m_2 + m_2a_{12}\right) + B^2\left(a_1^2 + 15a_3^2m_4 + m_4a_{12}^2 + 6m_2^2a_{12}^2 + 12a_1a_3m_2 + 4a_1m_2a_{12} + 2a_3m_4a_{12} + 12a_3m_2^2a_{12}\right)\right] + 12B\left(a_1 + 3a_3m_2 + m_2a_{12}\right)\left(15+\tfrac{135}{T}\right) - 30B^2\left[a_1^2\left(15+\tfrac{135}{T}\right) + 15a_3^2m_4\left(15+\tfrac{135}{T}\right) + m_4a_{12}^2\left(15+\tfrac{135}{T}\right) + 18m_2^2a_{12}^2\left(1+\tfrac{3}{T}\right) + 6a_1m_2a_{12}\left(1+\tfrac{3}{T}\right) + 6a_3m_4a_{12}\left(1+\tfrac{3}{T}\right) + 12a_1a_3m_2\left(15+\tfrac{135}{T}\right) + 2a_1m_2a_{12}\left(15+\tfrac{135}{T}\right) + 12a_3m_2^2a_{12}\left(15+\tfrac{135}{T}\right)\right]$

$A_4 = 105T B^2\left(a_1^2m_2 + 2a_1a_3m_4 + 2a_1m_2^2a_{12} + a_3^2m_6 + 2a_3m_2m_4a_{12} + m_2m_4a_{12}^2\right) - 30B^2\left(3a_1^2m_2 + 3a_3^2m_6 + 6a_1a_3m_4 + 6a_1m_2^2a_{12} + 3m_2m_4a_{12}^2 + 6a_3m_2m_4a_{12}\right)\left(1+\tfrac{3}{T}\right)$

and where $m_l = E\left[x_1^l[n]\right]$ for $l = 2, 4, 6$; $B = 2m_2\mu N\sum_{k=0}^{R-1}\left|h_k[n]\right|^2$; $T \geq 100$.
Proof of Theorem 1. 
Since we deal with the 16QAM constellation input (a two-independent quadrature carrier case), we consider in the following only the pdf of the real part of the convolutional noise. Please note that the pdf of the imaginary part of the convolutional noise is approximately equal to the pdf of the real part of the convolutional noise. The pdf of the real part of the convolutional noise at time indexes n and n + 1 , can be approximately expressed with the help of the Maximum Entropy density approximation technique [1,2,45,46] with Lagrange multipliers up to order six as:
$f\left(p_1[n]\right) \approx \exp\left(\lambda_0 + \lambda_2 p_1^2[n] + \lambda_4 p_1^4[n] + \lambda_6 p_1^6[n]\right)$; $\quad f\left(p_1[n+1]\right) \approx \exp\left(\lambda_0 + \lambda_2 p_1^2[n+1] + \lambda_4 p_1^4[n+1] + \lambda_6 p_1^6[n+1]\right)$    (13)

where $\lambda_2$, $\lambda_4$ and $\lambda_6$ are the Lagrange multipliers up to order six. Next, the difference between the pdf of the real part of the convolutional noise at time index $n+1$ and that at time index $n$ is given by:

$\Delta f = f\left(p_1[n+1]\right) - f\left(p_1[n]\right) = \exp\left(\lambda_0 + \lambda_2 p_1^2[n] + \lambda_4 p_1^4[n] + \lambda_6 p_1^6[n]\right)\left[\exp\left(\lambda_2\left(p_1^2[n+1] - p_1^2[n]\right) + \lambda_4\left(p_1^4[n+1] - p_1^4[n]\right) + \lambda_6\left(p_1^6[n+1] - p_1^6[n]\right)\right) - 1\right]$    (14)
At the convergence state of the equalizer we may assume that:
$\Delta f \approx 0$    (15)

Thus, based on (14) and (15) we may write:

$E\left[\exp\left(\lambda_2\left(p_1^2[n+1] - p_1^2[n]\right) + \lambda_4\left(p_1^4[n+1] - p_1^4[n]\right) + \lambda_6\left(p_1^6[n+1] - p_1^6[n]\right)\right) - 1\right] \approx 0$    (16)

By using Taylor's expansion for the exponent [59] ($\exp(Q) \approx 1 + Q$), we may write (16) as:

$E\left[\lambda_2\left(p_1^2[n+1] - p_1^2[n]\right) + \lambda_4\left(p_1^4[n+1] - p_1^4[n]\right) + \lambda_6\left(p_1^6[n+1] - p_1^6[n]\right)\right] \approx 0$    (17)
Based on (17), we have for $E\left[p_1^6[n+1] - p_1^6[n]\right] \neq 0$:

$\lambda_6 = -\lambda_2 G - \lambda_4 F$, where $G = \dfrac{E\left[p_1^2[n+1] - p_1^2[n]\right]}{E\left[p_1^6[n+1] - p_1^6[n]\right]}$; $\quad F = \dfrac{E\left[p_1^4[n+1] - p_1^4[n]\right]}{E\left[p_1^6[n+1] - p_1^6[n]\right]}$    (18)
By using (13) and (18) we have:

$f\left(p_1[n]\right) \approx \exp\left(\lambda_0 + \lambda_2 p_1^2[n] + \lambda_4 p_1^4[n] - \lambda_2 G p_1^6[n] - \lambda_4 F p_1^6[n]\right)$    (19)

In order to find closed-form approximated expressions for $\lambda_0$, $\lambda_2$ and $\lambda_4$ as functions of the convolutional noise statistics, we use:

$\int f\left(p_1[n]\right) dp_1[n] = 1$; $\quad \int p_1^2[n]\, f\left(p_1[n]\right) dp_1[n] = E\left[p_1^2[n]\right]$; $\quad \int p_1^4[n]\, f\left(p_1[n]\right) dp_1[n] = E\left[p_1^4[n]\right]$    (20)
Now, based on (19) and (20), we can write:

$\int \exp\left(\lambda_0 + \lambda_2 p_1^2[n] + \lambda_4 p_1^4[n] - \lambda_2 G p_1^6[n] - \lambda_4 F p_1^6[n]\right) dp_1[n] \approx 1$
$\int p_1^2[n] \exp\left(\lambda_0 + \lambda_2 p_1^2[n] + \lambda_4 p_1^4[n] - \lambda_2 G p_1^6[n] - \lambda_4 F p_1^6[n]\right) dp_1[n] \approx E\left[p_1^2[n]\right]$
$\int p_1^4[n] \exp\left(\lambda_0 + \lambda_2 p_1^2[n] + \lambda_4 p_1^4[n] - \lambda_2 G p_1^6[n] - \lambda_4 F p_1^6[n]\right) dp_1[n] \approx E\left[p_1^4[n]\right]$    (21)

The first integral in (21) can be written as:

$\exp(\lambda_0) \int \exp\left(\lambda_4 p_1^4[n] - \lambda_2 G p_1^6[n] - \lambda_4 F p_1^6[n]\right) \exp\left(\lambda_2 p_1^2[n]\right) dp_1[n] = \exp(\lambda_0) \int g_1\left(p_1[n]\right) \exp\left(-\dfrac{\Psi\left(p_1[n]\right)}{\gamma}\right) dp_1[n]$
where $g_1\left(p_1[n]\right) = \exp\left(\lambda_4 p_1^4[n] - \lambda_2 G p_1^6[n] - \lambda_4 F p_1^6[n]\right)$; $\quad \Psi\left(p_1[n]\right) = p_1^2[n]$; $\quad \gamma = -\dfrac{1}{\lambda_2}$    (22)

According to Laplace's integral method [51], we can solve the integral in (22) by:

$\int g_1\left(p_1[n]\right) \exp\left(-\dfrac{\Psi\left(p_1[n]\right)}{\gamma}\right) dp_1[n] \approx \exp\left(-\dfrac{\Psi(p_0)}{\gamma}\right) \sqrt{\dfrac{2\pi\gamma}{\Psi''(p_0)}} \left[g_1(p_0) + \dfrac{g_1''(p_0)}{2}\dfrac{\gamma}{\Psi''(p_0)} + \dfrac{g_1''''(p_0)}{8}\left(\dfrac{\gamma}{\Psi''(p_0)}\right)^2 + \dfrac{g_1^{VI}(p_0)}{48}\left(\dfrac{\gamma}{\Psi''(p_0)}\right)^3 + \dfrac{g_1^{VIII}(p_0)}{384}\left(\dfrac{\gamma}{\Psi''(p_0)}\right)^4 + \dfrac{g_1^{X}(p_0)}{3840}\left(\dfrac{\gamma}{\Psi''(p_0)}\right)^5 + O\left(\left(\dfrac{\gamma}{\Psi''(p_0)}\right)^6\right)\right]$    (23)

where $(\cdot)''$, $(\cdot)''''$, $(\cdot)^{VI}$, $(\cdot)^{VIII}$ and $(\cdot)^{X}$ denote the second, fourth, sixth, eighth and tenth derivatives of $(\cdot)$, respectively. $O(v)$ is defined via $\lim_{v \to 0} O(v)/v = r_{const}$, where $r_{const}$ is a constant. The point $p_0$ is obtained via:

$\Psi'\left(p_1[n]\right) = 2p_1[n]$; $\quad \Psi''\left(p_1[n]\right) = 2$; $\quad \Psi'(p_0) = 2p_0 = 0 \;\Rightarrow\; p_0 = 0$    (24)

while:

$g_1(p_0) = 1$; $\quad g_1''(p_0) = 0$; $\quad g_1''''(p_0) = 24\lambda_4$; $\quad g_1^{VI}(p_0) = -720\left(F\lambda_4 + G\lambda_2\right)$    (25)

Based on (23)-(25), we may write (22) as:

$\exp(\lambda_0) \int g_1\left(p_1[n]\right) \exp\left(-\dfrac{\Psi\left(p_1[n]\right)}{\gamma}\right) dp_1[n] \approx \exp(\lambda_0)\sqrt{\dfrac{-\pi}{\lambda_2}}\left(1 + \dfrac{3\lambda_4}{4\lambda_2^2} + \dfrac{15\left(F\lambda_4 + G\lambda_2\right)}{8\lambda_2^3}\right)$    (26)

Based on (21) and (26) we have:

$\exp(\lambda_0)\sqrt{\dfrac{-\pi}{\lambda_2}}\left(1 + \dfrac{3\lambda_4}{4\lambda_2^2} + \dfrac{15\left(F\lambda_4 + G\lambda_2\right)}{8\lambda_2^3}\right) \approx 1$    (27)
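The truncated Laplace-method expansion used in (26) can be checked numerically for illustrative multiplier values (the values of the multipliers below are ours, with the $\lambda_6$ terms dropped):

```python
import numpy as np

# integral of exp(l2*p^2 + l4*p^4) with l2 < 0 and a small l4, versus the
# second-order Laplace-method result sqrt(-pi/l2) * (1 + 3*l4/(4*l2^2))
l2, l4 = -2.0, -0.01
p = np.linspace(-10.0, 10.0, 200_001)
dp = p[1] - p[0]
numeric = np.sum(np.exp(l2 * p ** 2 + l4 * p ** 4)) * dp
laplace = np.sqrt(-np.pi / l2) * (1 + 3 * l4 / (4 * l2 ** 2))
print(numeric, laplace)   # the two values agree to about four decimals
```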
Next, we turn to calculate the second integral in (21), which can be expressed as:

$\exp(\lambda_0) \int p_1^2[n] \exp\left(\lambda_4 p_1^4[n] - \lambda_2 G p_1^6[n] - \lambda_4 F p_1^6[n]\right) \exp\left(\lambda_2 p_1^2[n]\right) dp_1[n] = \exp(\lambda_0) \int g_2\left(p_1[n]\right) \exp\left(-\dfrac{\Psi\left(p_1[n]\right)}{\gamma}\right) dp_1[n]$
where $g_2\left(p_1[n]\right) = p_1^2[n] \exp\left(\lambda_4 p_1^4[n] - \lambda_2 G p_1^6[n] - \lambda_4 F p_1^6[n]\right)$    (28)

According to Laplace's integral method [51], we can write (28) as:

$\int g_2\left(p_1[n]\right) \exp\left(-\dfrac{\Psi\left(p_1[n]\right)}{\gamma}\right) dp_1[n] \approx \exp\left(-\dfrac{\Psi(p_0)}{\gamma}\right) \sqrt{\dfrac{2\pi\gamma}{\Psi''(p_0)}} \left[g_2(p_0) + \dfrac{g_2''(p_0)}{2}\dfrac{\gamma}{\Psi''(p_0)} + \dfrac{g_2''''(p_0)}{8}\left(\dfrac{\gamma}{\Psi''(p_0)}\right)^2 + \dfrac{g_2^{VI}(p_0)}{48}\left(\dfrac{\gamma}{\Psi''(p_0)}\right)^3 + \dfrac{g_2^{VIII}(p_0)}{384}\left(\dfrac{\gamma}{\Psi''(p_0)}\right)^4 + O\left(\left(\dfrac{\gamma}{\Psi''(p_0)}\right)^5\right)\right]$    (29)

where:

$g_2(p_0) = 0$; $\quad g_2''(p_0) = 2$; $\quad g_2''''(p_0) = 0$; $\quad g_2^{VI}(p_0) = 720\lambda_4$; $\quad g_2^{VIII}(p_0) = -40320\left(F\lambda_4 + G\lambda_2\right)$    (30)

Based on (21), (29) and (30), we may write (28) as:

$\exp(\lambda_0) \int g_2\left(p_1[n]\right) \exp\left(-\dfrac{\Psi\left(p_1[n]\right)}{\gamma}\right) dp_1[n] \approx \exp(\lambda_0)\sqrt{\dfrac{-\pi}{\lambda_2}}\left(-\dfrac{1}{2\lambda_2} - \dfrac{15\lambda_4}{8\lambda_2^3} - \dfrac{105\left(F\lambda_4 + G\lambda_2\right)}{16\lambda_2^4}\right) \approx m_p$    (31)

Based on (27) and (31) we have:

$\sqrt{\dfrac{-\pi}{\lambda_2}}\left(-\dfrac{1}{2\lambda_2} - \dfrac{15\lambda_4}{8\lambda_2^3} - \dfrac{105\left(F\lambda_4 + G\lambda_2\right)}{16\lambda_2^4}\right) \approx m_p \exp(-\lambda_0) \;\Rightarrow\; \lambda_4 \approx \dfrac{-\dfrac{1}{2\lambda_2} - \dfrac{105G}{16\lambda_2^3} - m_p\left(1 + \dfrac{15G}{8\lambda_2^2}\right)}{\dfrac{15}{8\lambda_2^3} + \dfrac{105F}{16\lambda_2^4} + m_p\left(\dfrac{3}{4\lambda_2^2} + \dfrac{15F}{8\lambda_2^3}\right)}$    (32)
Next, we turn to calculate the third integral in (21), which can be expressed as:

$\exp(\lambda_0) \int p_1^4[n] \exp\left(\lambda_4 p_1^4[n] - \lambda_2 G p_1^6[n] - \lambda_4 F p_1^6[n]\right) \exp\left(\lambda_2 p_1^2[n]\right) dp_1[n] = \exp(\lambda_0) \int g_3\left(p_1[n]\right) \exp\left(-\dfrac{\Psi\left(p_1[n]\right)}{\gamma}\right) dp_1[n]$
where $g_3\left(p_1[n]\right) = p_1^4[n] \exp\left(\lambda_4 p_1^4[n] - \lambda_2 G p_1^6[n] - \lambda_4 F p_1^6[n]\right)$    (33)

According to Laplace's integral method [51], we can write (33) as:

$\int g_3\left(p_1[n]\right) \exp\left(-\dfrac{\Psi\left(p_1[n]\right)}{\gamma}\right) dp_1[n] \approx \exp\left(-\dfrac{\Psi(p_0)}{\gamma}\right) \sqrt{\dfrac{2\pi\gamma}{\Psi''(p_0)}} \left[g_3(p_0) + \dfrac{g_3''(p_0)}{2}\dfrac{\gamma}{\Psi''(p_0)} + \dfrac{g_3''''(p_0)}{8}\left(\dfrac{\gamma}{\Psi''(p_0)}\right)^2 + \dfrac{g_3^{VI}(p_0)}{48}\left(\dfrac{\gamma}{\Psi''(p_0)}\right)^3 + \dfrac{g_3^{VIII}(p_0)}{384}\left(\dfrac{\gamma}{\Psi''(p_0)}\right)^4 + \dfrac{g_3^{X}(p_0)}{3840}\left(\dfrac{\gamma}{\Psi''(p_0)}\right)^5 + O\left(\left(\dfrac{\gamma}{\Psi''(p_0)}\right)^6\right)\right]$    (34)

where:

$g_3(p_0) = 0$; $\quad g_3''(p_0) = 0$; $\quad g_3''''(p_0) = 24$; $\quad g_3^{VI}(p_0) = 0$; $\quad g_3^{VIII}(p_0) = 40320\lambda_4$; $\quad g_3^{X}(p_0) = -3628800\left(F\lambda_4 + G\lambda_2\right)$    (35)

Based on (21), (34) and (35), we can write (33) as:

$\exp(\lambda_0) \int g_3\left(p_1[n]\right) \exp\left(-\dfrac{\Psi\left(p_1[n]\right)}{\gamma}\right) dp_1[n] \approx \exp(\lambda_0)\sqrt{\dfrac{-\pi}{\lambda_2}}\left(\dfrac{3}{4\lambda_2^2} + \dfrac{105\lambda_4}{16\lambda_2^4} + \dfrac{945\left(F\lambda_4 + G\lambda_2\right)}{32\lambda_2^5}\right) \approx E\left[p_1^4[n]\right]$    (36)

Based on (27) and (36) we may write:

$\lambda_4 \approx \dfrac{E\left[p_1^4[n]\right]\left(1 + \dfrac{15G}{8\lambda_2^2}\right) - \dfrac{3}{4\lambda_2^2} - \dfrac{945G}{32\lambda_2^4}}{\dfrac{105}{16\lambda_2^4} + \dfrac{945F}{32\lambda_2^5} - E\left[p_1^4[n]\right]\left(\dfrac{3}{4\lambda_2^2} + \dfrac{15F}{8\lambda_2^3}\right)}$    (37)
However, we already obtained $\lambda_4$ in (32). Thus, the expression for $\lambda_4$ in (37) and that obtained in (32) should be approximately the same. Thus, we may write:

$\dfrac{-\dfrac{1}{2\lambda_2} - \dfrac{105G}{16\lambda_2^3} - m_p\left(1 + \dfrac{15G}{8\lambda_2^2}\right)}{\dfrac{15}{8\lambda_2^3} + \dfrac{105F}{16\lambda_2^4} + m_p\left(\dfrac{3}{4\lambda_2^2} + \dfrac{15F}{8\lambda_2^3}\right)} \approx \dfrac{E\left[p_1^4[n]\right]\left(1 + \dfrac{15G}{8\lambda_2^2}\right) - \dfrac{3}{4\lambda_2^2} - \dfrac{945G}{32\lambda_2^4}}{\dfrac{105}{16\lambda_2^4} + \dfrac{945F}{32\lambda_2^5} - E\left[p_1^4[n]\right]\left(\dfrac{3}{4\lambda_2^2} + \dfrac{15F}{8\lambda_2^3}\right)}$    (38)
Now, for $8\lambda_2^2 \gg 105G$, we may write (38) as:

$\dfrac{-\dfrac{1}{2\lambda_2} - m_p}{\dfrac{15}{8\lambda_2^3} + \dfrac{105F}{16\lambda_2^4} + m_p\left(\dfrac{3}{4\lambda_2^2} + \dfrac{15F}{8\lambda_2^3}\right)} \approx \dfrac{E\left[p_1^4[n]\right] - \dfrac{3}{4\lambda_2^2} - \dfrac{945G}{32\lambda_2^4}}{\dfrac{105}{16\lambda_2^4} + \dfrac{945F}{32\lambda_2^5} - E\left[p_1^4[n]\right]\left(\dfrac{3}{4\lambda_2^2} + \dfrac{15F}{8\lambda_2^3}\right)}$    (39)

Next, let us write $G \approx \dfrac{8\lambda_2^2}{105T}$, where $T$ is a very large positive value, and use it in (39):

$\dfrac{-\dfrac{1}{2\lambda_2} - m_p}{\dfrac{15}{8\lambda_2^3} + \dfrac{105F}{16\lambda_2^4} + m_p\left(\dfrac{3}{4\lambda_2^2} + \dfrac{15F}{8\lambda_2^3}\right)} \approx \dfrac{E\left[p_1^4[n]\right] - \dfrac{3}{4\lambda_2^2} - \dfrac{9}{4\lambda_2^2 T}}{\dfrac{105}{16\lambda_2^4} + \dfrac{945F}{32\lambda_2^5} - E\left[p_1^4[n]\right]\left(\dfrac{3}{4\lambda_2^2} + \dfrac{15F}{8\lambda_2^3}\right)}$    (40)
A possible solution for the left side of (40) being equal to its right side is when:

$-\dfrac{1}{2\lambda_2} - m_p \approx 0$ and $E\left[p_1^4[n]\right] - \dfrac{3}{4\lambda_2^2} - \dfrac{9}{4\lambda_2^2 T} \approx 0 \;\Rightarrow\; m_p \approx -\dfrac{1}{2\lambda_2}$; $\quad E\left[p_1^4[n]\right] \approx 3m_p^2\left(1 + \dfrac{3}{T}\right)$    (41)

Please note that when (41) holds, it means that $\lambda_4 \approx 0$. In addition, for $T \to \infty$, $E\left[p_1^4[n]\right] \to 3m_p^2$, which holds in the Gaussian case. Next, by using (18), $G \approx \dfrac{8\lambda_2^2}{105T}$ and (41), we may write:

$G = \dfrac{E\left[p_1^2[n+1] - p_1^2[n]\right]}{E\left[p_1^6[n+1] - p_1^6[n]\right]} \approx \dfrac{8\lambda_2^2}{105T} \approx \dfrac{8\left(\dfrac{1}{2m_p}\right)^2}{105T} = \dfrac{2}{105 m_p^2 T}$
$\lambda_6 = -\lambda_2 G - \lambda_4 F \approx -\lambda_2 G \approx \dfrac{1}{2m_p}\cdot\dfrac{2}{105 m_p^2 T} = \dfrac{1}{105 m_p^3 T}$    (42)
By using (27), $G \approx \dfrac{8\lambda_2^2}{105T}$, (41) and (42), we may write $f\left(p_1[n]\right)$ in (13) as:

$f\left(p_1[n]\right) \approx \exp\left(\lambda_0 + \lambda_2 p_1^2[n] + \lambda_4 p_1^4[n] + \lambda_6 p_1^6[n]\right) \approx \dfrac{1}{\sqrt{2\pi m_p}\left(1 + \dfrac{1}{7T}\right)} \exp\left(-\dfrac{1}{2m_p} p_1^2[n] + \dfrac{1}{105 m_p^3 T} p_1^6[n]\right)$    (43)
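As a numerical illustration, the normalization factor in (43) can be verified by direct integration over the effective support (the values of $m_p$ and $T$ below are our illustrative choices):

```python
import numpy as np

# check that the right-hand side of (43) integrates to ~1
# for illustrative values m_p = 0.1 and T = 500
mp, T = 0.1, 500.0
p = np.linspace(-3.0, 3.0, 600_001)   # roughly 9.5 standard deviations
dp = p[1] - p[0]
pdf = np.exp(-p ** 2 / (2 * mp) + p ** 6 / (105 * mp ** 3 * T)) \
      / (np.sqrt(2 * np.pi * mp) * (1 + 1 / (7 * T)))
print(np.sum(pdf) * dp)   # close to 1
```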
Please note that for $m_p \to 0$ and $T \to \infty$, the convolutional noise pdf given in (43) tends to the Gaussian one. However, this does not tell us for which residual ISI this occurs. Thus, we turn to the expression $G \approx \dfrac{8\lambda_2^2}{105T}$ and ask for which residual ISI, or $m_p$, it holds. In order to do this, we first have to obtain closed-form expressions for $E\left[p_1^2[n+1] - p_1^2[n]\right]$ and $E\left[p_1^6[n+1] - p_1^6[n]\right]$. According to [58], we have:

$\Delta \tilde{g}_i = \dfrac{\partial \tilde{g}_i}{\partial p_1[n]} \Delta p_1 + \dfrac{1}{2} \dfrac{\partial^2 \tilde{g}_i}{\partial p_1^2[n]} \left(\Delta p_1\right)^2 + O\left(\left(\Delta p_1\right)^3\right); \; i = 1, 2$;
where $\Delta \tilde{g}_i = \tilde{g}_i[n+1] - \tilde{g}_i[n]$; $\quad \Delta p_1 = -Re\left(\mu P\left[z[n]\right] \sum_{m=0}^{m=R-1} y[n-m]\, y^*[n-m]\right)$    (44)
where $P[z[n]]$ and $Re(P[z[n]])$ are given in (9) and (10), respectively. Thus, based on (44), we have:

$\tilde{g}_1[n] = p_1^2[n]: \quad E\left[p_1^2[n+1] - p_1^2[n]\right] \approx 2E\left[p_1[n]\,\Delta p_1\right] + E\left[\left(\Delta p_1\right)^2\right]$
$\tilde{g}_2[n] = p_1^6[n]: \quad E\left[p_1^6[n+1] - p_1^6[n]\right] \approx 6E\left[p_1^5[n]\,\Delta p_1\right] + 15E\left[p_1^4[n]\left(\Delta p_1\right)^2\right]$    (45)
In the following, we assume that $E\left[\left(\sum_{m=0}^{m=R-1} y[n-m]\, y^*[n-m]\right)^2\right] \approx B^2$, as was done in [58]. Please note that in [58] the expression for $E\left[p_1^2[n+1] - p_1^2[n]\right]$ was already derived. However, in [58] the Gaussian case was assumed, while here we do not use this assumption. Therefore, in our case, all the higher moments (higher than four) associated with the real part of the convolutional noise have to be obtained differently than in [58]. In addition, the expression obtained for $E\left[p_1^2[n+1] - p_1^2[n]\right]$ in [58] was set to zero to find the residual ISI applicable in the convergence state, while in our case it is not set to zero. In order to carry out the calculations of $E\left[p_1^2[n+1] - p_1^2[n]\right]$ and $E\left[p_1^6[n+1] - p_1^6[n]\right]$, we need the moments up to order ten of the real part of the convolutional noise. Since we cannot use the Gaussian assumption but have on hand the fourth moment of the real part of the convolutional noise (41), we only need a technique that supplies all the higher-order moments (higher than four) of the real part of the convolutional noise. The quasi-moment truncation technique is related to the Hermite polynomials, where the high-order central moments are approximated in terms of lower-order central moments [48]. Thus, according to the quasi-moment truncation technique [48] and with the help of (41), we have:
$E\left[p_1^6\right] \approx 15 m_p E\left[p_1^4\right] - 30 m_p^3 = \left(15 + \dfrac{135}{T}\right) m_p^3$
$E\left[p_1^8\right] \approx 28 m_p E\left[p_1^6\right] - 210 m_p^2 E\left[p_1^4\right] + 315 m_p^4 = \left(105 + \dfrac{1890}{T}\right) m_p^4$
$E\left[p_1^{10}\right] \approx 45 m_p E\left[p_1^8\right] - 630 m_p^2 E\left[p_1^6\right] + 3150 m_p^3 E\left[p_1^4\right] - 3780 m_p^5 = \left(945 + \dfrac{28350}{T}\right) m_p^5$    (46)
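In the limit $T \to \infty$, the right-hand sides of (46) collapse to the Gaussian moments $15m_p^3$, $105m_p^4$ and $945m_p^5$; a quick Monte Carlo check of that limiting case (our sketch, with an arbitrary variance):

```python
import numpy as np

rng = np.random.default_rng(1)
mp = 0.25                                    # mp = E[p1^2] = sigma^2
p = rng.normal(0.0, np.sqrt(mp), 4_000_000)  # zero-mean Gaussian samples
for order, coef in [(6, 15), (8, 105), (10, 945)]:
    est = np.mean(p ** order) / mp ** (order // 2)
    print(order, round(est, 1), coef)        # estimates near 15, 105, 945
```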
Next, by using (41), (44)–(46), we may write:
$E\left[p_1^2[n+1] - p_1^2[n]\right] \approx B^2\left[3a_{12}^2\left(1+\tfrac{3}{T}\right) + a_3^2\left(15+\tfrac{135}{T}\right) + 6a_3a_{12}\left(1+\tfrac{3}{T}\right)\right]m_p^3 + \left[-2B\left(a_{12} + 3a_3\left(1+\tfrac{3}{T}\right)\right) + B^2\left(2a_1a_{12} + 6m_2a_{12}^2 + 12a_3m_2a_{12} + 6a_1a_3\left(1+\tfrac{3}{T}\right) + 45a_3^2m_2\left(1+\tfrac{3}{T}\right) + 3m_2a_{12}^2\left(1+\tfrac{3}{T}\right) + 6a_3m_2a_{12}\left(1+\tfrac{3}{T}\right)\right)\right]m_p^2 + \left[-2B\left(a_1 + 3a_3m_2 + m_2a_{12}\right) + B^2\left(a_1^2 + 15a_3^2m_4 + m_4a_{12}^2 + 6m_2^2a_{12}^2 + 12a_1a_3m_2 + 4a_1m_2a_{12} + 2a_3m_4a_{12} + 12a_3m_2^2a_{12}\right)\right]m_p + B^2\left(a_1^2m_2 + 2a_1a_3m_4 + 2a_1m_2^2a_{12} + a_3^2m_6 + 2a_3m_2m_4a_{12} + m_2m_4a_{12}^2\right)$    (47)
and:
$E\left[p_1^6[n+1] - p_1^6[n]\right] \approx 15B^2\left[a_3^2\left(945+\tfrac{28350}{T}\right) + 3a_{12}^2\left(1+\tfrac{3}{T}\right)\left(15+\tfrac{135}{T}\right) + 2a_3a_{12}\left(105+\tfrac{1890}{T}\right)\right]m_p^5 + \left[-6B\left(a_{12}\left(15+\tfrac{135}{T}\right) + a_3\left(105+\tfrac{1890}{T}\right)\right) + 15B^2\left(6m_2a_{12}^2\left(15+\tfrac{135}{T}\right) + 15a_3^2m_2\left(105+\tfrac{1890}{T}\right) + 9m_2a_{12}^2\left(1+\tfrac{3}{T}\right)^2 + 2a_1a_{12}\left(15+\tfrac{135}{T}\right) + 2a_1a_3\left(105+\tfrac{1890}{T}\right) + 12a_3m_2a_{12}\left(15+\tfrac{135}{T}\right) + 2a_3m_2a_{12}\left(105+\tfrac{1890}{T}\right)\right)\right]m_p^4 + \left[-6B\left(a_1 + 3a_3m_2 + m_2a_{12}\right)\left(15+\tfrac{135}{T}\right) + 15B^2\left(a_1^2\left(15+\tfrac{135}{T}\right) + 15a_3^2m_4\left(15+\tfrac{135}{T}\right) + m_4a_{12}^2\left(15+\tfrac{135}{T}\right) + 18m_2^2a_{12}^2\left(1+\tfrac{3}{T}\right) + 6a_1m_2a_{12}\left(1+\tfrac{3}{T}\right) + 6a_3m_4a_{12}\left(1+\tfrac{3}{T}\right) + 12a_1a_3m_2\left(15+\tfrac{135}{T}\right) + 2a_1m_2a_{12}\left(15+\tfrac{135}{T}\right) + 12a_3m_2^2a_{12}\left(15+\tfrac{135}{T}\right)\right)\right]m_p^3 + 15B^2\left(3a_1^2m_2 + 3a_3^2m_6 + 6a_1a_3m_4 + 6a_1m_2^2a_{12} + 3m_2m_4a_{12}^2 + 6a_3m_2m_4a_{12}\right)\left(1+\tfrac{3}{T}\right)m_p^2$    (48)
Based on (42) we may write:
$2E\left[p_1^6[n+1] - p_1^6[n]\right] - 105 m_p^2 T\, E\left[p_1^2[n+1] - p_1^2[n]\right] \approx 0$    (49)
Next, by substituting (47) and (48) into (49) and dividing both sides of the obtained equation by m p 2 for m p 0 , we obtain (12). By substituting the solution for m p obtained in (12) into (5), together with E [ x 1 2 [ n ] ] , (11) is obtained. □
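Numerically, Theorem 1 reduces to a cubic root-finding step: compute $A_1$-$A_4$ from the system parameters via (12), take a real positive root for $m_p$, and convert to dB via (11). A sketch of that final step (the coefficient values below are placeholders for demonstration only, not values computed from (12)):

```python
import numpy as np

def residual_isi_db(A, m2):
    """Solve A1*mp^3 + A2*mp^2 + A3*mp + A4 = 0 as in (12) and return
    ISI_res = 10*log10(mp) - 10*log10(m2) as in (11), taking the
    smallest real positive root for mp."""
    roots = np.roots(A)
    real_pos = roots.real[np.isclose(roots.imag, 0.0) & (roots.real > 0)]
    mp = real_pos.min()
    return 10 * np.log10(mp) - 10 * np.log10(m2)

# placeholder coefficients, only to demonstrate the root selection:
print(residual_isi_db((1.0, -2.0, -1.0, 0.5), m2=5.0))
```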

4. Simulation

In the previous section, we derived a closed-form approximated expression for the residual ISI (11) as a function of the system's parameters (step-size parameter, input constellation statistics, equalizer's tap length, channel power and properties of the chosen equalizer via $a_1$, $a_3$ and $a_{12}$) for which the pdf of the real part of the convolutional noise can be approximately considered a Gaussian pdf. In this section, we wish to calculate the residual ISI (11) and see whether the obtained level for the residual ISI (11) is above the $-16$ [dB] point, where the "eye diagram" is still closed; specifically, where decisions on the equalized output sequence cannot be made with confidence. In the following, we use the channel proposed by [7]: $h_n = 0$ for $n < 0$; $h_n = 0.4$ for $n = 0$; $h_n = 0.84 \cdot 0.4^{n-1}$ for $n > 0$.
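The channel above decays geometrically, and its power, the $\sum_k |h_k|^2$ factor entering $B$ in (12), can be computed directly (a small sketch; the truncation length $R = 16$ is our choice):

```python
import numpy as np

R = 16                                   # truncation length (our choice)
h = np.zeros(R)
h[0] = 0.4
h[1:] = 0.84 * 0.4 ** np.arange(R - 1)   # h[n] = 0.84*0.4^(n-1) for n > 0
print(np.sum(np.abs(h) ** 2))            # ~1.0: 0.16 + 0.84^2/(1 - 0.16)
```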
Figure 2 describes the averaged ISI and equalized output constellation obtained for Godard's algorithm [52], where decisions on the equalized output sequence can be made rather reliably. Figure 3 describes the comparison between the simulated ISI obtained with Godard's algorithm [52] and the residual ISI calculated via the expression given in (11) for two different values of $T$, using the values for $a_1$, $a_{12}$ and $a_3$ associated with Godard's algorithm [52]. According to Figure 3, the simulated ISI is above the $-16$ [dB] point, where decisions on the equalized output sequence cannot be made in a reliable manner, as can be clearly seen from Figure 4. According to Figure 3, the calculated residual ISI (11) for both cases ($T = 500$ and $T = 1000$) is above the obtained level for the simulated ISI. Thus, we have shown here via simulation that the Gaussian assumption for the convolutional noise pdf can be approximately made just before the equalizer has converged, even at a residual ISI level where no trustworthy decisions can be made. Next, we apply the MMA algorithm [55,56] to the blind adaptive equalization task. Figure 5 describes the comparison between the simulated ISI obtained with the MMA algorithm [55,56] and the residual ISI calculated via the expression given in (11) for two different values of $T$, using the values for $a_1$, $a_{12}$ and $a_3$ associated with the MMA algorithm [55,56]. According to Figure 5, the simulated ISI is above the $-16$ [dB] point, where decisions on the equalized output sequence cannot be made in a reliable manner, as can be clearly seen from Figure 6. According to Figure 5, the calculated residual ISI (11) for both cases ($T = 500$ and $T = 1000$) is above the obtained level for the simulated ISI. Thus, also here, we see via simulation that the Gaussian assumption for the convolutional noise pdf can be approximately made just before the equalizer has converged, even at a residual ISI level where no reliable decisions can be carried out.
Please note that we used T = 500 and T = 1000 since, according to (12), the value for T should satisfy T ≥ 100.
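The simulated ISI plotted in the figures is, in this literature, typically computed from the combined channel-equalizer impulse response. The sketch below uses that common definition (residual tap energy relative to the dominant tap energy, in dB); the function name is ours, and we assume the paper's ISI measure follows this convention.

```python
import numpy as np

def residual_isi_db(channel, equalizer):
    """Residual ISI in dB of the combined system s = c * w:
    ISI = (sum_k |s_k|^2 - max_k |s_k|^2) / max_k |s_k|^2."""
    s = np.convolve(channel, equalizer)   # combined impulse response
    p = np.abs(s) ** 2                    # tap energies
    return 10.0 * np.log10((p.sum() - p.max()) / p.max())

# A combined response of [1, 0.1] leaves one interfering tap with 1% of the
# dominant tap's energy, i.e., roughly -20 dB of residual ISI.
print(residual_isi_db([1.0, 0.1], [1.0]))  # ≈ -20 dB
```

Perfect equalization drives every non-dominant tap of the combined response to zero, so the ISI tends to minus infinity on this scale; levels near −16 dB correspond to a still closed "eye diagram".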

5. Discussion

In this study, a closed-form approximated expression (11) was established for the residual ISI at which the real part of the convolutional noise’s pdf can be approximately regarded as Gaussian. In the previous section, we showed via simulation that this Gaussian assumption can be approximately made just before the equalizer has converged, even at a residual ISI level where no reliable decisions can be made, contrary to what was believed in the literature [41]. This may explain why blind adaptive equalizers that rely on the Gaussian assumption for the convolutional noise pdf throughout the entire deconvolution procedure ([1,2,6,34,40,41,42,43]) achieve satisfactory equalization results from an ISI and acquisition perspective.
Let us return for a moment to the expression obtained in (43) for the pdf associated with the real part of the convolutional noise. As already stated, for m_p → 0 and T → ∞, the convolutional noise pdf given in (43) tends to the Gaussian one. Since G = 8λ₂²/(105T), we have G → 0 for T → ∞. Now, based on (18), G → 0 when E[p₁²[n+1] − p₁²[n]] → 0. In [58], the expression for E[p₁²[n+1] − p₁²[n]] was obtained for the Gaussian case and set to zero in order to find the residual ISI applicable in the convergence state for the noiseless case. Thus, for T → ∞, the expression obtained in (11) for the residual ISI at which the real part of the convolutional noise’s pdf can be approximately regarded as Gaussian approximately coincides with the expression for the residual ISI applicable in the convergence state given in [58]. This may explain the very satisfying results reported in [58] for the 16QAM case, even for residual ISI levels above −16 [dB]. Although only the 16QAM constellation input was considered in this paper, the expression for the residual ISI given in (11) also holds for the real-valued input case and for any other input constellation belonging to the two-independent-quadrature-carrier case, such as the 64QAM and 256QAM inputs. Likewise, although only one channel was used in the simulation task, the expression for the residual ISI given in (11) holds for any channel that complies with assumption two from the system description section. In [58], eight different channels and three different input sources (16QAM, 64QAM and 4QAM) were considered in the simulation task, and very satisfying simulation results were reported.
Thus, in this paper, there was no need to consider additional channels or input sources beyond the 16QAM for the simulation task.

Funding

This research received no external funding.

Data Availability Statement

All the relevant data is given in the article.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Pinchas, M.; Bobrovsky, B.Z. A Maximum Entropy approach for blind deconvolution. Signal Process. 2006, 86, 2913–2931.
2. Pinchas, M. A New Efficient Expression for the Conditional Expectation of the Blind Adaptive Deconvolution Problem Valid for the Entire Range of Signal-to-Noise Ratio. Entropy 2019, 21, 72.
3. Shlisel, S.; Pinchas, M. Improved Approach for the Maximum Entropy Deconvolution Problem. Entropy 2021, 23, 547.
4. Freiman, A.; Pinchas, M. A Maximum Entropy inspired model for the convolutional noise PDF. Digit. Signal Process. 2015, 39, 35–49.
5. Rivlin, Y.; Pinchas, M. Edgeworth Expansion Based Model for the Convolutional Noise pdf. Math. Probl. Eng. 2014, 2014, 951927.
6. Pinchas, M. New Lagrange Multipliers for the Blind Adaptive Deconvolution Problem Applicable for the Noisy Case. Entropy 2016, 18, 65.
7. Shalvi, O.; Weinstein, E. New criteria for blind deconvolution of nonminimum phase systems (channels). IEEE Trans. Inf. Theory 1990, 36, 312–321.
8. Wiggins, R.A. Minimum entropy deconvolution. Geoexploration 1978, 16, 21–35.
9. Kazemi, N.; Sacchi, M.D. Sparse multichannel blind deconvolution. Geophysics 2014, 79, V143–V152.
10. Guitton, A.; Claerbout, J. Nonminimum phase deconvolution in the log domain: A sparse inversion approach. Geophysics 2015, 80, WD11–WD18.
11. Silva, M.T.M.; Arenas-Garcia, J. A Soft-Switching Blind Equalization Scheme via Convex Combination of Adaptive Filters. IEEE Trans. Signal Process. 2013, 61, 1171–1182.
12. Mitra, R.; Singh, S.; Mishra, A. Improved multi-stage clustering-based blind equalisation. IET Commun. 2011, 5, 1255–1261.
13. Gul, M.M.U.; Sheikh, S.A. Design and implementation of a blind adaptive equalizer using Frequency Domain Square Contour Algorithm. Digit. Signal Process. 2010, 20, 1697–1710.
14. Sheikh, S.A.; Fan, P. New Blind Equalization techniques based on improved square contour algorithm. Digit. Signal Process. 2008, 18, 680–693.
15. Thaiupathump, T.; He, L.; Kassam, S.A. Square contour algorithm for blind equalization of QAM signals. Signal Process. 2006, 86, 3357–3370.
16. Sharma, V.; Raj, V.N. Convergence and performance analysis of Godard family and multimodulus algorithms for blind equalization. IEEE Trans. Signal Process. 2005, 53, 1520–1533.
17. Yuan, J.T.; Lin, T.C. Equalization and Carrier Phase Recovery of CMA and MMA in Blind Adaptive Receivers. IEEE Trans. Signal Process. 2010, 58, 3206–3217.
18. Yuan, J.T.; Tsai, K.D. Analysis of the multimodulus blind equalization algorithm in QAM communication systems. IEEE Trans. Commun. 2005, 53, 1427–1431.
19. Wu, H.C.; Wu, Y.; Principe, J.C.; Wang, X. Robust switching blind equalizer for wireless cognitive receivers. IEEE Trans. Wirel. Commun. 2008, 7, 1461–1465.
20. Kundur, D.; Hatzinakos, D. A novel blind deconvolution scheme for image restoration using recursive filtering. IEEE Trans. Signal Process. 1998, 46, 375–390.
21. Likas, C.L.; Galatsanos, N.P. A variational approach for Bayesian blind image deconvolution. IEEE Trans. Signal Process. 2004, 52, 2222–2233.
22. Li, D.; Mersereau, R.M.; Simske, S. Blind Image Deconvolution Through Support Vector Regression. IEEE Trans. Neural Netw. 2007, 18, 931–935.
23. Amizic, B.; Spinoulas, L.; Molina, R.; Katsaggelos, A.K. Compressive Blind Image Deconvolution. IEEE Trans. Image Process. 2013, 22, 3994–4006.
24. Tzikas, D.G.; Likas, C.L.; Galatsanos, N.P. Variational Bayesian Sparse Kernel-Based Blind Image Deconvolution with Student’s-t Priors. IEEE Trans. Image Process. 2009, 18, 753–764.
25. Feng, C.; Chi, C. Performance of cumulant based inverse filters for blind deconvolution. IEEE Trans. Signal Process. 1999, 47, 1922–1935.
26. Abrar, S.; Nandi, A.S. Blind Equalization of Square-QAM Signals: A Multimodulus Approach. IEEE Trans. Commun. 2010, 58, 1674–1685.
27. Vanka, R.N.; Murty, S.B.; Mouli, B.C. Performance comparison of supervised and unsupervised/blind equalization algorithms for QAM transmitted constellations. In Proceedings of the 2014 International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 20–21 February 2014.
28. Ram Babu, T.; Kumar, P.R. Blind Channel Equalization Using CMA Algorithm. In Proceedings of the 2009 International Conference on Advances in Recent Technologies in Communication and Computing (ARTCom 09), Kottayam, India, 27–28 October 2009.
29. Qin, Q.; Huahua, L.; Tingyao, J. A new study on VCMA-based blind equalization for underwater acoustic communications. In Proceedings of the 2013 International Conference on Mechatronic Sciences, Electric Engineering and Computer (MEC), Shengyang, China, 20–22 December 2013.
30. Wang, J.; Huang, H.; Zhang, C.; Guan, J. A Study of the Blind Equalization in the Underwater Communication. In Proceedings of the WRI Global Congress on Intelligent Systems, GCIS ’09, Xiamen, China, 19–21 May 2009.
31. Miranda, M.D.; Silva, M.T.M.; Nascimento, V.H. Avoiding Divergence in the Shalvi–Weinstein Algorithm. IEEE Trans. Signal Process. 2008, 56, 5403–5413.
32. Samarasinghe, P.D.; Kennedy, R.A. Minimum Kurtosis CMA Deconvolution for Blind Image Restoration. In Proceedings of the 4th International Conference on Information and Automation for Sustainability (ICIAFS 2008), Colombo, Sri Lanka, 12–14 December 2008.
33. Zhao, L.; Li, H. Application of the Sato blind deconvolution algorithm for correction of the gravimeter signal distortion. In Proceedings of the 2013 Third International Conference on Instrumentation, Measurement, Computer, Communication and Control, Shenyang, China, 21–23 September 2013; pp. 1413–1417.
34. Fiori, S. Blind deconvolution by a Newton method on the non-unitary hypersphere. Int. J. Adapt. Control Signal Process. 2013, 7, 488–518.
35. Shevach, R.; Pinchas, M. A Closed-form approximated expression for the residual ISI obtained by blind adaptive equalizers applicable for the non-square QAM constellation input and noisy case. In Proceedings of the International Conference on Pervasive and Embedded Computing and Communication Systems (PECCS), Angers, France, 11–13 February 2015; pp. 217–223.
36. Abrar, S.; Ali, A.; Zerguine, A.; Nandi, A.K. Tracking Performance of Two Constant Modulus Equalizers. IEEE Commun. Lett. 2013, 17, 830–833.
37. Azim, A.W.; Abrar, S.; Zerguine, A.; Nandi, A.K. Steady-state performance of multimodulus blind equalizers. Signal Process. 2015, 108, 509–520.
38. Azim, A.W.; Abrar, S.; Zerguine, A.; Nandi, A.K. Performance analysis of a family of adaptive blind equalization algorithms for square-QAM. Digit. Signal Process. 2016, 48, 163–177.
39. Johnson, R.C.; Schniter, P.; Endres, T.J.; Behm, J.D.; Brown, D.R.; Casas, R.A. Blind Equalization Using the Constant Modulus Criterion: A Review. Proc. IEEE 1998, 86, 1927–1950.
40. Pinchas, M.; Bobrovsky, B.Z. A Novel HOS Approach for Blind Channel Equalization. IEEE Wirel. Commun. J. 2007, 6, 875–886.
41. Haykin, S. Blind Deconvolution. In Adaptive Filter Theory; Haykin, S., Ed.; Prentice-Hall: Englewood Cliffs, NJ, USA, 1991; Chapter 20.
42. Bellini, S. Bussgang techniques for blind equalization. IEEE Glob. Telecommun. Conf. Rec. 1986, 3, 1634–1640.
43. Bellini, S. Blind Equalization. Alta Frequenza 1988, 57, 445–450.
44. Godfrey, R.; Rocca, F. Zero memory non-linear deconvolution. Geophys. Prospect. 1981, 29, 189–228.
45. Jumarie, G. Nonlinear filtering. A weighted mean squares approach and a Bayesian one via the Maximum Entropy principle. Signal Process. 1990, 21, 323–338.
46. Papoulis, A. Probability, Random Variables, and Stochastic Processes, 2nd ed.; International Edition; McGraw-Hill: New York, NY, USA, 1984; p. 536, Chapter 15.
47. Assaf, S.A.; Zirkle, L.D. Approximate analysis of nonlinear stochastic systems. Int. J. Control 1976, 23, 477–492.
48. Bover, D.C.C. Moment equation methods for nonlinear stochastic systems. J. Math. Anal. Appl. 1978, 65, 306–320.
49. Armando Domínguez-Molina, J.; González-Farías, G.; Rodríguez-Dagnino, R.M. A Practical Procedure to Estimate the Shape Parameter in the Generalized Gaussian Distribution. Universidad de Guanajuato, ITESM Campus Monterrey, 2003. Available online: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.329.2835 (accessed on 28 March 2021).
50. González-Farías, G.; Domínguez-Molina, J.A.; Rodríguez-Dagnino, R.M. Efficiency of the Approximated Shape Parameter Estimator in the Generalized Gaussian Distribution. IEEE Trans. Veh. Technol. 2009, 58, 4214–4223.
51. Orszag, S.A.; Bender, C.M. Advanced Mathematical Methods for Scientists and Engineers; International Series in Pure and Applied Mathematics; McGraw-Hill: New York, NY, USA, 1978; Chapter 6.
52. Godard, D.N. Self-recovering equalization and carrier tracking in two-dimensional data communication systems. IEEE Trans. Commun. 1980, 28, 1867–1875.
53. Pinchas, M. Two Blind Adaptive Equalizers Connected in Series for Equalization Performance Improvement. J. Signal Inf. Process. 2013, 4, 64–71.
54. Pinchas, M. The tap-length associated with the blind adaptive equalization/deconvolution problem. In Proceedings of the 1st International Electronic Conference—Futuristic Applications on Electronics, Basel, Switzerland, 1–30 November 2020.
55. Oh, K.N.; Chin, Y.O. Modified constant modulus algorithm: Blind equalization and carrier phase recovery algorithm. In Proceedings of the IEEE International Conference on Communications ICC ’95, Seattle, WA, USA, 18–22 June 1995; Volume 1, pp. 498–502.
56. Yang, J.; Werner, J.-J.; Dumont, G.A. The multimodulus blind equalization and its generalized algorithms. IEEE J. Sel. Areas Commun. 2002, 20, 997–1015.
57. Tadmor, S.; Carmi, S.; Pinchas, M. A Novel Dual Mode Decision Directed Multimodulus Algorithm (DM-DD-MMA) for Blind Adaptive Equalization. In Proceedings of the 11th International Conference on Electronics, Communications and Networks (CECNet), Beijing, China, 18–21 November 2021.
58. Pinchas, M. A Closed Approximated Formed Expression for the Achievable Residual Intersymbol Interference Obtained by Blind Equalizers. Signal Process. 2010, 90, 1940–1962.
59. Spiegel, M.R. Mathematical Handbook of Formulas and Tables; Schaum’s Outline Series; McGraw-Hill: New York, NY, USA, 1968.
Figure 1. System description.
Figure 2. Right side plot: Simulated ISI with Godard’s algorithm for the 16QAM constellation input. For the noiseless case, 100 Monte Carlo runs produced the averaged results. The equalizer’s and channel’s tap length were set to 13 ( N = R = 13 ), μ = 0.0001 . Left side plot: Equalized output constellation with Godard’s algorithm for N = R = 13 and μ = 0.0001 .
Figure 3. Simulated ISI with Godard’s algorithm for the 16QAM constellation input. For the noiseless case, 100 Monte Carlo runs produced the averaged results. The averaged results were compared with the calculated residual ISI given in (11) for N = R = 13 , μ = 0.00022 and two cases for T.
Figure 4. Equalized output constellation with Godard’s algorithm for N = R = 13 and μ = 0.00022 .
Figure 5. Simulated ISI with the MMA algorithm for the 16QAM constellation input. For the noiseless case, 100 Monte Carlo runs produced the averaged results. The averaged results were compared with the calculated residual ISI given in (11) for N = R = 13 , μ = 0.000365 and two cases for T.
Figure 6. Equalized output constellation with the MMA algorithm for N = R = 13 and μ = 0.000365 .

Pinchas, M. The Residual ISI for Which the Convolutional Noise Probability Density Function Associated with the Blind Adaptive Deconvolution Problem Turns Approximately Gaussian. Entropy 2022, 24, 989. https://doi.org/10.3390/e24070989


