Article

Low-Complexity Alternatives to the Optimal Linear Coding Scheme for Transmitting ARMA Sources

by
Jesús Gutiérrez-Gutiérrez
,
Fernando M. Villar-Rosety
*,
Xabier Insausti
and
Marta Zárraga-Rodríguez
Department of Biomedical Engineering and Sciences, Tecnun, University of Navarra, Manuel Lardizábal 13, 20018 Donostia-San Sebastián, Spain
*
Author to whom correspondence should be addressed.
Entropy 2022, 24(5), 669; https://doi.org/10.3390/e24050669
Submission received: 7 April 2022 / Revised: 28 April 2022 / Accepted: 6 May 2022 / Published: 10 May 2022
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract:
In the era of the Internet of Things, there are many applications where numerous devices are deployed to acquire information and transmit it, so that the data can be analysed and informed decisions can be made. In these applications, the power consumption and price of the devices are often an issue. In this work, analog coding schemes are considered, so that an ADC is not needed, allowing the size and power consumption of the devices to be reduced. In addition, linear and DFT-based transmission schemes are proposed that lower the complexity of the operations involved, thus reducing the requirements in terms of processing capacity and the price of the hardware. The proposed schemes are proved to be asymptotically optimal among the linear ones for WSS, MA, AR and ARMA sources.

1. Introduction

In the past few years, and especially in the context of the Internet of Things (IoT), numerous applications have emerged in which devices with constrained power and processing capacity acquire and transmit data. Therefore, it is crucial to find coding strategies for transmitting information with an acceptable distortion that are undemanding in terms of processing while minimizing power consumption. A well-known low-power and low-complexity coding strategy is the so-called analog joint source-channel coding (see, e.g., [1]). In this context, some authors have recently proposed the use of low-power/low-cost all-analog sensors [2,3], avoiding power-hungry analog-to-digital converters (ADCs).
Here, we focus on analog joint source-channel linear coding. Specifically, we study the transmission of realizations of an n-dimensional continuous random vector over an additive white Gaussian noise (AWGN) channel employing a linear encoder and decoder. For this scenario, the minimum average transmission power under a fixed average distortion constraint is achieved by the coding strategy given by Lee and Petersen in [4]. Nevertheless, a linear coding scheme based on the discrete Fourier transform (DFT) was presented in [5,6] that, with a lower complexity, is asymptotically optimal among the linear schemes in terms of transmission power for certain sources, namely wide-sense stationary (WSS) and asymptotically WSS (AWSS) autoregressive (AR) sources.
In this paper, we present two new DFT-based alternatives to the optimal linear coding scheme given in [4]. We prove that the average transmission power required for one of them is lower than for the approach in [5,6]. The average transmission power required for the other new alternative is shown to be higher, although it is conceptually simpler, and so is its implementation. We prove that the two new schemes, along with the strategy in [5,6], are asymptotically optimal for any AWSS source. Additionally, we show that the convergence speed of the average transmission power of the alternative schemes is O(1/√n) for WSS, moving average (MA), AWSS AR and AWSS autoregressive moving average (ARMA) sources. Therefore, we conclude that a good performance can be achieved with small data blocks, allowing these schemes to be used in applications that require low latency.
This paper is organized as follows. In Section 2, we mathematically describe the communications system and review the transmission strategy presented in [4] and the alternative given in [5,6]. In Section 3, the two new coding schemes are presented. In Section 4, we compare the performance of the four coding strategies studied in terms of the required transmission power, and we analyze their asymptotic behavior for several types of sources. In Section 5 and Section 6, a numerical example and some conclusions are presented, respectively.

2. Preliminaries

We begin this section by introducing some notation. We denote by $\mathbb{R}$, $\mathbb{C}$, $\mathbb{Z}$ and $\mathbb{N}$ the sets of real numbers, complex numbers, integers and positive integers, respectively. $\lceil a\rceil$ represents the smallest integer higher than or equal to $a\in\mathbb{R}$. $\mathbb{R}^{m\times n}$ is the set of $m\times n$ real matrices. The symbols $\top$ and $*$ denote transpose and conjugate transpose, respectively, the imaginary unit is represented by $\mathrm{i}$, $E$ stands for expectation and $\delta$ is the Kronecker delta (i.e., $\delta_{j,k}=1$ if $j=k$, and it is zero otherwise). If $z\in\mathbb{C}$, $\mathrm{Re}(z)$ and $\mathrm{Im}(z)$ designate the real part and the imaginary part of $z$, respectively, $\overline{z}$ is the conjugate of $z$ and $\hat{z}$ is the 2-dimensional real column vector $(\mathrm{Re}(z),\mathrm{Im}(z))^\top$. $I_n$ denotes the $n\times n$ identity matrix and $V_n$ is the $n\times n$ Fourier unitary matrix, i.e.,
$$[V_n]_{j,k}=\frac{1}{\sqrt{n}}\,e^{-\frac{2\pi(j-1)(k-1)}{n}\mathrm{i}},\qquad j,k\in\{1,\ldots,n\}.$$
$\lambda_1(A),\ldots,\lambda_n(A)$ and $\sigma_1(B),\ldots,\sigma_n(B)$ are the eigenvalues of an $n\times n$ real symmetric matrix $A$ and the singular values of an $n\times n$ matrix $B$, both sorted in decreasing order. If $\{x_n\}_{n\in\mathbb{N}}$ is a random process, we denote by $x_{n:1}$ the random $n$-dimensional column vector $(x_n,x_{n-1},\ldots,x_1)^\top$. The Frobenius norm and the spectral norm are represented by $\|\cdot\|_F$ and $\|\cdot\|_2$, respectively.
If $f:\mathbb{R}\to\mathbb{C}$ is a continuous and $2\pi$-periodic function, we denote by $T_n(f)$ the $n\times n$ Toeplitz matrix given by
$$[T_n(f)]_{j,k}=f_{j-k},\qquad j,k\in\{1,\ldots,n\},$$
where $\{f_k\}_{k\in\mathbb{Z}}$ is the set of Fourier coefficients of $f$:
$$f_k=\frac{1}{2\pi}\int_0^{2\pi}f(\omega)\,e^{-k\omega\mathrm{i}}\,d\omega,\qquad k\in\mathbb{Z}.$$
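The Toeplitz construction above can be checked numerically. The sketch below (plain NumPy; the helper names are ours, not from the paper) approximates the Fourier coefficients of a PSD by a Riemann sum and assembles $T_n(f)$; for the trigonometric polynomial $f(\omega)=2+2\cos\omega$, the coefficients $f_0=2$, $f_{\pm1}=1$ are recovered exactly.

```python
import numpy as np

def fourier_coefficient(f, k, num_points=4096):
    # f_k = (1/2pi) * integral_0^{2pi} f(w) e^{-k w i} dw, via a Riemann sum
    w = np.linspace(0.0, 2.0 * np.pi, num_points, endpoint=False)
    return np.mean(f(w) * np.exp(-1j * k * w))

def toeplitz_matrix(f, n):
    # [T_n(f)]_{j,k} = f_{j-k}
    return np.array([[fourier_coefficient(f, j - k) for k in range(n)]
                     for j in range(n)])

# Example: f(w) = 2 + 2 cos(w) has Fourier coefficients f_0 = 2, f_{+-1} = 1
T4 = toeplitz_matrix(lambda w: 2.0 + 2.0 * np.cos(w), 4)
```

For a real, even PSD the resulting matrix is real and symmetric; here `T4` is the tridiagonal Toeplitz matrix with 2 on the diagonal and 1 on the first sub- and superdiagonals.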

2.1. Problem Statement

We consider a discrete-time analog $n$-dimensional real vector source $x_{n:1}$ with invertible correlation matrix. We want to transmit realizations of this vector by using $n$ times a real AWGN channel with noise variance $\sigma^2>0$. The $n$ iid noise random variables can then be represented by the Gaussian $n$-dimensional vector $\nu_{n:1}$, with correlation matrix $E(\nu_{n:1}\nu_{n:1}^\top)=\sigma^2I_n$. We further assume that the input $x_{n:1}$ and the channel noise $\nu_{n:1}$ are both zero-mean and uncorrelated, i.e., $E(x_j\nu_k)=0$ for all $j,k\in\{1,\ldots,n\}$.
The communications system is depicted in Figure 1, where $G$ and $H$ are $n\times n$ real matrices representing the linear encoder and decoder, respectively. Specifically, a source vector symbol is encoded using a linear transformation, $u_{n:1}=Gx_{n:1}$, and then transmitted through the AWGN channel. An estimate of the source vector symbol is then obtained from the perturbed vector, $\widetilde{u}_{n:1}=u_{n:1}+\nu_{n:1}$, using another linear transformation, $\widetilde{x}_{n:1}=H\widetilde{u}_{n:1}$.
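As a quick sanity check of the system model (not of any particular scheme), the following sketch runs one realization through the encode/channel/decode chain. The placeholder choices $G=H=I_n$ and a noiseless channel are ours; in that degenerate case the estimate must equal the source.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
noise_std = 0.0        # std of the channel noise; 0 checks the noiseless path

G = np.eye(n)          # linear encoder (placeholder choice, not an optimized scheme)
H = np.eye(n)          # linear decoder (placeholder choice)

x = rng.standard_normal(n)                         # a realization of the source vector
u = G @ x                                          # encoded symbol u = G x
u_tilde = u + noise_std * rng.standard_normal(n)   # AWGN channel output
x_tilde = H @ u_tilde                              # linear estimate of the source
distortion = np.mean((x - x_tilde) ** 2)           # (1/n) * ||x - x_tilde||^2
```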
In [4], Lee and Petersen found matrices $G$ and $H$ that minimize the average transmission power, $\frac{1}{n}\sum_{j=1}^nE(u_j^2)$, under a given average distortion constraint $D$, that is,
$$\frac{1}{n}E\left(\left\|x_{n:1}-\widetilde{x}_{n:1}\right\|_F^2\right)\le D.$$

2.2. Known Linear Coding Schemes

2.2.1. Optimal Linear Coding Scheme

As mentioned above, Lee and Petersen presented the optimal linear coding scheme in [4]. The encoder of the optimal linear coding scheme converts the $n$ correlated variables of the source vector into $n$ uncorrelated variables, and assigns a weight to each of them. If $E(x_{n:1}x_{n:1}^\top)=U_n\,\mathrm{diag}\left(\lambda_1\left(E\left(x_{n:1}x_{n:1}^\top\right)\right),\ldots,\lambda_n\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)\right)U_n^{-1}$ is an eigenvalue decomposition of the correlation matrix of the source, where the eigenvector matrix $U_n$ is real and orthogonal, the optimal linear coding scheme is of the type shown in Figure 2, with $W_n=U_n$. Its average transmission power is
$$P_n(D)=\frac{\sigma^2}{D}\left(\left(\frac{1}{n}\sum_{j=1}^n\sqrt{\lambda_j\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)}\right)^2-D\right)\qquad(1)$$
under a given average distortion constraint $D\in\left(0,\lambda_n\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)\right]$.
Since this coding scheme implies a multiplication of an n × n matrix by an n × 1 column vector, its computational complexity is O ( n 2 ) .
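Formula (1), as reconstructed here, depends only on the eigenvalues of the source correlation matrix. A minimal sketch (the helper name is ours), with a closed-form check for a white source, where every eigenvalue equals 1 and the formula reduces to $\sigma^2(1-D)/D$:

```python
import numpy as np

def optimal_linear_power(R, D, sigma2):
    """P_n(D) = (sigma2/D) * ((mean of sqrt eigenvalues)^2 - D), per the reconstructed (1)."""
    lam = np.linalg.eigvalsh(R)              # eigenvalues of E(x x^T)
    assert 0.0 < D <= lam.min() + 1e-12      # validity range D in (0, lambda_n]
    s = np.mean(np.sqrt(lam))
    return sigma2 / D * (s ** 2 - D)

# White source: R = I_n, sigma^2 = 1, D = 0.5 gives P_n(D) = (1/0.5)*(1 - 0.5) = 1
p = optimal_linear_power(np.eye(8), 0.5, 1.0)
```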

2.2.2. DFT-Based Alternative

In [5,6], an alternative to the optimal coding scheme was presented. The encoder of that alternative scheme assigns weights to the real and imaginary parts of the entries of the DFT of the source vector. This alternative coding scheme is of the type shown in Figure 2, with $W_n=M_nV_n^*$, where $M_n$ is the $n\times n$ sparse matrix defined in ([5] Equation (3)). Its average transmission power is
$$\widehat{P}_n(D)=\frac{\sigma^2}{D}\left(\left(\frac{1}{n}\sum_{j=1}^n\sqrt{E\left(z_j^2\right)}\right)^2-D\right)\qquad(2)$$
under a given average distortion constraint $D\in\left(0,\lambda_n\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)\right]$, where
$$E\left(z_{n-j+1}^2\right)=\begin{cases}E\left(y_{n-j+1}^2\right)&\text{if }j\in\left\{1,\frac{n}{2}+1\right\}\cap\mathbb{N},\\[2pt]2E\left(\left(\mathrm{Re}\left(y_{n-j+1}\right)\right)^2\right)&\text{if }2\le j\le\left\lceil\frac{n}{2}\right\rceil,\\[2pt]2E\left(\left(\mathrm{Im}\left(y_{n-j+1}\right)\right)^2\right)&\text{if }\left\lfloor\frac{n}{2}\right\rfloor+2\le j\le n,\end{cases}$$
with $y_{n:1}=V_n^*x_{n:1}$.
The computational complexity of this coding scheme is O ( n log ( n ) ) whenever the fast Fourier transform (FFT) algorithm is used.
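A sketch of how $\widehat{P}_n(D)$ in (2) can be evaluated numerically from a source correlation matrix (helper names are ours; the index ordering differs from the paper's $x_{n:1}$ convention, but the sums in (2) are unaffected by it). The variances of the real and imaginary parts of the DFT outputs are obtained from the covariance $E(yy^*)$ and pseudo-covariance $E(yy^\top)$ of the transformed vector:

```python
import numpy as np

def dft_scheme_power(R, D, sigma2):
    """Average power of the DFT-based scheme of [5,6], per the reconstructed (2)."""
    n = R.shape[0]
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)      # unitary DFT matrix (plays the role of V_n^*)
    var_abs = np.diag(F @ R @ F.conj().T).real  # E|y_j|^2
    pseudo = np.diag(F @ R @ F.T).real          # Re E(y_j^2)
    var_re = (var_abs + pseudo) / 2.0           # E((Re y_j)^2)
    var_im = (var_abs - pseudo) / 2.0           # E((Im y_j)^2)
    z_vars = [var_abs[0]]                       # the frequency-0 DFT output is real
    if n % 2 == 0:
        z_vars.append(var_abs[n // 2])          # so is the frequency-n/2 output
    for j in range(1, (n - 1) // 2 + 1):        # one Re and one Im weight per conjugate pair
        z_vars += [2.0 * var_re[j], 2.0 * var_im[j]]
    s = np.mean(np.sqrt(z_vars))
    return sigma2 / D * (s ** 2 - D), np.array(z_vars)

rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n))
R = A @ A.T / n + np.eye(n)                     # a generic SPD correlation matrix
lam = np.linalg.eigvalsh(R)
D = 0.5 * lam.min()                             # stay inside (0, lambda_n]
p_hat, z_vars = dft_scheme_power(R, D, 1.0)
p_opt = 1.0 / D * (np.mean(np.sqrt(lam)) ** 2 - D)   # reconstructed (1)
```

The weight variances preserve the total source power (their sum is $\mathrm{tr}\,R$), and the computed $\widehat{P}_n(D)$ is never below the optimal $P_n(D)$, in line with Theorem 1.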

3. New Coding Schemes

In this section, we propose two new transmission schemes. Similarly to the scheme reviewed in Section 2.2.2, our two new schemes make use of the DFT, and their computational complexity is also O ( n log ( n ) ) if the FFT algorithm is applied.

3.1. Low-Power Alternative

The encoder of this scheme first computes the DFT of the source vector, $y_{n:1}=V_n^*x_{n:1}$. Afterwards, each 2-dimensional vector $\hat{y}_j$ is encoded using a $2\times2$ real orthogonal eigenvector matrix of the correlation matrix $E(\hat{y}_j\hat{y}_j^\top)$. Therefore, the real part and the imaginary part of each $y_j$ are here jointly encoded, unlike in Section 2.2.2, where they were separately encoded. This coding scheme is shown in Figure 3 for $n$ even (the scheme for $n$ odd is similar), with
$$E\left(\hat{y}_j\hat{y}_j^\top\right)=U_{\hat{y}_j}\,\mathrm{diag}\left(\lambda_1\left(E\left(\hat{y}_j\hat{y}_j^\top\right)\right),\lambda_2\left(E\left(\hat{y}_j\hat{y}_j^\top\right)\right)\right)U_{\hat{y}_j}^{-1},\qquad j\in\left\{\left\lceil\frac{n+1}{2}\right\rceil,\ldots,n-1\right\},$$
being an eigenvalue decomposition of $E(\hat{y}_j\hat{y}_j^\top)$ where the eigenvector matrix $U_{\hat{y}_j}$ is real and orthogonal,
$$\check{\alpha}_j=\sqrt{\frac{1}{E\left(z_j^2\right)}\left(\sqrt{\frac{\sigma^2\left(\sigma^2+\check{P}_n(D)\right)E\left(z_j^2\right)}{D}}-\sigma^2\right)},\qquad j\in\{1,\ldots,n\},\qquad(3)$$
and
$$\check{\beta}_j=\frac{\check{\alpha}_jE\left(z_j^2\right)}{\check{\alpha}_j^2E\left(z_j^2\right)+\sigma^2},\qquad j\in\{1,\ldots,n\}.\qquad(4)$$
In Appendix A, we prove that the average transmission power of this scheme under a given average distortion constraint $D$ is
$$\check{P}_n(D)=\frac{\sigma^2}{D}\left(\left(\frac{1}{n}\sum_{j=1}^n\sqrt{E\left(z_j^2\right)}\right)^2-D\right),\qquad D\in\left(0,\lambda_n\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)\right],\qquad(5)$$
with
$$E\left(z_{n-j+1}^2\right)=\begin{cases}E\left(y_n^2\right)&\text{if }j=1,\\[2pt]2\lambda_1\left(E\left(\hat{y}_{n-\frac{j}{2}}\hat{y}_{n-\frac{j}{2}}^\top\right)\right)&\text{if }j\text{ even},\ j\ne n,\\[2pt]2\lambda_2\left(E\left(\hat{y}_{n-\frac{j-1}{2}}\hat{y}_{n-\frac{j-1}{2}}^\top\right)\right)&\text{if }j\text{ odd},\ j\ne1,\\[2pt]E\left(y_{\frac{n}{2}}^2\right)&\text{if }j=n,\ n\text{ even}.\end{cases}$$
Furthermore, in Section 4, we show that $\check{P}_n(D)\le\widehat{P}_n(D)$, i.e., the average transmission power for this new scheme is lower than the one required for the scheme in Section 2.2.2.
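Under the reconstruction above, $\check{P}_n(D)$ replaces each conjugate pair's Re/Im variances with the eigenvalues of the corresponding $2\times2$ correlation matrix. A sketch (helper names ours; the index ordering is again immaterial to the sums):

```python
import numpy as np

def low_power_scheme_power(R, D, sigma2):
    """Average power of the low-power alternative, per the reconstructed (5)."""
    n = R.shape[0]
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)       # unitary DFT matrix
    C = F @ R @ F.conj().T                       # E(y y^*)
    P = F @ R @ F.T                              # E(y y^T) (pseudo-covariance)
    z_vars = [C[0, 0].real]                      # frequency-0 output is real
    if n % 2 == 0:
        z_vars.append(C[n // 2, n // 2].real)    # so is the frequency-n/2 output
    for j in range(1, (n - 1) // 2 + 1):
        a = C[j, j].real                         # E|y_j|^2
        c = P[j, j].real                         # Re E(y_j^2)
        d = P[j, j].imag                         # Im E(y_j^2), so E(Re y_j * Im y_j) = d/2
        S2 = np.array([[(a + c) / 2, d / 2],     # 2x2 correlation of (Re y_j, Im y_j)
                       [d / 2, (a - c) / 2]])
        lam1, lam2 = np.linalg.eigvalsh(S2)
        z_vars += [2.0 * lam1, 2.0 * lam2]
    # guard against tiny negative eigenvalues caused by rounding
    s = np.mean(np.sqrt(np.maximum(z_vars, 0.0)))
    return sigma2 / D * (s ** 2 - D)

rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n))
R = A @ A.T / n + np.eye(n)
lam = np.linalg.eigvalsh(R)
D = 0.5 * lam.min()
p_check = low_power_scheme_power(R, D, 1.0)
p_opt = 1.0 / D * (np.mean(np.sqrt(lam)) ** 2 - D)   # reconstructed (1)
```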

3.2. DFT/IDFT Alternative

The encoder of this scheme uses both the DFT and the inverse DFT (IDFT). The coding scheme is shown in Figure 4, where
$$\widetilde{\alpha}_j=\sqrt{\frac{1}{E\left(|y_j|^2\right)}\left(\sqrt{\frac{\sigma^2\left(\sigma^2+\widetilde{P}_n(D)\right)E\left(|y_j|^2\right)}{D}}-\sigma^2\right)},\qquad j\in\{1,\ldots,n\},\qquad(6)$$
and
$$\widetilde{\beta}_j=\frac{\widetilde{\alpha}_jE\left(|y_j|^2\right)}{\widetilde{\alpha}_j^2E\left(|y_j|^2\right)+\sigma^2},\qquad j\in\{1,\ldots,n\}.\qquad(7)$$
In Appendix B, we prove that the average transmission power of this scheme under a given average distortion constraint $D$ is
$$\widetilde{P}_n(D)=\frac{\sigma^2}{D}\left(\left(\frac{1}{n}\sum_{j=1}^n\sqrt{E\left(|y_j|^2\right)}\right)^2-D\right),\qquad D\in\left(0,\lambda_n\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)\right].\qquad(8)$$
As can be seen in Figure 4, this coding scheme is conceptually simpler than the ones in Section 2.2.2 and Section 3.1.
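Formula (8), as reconstructed here, needs only the variances $E(|y_j|^2)$, i.e., the diagonal of the DFT-domain correlation matrix. A sketch (helper name ours) with an easy closed-form check: for a white source $R=cI_n$, every $E(|y_j|^2)=c$, so $\widetilde{P}_n(D)$ collapses to the optimal $P_n(D)=(\sigma^2/D)(c-D)$.

```python
import numpy as np

def dft_idft_scheme_power(R, D, sigma2):
    """Average power of the DFT/IDFT alternative, per the reconstructed (8)."""
    n = R.shape[0]
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)        # unitary DFT matrix
    var_abs = np.diag(F @ R @ F.conj().T).real    # E|y_j|^2
    s = np.mean(np.sqrt(var_abs))
    return sigma2 / D * (s ** 2 - D)

# White source with variance c = 2, sigma^2 = 1, D = 1: power is (1/1)*(2 - 1) = 1
p_tilde = dft_idft_scheme_power(2.0 * np.eye(8), 1.0, 1.0)
```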

4. Analysis of the Transmission Power of the Coding Schemes

We begin this section by comparing the performance of each of the four considered coding strategies in terms of the required transmission power.
Theorem 1.
Let $P_n(D)$, $\widehat{P}_n(D)$, $\check{P}_n(D)$ and $\widetilde{P}_n(D)$ be the average transmission powers given in (1), (2), (5) and (8), respectively. Then,
$$P_n(D)\le\check{P}_n(D)\le\widehat{P}_n(D)\le\widetilde{P}_n(D).\qquad(9)$$
Proof. 
See Appendix C. □

4.1. Asymptotic Behavior

We now show that the three alternatives to the optimal linear coding scheme, which were presented in Section 2.2.2, Section 3.1 and Section 3.2, are asymptotically optimal whenever the source is AWSS. To that end, we first need to review three definitions. We begin with the Gray concept of asymptotically equivalent sequences of matrices given in [7].
Definition 1
(Asymptotically equivalent sequences of matrices). Let $A_n$ and $B_n$ be $n\times n$ matrices for all $n\in\mathbb{N}$. The two sequences of matrices $\{A_n\}_{n\in\mathbb{N}}$ and $\{B_n\}_{n\in\mathbb{N}}$ are said to be asymptotically equivalent, abbreviated $\{A_n\}\sim\{B_n\}$, if there exists $M\ge0$ such that
$$\|A_n\|_2,\|B_n\|_2\le M\qquad\forall n\in\mathbb{N},$$
and
$$\lim_{n\to\infty}\frac{\|A_n-B_n\|_F}{\sqrt{n}}=0.$$
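For intuition, the sketch below (example ours) evaluates $\|A_n-B_n\|_F/\sqrt{n}$ for a banded Toeplitz matrix and the circulant matrix obtained by wrapping its band around the corners: the Frobenius distance stays constant ($\sqrt2$ here), so the normalized distance decays like $1/\sqrt n$ and the two sequences are asymptotically equivalent.

```python
import numpy as np

def normalized_distance(n):
    # Tridiagonal Toeplitz matrix: 2 on the diagonal, 1 on the off-diagonals
    T = 2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    C = T.copy()
    C[0, n - 1] = 1.0    # the circulant wraps the band around the corners
    C[n - 1, 0] = 1.0
    return np.linalg.norm(T - C) / np.sqrt(n)    # ||T - C||_F / sqrt(n)

r8, r64 = normalized_distance(8), normalized_distance(64)
```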
Now we recall the well-known concept of WSS process.
Definition 2
(WSS process). Let $f:\mathbb{R}\to\mathbb{R}$ be a continuous, $2\pi$-periodic and non-negative function. A random process $\{x_n\}_{n\in\mathbb{N}}$ is said to be WSS, with power spectral density (PSD) $f$, if it has constant mean, i.e., $E(x_j)=E(x_k)$ for all $j,k\in\mathbb{N}$, and $E(x_{n:1}x_{n:1}^*)=T_n(f)$.
We can now review the Gray concept of AWSS process given in ([8] p. 225).
Definition 3
(AWSS process). Let $f:\mathbb{R}\to\mathbb{R}$ be a continuous, $2\pi$-periodic and non-negative function. A random process $\{x_n\}_{n\in\mathbb{N}}$ is said to be AWSS, with (asymptotic) PSD $f$, if it has constant mean and $\{E(x_{n:1}x_{n:1}^*)\}\sim\{T_n(f)\}$.
Theorem 2.
Suppose that $\{x_n\}_{n\in\mathbb{N}}$ is an AWSS process with PSD $f$ as in Definition 3. If $\inf_{n\in\mathbb{N}}\lambda_n\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)>0$, then
$$\lim_{n\to\infty}P_n(D)=\lim_{n\to\infty}\widehat{P}_n(D)=\lim_{n\to\infty}\check{P}_n(D)=\lim_{n\to\infty}\widetilde{P}_n(D)=\frac{\sigma^2}{D}\left(\left(\frac{1}{2\pi}\int_0^{2\pi}\sqrt{f(\omega)}\,d\omega\right)^2-D\right)$$
for all $D\in\left(0,\inf_{n\in\mathbb{N}}\lambda_n\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)\right]$.
Proof. 
See Appendix D. □
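Theorem 2 can be illustrated numerically. The sketch below (an MA(1) example of our choosing, with $b_1=1/3$ and $\sigma_w^2=1$) compares $P_n(D)$, computed from the eigenvalues of the tridiagonal Toeplitz correlation matrix $T_n(f)$, against the reconstructed limit $(\sigma^2/D)\big(((1/2\pi)\int_0^{2\pi}\sqrt{f(\omega)}\,d\omega)^2-D\big)$:

```python
import numpy as np

b1, sw2, sigma2, D = 1.0 / 3.0, 1.0, 1.0, 0.2   # D < min f = (1 - b1)^2 ~ 0.444

def psd(w):
    # PSD of an MA(1) process: f(w) = sw2 * |1 + b1 e^{iw}|^2
    return sw2 * np.abs(1.0 + b1 * np.exp(1j * w)) ** 2

def power_opt(n):
    # T_n(f) is tridiagonal: f_0 = sw2*(1 + b1^2), f_{+-1} = sw2*b1
    T = sw2 * ((1 + b1 ** 2) * np.eye(n) + b1 * (np.eye(n, k=1) + np.eye(n, k=-1)))
    lam = np.linalg.eigvalsh(T)
    return sigma2 / D * (np.mean(np.sqrt(lam)) ** 2 - D)

w = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
limit = sigma2 / D * (np.mean(np.sqrt(psd(w))) ** 2 - D)   # Riemann sum for the integral
err8, err128 = abs(power_opt(8) - limit), abs(power_opt(128) - limit)
```

The gap to the limit shrinks as the block length grows, as the theorem predicts.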

4.2. Convergence Speed

Here, we prove that the convergence speed of the average transmission power of the three alternative schemes described in Section 2.2.2, Section 3.1 and Section 3.2 to the average transmission power of the optimal linear scheme is O(1/√n) for AWSS ARMA, MA, AWSS AR and WSS sources. We also recall the definitions of ARMA, MA and AR processes.

4.2.1. AWSS ARMA Sources

Definition 4
(ARMA process). A real zero-mean random process $\{x_n\}_{n\in\mathbb{N}}$ is said to be ARMA if
$$x_n=w_n+\sum_{j=1}^{n-1}b_jw_{n-j}-\sum_{j=1}^{n-1}a_jx_{n-j},\qquad n\in\mathbb{N},\qquad(10)$$
where $b_j,a_j\in\mathbb{R}$ for all $j\in\mathbb{N}$, and $\{w_n\}_{n\in\mathbb{N}}$ is a real zero-mean random process satisfying $E(w_jw_k)=\delta_{j,k}\sigma_w^2$ for all $j,k\in\mathbb{N}$, with $\sigma_w^2>0$. If there exist $p,q\in\mathbb{N}$ such that $a_j=0$ for all $j>p$ and $b_j=0$ for all $j>q$, then $\{x_n\}_{n\in\mathbb{N}}$ is called an ARMA$(p,q)$ process.
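As a concrete instance of Definition 4 (with the minus sign on the autoregressive sum as reconstructed here), the sketch below generates an ARMA(1,1) trajectory and checks its impulse response: for $w=(1,0,0,\ldots)$ the recursion gives $x_1=1$, $x_2=b_1-a_1$, $x_3=-a_1(b_1-a_1)$.

```python
import numpy as np

def arma11(w, a1, b1):
    # x_t = w_t + b1 * w_{t-1} - a1 * x_{t-1}  (ARMA(1,1) instance of the recursion)
    x = np.zeros_like(w, dtype=float)
    for t in range(len(w)):
        x[t] = w[t]
        if t >= 1:
            x[t] += b1 * w[t - 1] - a1 * x[t - 1]
    return x

a1, b1 = 0.5, 1.0 / 3.0    # the parameter values used in the numerical example of Section 5
impulse = arma11(np.array([1.0, 0.0, 0.0, 0.0]), a1, b1)
```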
Theorem 3.
Suppose that $\{x_n\}_{n\in\mathbb{N}}$ is an ARMA$(p,q)$ process as in Definition 4. Let $a(\omega)=1+\sum_{j=1}^pa_je^{j\omega\mathrm{i}}$ and $b(\omega)=1+\sum_{j=1}^qb_je^{j\omega\mathrm{i}}$ for all $\omega\in\mathbb{R}$. Assume that $a(\omega)\ne0$ and $b(\omega)\ne0$ for all $\omega\in\mathbb{R}$, and that there exist $K_1,K_2>0$ such that $\left\|\left(T_n(a)\right)^{-1}\right\|_2\le K_1$ and $\left\|\left(T_n(b)\right)^{-1}\right\|_2\le K_2$ for all $n\in\mathbb{N}$. Then, $\{x_n\}_{n\in\mathbb{N}}$ is AWSS with PSD $\sigma_w^2\frac{|b|^2}{|a|^2}$, and
$$\widetilde{P}_n(D)-P_n(D)=O\left(\frac{1}{\sqrt{n}}\right)\qquad(11)$$
for all $D\in\left(0,\inf_{n\in\mathbb{N}}\lambda_n\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)\right]$.
Proof. 
See Appendix E. □

4.2.2. MA Sources

Definition 5
(MA process). A real zero-mean random process $\{x_n\}_{n\in\mathbb{N}}$ is said to be MA if
$$x_n=w_n+\sum_{j=1}^{n-1}b_jw_{n-j},\qquad n\in\mathbb{N},$$
where $b_j\in\mathbb{R}$ for all $j\in\mathbb{N}$, and $\{w_n\}_{n\in\mathbb{N}}$ is a real zero-mean random process satisfying $E(w_jw_k)=\delta_{j,k}\sigma_w^2$ for all $j,k\in\mathbb{N}$, with $\sigma_w^2>0$. If there exists $q\in\mathbb{N}$ such that $b_j=0$ for all $j>q$, then $\{x_n\}_{n\in\mathbb{N}}$ is called an MA$(q)$ process.
Theorem 4.
Suppose that $\{x_n\}_{n\in\mathbb{N}}$ is an MA$(q)$ process as in Definition 5. Let $b(\omega)=1+\sum_{j=1}^qb_je^{j\omega\mathrm{i}}$ for all $\omega\in\mathbb{R}$. Assume that $b(\omega)\ne0$ for all $\omega\in\mathbb{R}$ and that there exists $K>0$ such that $\left\|\left(T_n(b)\right)^{-1}\right\|_2\le K$ for all $n\in\mathbb{N}$. Then, $\{x_n\}_{n\in\mathbb{N}}$ is AWSS with PSD $\sigma_w^2|b|^2$, and
$$\widetilde{P}_n(D)-P_n(D)=O\left(\frac{1}{\sqrt{n}}\right)$$
for all $D\in\left(0,\inf_{n\in\mathbb{N}}\lambda_n\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)\right]$.
Proof. 
It is a direct consequence of Theorem 3. □

4.2.3. AWSS AR Sources

Definition 6
(AR process). A real zero-mean random process $\{x_n\}_{n\in\mathbb{N}}$ is said to be AR if
$$x_n=w_n-\sum_{j=1}^{n-1}a_jx_{n-j},\qquad n\in\mathbb{N},$$
where $a_j\in\mathbb{R}$ for all $j\in\mathbb{N}$, and $\{w_n\}_{n\in\mathbb{N}}$ is a real zero-mean random process satisfying $E(w_jw_k)=\delta_{j,k}\sigma_w^2$ for all $j,k\in\mathbb{N}$, with $\sigma_w^2>0$. If there exists $p\in\mathbb{N}$ such that $a_j=0$ for all $j>p$, then $\{x_n\}_{n\in\mathbb{N}}$ is called an AR$(p)$ process.
Theorem 5.
Suppose that $\{x_n\}_{n\in\mathbb{N}}$ is an AR$(p)$ process as in Definition 6. Let $a(\omega)=1+\sum_{j=1}^pa_je^{j\omega\mathrm{i}}$ for all $\omega\in\mathbb{R}$. Assume that $a(\omega)\ne0$ for all $\omega\in\mathbb{R}$ and that there exists $K>0$ such that $\left\|\left(T_n(a)\right)^{-1}\right\|_2\le K$ for all $n\in\mathbb{N}$. Then, $\{x_n\}_{n\in\mathbb{N}}$ is AWSS with PSD $\frac{\sigma_w^2}{|a|^2}$, and
$$\widetilde{P}_n(D)-P_n(D)=O\left(\frac{1}{\sqrt{n}}\right)$$
for all $D\in\left(0,\inf_{n\in\mathbb{N}}\lambda_n\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)\right]$.
Proof. 
It is a direct consequence of Theorem 3. □

4.2.4. WSS Sources

Theorem 6.
Suppose that $\{x_n\}_{n\in\mathbb{N}}$ is a WSS process as in Definition 2 with PSD $f$. Assume that $f(\omega)\ne0$ for all $\omega\in\mathbb{R}$ and that there exists $m\in\mathbb{N}$ such that $f_k=0$ whenever $|k|>m$. Then,
$$\widetilde{P}_n(D)-P_n(D)=O\left(\frac{1}{\sqrt{n}}\right)$$
for all $D\in\left(0,\min f\right]$.
Proof. 
See Appendix F. □

5. Numerical Example

In this section, we give a numerical example. We consider an ARMA$(1,1)$ process $\{x_n\}_{n\in\mathbb{N}}$ with $a_1=\frac{1}{2}$, $b_1=\frac{1}{3}$ and $\sigma_w^2=1$, channel noise variance $\sigma^2=1$ and an average distortion constraint $D=0.5$. Figure 5 shows the theoretical value of the average transmission power required for each of the considered schemes with $n\in\{1,\ldots,100\}$. It can be observed that the graphs of the average transmission power of the different schemes follow the inequalities in (9), and that the transmission powers of the different schemes get closer as $n$ increases. Moreover, we have simulated the transmission of 20,000 samples of the considered ARMA$(1,1)$ process for $n\in\{1,\ldots,100\}$. Figure 6 shows the 10th and the 90th percentiles of the power and distortion of the samples.
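A simulation in the spirit of this example can be sketched as follows: we build the block correlation matrix from the ARMA relation $E(x_{n:1}x_{n:1}^\top)=\sigma_w^2(T_n(a))^{-1}T_n(b)(T_n(b))^\top((T_n(a))^{-1})^\top$ (Appendix E), run the DFT/IDFT scheme of Section 3.2 with the reconstructed weights (6) and (7), and check that the empirical distortion matches the constraint. The helper names and sample counts are ours, and we use $D=0.15$ rather than $0.5$ so that $D\le\lambda_n(E(x_{n:1}x_{n:1}^\top))$ holds for the chosen block length.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, sigma2, D = 16, 4000, 1.0, 0.15
a1, b1, sw2 = 0.5, 1.0 / 3.0, 1.0

# Block correlation matrix of the ARMA(1,1) source (banded triangular Toeplitz factors)
A = np.eye(n) + a1 * np.eye(n, k=-1)
B = np.eye(n) + b1 * np.eye(n, k=-1)
Ainv = np.linalg.inv(A)
R = sw2 * Ainv @ B @ B.T @ Ainv.T

F = np.fft.fft(np.eye(n)) / np.sqrt(n)           # unitary DFT matrix
vy = np.diag(F @ R @ F.conj().T).real            # E|y_j|^2
s = np.mean(np.sqrt(vy))
P_tilde = sigma2 / D * (s ** 2 - D)              # reconstructed (8)

# Reconstructed weights (6) and (7)
alpha = np.sqrt((np.sqrt(sigma2 * (sigma2 + P_tilde) * vy / D) - sigma2) / vy)
beta = alpha * vy / (alpha ** 2 * vy + sigma2)

# Transmit N source blocks drawn with covariance R
X = np.linalg.cholesky(R) @ rng.standard_normal((n, N))
U = (F.conj().T @ (alpha[:, None] * (F @ X))).real         # encoder output (real)
U_noisy = U + np.sqrt(sigma2) * rng.standard_normal((n, N))
X_hat = (F.conj().T @ (beta[:, None] * (F @ U_noisy))).real

power = np.mean(U ** 2)                  # empirical (1/n) E ||u||^2
distortion = np.mean((X - X_hat) ** 2)   # empirical (1/n) E ||x - x_hat||^2
```

If the reconstructed formulas are correct, the empirical power should concentrate around $\widetilde{P}_n(D)$ and the empirical distortion around $D$.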

6. Conclusions

In this paper, two new low-complexity linear coding schemes for transmitting n-dimensional vectors by using n times an AWGN channel have been presented. These schemes are based on the DFT. The performance of these schemes, along with another DFT-based scheme that had been previously presented, has been analyzed in comparison with the optimal scheme among the linear ones. These three DFT-based schemes allow good performance in terms of distortion using low-cost, low-power hardware.
In particular, it has been proved that, under a maximum average distortion constraint, the considered low-complexity schemes require the same average transmission power as the optimal linear scheme for AWSS sources when the block length, n, tends to infinity. Moreover, it has been proved that for certain types of sources (namely WSS, MA, AWSS AR and AWSS ARMA), the difference between the transmission power of each of the three alternative schemes and the transmission power of the optimal linear scheme decreases as O(1/√n). Therefore, their performance will be similar to that of the optimal linear coding scheme even for small values of n. In other words, replacing the optimal linear scheme with any of the schemes studied here will not incur, even for small block sizes, a large penalty in terms of transmission power, while it will lead to a noticeable reduction in complexity.

Author Contributions

Authors are listed in order of their degree of involvement in the work, with the most active contributors listed first. J.G.-G. conceived the research question. F.M.V.-R. carried out the numerical example. All authors were involved in the research and wrote the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Spanish Ministry of Science and Innovation through the ADELE project (PID2019-104958RB-C44).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ADC: Analog-to-digital converter
AR: Autoregressive
ARMA: Autoregressive moving average
AWGN: Additive white Gaussian noise
AWSS: Asymptotically wide-sense stationary
DFT: Discrete Fourier transform
FFT: Fast Fourier transform
IDFT: Inverse discrete Fourier transform
iid: Independent and identically distributed
IoT: Internet of Things
MA: Moving average
WSS: Wide-sense stationary

Appendix A. Average Transmission Power and Average Distortion of the Low-Power Alternative

We first recall the basic symmetry property of the DFT.
Lemma A1.
Let $x_j,y_j\in\mathbb{C}$ for all $j\in\{1,\ldots,n\}$, with $n\in\mathbb{N}$. If $y_{n:1}$ is the DFT of $x_{n:1}$, i.e., $y_{n:1}=V_n^*x_{n:1}$, the two following assertions are equivalent:
  • $x_{n:1}\in\mathbb{R}^{n\times1}$.
  • $y_j=\overline{y_{n-j}}$ for all $j\in\{1,\ldots,n-1\}$ and $y_n\in\mathbb{R}$.
Now we show that $\check{\alpha}_j$ and $\check{\beta}_j$ in (3) and (4), respectively, are well defined. If $D\in\left(0,\lambda_n\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)\right]$, applying Theorems 1 and 2 from [9], $E(z_j^2)\ge\lambda_n\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)\ge D>0$ for all $j\in\{1,\ldots,n\}$. Hence $\left(\frac{1}{n}\sum_{j=1}^n\sqrt{E(z_j^2)}\right)^2\ge\left(\frac{1}{n}\sum_{j=1}^n\sqrt{D}\right)^2=D$ and, consequently, $\check{P}_n(D)\ge0$. Thus,
$$\sqrt{\frac{\sigma^2\left(\sigma^2+\check{P}_n(D)\right)E\left(z_j^2\right)}{D}}\ge\sqrt{\frac{\sigma^4E\left(z_j^2\right)}{D}}=\sigma^2\sqrt{\frac{E\left(z_j^2\right)}{D}}\ge\sigma^2,$$
and, therefore, $\check{\alpha}_j$ is well defined in (3). Furthermore, $\check{\alpha}_j^2E(z_j^2)+\sigma^2>0$ and, consequently, $\check{\beta}_j$ is well defined in (4).
Next, we show that the average transmission power of the coding scheme shown in Figure 3 is $\check{P}_n(D)$ given in (5). Writing $S=\frac{1}{n}\sum_{j=1}^n\sqrt{E(z_j^2)}$, so that, by (5), $\sigma^2+\check{P}_n(D)=\frac{\sigma^2}{D}S^2$, we have:
$$\begin{aligned}\frac{1}{n}E\left(\|u_{n:1}\|_F^2\right)&=\frac{1}{n}E\left(\sum_{j=1}^nu_j^2\right)=\frac{1}{n}\sum_{j=1}^nE\left(\check{\alpha}_j^2z_j^2\right)=\frac{1}{n}\sum_{j=1}^n\check{\alpha}_j^2E\left(z_j^2\right)\\&=\frac{1}{n}\sum_{j=1}^n\left(\sqrt{\frac{\sigma^2\left(\sigma^2+\check{P}_n(D)\right)E\left(z_j^2\right)}{D}}-\sigma^2\right)=\sqrt{\frac{\sigma^2\left(\sigma^2+\check{P}_n(D)\right)}{D}}\,S-\sigma^2\\&=\sqrt{\frac{\sigma^4S^2}{D^2}}\,S-\sigma^2=\frac{\sigma^2}{D}S^2-\sigma^2=\check{P}_n(D).\end{aligned}$$
Finally, we show that $\frac{1}{n}E\left(\|x_{n:1}-\widetilde{x}_{n:1}\|_F^2\right)=D$. From Figure 3, observe that $\widetilde{y}_j=\overline{\widetilde{y}_{n-j}}$ for $j\in\left\{1,\ldots,\left\lfloor\frac{n-1}{2}\right\rfloor\right\}$, where $\lfloor a\rfloor$ represents the largest integer lower than or equal to $a\in\mathbb{R}$. Furthermore, as $E(x_j\nu_k)=0$ for all $j,k\in\{1,\ldots,n\}$ and given that each $z_j$ with $j\in\{1,\ldots,n\}$ is a linear combination of $x_1,\ldots,x_n$, $E(z_j\nu_k)=0$ for all $j,k\in\{1,\ldots,n\}$. Consequently, since the Frobenius norm is unitarily invariant, we have:
1 n E x n : 1 x n : 1 ˜ F 2 = 1 n E V n * x n : 1 x n : 1 ˜ F 2 = 1 n E y n : 1 y n : 1 ˜ F 2 = 1 n E j { n 2 , n } N y j y j ˜ 2 + k = 1 n 1 2 y k y k ˜ 2 + j = n + 1 2 n 1 y j y j ˜ 2 = 1 n E j { n 2 , n } N y j y j ˜ 2 + k = 1 n 1 2 y n k ¯ y n k ˜ ¯ 2 + j = n + 1 2 n 1 y j y j ˜ 2 = 1 n E j { n 2 , n } N y j y j ˜ 2 + j = n + 1 2 n 1 y j ¯ y j ˜ ¯ 2 + j = n + 1 2 n 1 y j y j ˜ 2 = 1 n E j { n 2 , n } N y j y j ˜ 2 + j = n + 1 2 n 1 y j y j ˜ ¯ 2 + j = n + 1 2 n 1 y j y j ˜ 2 = 1 n E j { n 2 , n } N y j y j ˜ 2 + 2 j = n + 1 2 n 1 y j y j ˜ 2 = 1 n E j { n 2 , n } N y j y j ˜ 2 + 2 j = n + 1 2 n 1 Re y j y j ˜ 2 + Im y j y j ˜ 2 = 1 n E j { n 2 , n } N y j y j ˜ 2 + 2 j = n + 1 2 n 1 y j y j ˜ ^ F 2 = 1 n E j { n 2 , n } N y j y j ˜ 2 + 2 j = n + 1 2 n 1 U y j ^ y j y j ˜ ^ F 2 = 1 n E j { n 2 , n } N y j y j ˜ 2 + j = n + 1 2 n 1 2 U y j ^ y j ^ 2 U y j ^ y j ˜ ^ F 2 = 1 n E j = 1 n z j z j ˜ 2 = 1 n E j = 1 n z j β ˇ j α ˇ j z j + ν j 2 = 1 n j = 1 n E 1 β ˇ j α ˇ j z j β ˇ j ν j 2 = 1 n j = 1 n 1 β ˇ j α ˇ j 2 E z j 2 + β ˇ j 2 E ν j 2 = 1 n j = 1 n 1 α ˇ j 2 E z j 2 α ˇ j 2 E z j 2 + σ 2 2 E z j 2 + α ˇ j E z j 2 α ˇ j 2 E z j 2 + σ 2 2 σ 2 = 1 n j = 1 n σ 2 2 α ˇ j 2 E z j 2 + σ 2 2 E z j 2 + α ˇ j E z j 2 2 α ˇ j 2 E z j 2 + σ 2 2 σ 2 = 1 n j = 1 n E z j 2 σ 2 α ˇ j 2 E z j 2 + σ 2 α ˇ j 2 E z j 2 + σ 2 2 = 1 n j = 1 n E z j 2 σ 2 1 E ( z j 2 ) σ 2 ( σ 2 + P ˇ n ( D ) ) E ( z j 2 ) D σ 2 E z j 2 + σ 2 = σ 2 D σ 2 + P ˇ n ( D ) 1 n j = 1 n E z j 2 = σ 2 D σ 2 + σ 2 D 1 n j = 1 n E z j 2 2 D 1 n j = 1 n E z j 2 = D 1 n j = 1 n E z j 2 D + 1 n j = 1 n E z j 2 2 D = D .

Appendix B. Average Transmission Power and Average Distortion of the DFT/IDFT Alternative

We first show that $\widetilde{\alpha}_j$ and $\widetilde{\beta}_j$ in (6) and (7), respectively, are well defined. If $D\in\left(0,\lambda_n\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)\right]$, applying Theorem 1 from [9], $E(|y_j|^2)\ge\lambda_n\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)\ge D>0$ for all $j\in\{1,\ldots,n\}$. Hence $\left(\frac{1}{n}\sum_{j=1}^n\sqrt{E(|y_j|^2)}\right)^2\ge D$ and, consequently, $\widetilde{P}_n(D)\ge0$. Thus,
$$\sqrt{\frac{\sigma^2\left(\sigma^2+\widetilde{P}_n(D)\right)E\left(|y_j|^2\right)}{D}}\ge\sigma^2\sqrt{\frac{E\left(|y_j|^2\right)}{D}}\ge\sigma^2,$$
and, therefore, $\widetilde{\alpha}_j$ is well defined in (6). Furthermore, $\widetilde{\alpha}_j^2E(|y_j|^2)+\sigma^2>0$ and, consequently, $\widetilde{\beta}_j$ is well defined in (7).
We now show that the output of the encoder, $u_{n:1}=V_n\,\mathrm{diag}(\widetilde{\alpha}_n,\ldots,\widetilde{\alpha}_1)y_{n:1}$, is real. Observe that $u_{n:1}$ is real if and only if $V_n^*u_{n:1}=\mathrm{diag}(\widetilde{\alpha}_n,\ldots,\widetilde{\alpha}_1)y_{n:1}$ fulfills the conditions given in assertion 2 of Lemma A1, i.e., $\widetilde{\alpha}_ny_n\in\mathbb{R}$ and $\widetilde{\alpha}_jy_j=\overline{\widetilde{\alpha}_{n-j}y_{n-j}}$ for $j\in\{1,\ldots,n-1\}$. From (6), the $\widetilde{\alpha}_j$ with $j\in\{1,\ldots,n\}$ are always real numbers. Since $y_{n:1}$ is the DFT of the real vector $x_{n:1}$, $y_n$ is real, and so is $\widetilde{\alpha}_ny_n$. Furthermore, given that $y_j=\overline{y_{n-j}}$ for $j\in\{1,\ldots,n-1\}$, $E(|y_j|^2)=E(|y_{n-j}|^2)$ for $j\in\{1,\ldots,n-1\}$. Then, from (6), $\widetilde{\alpha}_j=\widetilde{\alpha}_{n-j}$ for $j\in\{1,\ldots,n-1\}$, and $\overline{\widetilde{\alpha}_{n-j}y_{n-j}}=\widetilde{\alpha}_{n-j}\overline{y_{n-j}}=\widetilde{\alpha}_jy_j$ for $j\in\{1,\ldots,n-1\}$.
Next, we show that the average transmission power of the coding scheme shown in Figure 4 is $\widetilde{P}_n(D)$ given in (8). Since the Frobenius norm is unitarily invariant, and writing $S=\frac{1}{n}\sum_{j=1}^n\sqrt{E(|y_j|^2)}$ (so that, by (8), $\sigma^2+\widetilde{P}_n(D)=\frac{\sigma^2}{D}S^2$), we have:
$$\begin{aligned}\frac{1}{n}E\left(\|u_{n:1}\|_F^2\right)&=\frac{1}{n}E\left(\left\|V_n\,\mathrm{diag}\left(\widetilde{\alpha}_n,\ldots,\widetilde{\alpha}_1\right)V_n^*x_{n:1}\right\|_F^2\right)=\frac{1}{n}E\left(\left\|\mathrm{diag}\left(\widetilde{\alpha}_n,\ldots,\widetilde{\alpha}_1\right)V_n^*x_{n:1}\right\|_F^2\right)\\&=\frac{1}{n}\sum_{j=1}^n\widetilde{\alpha}_j^2E\left(|y_j|^2\right)=\frac{1}{n}\sum_{j=1}^n\left(\sqrt{\frac{\sigma^2\left(\sigma^2+\widetilde{P}_n(D)\right)E\left(|y_j|^2\right)}{D}}-\sigma^2\right)\\&=\sqrt{\frac{\sigma^2\left(\sigma^2+\widetilde{P}_n(D)\right)}{D}}\,S-\sigma^2=\frac{\sigma^2}{D}S^2-\sigma^2=\widetilde{P}_n(D).\end{aligned}$$
Finally, we show that $\frac{1}{n}E\left(\|x_{n:1}-\widetilde{x}_{n:1}\|_F^2\right)=D$. As $E(x_j\nu_k)=0$ for all $j,k\in\{1,\ldots,n\}$, then $E\left(y_{n:1}\nu_{n:1}^*\right)=E\left(V_n^*x_{n:1}\nu_{n:1}^*\right)=V_n^*E\left(x_{n:1}\nu_{n:1}^*\right)=0_{n\times n}$, where $0_{n\times n}$ is the $n\times n$ zero matrix. Consequently, we have:
1 n E x n : 1 x n : 1 ˜ F 2 = 1 n E V n * x n : 1 x n : 1 ˜ F 2 = 1 n E y n : 1 y n : 1 ˜ F 2 = 1 n E y n : 1 diag β ˜ n , , β ˜ 1 V n * V n diag α ˜ n , , α ˜ 1 y n : 1 + ν n : 1 F 2 = 1 n E y n : 1 diag β ˜ n , , β ˜ 1 diag α ˜ n , , α ˜ 1 y n : 1 diag β ˜ n , , β ˜ 1 V n * ν n : 1 F 2 = 1 n E diag 1 β ˜ n α ˜ n , , 1 β ˜ 1 α ˜ 1 y n : 1 diag β ˜ n , , β ˜ 1 V n * ν n : 1 F 2 = 1 n E tr diag 1 β ˜ n α ˜ n , , 1 β ˜ 1 α ˜ 1 y n : 1 diag β ˜ n , , β ˜ 1 V n * ν n : 1 · diag 1 β ˜ n α ˜ n , , 1 β ˜ 1 α ˜ 1 y n : 1 diag β ˜ n , , β ˜ 1 V n * ν n : 1 * = 1 n tr E diag 1 β ˜ n α ˜ n , , 1 β ˜ 1 α ˜ 1 y n : 1 diag β ˜ n , , β ˜ 1 V n * ν n : 1 · diag 1 β ˜ n α ˜ n , , 1 β ˜ 1 α ˜ 1 y n : 1 diag β ˜ n , , β ˜ 1 V n * ν n : 1 * = 1 n tr diag 1 β ˜ n α ˜ n , , 1 β ˜ 1 α ˜ 1 E y n : 1 y n : 1 * diag 1 β ˜ n α ˜ n , , 1 β ˜ 1 α ˜ 1 + diag β ˜ n , , β ˜ 1 V n * E ν n : 1 ν n : 1 V n diag β ˜ n , , β ˜ 1 = 1 n tr diag 1 β ˜ n α ˜ n , , 1 β ˜ 1 α ˜ 1 E y n : 1 y n : 1 * diag 1 β ˜ n α ˜ n , , 1 β ˜ 1 α ˜ 1 + diag β ˜ n , , β ˜ 1 V n * σ 2 I n V n diag β ˜ n , , β ˜ 1 = 1 n j = 1 n 1 β ˜ j α ˜ j 2 E y j 2 + β ˜ j 2 σ 2 = 1 n j = 1 n 1 α ˜ j 2 E y j 2 α ˜ j 2 E y j 2 + σ 2 2 E y j 2 + α ˜ j E y j 2 α ˜ j 2 E y j 2 + σ 2 2 σ 2 = 1 n j = 1 n σ 2 2 α ˜ j 2 E y j 2 + σ 2 2 E y j 2 + α ˜ j E y j 2 2 α ˜ j 2 E y j 2 + σ 2 2 σ 2 = 1 n j = 1 n E y j 2 σ 2 α ˜ j 2 E y j 2 + σ 2 α ˜ j 2 E y j 2 + σ 2 2 = 1 n j = 1 n E y j 2 σ 2 1 E ( y j 2 ) σ 2 ( σ 2 + P ˜ n ( D ) ) E ( y j 2 ) D σ 2 E y j 2 + σ 2 = σ 2 D σ 2 + P ˜ n ( D ) 1 n j = 1 n E y j 2 = σ 2 D σ 2 + σ 2 D 1 n j = 1 n E y j 2 2 D 1 n j = 1 n E y j 2 = D 1 n j = 1 n E y j 2 D + 1 n j = 1 n E y j 2 2 D = D ,
where tr ( · ) denotes the trace of a matrix.

Appendix C. Proof of Theorem 1

We first need to introduce the following result.
Lemma A2.
Let $A\in\mathbb{R}^{2\times2}$ be a symmetric positive semi-definite matrix. Then,
$$\sqrt{\lambda_1(A)}+\sqrt{\lambda_2(A)}\le\sqrt{[A]_{1,1}}+\sqrt{[A]_{2,2}}.$$
Proof. 
$$\begin{aligned}\sqrt{\lambda_1(A)}+\sqrt{\lambda_2(A)}&=\sqrt{\left(\sqrt{\lambda_1(A)}+\sqrt{\lambda_2(A)}\right)^2}=\sqrt{\lambda_1(A)+\lambda_2(A)+2\sqrt{\lambda_1(A)\lambda_2(A)}}=\sqrt{\mathrm{tr}(A)+2\sqrt{\det(A)}}\\&=\sqrt{[A]_{1,1}+[A]_{2,2}+2\sqrt{[A]_{1,1}[A]_{2,2}-[A]_{1,2}^2}}\le\sqrt{[A]_{1,1}+[A]_{2,2}+2\sqrt{[A]_{1,1}[A]_{2,2}}}\\&=\sqrt{\left(\sqrt{[A]_{1,1}}+\sqrt{[A]_{2,2}}\right)^2}=\sqrt{[A]_{1,1}}+\sqrt{[A]_{2,2}}.\ \square\end{aligned}$$
Next, we prove Theorem 1.
Proof. 
$P_n(D)\le\check{P}_n(D)$ holds because of the optimality of the scheme presented in [4].
We now show that $\check{P}_n(D)\le\widehat{P}_n(D)$. Applying Lemmas A1 and A2 yields
$$\begin{aligned}&\sum_{j\in\left\{\frac{n}{2},n\right\}\cap\mathbb{N}}\sqrt{E\left(y_j^2\right)}+\sum_{j=\left\lceil\frac{n+1}{2}\right\rceil}^{n-1}\sqrt{2E\left(\left(\mathrm{Re}\left(y_j\right)\right)^2\right)}+\sum_{j=1}^{\left\lfloor\frac{n-1}{2}\right\rfloor}\sqrt{2E\left(\left(\mathrm{Im}\left(y_j\right)\right)^2\right)}\\&\qquad-\sum_{j\in\left\{\frac{n}{2},n\right\}\cap\mathbb{N}}\sqrt{E\left(y_j^2\right)}-\sum_{j=\left\lceil\frac{n+1}{2}\right\rceil}^{n-1}\left(\sqrt{2\lambda_1\left(E\left(\hat{y}_j\hat{y}_j^\top\right)\right)}+\sqrt{2\lambda_2\left(E\left(\hat{y}_j\hat{y}_j^\top\right)\right)}\right)\\&=\sum_{j=\left\lceil\frac{n+1}{2}\right\rceil}^{n-1}\sqrt{2E\left(\left(\mathrm{Re}\left(y_j\right)\right)^2\right)}+\sum_{j=1}^{\left\lfloor\frac{n-1}{2}\right\rfloor}\sqrt{2E\left(\left(\mathrm{Im}\left(y_{n-j}\right)\right)^2\right)}-\sum_{j=\left\lceil\frac{n+1}{2}\right\rceil}^{n-1}\left(\sqrt{2\lambda_1\left(E\left(\hat{y}_j\hat{y}_j^\top\right)\right)}+\sqrt{2\lambda_2\left(E\left(\hat{y}_j\hat{y}_j^\top\right)\right)}\right)\\&=\sum_{j=\left\lceil\frac{n+1}{2}\right\rceil}^{n-1}\left(\sqrt{2E\left(\left(\mathrm{Re}\left(y_j\right)\right)^2\right)}+\sqrt{2E\left(\left(\mathrm{Im}\left(y_j\right)\right)^2\right)}-\sqrt{2\lambda_1\left(E\left(\hat{y}_j\hat{y}_j^\top\right)\right)}-\sqrt{2\lambda_2\left(E\left(\hat{y}_j\hat{y}_j^\top\right)\right)}\right)\ge0,\end{aligned}$$
where the second equality uses $E\left(\left(\mathrm{Im}\left(y_{n-j}\right)\right)^2\right)=E\left(\left(\mathrm{Im}\left(y_j\right)\right)^2\right)$ (Lemma A1), and the inequality follows from Lemma A2, since $[E(\hat{y}_j\hat{y}_j^\top)]_{1,1}=E((\mathrm{Re}(y_j))^2)$ and $[E(\hat{y}_j\hat{y}_j^\top)]_{2,2}=E((\mathrm{Im}(y_j))^2)$. Hence, from (2) and (5), $\check{P}_n(D)\le\widehat{P}_n(D)$ holds.
Finally, we show that $\widehat{P}_n(D)\le\widetilde{P}_n(D)$. Reasoning as in Step 6 of the proof of ([5] Theorem 1), we have:
$$\sum_{j=1}^n\sqrt{E\left(|y_j|^2\right)}\ge\sum_{j\in\left\{\frac{n}{2},n\right\}\cap\mathbb{N}}\sqrt{E\left(y_j^2\right)}+\sum_{j=\left\lceil\frac{n+1}{2}\right\rceil}^{n-1}\sqrt{2E\left(\left(\mathrm{Re}\left(y_j\right)\right)^2\right)}+\sum_{j=1}^{\left\lfloor\frac{n-1}{2}\right\rfloor}\sqrt{2E\left(\left(\mathrm{Im}\left(y_j\right)\right)^2\right)}.$$
Hence, from (2) and (8), $\widehat{P}_n(D)\le\widetilde{P}_n(D)$ holds. □

Appendix D. Proof of Theorem 2

Proof. 
We first prove that $\lim_{n\to\infty}P_n(D)=\frac{\sigma^2}{D}\left(\left(\frac{1}{2\pi}\int_0^{2\pi}\sqrt{f(\omega)}\,d\omega\right)^2-D\right)$. From (1) and ([10] Theorem 6.6),
$$\begin{aligned}\lim_{n\to\infty}P_n(D)&=\lim_{n\to\infty}\frac{\sigma^2}{D}\left(\left(\frac{1}{n}\sum_{j=1}^n\sqrt{\lambda_j\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)}\right)^2-D\right)=\frac{\sigma^2}{D}\left(\left(\lim_{n\to\infty}\frac{1}{n}\sum_{j=1}^n\sqrt{\lambda_j\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)}\right)^2-D\right)\\&=\frac{\sigma^2}{D}\left(\left(\frac{1}{2\pi}\int_0^{2\pi}\sqrt{f(\omega)}\,d\omega\right)^2-D\right).\end{aligned}$$
To prove the remaining equalities, from Theorem 1, we have to show that
$$\lim_{n\to\infty}\left(\widetilde{P}_n(D)-P_n(D)\right)=0.$$
To that end, we need to introduce some notation. If $A_n$ is an $n\times n$ matrix, $C_{A_n}$ is the $n\times n$ circulant matrix
$$C_{A_n}=V_n\,\mathrm{diag}\left(\left[V_n^*A_nV_n\right]_{1,1},\ldots,\left[V_n^*A_nV_n\right]_{n,n}\right)V_n^*.$$
Moreover, we denote by $C_n(f)$ the $n\times n$ circulant matrix defined as
$$C_n(f)=V_n\,\mathrm{diag}\left(f(0),f\left(\frac{2\pi}{n}\right),\ldots,f\left(\frac{2\pi(n-1)}{n}\right)\right)V_n^*.$$
Due to the optimality of the scheme proposed in [4], and from the proof of assertion 4 of Lemma 1 in [6],
$$\begin{aligned}0\le\widetilde{P}_n(D)-P_n(D)&=\frac{\sigma^2}{D}\left(\left(\frac{1}{n}\sum_{j=1}^n\sqrt{E\left(|y_j|^2\right)}\right)^2-\left(\frac{1}{n}\sum_{j=1}^n\sqrt{\lambda_j\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)}\right)^2\right)\\&\le\frac{\sigma^2}{D}\sqrt{\frac{\lambda_1\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)}{\lambda_n\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)}}\,\frac{\left\|E\left(x_{n:1}x_{n:1}^\top\right)-C_{x_{n:1}}\right\|_F}{\sqrt{n}}\\&\le\frac{\sigma^2}{D}\sqrt{\frac{\left\|E\left(x_{n:1}x_{n:1}^\top\right)\right\|_2}{\inf_{m\in\mathbb{N}}\lambda_m\left(E\left(x_{m:1}x_{m:1}^\top\right)\right)}}\,\frac{\left\|E\left(x_{n:1}x_{n:1}^\top\right)-C_{x_{n:1}}\right\|_F}{\sqrt{n}},\end{aligned}\qquad(\mathrm{A}1)$$
where $C_{x_{n:1}}=C_{E\left(x_{n:1}x_{n:1}^\top\right)}$. Observe that, from Definition 3, $\left\|E\left(x_{n:1}x_{n:1}^\top\right)\right\|_2$ is bounded. Hence, to finish the proof we only have to show that
$$\lim_{n\to\infty}\frac{\left\|E\left(x_{n:1}x_{n:1}^\top\right)-C_{x_{n:1}}\right\|_F}{\sqrt{n}}=0.\qquad(\mathrm{A}2)$$
By combining Equation (16) in [11], Definition 3 and Lemma 6.1 in [10], (A2) holds. □

Appendix E. Proof of Theorem 3

Proof. 
Since $\left\|\left(T_n(a)\right)^{-1}\right\|_2\le K_1$ for all $n\in\mathbb{N}$, $\sigma_n\left(T_n(a)\right)\ge\frac{1}{K_1}>0$ for every $n\in\mathbb{N}$. As $a(\omega)\ne0$ for all $\omega\in\mathbb{R}$, from Theorem 8 in [12], $\{x_n\}_{n\in\mathbb{N}}$ is AWSS with PSD $\sigma_w^2\frac{|b|^2}{|a|^2}$.
Next, we show that $\inf_{n\in\mathbb{N}}\lambda_n\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)>0$. Observe that (10) can be rewritten as
$$T_n(a)\,x_{n:1}=T_n(b)\,w_{n:1},\qquad n\in\mathbb{N}.$$
Then,
$$E\left(x_{n:1}x_{n:1}^\top\right)=\sigma_w^2\left(T_n(a)\right)^{-1}T_n(b)\left(T_n(b)\right)^\top\left(\left(T_n(a)\right)^{-1}\right)^\top$$
and
$$\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)^{-1}=\frac{1}{\sigma_w^2}\left(T_n(a)\right)^\top\left(\left(T_n(b)\right)^\top\right)^{-1}\left(T_n(b)\right)^{-1}T_n(a),\qquad n\in\mathbb{N}.$$
Applying Theorem 4.3 from [10] yields
$$\lambda_n\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)=\frac{1}{\left\|\left(E\left(x_{n:1}x_{n:1}^\top\right)\right)^{-1}\right\|_2}=\frac{1}{\left\|\frac{1}{\sigma_w^2}\left(T_n(a)\right)^\top\left(\left(T_n(b)\right)^\top\right)^{-1}\left(T_n(b)\right)^{-1}T_n(a)\right\|_2}\ge\frac{\sigma_w^2}{\left\|T_n(a)\right\|_2^2\left\|\left(T_n(b)\right)^{-1}\right\|_2^2}\ge\frac{\sigma_w^2}{\max_{\omega\in[0,2\pi]}|a(\omega)|^2\,K_2^2}>0,\qquad n\in\mathbb{N}.$$
Finally, we prove (11). From (A1), we need to show that $\left\|E\left(x_{n:1}x_{n:1}^\top\right)-C_{x_{n:1}}\right\|_F$ is bounded. To that end, from Equation (16) in [11], it suffices to show that $\left\|E\left(x_{n:1}x_{n:1}^\top\right)-T_n\left(\sigma_w^2\frac{|b|^2}{|a|^2}\right)\right\|_F$ and $\left\|T_n\left(\sigma_w^2\frac{|b|^2}{|a|^2}\right)-C_n\left(\sigma_w^2\frac{|b|^2}{|a|^2}\right)\right\|_F$ are bounded. This is shown in the following two steps.
Step 1. Using Lemma 4.2 and Theorem 4.3 from [10] we have:
E x n : 1 x n : 1 T n σ w 2 | b | 2 | a | 2 F = σ w 2 T n ( a ) 1 T n ( b ) T n ( b ) T n ( a ) 1 T n σ w 2 | b | 2 | a | 2 F T n ( a ) 1 2 2 σ w 2 T n ( b ) T n ( b ) T n ( a ) T n σ w 2 | b | 2 | a | 2 T n ( a ) F σ w 2 K 1 2 T n ( b ) T n ( b ) T n ( a ) T n | b | 2 | a | 2 + T n | b | 2 T n 1 | a | 2                   T n | b | 2 T n 1 | a | 2 T n ( a ) F σ w 2 K 1 2 T n ( a ) 2 Λ 1 T n ( a ) 2 T n ( b ) T n ( b ) T n ( a ) T n | b | 2 T n 1 | a | 2 T n ( a ) F + T n ( b ) T n ( b ) T n ( a ) T n ( | b | 2 ) T n 1 | a | 2 T n ( a ) F σ w 2 K 1 2 T n ( a ) 2 2 Λ 1 + Λ 2 T n 1 | a | 2 F + T n | b | 2 T n ( a ) T n ( | b | 2 ) T n 1 | a | 2 T n ( a ) F σ w 2 K 1 2 T n ( a ) 2 2 Λ 1 + Λ 2 + T n | b | 2 T n ( a ) T n | b | 2 T n 1 a F + T n ( a ) T n | b | 2 T n 1 a T n ( a ) T n | b | 2 T n 1 | a | 2 T n ( a ) F σ w 2 K 1 2 T n ( a ) 2 2 Λ 1 + Λ 2 + Λ 3 + T n ( a ) 2 T n | b | 2 2 Λ 4 + Λ 5 T n 1 a 2 σ w 2 K 1 2 max ω [ 0 , 2 π ] | a ( ω ) | 2 Λ 1 + Λ 2 + Λ 3 + max ω [ 0 , 2 π ] | a ( ω ) | max ω [ 0 , 2 π ] | b ( ω ) | 2 Λ 4 + Λ 5 min ω [ 0 , 2 π ] | a ( ω ) | ,
where
$$
\Lambda_{1}=\left\|T_{n}\!\left(|b|^{2}\right)T_{n}\!\left(\frac{1}{|a|^{2}}\right)-T_{n}\!\left(\frac{|b|^{2}}{|a|^{2}}\right)\right\|_{F},\qquad
\Lambda_{2}=\left\|T_{n}(b)(T_{n}(b))^{\top}-T_{n}\!\left(|b|^{2}\right)\right\|_{F},\qquad
\Lambda_{3}=\left\|T_{n}\!\left(|b|^{2}\right)-T_{n}\!\left(a|b|^{2}\right)T_{n}\!\left(\frac{1}{a}\right)\right\|_{F},
$$
$$
\Lambda_{4}=\left\|T_{n}\!\left(\frac{1}{a}\right)-T_{n}\!\left(\frac{1}{|a|^{2}}\right)T_{n}(\overline{a})\right\|_{F},\qquad
\Lambda_{5}=\left\|T_{n}\!\left(a|b|^{2}\right)-T_{n}(a)\,T_{n}\!\left(|b|^{2}\right)\right\|_{F}.
$$
From Lemma 2 in [13], $\left\|E\left(x_{n:1}x_{n:1}^{\top}\right)-T_{n}\left(\sigma_{w}^{2}\frac{|b|^{2}}{|a|^{2}}\right)\right\|_{F}$ is therefore bounded.
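To illustrate Step 1 numerically (our own sketch; the ARMA(1,1) parameters below are assumed toy values, not from the paper), one can build $E\left(x_{n:1}x_{n:1}^{\top}\right)$ from (10) and $T_{n}\left(\sigma_{w}^{2}\frac{|b|^{2}}{|a|^{2}}\right)$ from the Fourier coefficients of the spectral density, and observe that their Frobenius distance does not grow with $n$:

```python
import numpy as np

def lower_toeplitz(coeffs, n):
    """Banded lower-triangular Toeplitz matrix of a polynomial in e^{-i k w}."""
    T = np.zeros((n, n))
    for k, c in enumerate(coeffs):
        T += c * np.eye(n, k=-k)
    return T

def toeplitz_of(f, n, grid=4096):
    """T_n(f): Toeplitz matrix of the Fourier coefficients of f (computed on a fine grid)."""
    w = 2 * np.pi * np.arange(grid) / grid
    fk = np.fft.ifft(f(w))                        # fk[k] ~ f_k, fk[-k] ~ f_{-k}
    idx = np.subtract.outer(np.arange(n), np.arange(n))
    return np.real(fk[idx])                       # (T_n(f))_{j,l} = f_{j-l}

def f(w):  # sigma_w^2 |b(w)|^2 / |a(w)|^2 with the toy parameters and sigma_w^2 = 1
    return (np.abs(1.0 + 0.3 * np.exp(-1j * w)) ** 2
            / np.abs(1.0 - 0.5 * np.exp(-1j * w)) ** 2)

norms = []
for n in (16, 64, 256):
    Ta, Tb = lower_toeplitz([1.0, -0.5], n), lower_toeplitz([1.0, 0.3], n)
    Ta_inv = np.linalg.inv(Ta)
    cov = Ta_inv @ Tb @ Tb.T @ Ta_inv.T           # E(x_{n:1} x_{n:1}^T)
    norms.append(np.linalg.norm(cov - toeplitz_of(f, n), 'fro'))
# norms stays (essentially) constant in n: the mismatch is a transient near the
# top-left corner of the covariance, exactly the boundedness Step 1 establishes
```
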
Step 2. Using Theorem 4.4, Lemmas 5.2 and 5.3 from [10] we have:
$$
\begin{aligned}
&\left\|T_{n}\!\left(\sigma_{w}^{2}\frac{|b|^{2}}{|a|^{2}}\right)-C_{n}\!\left(\sigma_{w}^{2}\frac{|b|^{2}}{|a|^{2}}\right)\right\|_{F}
\leq\sigma_{w}^{2}\left(\Gamma_{1}+\left\|T_{n}\!\left(|b|^{2}\right)T_{n}\!\left(\frac{1}{|a|^{2}}\right)-C_{n}\!\left(|b|^{2}\right)C_{n}\!\left(\frac{1}{|a|^{2}}\right)\right\|_{F}\right)\\
&\quad\leq\sigma_{w}^{2}\left(\Gamma_{1}+\left\|T_{n}\!\left(|b|^{2}\right)T_{n}\!\left(\frac{1}{|a|^{2}}\right)-C_{n}\!\left(|b|^{2}\right)T_{n}\!\left(\frac{1}{|a|^{2}}\right)\right\|_{F}+\left\|C_{n}\!\left(|b|^{2}\right)T_{n}\!\left(\frac{1}{|a|^{2}}\right)-C_{n}\!\left(|b|^{2}\right)C_{n}\!\left(\frac{1}{|a|^{2}}\right)\right\|_{F}\right)\\
&\quad\leq\sigma_{w}^{2}\left(\Gamma_{1}+\Gamma_{2}\left\|T_{n}\!\left(\frac{1}{|a|^{2}}\right)\right\|_{2}+\left\|C_{n}\!\left(|b|^{2}\right)\right\|_{2}\left\|T_{n}\!\left(\frac{1}{|a|^{2}}\right)-C_{n}\!\left(\frac{1}{|a|^{2}}\right)\right\|_{F}\right)\\
&\quad\leq\sigma_{w}^{2}\left(\Gamma_{1}+\Gamma_{2}\left\|T_{n}\!\left(\frac{1}{|a|^{2}}\right)\right\|_{2}+\left\|C_{n}\!\left(|b|^{2}\right)\right\|_{2}\Gamma_{3}+\left\|C_{n}\!\left(|b|^{2}\right)\right\|_{2}\left\|\left(T_{n}\!\left(|a|^{2}\right)\right)^{-1}-C_{n}\!\left(\frac{1}{|a|^{2}}\right)\right\|_{F}\right)\\
&\quad\leq\sigma_{w}^{2}\left(\Gamma_{1}+\Gamma_{2}\left\|T_{n}\!\left(\frac{1}{|a|^{2}}\right)\right\|_{2}+\left\|C_{n}\!\left(|b|^{2}\right)\right\|_{2}\Gamma_{3}+\left\|C_{n}\!\left(|b|^{2}\right)\right\|_{2}\left\|\left(T_{n}\!\left(|a|^{2}\right)\right)^{-1}\right\|_{2}\left\|I_{n}-T_{n}\!\left(|a|^{2}\right)C_{n}\!\left(\frac{1}{|a|^{2}}\right)\right\|_{F}\right)\\
&\quad\leq\sigma_{w}^{2}\left(\Gamma_{1}+\Gamma_{2}\left\|T_{n}\!\left(\frac{1}{|a|^{2}}\right)\right\|_{2}+\left\|C_{n}\!\left(|b|^{2}\right)\right\|_{2}\Gamma_{3}+\left\|C_{n}\!\left(|b|^{2}\right)\right\|_{2}\left\|\left(T_{n}\!\left(|a|^{2}\right)\right)^{-1}\right\|_{2}\left\|C_{n}\!\left(|a|^{2}\right)-T_{n}\!\left(|a|^{2}\right)\right\|_{F}\left\|\left(C_{n}\!\left(|a|^{2}\right)\right)^{-1}\right\|_{2}\right)\\
&\quad=\sigma_{w}^{2}\left(\Gamma_{1}+\Gamma_{2}\left\|T_{n}\!\left(\frac{1}{|a|^{2}}\right)\right\|_{2}+\left\|C_{n}\!\left(|b|^{2}\right)\right\|_{2}\Gamma_{3}+\left\|C_{n}\!\left(|b|^{2}\right)\right\|_{2}\left\|\left(T_{n}\!\left(|a|^{2}\right)\right)^{-1}\right\|_{2}\Gamma_{4}\left\|\left(C_{n}\!\left(|a|^{2}\right)\right)^{-1}\right\|_{2}\right)\\
&\quad\leq\sigma_{w}^{2}\left(\Gamma_{1}+\frac{\Gamma_{2}}{\min_{\omega\in[0,2\pi]}|a(\omega)|^{2}}+\max_{\omega\in[0,2\pi]}|b(\omega)|^{2}\,\Gamma_{3}+\frac{\max_{\omega\in[0,2\pi]}|b(\omega)|^{2}}{\left(\min_{\omega\in[0,2\pi]}|a(\omega)|^{2}\right)^{2}}\,\Gamma_{4}\right),
\end{aligned}
$$
where
$$
\Gamma_{1}=\left\|T_{n}\!\left(\frac{|b|^{2}}{|a|^{2}}\right)-T_{n}\!\left(|b|^{2}\right)T_{n}\!\left(\frac{1}{|a|^{2}}\right)\right\|_{F},\qquad
\Gamma_{2}=\left\|T_{n}\!\left(|b|^{2}\right)-C_{n}\!\left(|b|^{2}\right)\right\|_{F},
$$
$$
\Gamma_{3}=\left\|T_{n}\!\left(\frac{1}{|a|^{2}}\right)-\left(T_{n}\!\left(|a|^{2}\right)\right)^{-1}\right\|_{F},\qquad
\Gamma_{4}=\left\|C_{n}\!\left(|a|^{2}\right)-T_{n}\!\left(|a|^{2}\right)\right\|_{F}.
$$
From Lemmas 2 and 3 in [13] and Lemma 5.4 in [10], $\left\|T_{n}\left(\sigma_{w}^{2}\frac{|b|^{2}}{|a|^{2}}\right)-C_{n}\left(\sigma_{w}^{2}\frac{|b|^{2}}{|a|^{2}}\right)\right\|_{F}$ is therefore bounded. □
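Step 2 can likewise be illustrated numerically. The sketch below (ours; the ARMA(1,1) spectrum is the same assumed toy example as above) compares $T_{n}(f)$, built from the Fourier coefficients of $f$, with the circulant matrix $C_{n}(f)$ whose eigenvalues are the samples $f(2\pi j/n)$, and checks that their Frobenius distance stays bounded as $n$ grows:

```python
import numpy as np

def toeplitz_of(f, n, grid=4096):
    """T_n(f) from the Fourier coefficients of f (computed on a fine grid)."""
    w = 2 * np.pi * np.arange(grid) / grid
    fk = np.fft.ifft(f(w))
    idx = np.subtract.outer(np.arange(n), np.arange(n))
    return np.real(fk[idx])

def circulant_of(f, n):
    """C_n(f): circulant matrix whose eigenvalues are the samples f(2*pi*j/n)."""
    col = np.fft.ifft(f(2 * np.pi * np.arange(n) / n))   # first column
    idx = np.mod(np.subtract.outer(np.arange(n), np.arange(n)), n)
    return np.real(col[idx])

def f(w):  # assumed toy ARMA(1,1) spectrum |b(w)|^2 / |a(w)|^2, sigma_w^2 = 1
    return (np.abs(1.0 + 0.3 * np.exp(-1j * w)) ** 2
            / np.abs(1.0 - 0.5 * np.exp(-1j * w)) ** 2)

dists = [np.linalg.norm(toeplitz_of(f, n) - circulant_of(f, n), 'fro')
         for n in (16, 64, 256)]
# dists stays bounded as n grows: the Toeplitz/circulant mismatch lives in the
# matrix corners, which is what makes the DFT-based schemes asymptotically optimal
```
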

Appendix F. Proof of Theorem 6

Proof. 
Fix $n>2m$. From ([5] p. 1756) we have
$$\frac{\sigma^{2}}{D}\left|\left(\frac{1}{n}\sum_{j=1}^{n}\sqrt{E\left(y_{j}^{2}\right)}\right)^{2}-\left(\frac{1}{n}\sum_{j=1}^{n}\sqrt{\lambda_{j}\left(T_{n}(f)\right)}\right)^{2}\right|\leq\frac{2\sigma^{2}}{Dn}\,\frac{\max(f)}{\min(f)}\sum_{k=1}^{m}k\left(f_{k}^{2}+f_{-k}^{2}\right).$$
Consequently, applying (1) and (8), we conclude that
$$\left|\widetilde{P}_{n}(D)-P_{n}(D)\right|\leq\frac{2\sigma^{2}}{Dn}\,\frac{\max(f)}{\min(f)}\sum_{k=1}^{m}k\left(f_{k}^{2}+f_{-k}^{2}\right).\qquad\square$$

References

  1. Fresnedo, O.; Vazquez-Araujo, F.J.; Castedo, L.; Garcia-Frias, J. Low-Complexity Near-Optimal Decoding for Analog Joint Source Channel Coding Using Space-Filling Curves. IEEE Commun. Lett. 2013, 17, 745–748.
  2. Sadhu, V.; Zhao, X.; Pompili, D. Energy-Efficient Analog Sensing for Large-Scale and High-Density Persistent Wireless Monitoring. IEEE Internet Things J. 2020, 7, 6778–6786.
  3. Mouris, B.A.; Stavrou, P.A.; Thobaben, R. Optimizing Low-Complexity Analog Mappings for Low-Power Sensors with Energy Scheduling Capabilities. IEEE Internet Things J. 2022, 1.
  4. Lee, K.H.; Petersen, D.P. Optimal Linear Coding for Vector Channels. IEEE Trans. Commun. 1976, 24, 1283–1290.
  5. Insausti, X.; Crespo, P.M.; Gutiérrez-Gutiérrez, J.; Zárraga-Rodríguez, M. Low-Complexity Analog Linear Coding Scheme. IEEE Commun. Lett. 2018, 22, 1754–1757.
  6. Gutiérrez-Gutiérrez, J.; Villar-Rosety, F.M.; Zárraga-Rodríguez, M.; Insausti, X. A Low-Complexity Analog Linear Coding Scheme for Transmitting Asymptotically WSS AR Sources. IEEE Commun. Lett. 2019, 23, 773–776.
  7. Gray, R.M. On the Asymptotic Eigenvalue Distribution of Toeplitz Matrices. IEEE Trans. Inf. Theory 1972, 18, 725–730.
  8. Gray, R.M. Toeplitz and Circulant Matrices: A Review. Found. Trends Commun. Inf. Theory 2006, 2, 155–239.
  9. Gutiérrez-Gutiérrez, J.; Zárraga-Rodríguez, M.; Villar-Rosety, F.M.; Insausti, X. Rate-Distortion Function Upper Bounds for Gaussian Vectors and Their Applications in Coding AR Sources. Entropy 2018, 20, 399.
  10. Gutiérrez-Gutiérrez, J.; Crespo, P.M. Block Toeplitz Matrices: Asymptotic Results and Applications. Found. Trends Commun. Inf. Theory 2011, 8, 179–257.
  11. Zárraga-Rodríguez, M.; Gutiérrez-Gutiérrez, J.; Insausti, X. A Low-Complexity and Asymptotically Optimal Coding Strategy for Gaussian Vector Sources. Entropy 2019, 21, 965.
  12. Gutiérrez-Gutiérrez, J.; Crespo, P.M. Asymptotically Equivalent Sequences of Matrices and Multivariate ARMA Processes. IEEE Trans. Inf. Theory 2011, 57, 5444–5454.
  13. Gutiérrez-Gutiérrez, J.; Zárraga-Rodríguez, M.; Insausti, X. On the Asymptotic Optimality of a Low-Complexity Coding Strategy for WSS, MA, and AR Vector Sources. Entropy 2020, 22, 1378.
Figure 1. Linear coding scheme.
Figure 2. Linear coding scheme based on an n × n real orthogonal matrix W n .
Figure 3. Low-power scheme.
Figure 4. DFT/IDFT scheme.
Figure 5. Theoretical average transmission powers of the considered schemes.
Figure 6. Interval between the 10th and the 90th percentile for the actual transmission power and the actual distortion (shaded) and theoretical average transmission power and distortion (solid line) in each of the considered schemes. (a) Optimal linear coding scheme (Section 2.2.1). (b) Low-power alternative (Section 3.1). (c) DFT-based alternative (Section 2.2.2). (d) DFT/IDFT alternative (Section 3.2).