Article

Achievable Information Rates for Probabilistic Amplitude Shaping: An Alternative Approach via Random Sign-Coding Arguments

by Yunus Can Gültekin *, Alex Alvarado and Frans M. J. Willems
Information and Communication Theory Lab, Signal Processing Systems Group, Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
* Author to whom correspondence should be addressed.
Entropy 2020, 22(7), 762; https://doi.org/10.3390/e22070762
Submission received: 3 April 2020 / Revised: 6 July 2020 / Accepted: 8 July 2020 / Published: 11 July 2020
(This article belongs to the Special Issue Information Theory for Communication Systems)

Abstract: Probabilistic amplitude shaping (PAS) is a coded modulation strategy in which constellation shaping and channel coding are combined. PAS has attracted considerable attention in both wireless and optical communications. Achievable information rates (AIRs) of PAS have been investigated in the literature using Gallager’s error exponent approach. In particular, it has been shown that PAS achieves the capacity of the additive white Gaussian noise channel (Böcherer, 2018). In this work, we revisit the capacity-achieving property of PAS and derive AIRs using weak typicality. Our objective is to provide alternative proofs based on random sign-coding arguments that are as constructive as possible. Accordingly, in our proofs, only some signs of the channel inputs are drawn from a random code, while the remaining signs and amplitudes are produced constructively. We consider both symbol-metric and bit-metric decoding.

1. Introduction

Coded modulation (CM) refers to the combination of forward error correction (FEC) codes and high-order modulation formats to reliably transmit more than one bit per channel use. Examples of CM strategies include multilevel coding (MLC) [1,2], in which each address bit of the signal point is protected by an individual binary FEC code, and trellis CM [3], which combines the functions of a trellis-based channel code and a modulator. Among the many CM strategies, bit-interleaved CM (BICM) [4,5], which combines a high-order modulation format with a binary FEC code through a binary labeling strategy and uses bit-metric decoding (BMD) at the receiver, is the de facto standard for CM. BICM is included in multiple wireless communication standards such as IEEE 802.11 [6] and DVB-S2 [7], and it is also currently the de facto CM choice for fiber optical communications.
Proposed in [8], probabilistic amplitude shaping (PAS) integrates constellation shaping into existing BICM systems. The shaping gap that exists for the additive white Gaussian noise (AWGN) channel [9] (Ch. 9) can be closed with PAS. To this end, an amplitude shaping block converts binary information strings into shaped amplitude sequences in an invertible manner. Then, a systematic FEC code produces parity bits from the binary labels of these amplitudes. These parity bits are used to select the signs, and the combination of the amplitudes and the signs, i.e., the probabilistically shaped channel inputs, is transmitted over the channel. PAS has attracted considerable attention in fiber optical communications due to its ability to provide rate adaptivity [10,11].
Achievable information rates (AIRs) of PAS have been investigated in the literature [12,13,14]. It has been shown that the capacity of the AWGN channel can be achieved with PAS, e.g., in [13] (Example 10.4). The achievability proofs in the literature are based on Gallager’s error exponent approach [15] (Ch. 5) or on strong typicality [16] (Ch. 1).
In this work, we provide a random sign-coding framework based on weak typicality that contains the achievability proofs relevant for the PAS architecture. We also revisit the capacity-achieving property of PAS for the AWGN channel. As explained in Section 2.5, the first main contribution of this paper is to provide a framework that combines the constructive approach to amplitude shaping with randomly chosen error-correcting codes, where the randomness is concentrated only in the choice of the signs. The second contribution is to provide a unifying framework of achievability proofs that brings together PAS results that are somewhat scattered in the literature, using a single proof technique, which we call the random sign-coding argument.
This work is organized as follows. In Section 2, we briefly summarize the related literature on CM, AIRs, and PAS and state our contribution. In Section 3, we provide some background information on typical sequences and define a modified (weakly) typical set. In Section 4, we explain the random sign-coding setup. Finally, in Section 5, we provide random sign-coding arguments to derive AIRs for PAS and, consequently, show that PAS achieves the capacity of a discrete-input memoryless channel with a symmetric capacity-achieving distribution. Conclusions are drawn in Section 6.

2. Related Work and Our Contribution

2.1. Notation

Capital letters $X$ are used to denote random variables, while lower case letters $x$ are used to denote their realizations. Underlined capital and lower case letters $\underline{X}$ and $\underline{x}$ are used to denote random vectors and their realizations, respectively. Boldface capital and lower case letters $\mathbf{X}$ and $\mathbf{x}$ are used to denote collections of random variables and their realizations, respectively. Underlined boldface capital and lower case letters $\underline{\mathbf{X}}$ and $\underline{\mathbf{x}}$ are used to denote collections of random vectors and their realizations, respectively. Element-wise multiplication of $\underline{x}$ and $\underline{y}$ is denoted by $\underline{x}\,\underline{y}$. Calligraphic letters $\mathcal{X}$ represent sets, while $\mathcal{X}\mathcal{Y} = \{xy : x \in \mathcal{X}, y \in \mathcal{Y}\}$. We denote by $\mathcal{X}^n$ the $n$-fold Cartesian product of $\mathcal{X}$ with itself, while $\mathcal{X} \times \mathcal{Y}$ is the Cartesian product of $\mathcal{X}$ and $\mathcal{Y}$. Probability density and mass functions over $\mathcal{X}$ are denoted by $p(x)$. We use $\mathbb{1}[\cdot]$ to denote the indicator function, which is one when its argument is true and zero otherwise. The entropy of $X$ is denoted by $H(X)$ (in bits) and the expected value of $X$ by $E[X]$.

2.2. Achievable Information Rates

For a memoryless channel that is characterized by an input alphabet $\mathcal{X}$, input distribution $p(x)$, and channel law $p(y|x)$, the maximum AIR is the mutual information (MI) $I(X;Y)$ of the channel input $X$ and output $Y$. Consequently, the capacity of this channel is defined as $I(X;Y)$ maximized over all possible input distributions $p(x)$, typically under an average power constraint, e.g., in [9] (Section 9.1). The MI can be achieved, e.g., with MLC and multi-stage decoding [1,2].

In BICM systems, channel inputs are uniquely labeled with $\log_2|\mathcal{X}| = (m+1)$-bit binary strings. Here, we assume that $|\mathcal{X}|$ is an integer power of two. At the transmitter, the output of a binary FEC code is mapped to channel inputs using this labeling strategy. At the receiver, BMD is employed, i.e., the binary labels $\mathbf{C} = (C_1, C_2, \ldots, C_{m+1})$ are assumed to be independent, and consequently, the symbol-wise decoding metric is written as the product of bit metrics:
$$q(x, y) = \prod_{i=1}^{m+1} q_i(c_i, y). \tag{1}$$
Since the metric in (1) is in general not proportional to $p(y|x)$, i.e., there is a mismatch between the actual channel law and the one assumed at the receiver, this setup is called mismatched decoding.
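As a concrete illustration of this mismatch, the following minimal sketch (assuming a Gray-labeled 4-ASK constellation, uniform inputs, and an AWGN channel with unit noise variance, none of which are prescribed by the analysis above) evaluates the product metric (1) with the bit metrics (4) and compares it with the channel law; the printed ratios $q(x,y)/p(y|x)$ vary with $x$, so (1) is indeed not proportional to $p(y|x)$:

```python
import numpy as np

# Hedged numerical sketch: product bit metric (1) vs. channel law p(y|x)
# for Gray-labeled 4-ASK on AWGN (illustrative assumptions throughout).
labels = {(0, 0): -3.0, (0, 1): -1.0, (1, 1): +1.0, (1, 0): +3.0}
sigma2 = 1.0  # assumed noise variance

def p_y_given_x(y, x):
    return np.exp(-(y - x) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

def bit_metric(i, c, y):
    # q_i(c_i, y) = p(y|c_i): channel law averaged over the two inputs whose
    # i-th label bit equals c (uniform, independent bit levels assumed)
    xs = [x for lbl, x in labels.items() if lbl[i] == c]
    return np.mean([p_y_given_x(y, x) for x in xs])

y = 0.7  # an arbitrary channel output
for lbl, x in labels.items():
    q = bit_metric(0, lbl[0], y) * bit_metric(1, lbl[1], y)  # metric (1)
    print(lbl, f"p(y|x) = {p_y_given_x(y, x):.4f}", f"q(x,y) = {q:.4f}")
```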
Different AIRs have been derived for this so-called mismatched decoding setup. One of these is the generalized MI (GMI) [17,18]:
$$\mathrm{GMI}\bigl(p(x)\bigr) = \max_{s \geq 0}\, E\!\left[\log_2 \frac{q(X,Y)^s}{\sum_{x \in \mathcal{X}} p(x)\, q(x,Y)^s}\right], \tag{2}$$
which reduces to [19] (Thm. 4.11, Coroll. 4.12) and [20]:
$$\mathrm{GMI}\bigl(p(c_1)p(c_2)\cdots p(c_{m+1})\bigr) = \sum_{i=1}^{m+1} I(C_i; Y) \tag{3}$$
when the bit levels are independent at the transmitter, i.e., $p(x) = p(\mathbf{c}) = p(c_1)p(c_2)\cdots p(c_{m+1})$ where $\mathbf{c} = (c_1, c_2, \ldots, c_{m+1})$, and:
$$q_i(c_i, y) = p(y|c_i). \tag{4}$$
The rate (3) is achievable for both uniform and shaped bit levels [5,21]. The problem of computing the bit level distributions that maximize the GMI in (3) was shown to be nonconvex in [22]. The parameter value that maximizes (2) to obtain (3) is $s = 1$.
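The agreement between (2) at $s = 1$ and the sum rate (3) can be checked numerically. The following Monte Carlo sketch (again assuming Gray-labeled 4-ASK on AWGN with uniform, independent bit levels; these choices are illustrative) estimates both quantities, and the two printed values agree up to simulation noise:

```python
import numpy as np

rng = np.random.default_rng(5)

# Monte Carlo sketch: GMI (2) at s = 1 with the matched bit metrics (4)
# vs. the sum rate (3), for Gray-labeled 4-ASK on AWGN (assumed sigma^2 = 1).
labels = np.array([(0, 0), (0, 1), (1, 1), (1, 0)])
pts, sig2, N = np.array([-3.0, -1.0, 1.0, 3.0]), 1.0, 200_000

idx = rng.integers(0, 4, N)                               # uniform inputs
y = pts[idx] + rng.normal(scale=np.sqrt(sig2), size=N)
lik = np.exp(-(y[:, None] - pts) ** 2 / (2 * sig2))       # p(y|x) up to a constant

# bit metrics q_i(c, y) = p(y|c_i): average likelihood over matching labels
q_all = np.ones((N, 4))                                   # q(x, y) for all 4 symbols
for i in range(2):
    qi = np.stack([lik[:, labels[:, i] == c].mean(axis=1) for c in (0, 1)], axis=1)
    q_all *= qi[:, labels[:, i]]
gmi = np.mean(np.log2(q_all[np.arange(N), idx] / (0.25 * q_all.sum(axis=1))))

# cross-check against (3): sum of I(C_i; Y) from the symbol posteriors
post = lik / lik.sum(axis=1, keepdims=True)               # p(x|y), uniform prior
sum_rate = 0.0
for i in range(2):
    p1 = post[:, labels[:, i] == 1].sum(axis=1).clip(1e-12, 1 - 1e-12)
    sum_rate += 1 - np.mean(-p1 * np.log2(p1) - (1 - p1) * np.log2(1 - p1))
print(f"GMI ≈ {gmi:.3f} bit/1D,  sum_i I(C_i;Y) ≈ {sum_rate:.3f} bit/1D")
```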
Another AIR for mismatched decoding is the LM rate (a lower bound on the mismatch capacity) [18,23]:
$$\mathrm{LM}\bigl(p(x)\bigr) = \max_{s \geq 0,\, r(\cdot)} E\!\left[\log_2 \frac{q(X,Y)^s\, r(X)}{\sum_{x \in \mathcal{X}} p(x)\, q(x,Y)^s\, r(x)}\right], \tag{5}$$
where $r(\cdot)$ is a real-valued cost function defined on $\mathcal{X}$. The expectations in (2) and (5) are taken with respect to $p(x,y)$.

When there is dependence among the bit levels, i.e., $p(x) = p(\mathbf{c}) \neq p(c_1)p(c_2)\cdots p(c_{m+1})$, the rate [24,25]:
$$R_{\mathrm{BMD}}\bigl(p(x)\bigr) = H(\mathbf{C}) - \sum_{i=1}^{m+1} H(C_i|Y) \tag{6}$$
has been shown to be achievable by BMD for any joint input distribution $p(\mathbf{c}) = p(c_1, c_2, \ldots, c_{m+1})$. In [24,25], the achievability of (6) was derived using random coding arguments based on strong typicality [16] (Ch. 1). Later, in [26] (Lemma 1), it was shown that (6) is an instance of the so-called LM rate (5) for $s = 1$, the symbol decoding metric (1), the bit decoding metrics (4), and the cost function:
$$r(c_1, c_2, \ldots, c_{m+1}) = \frac{\prod_{i=1}^{m+1} p(c_i)}{p(c_1, c_2, \ldots, c_{m+1})}. \tag{7}$$
We note here that $R_{\mathrm{BMD}}$ in (6) can be negative, as discussed in [26] (Section II-B). In such cases, $R_{\mathrm{BMD}}$ cannot be considered an achievable rate. To avoid this, $R_{\mathrm{BMD}}$ is defined as the maximum of (6) and zero in [26] (Equation (1)).
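For dependent bit levels, (6) can be estimated by Monte Carlo simulation. The sketch below (with an asymmetric 4-ASK input distribution chosen purely for illustration, so that $p(\mathbf{c}) \neq p(c_1)p(c_2)$) estimates $H(\mathbf{C})$ and $\sum_i H(C_i|Y)$ from the bitwise posteriors. If the estimate were negative, it would be clipped to zero, consistent with the convention of [26] mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo sketch of (6): R_BMD = H(C) - sum_i H(C_i|Y) for a 4-ASK input
# with dependent label bits on AWGN (all numerical choices are illustrative).
labels = np.array([(0, 0), (0, 1), (1, 1), (1, 0)])
points = np.array([-3.0, -1.0, +1.0, +3.0])
p_x = np.array([0.1, 0.2, 0.3, 0.4])      # assumed (asymmetric) shaped input
sigma2, N = 1.0, 200_000

idx = rng.choice(4, size=N, p=p_x)
y = points[idx] + rng.normal(scale=np.sqrt(sigma2), size=N)

# posterior p(x|y), then bitwise marginals p(C_i = 1 | y)
lik = np.exp(-(y[:, None] - points) ** 2 / (2 * sigma2)) * p_x
post = lik / lik.sum(axis=1, keepdims=True)

H_C = -np.sum(p_x * np.log2(p_x))
H_Ci_given_Y = 0.0
for i in range(2):
    p1 = post[:, labels[:, i] == 1].sum(axis=1).clip(1e-12, 1 - 1e-12)
    H_Ci_given_Y += np.mean(-p1 * np.log2(p1) - (1 - p1) * np.log2(1 - p1))
print(f"R_BMD ≈ {H_C - H_Ci_given_Y:.3f} bit/1D")
```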

2.3. Probabilistic Amplitude Shaping: Model

PAS [8] is a capacity-achieving CM strategy in which constellation shaping and FEC coding are combined as shown in Figure 1. In PAS, an amplitude shaping block first maps $k$-bit information strings to length-$n$ shaped amplitude sequences $\underline{a} = (a_1, a_2, \ldots, a_n)$ in an invertible manner. These amplitudes are drawn from a $2^m$-ary alphabet $\mathcal{A}$. The amplitude shaping block can be realized using constant composition distribution matching [27], multiset-partition distribution matching [28], shell mapping [29], enumerative sphere shaping [30], etc.

After the $n$ amplitudes are generated, the binary labels $\underline{c}_1 \underline{c}_2 \cdots \underline{c}_m$ of the amplitudes $\underline{a}$ and an additional $\gamma n$-bit information string $\underline{s}_i = (s_1, s_2, \ldots, s_{\gamma n})$ are fed to a rate-$(m+\gamma)/(m+1)$ systematic FEC encoder. The encoder produces $(1-\gamma)n$ parity bits $\underline{s}_p = (s_{\gamma n + 1}, s_{\gamma n + 2}, \ldots, s_n)$. The additional data bits $\underline{s}_i$ and the parity bits $\underline{s}_p$ are used as the signs $\underline{s} = (s_1, s_2, \ldots, s_n)$ for the amplitudes $\underline{a}$. Finally, the probabilistically shaped channel inputs $\underline{x} = \underline{s}\,\underline{a}$ are transmitted through the channel. Here, $\gamma$ is the rate of the additional information in bits per symbol (bit/1D) or, equivalently, the fraction of signs that are selected directly by data bits. The transmission rate of PAS is $R = k/n + \gamma$ in bit/1D.
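The structure of this transmitter can be sketched with toy stand-ins for the two blocks. In the following illustration, the invertible amplitude shaper is a constant-composition lookup table and the systematic FEC encoder is replaced by a trivial XOR-based parity rule; both are hypothetical placeholders rather than the CCDM/ESS/shell-mapping algorithms or a real code:

```python
import itertools
import numpy as np

# Toy PAS transmitter sketch (placeholders only, not a real system).
# Shaper: a constant-composition codebook over length-n amplitude sequences,
# indexed by the k-bit message -- an invertible (one-to-one) mapping.
n = 4
codebook = sorted(set(itertools.permutations((1, 1, 1, 3))))  # composition {1:3, 3:1}
k = int(np.log2(len(codebook)))                               # k = 2 info bits

def pas_encode(msg_bits, extra_bits, parity_fn):
    m_a = int("".join(map(str, msg_bits)), 2)
    a = np.array(codebook[m_a])                  # shaped amplitudes
    s_i = 1 - 2 * np.array(extra_bits)           # gamma*n data-carrying signs
    s_p = 1 - 2 * parity_fn(a, extra_bits)       # parity signs (systematic code)
    return np.concatenate([s_i, s_p]) * a        # x = s . a

# hypothetical "parity" rule standing in for a systematic FEC encoder
parity = lambda a, e: np.array([(int(a.sum()) + sum(e)) % 2] * (n - len(e)))

print(pas_encode([1, 0], [1], parity))   # gamma = 1/4, so R = k/n + gamma = 0.75
```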

2.4. Probabilistic Amplitude Shaping: Achievable Rates

Based on Gallager’s error exponent approach [15] (Ch. 5), AIRs of PAS were investigated in [12,13,14]. In [12], a random code ensemble was considered from which the channel inputs $\underline{x}$ were drawn. Then, the AIR in [12] (Equations (32)–(34)) was derived for a general memoryless decoding metric $q(x,y)$. It was shown that by properly selecting $q(x,y)$, both $I(X;Y)$ and the rate (6) can be recovered from the derived AIR, and consequently, they can be achieved with PAS.

Computing error exponents for PAS was also the main concern of the work presented in [13] (Ch. 10). The difference from [12] was in the random coding setup. In [13] (Ch. 10), a random code ensemble was considered from which only the signs $\underline{s}$ of the channel inputs were drawn at random. We call this the random sign-coding setup. The error exponent [13] (Equation (10.42)) was then derived, again for a general memoryless decoding metric. Error exponents of PAS have also been examined based on the joint source-channel coding (JSCC) setup in [14,31]. Random sign-coding was considered in [14,31], but only with symbol-metric decoding (SMD) and only for the specific case where $\gamma = 0$.

2.5. Our Contribution

In this work, we derive AIRs of PAS in a random sign-coding framework based on weak typicality [9] (Section 3.1, Section 7.6 and Section 15.2). We first consider basic sign-coding, in which the amplitudes of the channel inputs are generated constructively while the signs are drawn from a randomly generated code. Basic sign-coding corresponds to PAS with $\gamma = 0$. Then, we consider modified sign-coding, in which only some of the signs are drawn from the random code while the remaining ones are chosen directly by information bits. Modified sign-coding corresponds to PAS with $0 < \gamma < 1$. We compute AIRs for both SMD and BMD.

Our first objective is to provide alternative proofs of achievability in which the codes are generated as constructively as possible. In our random sign-coding experiment, both the amplitude sequences ($\underline{a}$) and the sign sequence parts ($\underline{s}_i$) that are information bits are constructively produced, and only the remaining signs ($\underline{s}_p$) are randomly generated, as illustrated in Figure 2. In most proofs of Shannon's channel coding theorem, channel input sequences ($\underline{x}$) are drawn at random, and the existence of a good code is demonstrated. Therefore, these proofs are not constructive and cannot be used to identify good codes, as discussed, e.g., in [32] (Section I) and the references therein. On the other hand, in our proofs using random sign-coding arguments, it is self-evident how at least a part of the code should be constructed. Our second objective is to provide a unified framework in which all possible PAS scenarios are considered, i.e., SMD or BMD at the receiver with $0 \leq \gamma < 1$, and the corresponding AIRs are determined using a single technique, i.e., the random sign-coding argument.

Note that our approach differs from the random sign-coding setup considered in [13,14], where all signs ($\underline{s}_i$ and $\underline{s}_p$) were generated randomly, which was called partially systematic encoding in [13] (Ch. 10). We will show later that only $\underline{s}_p$ needs to be chosen randomly. Furthermore, we define a special type of typicality ($\mathcal{B}$-typicality; see Definition 1 below) that allows us to avoid the mismatched JSCC approach of [14].

3. Preliminaries

3.1. Memoryless Channels

We consider communication over a memoryless channel with discrete input $X \in \mathcal{X}$ and discrete output $Y \in \mathcal{Y}$. The channel law is given by:
$$p(\underline{y}|\underline{x}) = \prod_{i=1}^{n} p(y_i|x_i). \tag{8}$$
Later, in Example 1, we will also discuss the AWGN channel $Y = X + Z$, where $Z$ is zero-mean Gaussian with variance $\sigma^2$. In this case, we assume that the channel output $Y$ is a quantized version of the continuous channel output $X + Z$. Furthermore, we assume that this quantization has a resolution high enough that the discrete-output channel is an accurate model for the underlying continuous-output channel. Therefore, the achievability results we will obtain for discrete memoryless channels carry over to the discrete-input AWGN channel.
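This discretization argument can be visualized numerically. The sketch below (for binary antipodal signaling with $\sigma^2 = 1$, an assumed toy setting) computes the MI of the quantized-output channel for increasingly fine output grids; the values converge to the continuous-output MI as the resolution grows:

```python
import numpy as np
from math import erf, sqrt

# Sketch of the quantization argument: MI of the quantized-output channel
# approaches the continuous-output MI as the grid is refined (binary
# antipodal input on AWGN with sigma^2 = 1 is an illustrative assumption).
X, p, s = np.array([-1.0, 1.0]), np.array([0.5, 0.5]), 1.0
Phi = np.vectorize(lambda t: 0.5 * (1 + erf(t / sqrt(2))))   # Gaussian CDF

for bins in (4, 16, 64, 256):
    e = np.linspace(-8, 8, bins + 1)                          # quantizer edges
    W = Phi((e[1:] - X[:, None]) / s) - Phi((e[:-1] - X[:, None]) / s)
    W /= W.sum(axis=1, keepdims=True)                         # fold in clipped tails
    py = p @ W
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p[:, None] * W * np.log2(W / py)
    print(bins, f"I(X;Y) ≈ {np.nansum(terms):.4f} bit")       # -> ~0.49 bit
```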

3.2. Typical Sequences

We will provide achievability proofs based on weak typicality. In this section, which is based on [9] (Section 3.1, Section 7.6, and Section 15.2), we formally define weak typicality and list the properties that will be used in this paper.

Let $\varepsilon > 0$ and let $n$ be a positive integer. Consider the random variable $X$ with probability distribution $p(x)$. Then, the (weakly) typical set $A_\varepsilon^n(X)$ of length-$n$ sequences with respect to $p(x)$ is defined as:
$$A_\varepsilon^n(X) \triangleq \left\{ \underline{x} \in \mathcal{X}^n : \left| -\frac{1}{n} \log_2 p(\underline{x}) - H(X) \right| \leq \varepsilon \right\}, \tag{9}$$
where:
$$p(\underline{x}) \triangleq \prod_{i=1}^{n} p(x_i). \tag{10}$$
The cardinality of the typical set $A_\varepsilon^n(X)$ satisfies [9] (Thm. 3.1.2):
$$(1-\varepsilon)\, 2^{n(H(X)-\varepsilon)} \overset{(a)}{\leq} |A_\varepsilon^n(X)| \overset{(b)}{\leq} 2^{n(H(X)+\varepsilon)}, \tag{11}$$
where (a) holds for $n$ sufficiently large and (b) holds for all $n$. For $\underline{x} \in A_\varepsilon^n(X)$, the probability of occurrence can be bounded as [9] (Equation (3.6)):
$$2^{-n(H(X)+\varepsilon)} \leq p(\underline{x}) \leq 2^{-n(H(X)-\varepsilon)}. \tag{12}$$
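The asymptotic equipartition property behind (9)–(12) is easy to verify empirically. The following sketch (with an arbitrary ternary $p(x)$, chosen only for illustration) samples i.i.d. sequences and measures how often they land in $A_\varepsilon^n(X)$; the fraction approaches one as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(2)

# Empirical check of weak typicality (9)-(12): Pr{X_seq in A_eps^n(X)} -> 1.
p = np.array([0.7, 0.2, 0.1])          # assumed p(x) over a ternary alphabet
H = -np.sum(p * np.log2(p))
eps = 0.1

for n in (10, 100, 1000):
    x = rng.choice(3, size=(5000, n), p=p)
    # -(1/n) log2 p(x_seq) is the average of -log2 p(x_i) over the sequence
    nll = np.mean(-np.log2(p[x]), axis=1)
    print(n, f"Pr[typical] ≈ {np.mean(np.abs(nll - H) <= eps):.3f}")
```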
The idea of typical sets can be generalized to pairs of $n$-sequences. Consider the pair of random variables $(X,Y)$ with probability distribution $p(x,y)$. Then, the typical set $A_\varepsilon^n(XY)$ of pairs of length-$n$ sequences with respect to $p(x,y)$ is defined as:
$$A_\varepsilon^n(XY) \triangleq \Bigl\{ (\underline{x}, \underline{y}) \in \mathcal{X}^n \times \mathcal{Y}^n : \left| -\tfrac{1}{n} \log_2 p(\underline{x}) - H(X) \right| \leq \varepsilon,\ \left| -\tfrac{1}{n} \log_2 p(\underline{y}) - H(Y) \right| \leq \varepsilon,\ \left| -\tfrac{1}{n} \log_2 p(\underline{x},\underline{y}) - H(X,Y) \right| \leq \varepsilon \Bigr\}, \tag{13}$$
where:
$$p(\underline{x}, \underline{y}) \triangleq \prod_{i=1}^{n} p(x_i, y_i), \tag{14}$$
and where $p(x)$ and $p(y)$ are the marginal distributions that correspond to $p(x,y)$. The cardinality of the typical set $A_\varepsilon^n(XY)$ satisfies [9] (Thm. 7.6.1):
$$|A_\varepsilon^n(XY)| \leq 2^{n(H(X,Y)+\varepsilon)} \tag{15}$$
for all $n$. For $(\underline{x}, \underline{y}) \in A_\varepsilon^n(XY)$, the probability of occurrence can be bounded in a manner similar to (12) as:
$$2^{-n(H(X,Y)+\varepsilon)} \leq p(\underline{x}, \underline{y}) \leq 2^{-n(H(X,Y)-\varepsilon)}. \tag{16}$$
Along the same lines, joint typicality can be extended to collections of $n$-sequences $(\underline{X}_1, \underline{X}_2, \ldots, \underline{X}_m)$, and the corresponding typical set $A_\varepsilon^n(X_1 X_2 \cdots X_m)$ can be defined similar to how (9) was extended to (13). Then, for $(\underline{x}_1, \underline{x}_2, \ldots, \underline{x}_m) \in A_\varepsilon^n(X_1 X_2 \cdots X_m)$, the probability of occurrence can be bounded in a manner similar to (16) as:
$$2^{-n(H(\mathbf{X})+\varepsilon)} \leq p(\underline{x}_1, \underline{x}_2, \ldots, \underline{x}_m) \leq 2^{-n(H(\mathbf{X})-\varepsilon)}, \tag{17}$$
where $\mathbf{X} = (X_1, X_2, \ldots, X_m)$.
Finally, we fix $\underline{x}$. The conditional (weakly) typical set $A_\varepsilon^n(Y|\underline{x})$ of length-$n$ sequences is defined as:
$$A_\varepsilon^n(Y|\underline{x}) = \left\{ \underline{y} : (\underline{x}, \underline{y}) \in A_\varepsilon^n(XY) \right\}. \tag{18}$$
In other words, $A_\varepsilon^n(Y|\underline{x})$ is the set of all $\underline{y}$ sequences that are jointly typical with $\underline{x}$. For $\underline{x} \in A_\varepsilon^n(X)$ and for sufficiently large $n$, the cardinality of the conditional typical set $A_\varepsilon^n(Y|\underline{x})$ satisfies [9] (Thm. 15.2.2):
$$|A_\varepsilon^n(Y|\underline{x})| \leq 2^{n(H(Y|X)+2\varepsilon)}. \tag{19}$$
Definition 1
($\mathcal{B}$-typicality). Let the input probability distribution $p(u)$ together with the transition probability distribution $p(v|u)$ determine the joint probability distribution $p(u,v) = p(u)p(v|u)$. Now, we define:
$$\mathcal{B}_{V,\varepsilon}^n(U) \triangleq \left\{ \underline{u} : \underline{u} \in A_\varepsilon^n(U) \text{ and } \Pr\left\{ (\underline{u}, \underline{V}) \in A_\varepsilon^n(UV) \mid \underline{U} = \underline{u} \right\} \geq 1 - \varepsilon \right\}, \tag{20}$$
where $\underline{V}$ is the output sequence of a "channel" $p(v|u)$ when the sequence $\underline{u}$ is input.

The set $\mathcal{B}_{V,\varepsilon}^n(U)$ in (20) guarantees that a sequence $\underline{u}$ in this $\mathcal{B}$-typical set will, with high probability, lead to a sequence $\underline{v}$ that is jointly typical with $\underline{u}$. We note that $U$ and/or $V$ can be composite. The set $\mathcal{B}_{V,\varepsilon}^n(U)$ has three properties, as stated in Lemma 1, the proof of which is given in Appendix A.
Lemma 1
($\mathcal{B}$-typicality properties). The set $\mathcal{B}_{V,\varepsilon}^n(U)$ in Definition 1 has the following properties:
P1:
For $\underline{u} \in \mathcal{B}_{V,\varepsilon}^n(U)$,
$$2^{-n(H(U)+\varepsilon)} \leq p(\underline{u}) \leq 2^{-n(H(U)-\varepsilon)}. \tag{21}$$
P2:
For $n$ large enough,
$$\sum_{\underline{u} \notin \mathcal{B}_{V,\varepsilon}^n(U)} p(\underline{u}) \leq \varepsilon.$$
P3:
$|\mathcal{B}_{V,\varepsilon}^n(U)| \leq 2^{n(H(U)+\varepsilon)}$ holds for all $n$, while $|\mathcal{B}_{V,\varepsilon}^n(U)| \geq (1-\varepsilon)\, 2^{n(H(U)-\varepsilon)}$ holds for $n$ large enough.
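The conditional probability in Definition 1 can be estimated by simulation. A minimal sketch, assuming uniform binary $U$ and a binary symmetric channel with crossover 0.1 as the "channel" $p(v|u)$ (a toy model, not one used in the paper), shows that a typical $\underline{u}$ produces a jointly typical $\underline{v}$ with probability approaching one for large $n$, as the definition of $\mathcal{B}_{V,\varepsilon}^n(U)$ and P2 suggest:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch of Definition 1 for an assumed toy model: uniform binary U, BSC(0.1)
# "channel" p(v|u). For a sampled u, estimate Pr{(u, V) in A_eps^n(UV) | U = u}.
# Both marginals are exactly uniform, so -(1/n)log2 p(u) = -(1/n)log2 p(v)
# = 1 = H(U) = H(V); only the joint condition of (13) needs checking.
n, eps, flip, trials = 5000, 0.05, 0.1, 2000
hb = lambda q: -q * np.log2(q) - (1 - q) * np.log2(1 - q)
H_UV = 1.0 + hb(flip)                     # H(U,V) = H(U) + H(V|U)

u = rng.integers(0, 2, size=n)            # any binary u is typical here
v = u[None, :] ^ (rng.random((trials, n)) < flip)

d = (v != u[None, :]).mean(axis=1)        # empirical flip fraction per trial
nll_uv = 1.0 - d * np.log2(flip) - (1 - d) * np.log2(1 - flip)
print(f"Pr ≈ {np.mean(np.abs(nll_uv - H_UV) <= eps):.3f}  (-> 1: u is B-typical)")
```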

4. Random Sign-Coding Experiment

We consider $2^{m+1}$-ary amplitude shift keying ($M$-ASK) alphabets $\mathcal{X} = \{-M+1, -M+3, \ldots, M-1\}$ where $M = 2^{m+1}$. We note that $\mathcal{X}$ is symmetric around the origin and can be factorized as $\mathcal{X} = \mathcal{S}\mathcal{A}$. Here, $\mathcal{S} = \{-1, +1\}$ and $\mathcal{A} = \{+1, +3, \ldots, M-1\}$ are the sign and amplitude alphabets, respectively. Accordingly, any channel input $x \in \mathcal{X}$ can be written as the multiplication of a sign and an amplitude, i.e., $x = sa$.

4.1. Random Sign-Coding Setup

We cast the PAS structure shown in Figure 1 as a sign-coding structure as in Figure 3. The sign-coding setup consists of two layers: a shaping layer and a coding layer.
Definition 2
(Sign-coding). For every message index pair $(m_a, m_s)$, with uniform $m_a \in \{1, 2, \ldots, M_a\}$ and uniform $m_s \in \{1, 2, \ldots, M_s\}$, a sign-coding structure as shown in Figure 3 consists of the following.
  • A shaping layer that produces, for every message index $m_a$, a length-$n$ shaped amplitude sequence $\underline{a}(m_a)$, where the mapping is one-to-one. The set of amplitude sequences is assumed to be shaped, but uncoded.
  • An additional $n_1$-bit (uniform) information string in the form of a sign sequence part $\underline{s}'(m_s) = (s_1(m_s), s_2(m_s), \ldots, s_{n_1}(m_s))$ for every message index $m_s$.
  • A coding layer that extends the sign sequence part $\underline{s}'(m_s)$ by adding a second (uniform) sign sequence part $\underline{s}''(m_a, m_s) = (s_{n_1+1}(m_a, m_s), s_{n_1+2}(m_a, m_s), \ldots, s_n(m_a, m_s))$ of length $n_2$ for all $m_a$ and $m_s$. This is obtained by using an encoder that produces redundant signs in the set $\mathcal{S}$ from $\underline{a}(m_a)$ and $\underline{s}'(m_s)$. Here, $n_1 + n_2 = n$.
Finally, the transmitted sequence is $\underline{x}(m_a, m_s) = \underline{a}(m_a)\,\underline{s}(m_a, m_s)$, where $\underline{s}(m_a, m_s) = (\underline{s}'(m_s), \underline{s}''(m_a, m_s))$. The sign-coding setup with $n_1 = 0$ ($\gamma = 0$) is called basic sign-coding, while the setup with $n_1 > 0$ ($\gamma > 0$) is called modified sign-coding.

4.2. Shaping Layer

When SMD is employed at the receiver, the shaping layer is as shown in Figure 4. Here, let $A$ be distributed with $p(a)$ over $a \in \mathcal{A}$. Then, the shaper produces for every message index $m_a$ a length-$n$ amplitude sequence $\underline{a}(m_a) \in \mathcal{B}_{SY,\varepsilon}^n(A)$. We note that for this sign-coding setup, the rate is:
$$R = \frac{1}{n} \log_2 (M_a M_s) = \gamma + \frac{1}{n} \log_2 |\mathcal{B}_{SY,\varepsilon}^n(A)| \geq H(A) + \gamma - 2\varepsilon, \tag{22}$$
where the inequality in (22) follows, for $n$ large enough, from P3.

On the other hand, when BMD is used at the receiver, the shaping layer is as shown in Figure 5. Here, let $\mathbf{B} = (B_1, B_2, \ldots, B_m)$ be distributed with $p(\mathbf{b}) = p(b_1, b_2, \ldots, b_m)$ over $(b_1, b_2, \ldots, b_m) \in \{0,1\}^m$. The shaper produces for every message index $m_a$ an $n$-sequence of $m$-tuples $\underline{\mathbf{b}}(m_a) = (\underline{b}_1(m_a), \underline{b}_2(m_a), \ldots, \underline{b}_m(m_a)) \in \mathcal{B}_{SY,\varepsilon}^n(B_1 B_2 \cdots B_m)$. Then, each $m$-tuple is mapped to an amplitude sequence $\underline{a}(m_a)$ by a symbol-wise mapping function $f(\cdot)$. We note that for this sign-coding setup, the rate is:
$$R = \frac{1}{n} \log_2 (M_a M_s) = \gamma + \frac{1}{n} \log_2 |\mathcal{B}_{SY,\varepsilon}^n(\mathbf{B})| \geq H(\mathbf{B}) + \gamma - 2\varepsilon, \tag{23}$$
where the inequality in (23) follows, for $n$ large enough, from P3.
To realize $f(\cdot)$, we label the channel inputs with $(m+1)$-bit strings. The amplitude is addressed by the $m$ amplitude bits $(B_1, B_2, \ldots, B_m)$, while the sign is addressed by a sign bit $S$. The symbol-wise mapping function $f(\cdot)$ in Figure 5 uses the addressing $(B_1, B_2, \ldots, B_m) \leftrightarrow A$. We emphasize that unlike the case in Section 2.2, we use $(S, B_1, B_2, \ldots, B_m)$ to denote a channel input instead of $(C_1, C_2, \ldots, C_{m+1})$. The amplitudes and signs of $x \in \mathcal{X}$ are tabulated for 8-ASK in Table 1, along with an example of the mapping function $f(b_1, b_2)$, namely the binary reflected Gray code [19] (Defn. 2.10).
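A small sketch of this addressing for 8-ASK follows; the binary reflected Gray code ordering used for $f(b_1, b_2)$ is an illustrative reconstruction of the example named above (the exact table is in Table 1):

```python
# Sketch of the sign/amplitude addressing for 8-ASK (m = 2). The BRGC-based
# mapping f below is an illustrative reconstruction of the example in Table 1.
brgc = {(0, 0): 1, (0, 1): 3, (1, 1): 5, (1, 0): 7}   # f: (b1, b2) -> amplitude

def to_input(s_bit, b1, b2):
    s = +1 if s_bit else -1                            # sign bit S addresses s
    return s * brgc[(b1, b2)]                          # x = s * a

for bits in [(0, 0, 0), (0, 1, 1), (1, 1, 0)]:
    print(bits, "->", to_input(*bits))
```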

4.3. Decoding Rules

At the receiver, SMD finds the unique message index pair $(\hat{m}_a, \hat{m}_s)$ such that the corresponding amplitude-sign sequence is jointly typical with the received output sequence $\underline{y}$, i.e., $(\underline{a}(\hat{m}_a), \underline{s}(\hat{m}_a, \hat{m}_s), \underline{y}) \in A_\varepsilon^n(ASY)$.

On the other hand, BMD finds the unique message index pair $(\hat{m}_a, \hat{m}_s)$ such that the corresponding bit and sign sequences are (individually) jointly typical with the received output sequence $\underline{y}$, i.e., $(\underline{s}(\hat{m}_a, \hat{m}_s), \underline{y}) \in A_\varepsilon^n(SY)$ and $(\underline{b}_j(\hat{m}_a), \underline{y}) \in A_\varepsilon^n(B_j Y)$ for $j = 1, 2, \ldots, m$. We note that the decoder can use the bit metrics $p(b_{j,i} = 1|y_i) = 1 - p(b_{j,i} = 0|y_i)$ for $j = 1, 2, \ldots, m$ and $i = 1, 2, \ldots, n$ to compute $p(\underline{b}_j|\underline{y})$. Here, $b_{j,i}$ is the $j$th bit of the $i$th symbol. Together with $p(\underline{y})$ and $p(\underline{b}_j)$, the decoder can check whether $(\underline{b}_j, \underline{y}) \in A_\varepsilon^n(B_j Y)$. We note that $B_j$ is in general not uniform. A similar statement holds for the uniform sign $S$.
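For discrete pairs with a known joint distribution, the typicality test used by these decoders reduces to three threshold checks on empirical log-probabilities, cf. (13). A minimal helper is sketched below (illustrative only; the decoder would run it for every candidate message pair and declare an error unless exactly one candidate passes):

```python
import numpy as np

# Sketch of the joint-typicality test of (13) for discrete (x, y) sequences.
def jointly_typical(x_seq, y_seq, p_xy, eps):
    p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)
    H = lambda q: -np.sum(q[q > 0] * np.log2(q[q > 0]))
    nll = lambda pr: -np.mean(np.log2(pr))             # -(1/n) log2 p(seq)
    return (abs(nll(p_x[x_seq]) - H(p_x)) <= eps
            and abs(nll(p_y[y_seq]) - H(p_y)) <= eps
            and abs(nll(p_xy[x_seq, y_seq]) - H(p_xy.ravel())) <= eps)

# toy check with a correlated binary pair distribution (assumed values)
rng = np.random.default_rng(6)
p_xy = np.array([[0.4, 0.1], [0.1, 0.4]])
flat = rng.choice(4, size=2000, p=p_xy.ravel())
x, y = flat // 2, flat % 2
print(jointly_typical(x, y, p_xy, eps=0.05))           # True with high probability
```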

5. Achievable Information Rates of Sign-Coding

Here, we investigate AIRs of the sign-coding architecture in Figure 3. We consider both SMD and BMD at the receiver. In what follows, four AIRs are presented. The proofs are based on $\mathcal{B}$-typicality, a variation of weak typicality, and on random sign-coding arguments; they are given in Appendix B. As indicated in Definition 2, the signs $S$ are assumed to be uniform in the proofs. We have not applied weak typicality for continuous random variables, discussed in [9] (Section 8.2) and [33] (Section 10.4), since our channels have discrete inputs and outputs. However, it is also possible to develop a hybrid version of weak typicality that matches discrete-input, continuous-output channels.

In the following, the concept of an AIR is formally defined in the sign-coding context.

Definition 3
(Achievable information rate). A rate $R$ is said to be achievable if for every $\delta > 0$ and $n$ large enough, there exists a sign-coding encoder and a decoder such that $(1/n) \log_2 (M_a M_s) \geq R - \delta$ and the error probability satisfies $P_e \leq \delta$.

5.1. Sign-Coding with Symbol-Metric Decoding

Theorem 1
(Basic sign-coding with SMD). For a memoryless channel $\{\mathcal{X}, p(y|x), \mathcal{Y}\}$ with amplitude shaping and basic sign-coding, the rate:
$$R_{\mathrm{SMD}}^{\gamma=0} = \max_{p(a):\, H(A) \leq I(SA;Y)} H(A) \tag{24}$$
is achievable using SMD.

Theorem 1 implies that for a memoryless channel, the rate $R = H(A)$ is achievable with basic sign-coding, as long as $H(A) \leq I(SA;Y) = I(X;Y)$ is satisfied. For the AWGN channel, this means that a range of rate-SNR pairs is achievable. Here, SNR denotes the signal-to-noise ratio. One of these points, $H(A) = I(SA;Y)$, is on the capacity-SNR curve. Note that here, "capacity" indicates the largest achievable rate using $\mathcal{X}$ as the channel input alphabet under the average power constraint. It can be observed from Figure 6, discussed in Example 1, that there indeed exists an amplitude distribution $p(a)$ for which $H(A) = I(SA;Y)$.
Theorem 2
(Modified sign-coding with SMD). For a memoryless channel $\{\mathcal{X}, p(y|x), \mathcal{Y}\}$ with amplitude shaping and modified sign-coding, the rate:
$$R_{\mathrm{SMD}}^{\gamma>0} = \max_{p(a),\, \gamma:\, H(A)+\gamma \leq I(SA;Y)} H(A) + \gamma \tag{25}$$
is achievable using SMD for $\gamma < 1$.

Theorem 2 implies that for a memoryless channel, the rate $H(A) + \gamma$ is achievable with modified sign-coding, as long as $R = H(A) + \gamma \leq I(SA;Y) = I(X;Y)$ is satisfied. For the AWGN channel, this means that all points on the capacity-SNR curve for which $H(X|Y) \leq 1 - \gamma$ are achievable. This follows from:
$$H(A) + \gamma \leq I(SA;Y) = H(SA) - H(SA|Y) = H(A) + 1 - H(X|Y), \tag{26}$$
i.e., the constraint in the maximization in (25).
Example 1.
We consider the AWGN channel with average power constraint $E[X^2] \leq P$. Figure 6 shows the capacity of 4-ASK:
$$C_{4\text{-}\mathrm{ASK}} = \max_{p(x):\, \mathcal{X} = \{-3,-1,+1,+3\},\, E[X^2] \leq P} I(X;Y) \tag{27}$$
together with the amplitude entropy $H(A)$ of the distribution that achieves this capacity. Here, $\mathrm{SNR} = E[X^2]/\sigma^2$, and $\sigma^2$ is the noise variance. Basic sign-coding achieves capacity only for $\mathrm{SNR} = 0.72$ dB, i.e., at the point where $H(A) = I(X;Y)$, which is $C_{4\text{-}\mathrm{ASK}} = 0.562$ bit/1D. We see from Figure 6 that the shaping gap is negligible around this point, i.e., the capacity $C_{4\text{-}\mathrm{ASK}}$ of 4-ASK and the MI $I(X;Y)$ for uniform $p(x)$ are virtually the same. On the other hand, this gap is significant at larger rates, e.g., it is around 0.42 dB at 1.6 bit/1D. To achieve rates larger than 0.562 bit/1D on the capacity-SNR curve, modified sign-coding ($\gamma > 0$) is required. At a given SNR, $C_{4\text{-}\mathrm{ASK}}$ can be written as $C_{4\text{-}\mathrm{ASK}} = H(A) + \gamma$, i.e., when the $H(A)$ curve is shifted up by $\gamma$, the crossing point is again at $C_{4\text{-}\mathrm{ASK}}$ for that SNR. We also plot the additional rate $\gamma = C_{4\text{-}\mathrm{ASK}} - H(A)$ in Figure 6. As an example, at $\mathrm{SNR} = 9.74$ dB, $C_{4\text{-}\mathrm{ASK}} = H(A) + \gamma = 1.6$ bit/1D can be achieved with modified sign-coding, where $H(A) = 0.9$ and $\gamma = 0.7$. We observe that sign-coding achieves the capacity of 4-ASK for $\mathrm{SNR} \geq 0.72$ dB.
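The basic sign-coding operating point of this example can be located numerically. The sketch below fixes the SNR at 0.72 dB and searches, within the Maxwell-Boltzmann amplitude family (an assumed, near-optimal stand-in for the true capacity-achieving distribution), for the parameter at which $H(A) = I(X;Y)$; the resulting values land close to the 0.562 bit/1D point discussed above:

```python
import numpy as np

# Numerical sketch of Example 1 (assumptions: Maxwell-Boltzmann inputs
# p(x) ~ exp(-lam*x^2) and a finely quantized output grid, per Section 3.1).
X = np.array([-3.0, -1.0, +1.0, +3.0])
y, dy = np.linspace(-15, 15, 6001, retstep=True)

def stats(lam, snr_db):
    p = np.exp(-lam * X**2); p /= p.sum()
    sigma2 = (p @ X**2) / 10 ** (snr_db / 10)           # keep SNR fixed
    W = np.exp(-(y - X[:, None]) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    p_y = p @ W
    I = np.sum(p[:, None] * W * np.log2(W / p_y)) * dy  # I(X;Y) in bits
    pa3 = p[0] + p[3]                                   # Pr[A = 3]
    HA = -pa3 * np.log2(pa3) - (1 - pa3) * np.log2(1 - pa3)
    return I, HA

lo, hi = 0.0, 2.0                # bisect on H(A) - I(X;Y) at SNR = 0.72 dB
for _ in range(40):
    mid = (lo + hi) / 2
    I, HA = stats(mid, 0.72)
    lo, hi = (mid, hi) if HA > I else (lo, mid)
print(f"lambda ≈ {mid:.3f}:  H(A) ≈ {HA:.3f} ≈ I(X;Y) ≈ {I:.3f} bit/1D")
```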

5.2. Sign-Coding with Bit-Metric Decoding

The following theorems give AIRs for sign-coding with BMD.
Theorem 3
(Basic sign-coding with BMD). For a memoryless channel $\{\mathcal{X}, p(y|x), \mathcal{Y}\}$ with amplitude shaping using $M$-ASK and basic sign-coding, the rate:
$$R_{\mathrm{BMD}}^{\gamma=0} = \max_{p(\mathbf{b}):\, H(\mathbf{B}) \leq R_{\mathrm{BMD}}(p(x))} H(\mathbf{B}) \tag{28}$$
is achievable using BMD. Here, $\mathbf{B} = (B_1, B_2, \ldots, B_m)$, $p(\mathbf{b}) = p(b_1, b_2, \ldots, b_m)$, $p(x) = p(s, b_1, b_2, \ldots, b_m)$, and $R_{\mathrm{BMD}}(p(x))$ is as defined in (6).
Theorem 4
(Modified sign-coding with BMD). For a memoryless channel $\{\mathcal{X}, p(y|x), \mathcal{Y}\}$ with amplitude shaping using $M$-ASK and modified sign-coding, the rate:
$$R_{\mathrm{BMD}}^{\gamma>0} = \max_{p(\mathbf{b}),\, \gamma:\, H(\mathbf{B})+\gamma \leq R_{\mathrm{BMD}}(p(x))} H(\mathbf{B}) + \gamma \tag{29}$$
is achievable using BMD for $\gamma < 1$.

Theorems 3 and 4 imply that for a memoryless channel, the rate $R = H(\mathbf{B}) + \gamma = H(A) + \gamma$ is achievable with sign-coding and BMD, as long as $R \leq R_{\mathrm{BMD}}$ is satisfied.
Remark 1
(Random sign-coding with binary linear codes). An amplitude can be represented by $m$ bits. We can uniformly generate a code matrix with $mn$ rows of length $n$. This matrix can be used to produce the sign sequences. This results in the pairwise independence of any two different sign sequences, as explained in the proof of [15] (Theorem 6.2.1). Inspection of the proof of our Theorem 1 shows that only the pairwise independence of sign sequences is needed. Therefore, achievability can also be obtained with a binary linear code. Note that our linear code can also be seen as a systematic code that generates parity. The code rate of the corresponding systematic code is $m/(m+1)$. For BMD, a similar reasoning shows that linear codes lead to achievability, and also for modified sign-coding, achievability follows for binary linear codes. The rate of the systematic code that corresponds to the modified setting is $(m+\gamma)/(m+1)$.
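The pairwise-independence property invoked here can be checked empirically for a small toy code (assumed dimensions $k = 3$, $n = 4$; the remark itself concerns an $mn \times n$ matrix): over uniformly drawn binary generator matrices, the codeword pair of two linearly independent messages is uniform over all $2^{2n}$ possibilities, which is exactly pairwise independence:

```python
import numpy as np

rng = np.random.default_rng(4)

# Empirical check of pairwise independence under a uniformly random binary
# code matrix (toy dimensions; illustrative of [15] (Theorem 6.2.1)).
k, n, draws = 3, 4, 50_000
m1, m2 = np.array([1, 0, 1]), np.array([0, 1, 1])   # linearly independent
counts = {}
for _ in range(draws):
    G = rng.integers(0, 2, size=(k, n))
    key = (tuple(m1 @ G % 2), tuple(m2 @ G % 2))
    counts[key] = counts.get(key, 0) + 1
freqs = np.array(list(counts.values())) / draws
print(f"{len(counts)} distinct pairs seen (of {2**(2*n)}), "
      f"freq range [{freqs.min():.4f}, {freqs.max():.4f}] vs. uniform {1/2**(2*n):.4f}")
```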

6. Conclusions

In this paper, we studied achievable information rates (AIRs) of probabilistic amplitude shaping (PAS) for discrete-input memoryless channels. In contrast to the existing literature in which Gallager’s error exponent approach was followed, we used a weak typicality framework. Random sign-coding arguments based on weak typicality were introduced to upper-bound the probability of error of a so-called sign-coding structure. The achievability of the mutual information was demonstrated for uniform signs, which were independent of the amplitudes. Sign-coding combined with amplitude shaping corresponded to PAS, and consequently, PAS achieved the capacity of a discrete-input memoryless channel with a symmetric capacity-achieving distribution.
Our approach was different from the random coding arguments considered in the literature in the sense that our motivation was to provide achievability proofs that are as constructive as possible. To this end, in our random sign-coding setup, both the amplitudes and the signs of the channel inputs that are directly selected by information bits were constructively produced. Only the remaining signs were drawn at random. A study of the achievability of capacity for channels with asymmetric capacity-achieving distributions with a type of sign-coding is left for possible future research.

Author Contributions

Conceptualization, Y.C.G. and F.M.J.W.; formal analysis, Y.C.G., A.A., and F.M.J.W.; software, Y.C.G.; writing, original draft, Y.C.G. and F.M.J.W.; writing, review and editing, Y.C.G., A.A., and F.M.J.W. All authors have read and agreed to the published version of the manuscript.

Funding

The work of Y.C.G. and A.A. received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant Agreement No. 757791).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Lemma 1

Appendix A.1. Proof of P1

We see from [9] (Equation (3.6)) that for $\underline{u} \in A_\varepsilon^n(U)$,
$$2^{-n(H(U)+\varepsilon)} \leq p(\underline{u}) \leq 2^{-n(H(U)-\varepsilon)}. \tag{A1}$$
By Definition 1, each $\underline{u} \in \mathcal{B}_{V,\varepsilon}^n(U)$ is also in $A_\varepsilon^n(U)$; more specifically, $\mathcal{B}_{V,\varepsilon}^n(U) \subseteq A_\varepsilon^n(U)$. Consequently, (A1) also holds for $\underline{u} \in \mathcal{B}_{V,\varepsilon}^n(U)$, which completes the proof of P1.

Appendix A.2. Proof of P2

Let $(\underline{U}, \underline{V})$ be independent and identically distributed with respect to $p(u,v)$. Then:
$$\begin{aligned}
\Pr\{(\underline{U}, \underline{V}) \in A_\varepsilon^n(UV)\} &= \sum_{\underline{u}} p(\underline{u}) \sum_{\underline{v}:\, (\underline{u},\underline{v}) \in A_\varepsilon^n(UV)} p(\underline{v}|\underline{u}) && \text{(A2)} \\
&= \sum_{\underline{u} \in \mathcal{B}_{V,\varepsilon}^n(U)} p(\underline{u}) \sum_{\underline{v}:\, (\underline{u},\underline{v}) \in A_\varepsilon^n(UV)} p(\underline{v}|\underline{u}) + \sum_{\underline{u} \notin \mathcal{B}_{V,\varepsilon}^n(U)} p(\underline{u}) \sum_{\underline{v}:\, (\underline{u},\underline{v}) \in A_\varepsilon^n(UV)} p(\underline{v}|\underline{u}) && \text{(A3)} \\
&\leq \sum_{\underline{u} \in \mathcal{B}_{V,\varepsilon}^n(U)} p(\underline{u}) + \sum_{\underline{u} \notin \mathcal{B}_{V,\varepsilon}^n(U)} p(\underline{u})\,(1-\varepsilon) && \text{(A4)} \\
&= 1 - \varepsilon + \varepsilon \sum_{\underline{u} \in \mathcal{B}_{V,\varepsilon}^n(U)} p(\underline{u}) && \text{(A5)} \\
&= 1 - \varepsilon + \varepsilon \Pr\{\underline{U} \in \mathcal{B}_{V,\varepsilon}^n(U)\}. && \text{(A6)}
\end{aligned}$$
Here, (A4) follows from Definition 1, which states that $\Pr\{(\underline{u}, \underline{V}) \in A_\varepsilon^n(UV) \mid \underline{U} = \underline{u}\} < 1 - \varepsilon$ for $\underline{u} \in A_\varepsilon^n(U)$ if $\underline{u} \notin \mathcal{B}_{V,\varepsilon}^n(U)$. Then, from (A6), we obtain:
$$\begin{aligned}
\Pr\{\underline{U} \in \mathcal{B}_{V,\varepsilon}^n(U)\} &\geq \frac{\Pr\{(\underline{U}, \underline{V}) \in A_\varepsilon^n(UV)\} - 1 + \varepsilon}{\varepsilon} && \text{(A7)} \\
&= 1 - \frac{1 - \Pr\{(\underline{U}, \underline{V}) \in A_\varepsilon^n(UV)\}}{\varepsilon} && \text{(A8)} \\
&\geq 1 - \varepsilon && \text{(A9)}
\end{aligned}$$
for large enough $n$. Here, (A9) follows from [9] (Thm. 7.6.1), which states that $\Pr\{(\underline{U}, \underline{V}) \in A_\varepsilon^n(UV)\} \to 1$ as $n \to \infty$. This implies that $1 - \Pr\{(\underline{U}, \underline{V}) \in A_\varepsilon^n(UV)\} \leq \varepsilon^2$ for positive $\varepsilon$ and large enough $n$, which completes the proof.

Appendix A.3. Proof of P3

We see from [9] (Thm. 3.1.2) that:
$$|A_\varepsilon^n(U)| \leq 2^{n(H(U)+\varepsilon)}. \tag{A10}$$
Since $\mathcal{B}_{V,\varepsilon}^n(U) \subseteq A_\varepsilon^n(U)$, again by Definition 1, (A10) also holds for $|\mathcal{B}_{V,\varepsilon}^n(U)|$. This proves the upper bound in P3. To prove the lower bound, we obtain from (A9), for $n$ sufficiently large, that:
$$\begin{aligned}
1 - \varepsilon &\leq \Pr\{\underline{U} \in \mathcal{B}_{V,\varepsilon}^n(U)\} && \text{(A11)} \\
&\leq \sum_{\underline{u} \in \mathcal{B}_{V,\varepsilon}^n(U)} 2^{-n(H(U)-\varepsilon)} && \text{(A12)} \\
&= |\mathcal{B}_{V,\varepsilon}^n(U)|\, 2^{-n(H(U)-\varepsilon)}, && \text{(A13)}
\end{aligned}$$
where (A12) follows from (A1).

Appendix B. Proofs of Theorems 1, 2, 3, and 4

To derive AIRs, we will follow the classical approach, e.g., as in [9] (Section 7.7), and upper-bound the average of the probability of error $\bar{P}_e$ over a random choice of sign-codebooks. This way, we will demonstrate the existence of at least one good sign-code. Again as in [9] (Section 7.7), and as explained in Section 4.3, we decode by joint typicality: the decoder looks for a unique message index pair $(\hat{m}_a, \hat{m}_s)$ for which the corresponding amplitude-sign sequence $(\underline{a}, \underline{s})$ is jointly typical with the received sequence $\underline{y}$.

By the properties of weak typicality and $\mathcal{B}$-typicality, the transmitted amplitude-sign sequence and the received sequence are jointly typical with high probability for $n$ large enough. We call the event that the transmitted amplitude-sign sequence is not jointly typical with the received sequence the first error event, with average probability $\bar{P}_e^{(1)}$. Furthermore, the probability that any other (not transmitted) amplitude-sign sequence is jointly typical with the received sequence vanishes for asymptotically large $n$. We call the event that there is another amplitude-sign sequence that is jointly typical with the received sequence the second error event, with average probability $\bar{P}_e^{(2)}$. Observing that these events are not disjoint, we can write [9] (Equation (7.75)):
$$\bar{P}_e \leq \bar{P}_e^{(1)} + \bar{P}_e^{(2)}. \tag{A14}$$

Appendix B.1. Proof of Theorem 1

For the error of the first kind, we can write:
$$\begin{aligned}
\bar{P}_e^{(1)} &= \sum_{m_a=1}^{M_a} \frac{1}{M_a} \sum_{\underline{s} \in \mathcal{S}^n} p(\underline{s}) \sum_{\underline{y} \in \mathcal{Y}^n} p(\underline{y}|\underline{a}(m_a), \underline{s})\, \mathbb{1}\bigl[(\underline{a}(m_a), \underline{s}, \underline{y}) \notin A_\varepsilon^n(ASY)\bigr] && \text{(A15)} \\
&= \sum_{m_a} \frac{1}{M_a} \sum_{\underline{s}} \sum_{\underline{y}} p(\underline{s}, \underline{y}|\underline{a}(m_a))\, \mathbb{1}\bigl[(\underline{a}(m_a), \underline{s}, \underline{y}) \notin A_\varepsilon^n\bigr] && \text{(A16)} \\
&= \sum_{m_a} \frac{1}{M_a} \Pr\bigl\{(\underline{a}(m_a), \underline{S}, \underline{Y}) \notin A_\varepsilon^n \mid \underline{A} = \underline{a}(m_a)\bigr\} && \text{(A17)} \\
&\leq \sum_{m_a} \frac{\varepsilon}{M_a} && \text{(A18)} \\
&= \varepsilon, && \text{(A19)}
\end{aligned}$$
where we simplified the notation by replacing $m_a = 1, 2, \ldots, M_a$ by $m_a$, $\underline{s} \in \mathcal{S}^n$ by $\underline{s}$, and $\underline{y} \in \mathcal{Y}^n$ by $\underline{y}$ in (A16). Furthermore, we dropped the index of the typical set $A_\varepsilon^n(ASY)$ and used $A_\varepsilon^n$ instead. We will follow these conventions for summations and for typical sets in the rest of the paper, assuming for the latter that the index of the typical set is clear from the context. To obtain (A16), we used $p(\underline{s})\, p(\underline{y}|\underline{a}(m_a), \underline{s}) = p(\underline{s}, \underline{y}|\underline{a}(m_a))$. Then, (A18) is a direct consequence of Definition 1, since $\underline{a}(m_a) \in \mathcal{B}_{SY,\varepsilon}^n(A)$ for $m_a = 1, 2, \ldots, M_a$.
For the error of the second kind, we can write:
$$\begin{aligned}
\bar{P}_e^{(2)} &\leq \sum_{m_a} \frac{1}{M_a} \sum_{\underline{s}} p(\underline{s}) \sum_{\underline{y}} p(\underline{y}|\underline{a}(m_a), \underline{s}) \sum_{k_a=1,\, k_a \neq m_a}^{M_a} \sum_{\tilde{\underline{s}} \in \mathcal{S}^n} p(\tilde{\underline{s}})\, \mathbb{1}\bigl[(\underline{a}(k_a), \tilde{\underline{s}}, \underline{y}) \in A_\varepsilon^n\bigr] && \text{(A20)} \\
&= M_a \sum_{m_a} \sum_{\underline{s}} \frac{p(\underline{s})}{M_a} \sum_{\underline{y}} p(\underline{y}|\underline{a}(m_a), \underline{s}) \sum_{k_a \neq m_a} \sum_{\tilde{\underline{s}}} \frac{p(\tilde{\underline{s}})}{M_a}\, \mathbb{1}\bigl[(\underline{a}(k_a), \tilde{\underline{s}}, \underline{y}) \in A_\varepsilon^n\bigr] && \text{(A21)} \\
&\leq M_a\, 2^{6n\varepsilon} \sum_{m_a} \sum_{\underline{s}} p(\underline{a}(m_a))\, p(\underline{s}) \sum_{\underline{y}} p(\underline{y}|\underline{a}(m_a), \underline{s}) \sum_{k_a \neq m_a} \sum_{\tilde{\underline{s}}} p(\underline{a}(k_a))\, p(\tilde{\underline{s}})\, \mathbb{1}\bigl[(\underline{a}(k_a), \tilde{\underline{s}}, \underline{y}) \in A_\varepsilon^n\bigr] && \text{(A22)} \\
&\leq M_a\, 2^{6n\varepsilon} \sum_{\underline{a} \in \mathcal{A}^n} \sum_{\underline{s}} p(\underline{a})\, p(\underline{s}) \sum_{\underline{y}} p(\underline{y}|\underline{a}, \underline{s}) \sum_{\tilde{\underline{a}} \in \mathcal{A}^n} \sum_{\tilde{\underline{s}}} p(\tilde{\underline{a}})\, p(\tilde{\underline{s}})\, \mathbb{1}\bigl[(\tilde{\underline{a}}, \tilde{\underline{s}}, \underline{y}) \in A_\varepsilon^n\bigr] && \text{(A23)} \\
&= M_a\, 2^{6n\varepsilon} \sum_{(\underline{y}, \tilde{\underline{x}}) \in A_\varepsilon^n} p(\tilde{\underline{x}})\, p(\underline{y}) && \text{(A24)} \\
&\leq 2^{n(H(A)+\varepsilon)}\, 2^{6n\varepsilon}\, |A_\varepsilon^n(XY)|\, 2^{-n(H(X)-\varepsilon)}\, 2^{-n(H(Y)-\varepsilon)} && \text{(A25)} \\
&\leq 2^{n(H(A)+7\varepsilon)}\, 2^{n(H(X,Y)+\varepsilon)}\, 2^{-n(H(X)-\varepsilon)}\, 2^{-n(H(Y)-\varepsilon)} && \text{(A26)} \\
&= 2^{n(H(A) - I(SA;Y) + 10\varepsilon)}, && \text{(A27)}
\end{aligned}$$
where we simplified the notation by replacing $k_a = 1, 2, \ldots, M_a : k_a \neq m_a$ by $k_a \neq m_a$ and $\tilde{\underline{s}} \in \mathcal{S}^n$ by $\tilde{\underline{s}}$ in (A21). We will follow this convention in the rest of the paper. Then:
(A22) follows, for $n$ sufficiently large and for $\underline{a} \in \mathcal{B}_{SY,\varepsilon}^n(A)$, from:
$$\begin{aligned}
\frac{1}{M_a} = \frac{1}{|\mathcal{B}_{SY,\varepsilon}^n(A)|} &\leq \frac{2^{-n(H(A)-\varepsilon)}}{1-\varepsilon} && \text{(A28)} \\
&= \frac{2^{2n\varepsilon}}{1-\varepsilon}\, 2^{-n(H(A)+\varepsilon)} && \text{(A29)} \\
&\leq \frac{2^{2n\varepsilon}}{1-\varepsilon}\, p(\underline{a}) && \text{(A30)} \\
&\leq 2^{3n\varepsilon}\, p(\underline{a}), && \text{(A31)}
\end{aligned}$$
where (A28) follows from the $\mathcal{B}$-typicality property P3, (A30) follows from the $\mathcal{B}$-typicality property P1, and (A31) holds for all large enough $n$.

(A23) follows from summing over $\underline{a} \in \mathcal{A}^n$ instead of over $\underline{a}(m_a) \in \mathcal{B}_\varepsilon^n$, and over $\tilde{\underline{a}} \in \mathcal{A}^n$ instead of over $\underline{a}(k_a) \in \mathcal{B}_\varepsilon^n$ for $k_a \neq m_a$.

(A24) is obtained by working out the summations over $\underline{a}$ and $\underline{s}$ and by replacing $\tilde{\underline{a}}\,\tilde{\underline{s}}$ with $\tilde{\underline{x}}$.

(A25) follows from $M_a = |\mathcal{B}_\varepsilon^n(A)| \leq 2^{n(H(A)+\varepsilon)}$, i.e., the $\mathcal{B}$-typicality property P3, and from (12).

(A26) follows from (15).
The conclusion from (A27) is that, for $H(A) < I(X;Y) - 10\varepsilon$, the error probability of the second kind satisfies:
$$\bar{P}_e^{(2)} \leq \varepsilon \tag{A32}$$
for $n$ large enough. Using (A19) and (A32) in (A14), we find that the total error probability averaged over all possible sign-codes satisfies $\bar{P}_e \leq 2\varepsilon$ for $n$ large enough. This implies the existence of a basic sign-code with total error probability $P_e = \Pr\{\hat{M}_a \neq M_a\} \leq 2\varepsilon$. This holds for all $\varepsilon > 0$, and therefore, the rate:
$$R = H(A) \leq I(X;Y) \tag{A33}$$
is achievable with basic sign-coding, which concludes the proof of Theorem 1.

Appendix B.2. Proof of Theorem 2

For the error of the first kind, we can write:
$$\begin{aligned}
\bar{P}_e^{(1)} &= \sum_{m_a} \frac{1}{M_a} \sum_{m_s=1}^{M_s} \frac{1}{2^{n_1}} \sum_{\underline{s}'' \in \mathcal{S}^{n_2}} p(\underline{s}'') \sum_{\underline{y}} p(\underline{y}|\underline{a}(m_a), \underline{s}'(m_s)\underline{s}'')\, \mathbb{1}\bigl[(\underline{a}(m_a), \underline{s}'(m_s)\underline{s}'', \underline{y}) \notin A_\varepsilon^n\bigr] && \text{(A34)} \\
&= \sum_{m_a} \frac{1}{M_a} \sum_{m_s} \sum_{\underline{s}''} 2^{-n} \sum_{\underline{y}} p(\underline{y}|\underline{a}(m_a), \underline{s}'(m_s)\underline{s}'')\, \mathbb{1}\bigl[(\underline{a}(m_a), \underline{s}'(m_s)\underline{s}'', \underline{y}) \notin A_\varepsilon^n\bigr] && \text{(A35)} \\
&= \sum_{m_a} \frac{1}{M_a} \sum_{m_s} \sum_{\underline{s}''} \sum_{\underline{y}} p(\underline{s}'(m_s)\underline{s}'', \underline{y}|\underline{a}(m_a))\, \mathbb{1}\bigl[(\underline{a}(m_a), \underline{s}'(m_s)\underline{s}'', \underline{y}) \notin A_\varepsilon^n\bigr] && \text{(A36)} \\
&= \sum_{m_a} \frac{1}{M_a} \Pr\bigl\{(\underline{a}(m_a), \underline{S}, \underline{Y}) \notin A_\varepsilon^n \mid \underline{A} = \underline{a}(m_a)\bigr\} && \text{(A37)} \\
&\leq \sum_{m_a} \frac{\varepsilon}{M_a} && \text{(A38)} \\
&= \varepsilon, && \text{(A39)}
\end{aligned}$$
where we simplified the notation by replacing $\underline{s}'' \in \mathcal{S}^{n_2}$ by $\underline{s}''$ and $m_s = 1, 2, \ldots, M_s$ by $m_s$ in (A35). We will follow this convention in the rest of the paper. To obtain (A35), we used the fact that $\underline{S}''$ is uniform; more precisely, $p(\underline{s}'') = 2^{-n_2}$. To obtain (A36), we used the fact that the complete sign sequence $\underline{S}$ is also uniform, and then $2^{-n}\, p(\underline{y}|\underline{a}(m_a), \underline{s}'(m_s)\underline{s}'') = p(\underline{s}'(m_s)\underline{s}'', \underline{y}|\underline{a}(m_a))$. Then, (A38) is a direct consequence of Definition 1, since $\underline{a}(m_a) \in \mathcal{B}_{SY,\varepsilon}^n(A)$ for $m_a = 1, 2, \ldots, M_a$.
For the error of the second kind, we obtain:
$$\begin{aligned}
\bar{P}_e^{(2)} &\leq \sum_{m_a} \frac{1}{M_a} \sum_{m_s} \frac{1}{2^{n_1}} \sum_{\underline{s}''} p(\underline{s}'') \sum_{\underline{y}} p(\underline{y}|\underline{a}(m_a), \underline{s}'(m_s)\underline{s}'') \sum_{(k_a,k_s) \neq (m_a,m_s)} \sum_{\tilde{\underline{s}}''} p(\tilde{\underline{s}}'')\, \mathbb{1}\bigl[(\underline{a}(k_a), \underline{s}'(k_s)\tilde{\underline{s}}'', \underline{y}) \in A_\varepsilon^n\bigr] \\
&= M_a\, 2^{n_1} \sum_{m_a, m_s, \underline{s}''} \frac{2^{-n}}{M_a} \sum_{\underline{y}} p(\underline{y}|\underline{a}(m_a), \underline{s}'(m_s)\underline{s}'') \sum_{(k_a,k_s) \neq (m_a,m_s)} \sum_{\tilde{\underline{s}}''} \frac{2^{-n}}{M_a}\, \mathbb{1}\bigl[(\underline{a}(k_a), \underline{s}'(k_s)\tilde{\underline{s}}'', \underline{y}) \in A_\varepsilon^n\bigr] && \text{(A40)} \\
&= M_a\, 2^{n_1} \sum_{m_a, m_s, \underline{s}''} \frac{2^{-n}}{M_a} \sum_{\underline{y}} p(\underline{y}|\underline{a}(m_a), \underline{s}'(m_s)\underline{s}'') \sum_{k_a \neq m_a,\, k_s,\, \tilde{\underline{s}}''} \frac{2^{-n}}{M_a}\, \mathbb{1}\bigl[(\underline{a}(k_a), \underline{s}'(k_s)\tilde{\underline{s}}'', \underline{y}) \in A_\varepsilon^n\bigr] \\
&\quad + 2^{n_1} \sum_{m_a, m_s, \underline{s}''} \frac{2^{-n}}{M_a} \sum_{\underline{y}} p(\underline{y}|\underline{a}(m_a), \underline{s}'(m_s)\underline{s}'') \sum_{k_s \neq m_s,\, \tilde{\underline{s}}''} 2^{-n}\, \mathbb{1}\bigl[(\underline{a}(m_a), \underline{s}'(k_s)\tilde{\underline{s}}'', \underline{y}) \in A_\varepsilon^n\bigr]. && \text{(A41)}
\end{aligned}$$
Here, we replaced nested summations over $m_a$, $m_s$, and $\underline{s}''$ by a single summation over $(m_a, m_s, \underline{s}'')$ for better readability. We will use this notation in the rest of the paper. Then:

(A40) follows from $n = n_1 + n_2$ and from the fact that $\underline{S}''$ is uniform; more precisely, $p(\underline{s}'') = 2^{-n_2}$.

(A41) is obtained by splitting $(k_a, k_s) \neq (m_a, m_s)$ into $k_a \neq m_a,\, k_s$ and $k_a = m_a,\, k_s \neq m_s$.
From (A41), we obtain:
$$\begin{aligned}
\bar{P}_e^{(2)} &\leq M_a\, 2^{n_1}\, 2^{6n\varepsilon} \sum_{m_a, m_s, \underline{s}''} p(\underline{a}(m_a))\, p(\underline{s}'(m_s)\underline{s}'') \sum_{\underline{y}} p(\underline{y}|\underline{a}(m_a), \underline{s}'(m_s)\underline{s}'') \sum_{k_a \neq m_a,\, k_s,\, \tilde{\underline{s}}''} p(\underline{a}(k_a))\, p(\underline{s}'(k_s)\tilde{\underline{s}}'')\, \mathbb{1}\bigl[(\underline{a}(k_a), \underline{s}'(k_s)\tilde{\underline{s}}'', \underline{y}) \in A_\varepsilon^n\bigr] \\
&\quad + 2^{n_1}\, 2^{3n\varepsilon} \sum_{m_a, m_s, \underline{s}''} p(\underline{a}(m_a))\, p(\underline{s}'(m_s)\underline{s}'') \sum_{\underline{y}} p(\underline{y}|\underline{a}(m_a), \underline{s}'(m_s)\underline{s}'') \sum_{k_s \neq m_s,\, \tilde{\underline{s}}''} p(\underline{s}'(k_s)\tilde{\underline{s}}'')\, \mathbb{1}\bigl[(\underline{a}(m_a), \underline{s}'(k_s)\tilde{\underline{s}}'', \underline{y}) \in A_\varepsilon^n\bigr] && \text{(A42)} \\
&\leq M_a\, 2^{n_1}\, 2^{6n\varepsilon} \sum_{\underline{a},\, \underline{s}'\underline{s}''} p(\underline{a})\, p(\underline{s}'\underline{s}'') \sum_{\underline{y}} p(\underline{y}|\underline{a}, \underline{s}'\underline{s}'') \sum_{\tilde{\underline{a}},\, \tilde{\underline{s}}'\tilde{\underline{s}}''} p(\tilde{\underline{a}})\, p(\tilde{\underline{s}}'\tilde{\underline{s}}'')\, \mathbb{1}\bigl[(\tilde{\underline{a}}, \tilde{\underline{s}}'\tilde{\underline{s}}'', \underline{y}) \in A_\varepsilon^n\bigr] \\
&\quad + 2^{n_1}\, 2^{3n\varepsilon} \sum_{\underline{a},\, \underline{s}'\underline{s}''} p(\underline{a})\, p(\underline{s}'\underline{s}'') \sum_{\underline{y}} p(\underline{y}|\underline{a}, \underline{s}'\underline{s}'') \sum_{\tilde{\underline{s}}'\tilde{\underline{s}}''} p(\tilde{\underline{s}}'\tilde{\underline{s}}'')\, \mathbb{1}\bigl[(\underline{a}, \tilde{\underline{s}}'\tilde{\underline{s}}'', \underline{y}) \in A_\varepsilon^n\bigr] && \text{(A43)} \\
&= M_a\, 2^{n_1}\, 2^{6n\varepsilon} \sum_{\underline{a}, \underline{s}} p(\underline{a})\, p(\underline{s}) \sum_{\underline{y}} p(\underline{y}|\underline{a}, \underline{s}) \sum_{\tilde{\underline{a}}, \tilde{\underline{s}}} p(\tilde{\underline{a}})\, p(\tilde{\underline{s}})\, \mathbb{1}\bigl[(\tilde{\underline{a}}, \tilde{\underline{s}}, \underline{y}) \in A_\varepsilon^n\bigr] \\
&\quad + 2^{n_1}\, 2^{3n\varepsilon} \sum_{\underline{a}, \underline{s}} p(\underline{a})\, p(\underline{s}) \sum_{\underline{y}} p(\underline{y}|\underline{a}, \underline{s}) \sum_{\tilde{\underline{s}}} p(\tilde{\underline{s}})\, \mathbb{1}\bigl[(\underline{a}, \tilde{\underline{s}}, \underline{y}) \in A_\varepsilon^n\bigr], && \text{(A44)}
\end{aligned}$$
where:

(A42) follows, for $n$ sufficiently large and for $\underline{a} \in \mathcal{B}_{SY,\varepsilon}^n(A)$, from:
$$\frac{1}{M_a} \overset{\text{(A31)}}{\leq} 2^{3n\varepsilon}\, p(\underline{a}) \tag{A45}$$
and from $p(\underline{s}'\underline{s}'') = 2^{-n}$;

(A43) follows from summing over $\underline{a} \in \mathcal{A}^n$ instead of over $\underline{a}(m_a) \in \mathcal{B}_\varepsilon^n$, and over $\tilde{\underline{a}} \in \mathcal{A}^n$ instead of over $\underline{a}(k_a) \in \mathcal{B}_\varepsilon^n$ for $k_a \neq m_a$. Moreover, it follows from summing over $\tilde{\underline{s}}' \in \mathcal{S}^{n_1}$ instead of over $\underline{s}'(k_s)$ for $k_s = 1, 2, \ldots, M_s$ with $k_s \neq m_s$;

(A44) follows from substituting $\underline{s}$ for $\underline{s}'\underline{s}''$ and $\tilde{\underline{s}}$ for $\tilde{\underline{s}}'\tilde{\underline{s}}''$.
Finally, from (A44), we obtain:
$$\begin{aligned}
\bar{P}_e^{(2)} &\leq M_a\, 2^{n_1}\, 2^{6n\varepsilon} \sum_{\underline{y}} p(\underline{y}) \sum_{\tilde{\underline{x}}} p(\tilde{\underline{x}})\, \mathbb{1}\bigl[(\tilde{\underline{x}}, \underline{y}) \in A_\varepsilon^n\bigr] + 2^{n_1}\, 2^{3n\varepsilon} \sum_{\underline{a}, \underline{y}} p(\underline{a}, \underline{y}) \sum_{\tilde{\underline{s}}} p(\tilde{\underline{s}})\, \mathbb{1}\bigl[(\underline{a}, \tilde{\underline{s}}, \underline{y}) \in A_\varepsilon^n\bigr] && \text{(A46)} \\
&\leq 2^{n(H(A)+\varepsilon)}\, 2^{n\gamma}\, 2^{6n\varepsilon}\, |A_\varepsilon^n(XY)|\, 2^{-n(H(X)-\varepsilon)}\, 2^{-n(H(Y)-\varepsilon)} + 2^{n\gamma}\, 2^{3n\varepsilon}\, |A_\varepsilon^n(SAY)|\, 2^{-n(H(A,Y)-\varepsilon)}\, 2^{-n(H(S)-\varepsilon)} && \text{(A47)} \\
&\leq 2^{n(H(A)+7\varepsilon)}\, 2^{n\gamma}\, 2^{n(H(X,Y)+\varepsilon)}\, 2^{-n(H(X)-\varepsilon)}\, 2^{-n(H(Y)-\varepsilon)} + 2^{n\gamma}\, 2^{3n\varepsilon}\, 2^{n(H(S,A,Y)+\varepsilon)}\, 2^{-n(H(A,Y)-\varepsilon)}\, 2^{-n(H(S)-\varepsilon)} && \text{(A48)} \\
&= 2^{n(H(A)+\gamma+10\varepsilon-I(X;Y))} + 2^{n(\gamma+6\varepsilon-I(S;A,Y))}. && \text{(A49)}
\end{aligned}$$
Here, we substituted $n_1 = n\gamma$ in (A47). Then:

(A46) is obtained by working out the summations over $\underline{a}$, $\underline{s}$ in the first part and over $\underline{s}$ in the second part. Moreover, we replaced $\tilde{\underline{a}}\,\tilde{\underline{s}}$ with $\tilde{\underline{x}}$.

(A47) is obtained using, for the first part, $M_a = |\mathcal{B}_\varepsilon^n(A)| \leq 2^{n(H(A)+\varepsilon)}$, i.e., the $\mathcal{B}$-typicality property P3, and (12). For the second part, we used (12) for $p(\tilde{\underline{s}})$ and (16) for $p(\underline{a}, \underline{y})$.

(A48) follows from (15) and its extension to jointly typical triplets; more precisely, $|A_\varepsilon^n(SAY)| \leq 2^{n(H(S,A,Y)+\varepsilon)}$.
The conclusion from (A49) is that, for $H(A) + \gamma < I(X;Y) - 10\varepsilon$ and $\gamma < I(S;A,Y) - 6\varepsilon$, the error probability of the second kind satisfies:
$$\bar{P}_e^{(2)} \leq \varepsilon \tag{A50}$$
for $n$ large enough. The first constraint, i.e., $H(A) + \gamma < I(X;Y) - 10\varepsilon$, already implies the second constraint, i.e., $\gamma < I(S;A,Y) - 6\varepsilon$, since:
$$\begin{aligned}
\gamma &< I(X;Y) - H(A) - 10\varepsilon \leq I(S,A;Y) - I(A;Y) - 10\varepsilon && \text{(A51)} \\
&= I(S;Y|A) - 10\varepsilon && \text{(A52)} \\
&\leq I(S;Y|A) + I(S;A) - 10\varepsilon && \text{(A53)} \\
&= I(S;A,Y) - 10\varepsilon, && \text{(A54)}
\end{aligned}$$
where we substituted $(S,A)$ for $X$ in (A51). Here, (A51) follows from [9] (Thm. 2.4.1), and both (A52) and (A54) follow from the chain rule for MI [9] (Thm. 2.5.2).

Using (A39) and (A50) in (A14), we find that the total error probability averaged over all possible modified sign-codes satisfies $\bar{P}_e \leq 2\varepsilon$ for $n$ large enough. This implies the existence of a modified sign-code with total error probability $P_e = \Pr\{(\hat{M}_a, \hat{M}_s) \neq (M_a, M_s)\} \leq 2\varepsilon$. This holds for all $\varepsilon > 0$, and thus, the rate:
$$R = H(A) + \gamma \leq I(X;Y) \tag{A55}$$
is achievable with modified sign-coding, which concludes the proof of Theorem 2.

Appendix B.3. Proof of Theorem 3

For the error of the first kind, we can write:
$$\begin{aligned}
\bar{P}_e^{(1)} &= \sum_{m_a} \frac{1}{M_a} \sum_{\underline{s}} p(\underline{s}) \sum_{\underline{y}} p(\underline{y}|\underline{\mathbf{b}}(m_a), \underline{s})\, \mathbb{1}\Bigl[\bigl((\underline{b}_1(m_a), \underline{y}) \notin A_\varepsilon^n\bigr) \vee \bigl((\underline{b}_2(m_a), \underline{y}) \notin A_\varepsilon^n\bigr) \vee \cdots \vee \bigl((\underline{b}_m(m_a), \underline{y}) \notin A_\varepsilon^n\bigr) \vee \bigl((\underline{s}, \underline{y}) \notin A_\varepsilon^n\bigr)\Bigr] && \text{(A56)} \\
&\leq \sum_{m_a} \frac{1}{M_a} \sum_{\underline{s}} \sum_{\underline{y}} p(\underline{s}, \underline{y}|\underline{\mathbf{b}}(m_a))\, \mathbb{1}\bigl[(\underline{\mathbf{b}}(m_a), \underline{s}, \underline{y}) \notin A_\varepsilon^n\bigr] && \text{(A57)} \\
&= \sum_{m_a} \frac{1}{M_a} \Pr\bigl\{(\underline{\mathbf{b}}(m_a), \underline{S}, \underline{Y}) \notin A_\varepsilon^n \mid \underline{\mathbf{B}} = \underline{\mathbf{b}}(m_a)\bigr\} && \text{(A58)} \\
&\leq \sum_{m_a} \frac{\varepsilon}{M_a} && \text{(A59)} \\
&= \varepsilon, && \text{(A60)}
\end{aligned}$$
where we used $\underline{\mathbf{b}}(m_a)$ to denote $(\underline{b}_1(m_a), \underline{b}_2(m_a), \ldots, \underline{b}_m(m_a))$ in (A56) and $\underline{\mathbf{B}}$ to denote $(\underline{B}_1, \underline{B}_2, \ldots, \underline{B}_m)$ in (A58). Then, we used $p(\underline{s})\, p(\underline{y}|\underline{\mathbf{b}}(m_a), \underline{s}) = p(\underline{s}, \underline{y}|\underline{\mathbf{b}}(m_a))$ in (A57). Here, (A57) follows from the fact that if at least one of $\underline{b}_1(m_a), \underline{b}_2(m_a), \ldots, \underline{b}_m(m_a)$ or $\underline{s}$ is not jointly typical with $\underline{y}$, then $(\underline{\mathbf{b}}(m_a), \underline{s}, \underline{y})$ is not jointly typical. Then, (A59) is a direct consequence of Definition 1, since $\underline{\mathbf{b}}(m_a) \in \mathcal{B}_{SY,\varepsilon}^n(B_1 B_2 \cdots B_m)$ for $m_a = 1, 2, \ldots, M_a$.
For the error of the second kind, we can write (abbreviating unchanged indicator arguments as $[\cdots]$):
$$\begin{aligned}
\bar{P}_e^{(2)} &\leq \sum_{m_a} \frac{1}{M_a} \sum_{\underline{s}} p(\underline{s}) \sum_{\underline{y}} p(\underline{y}|\underline{\mathbf{b}}(m_a), \underline{s}) \sum_{k_a \neq m_a} \sum_{\tilde{\underline{s}}} p(\tilde{\underline{s}})\, \mathbb{1}\bigl[(\underline{b}_1(k_a), \underline{y}) \in A_\varepsilon^n, \ldots, (\underline{b}_m(k_a), \underline{y}) \in A_\varepsilon^n, (\tilde{\underline{s}}, \underline{y}) \in A_\varepsilon^n\bigr] \\
&= M_a \sum_{m_a} \sum_{\underline{s}} \frac{p(\underline{s})}{M_a} \sum_{\underline{y}} p(\underline{y}|\underline{\mathbf{b}}(m_a), \underline{s}) \sum_{k_a \neq m_a} \sum_{\tilde{\underline{s}}} \frac{p(\tilde{\underline{s}})}{M_a}\, \mathbb{1}\bigl[\cdots\bigr] \\
&\leq M_a\, 2^{6n\varepsilon} \sum_{m_a} \sum_{\underline{s}} p(\underline{\mathbf{b}}(m_a))\, p(\underline{s}) \sum_{\underline{y}} p(\underline{y}|\underline{\mathbf{b}}(m_a), \underline{s}) \sum_{k_a \neq m_a} \sum_{\tilde{\underline{s}}} p(\tilde{\underline{s}})\, p(\underline{\mathbf{b}}(k_a))\, \mathbb{1}\bigl[\cdots\bigr] && \text{(A61)} \\
&\leq M_a\, 2^{6n\varepsilon} \sum_{\underline{\mathbf{b}} \in \{0,1\}^{mn}} \sum_{\underline{s}} p(\underline{\mathbf{b}})\, p(\underline{s}) \sum_{\underline{y}} p(\underline{y}|\underline{\mathbf{b}}, \underline{s}) \sum_{\tilde{\underline{\mathbf{b}}} \in \{0,1\}^{mn}} \sum_{\tilde{\underline{s}}} p(\tilde{\underline{s}})\, p(\tilde{\underline{\mathbf{b}}})\, \mathbb{1}\bigl[(\tilde{\underline{b}}_1, \underline{y}) \in A_\varepsilon^n, \ldots, (\tilde{\underline{b}}_m, \underline{y}) \in A_\varepsilon^n, (\tilde{\underline{s}}, \underline{y}) \in A_\varepsilon^n\bigr] && \text{(A62)} \\
&= M_a\, 2^{6n\varepsilon} \sum_{\underline{y}} p(\underline{y}) \sum_{\tilde{\underline{\mathbf{b}}}, \tilde{\underline{s}}} p(\tilde{\underline{\mathbf{b}}}, \tilde{\underline{s}})\, \mathbb{1}\bigl[(\tilde{\underline{b}}_1, \underline{y}) \in A_\varepsilon^n, \ldots, (\tilde{\underline{b}}_m, \underline{y}) \in A_\varepsilon^n, (\tilde{\underline{s}}, \underline{y}) \in A_\varepsilon^n\bigr] && \text{(A63)} \\
&\leq 2^{n(H(\mathbf{B})+7\varepsilon)}\, |A_\varepsilon^n(Y)|\, 2^{-n(H(Y)-\varepsilon)}\, |A_\varepsilon^n(B_1|\underline{y})|\, |A_\varepsilon^n(B_2|\underline{y})| \cdots |A_\varepsilon^n(B_m|\underline{y})|\, |A_\varepsilon^n(S|\underline{y})|\, 2^{-n(H(\mathbf{B},S)-\varepsilon)} && \text{(A64)} \\
&\leq 2^{n(H(\mathbf{B})+7\varepsilon)}\, 2^{n(H(Y)+\varepsilon)}\, 2^{-n(H(Y)-\varepsilon)}\, 2^{n(H(B_1|Y)+H(B_2|Y)+\cdots+H(B_m|Y)+H(S|Y)+2(m+1)\varepsilon)}\, 2^{-n(H(\mathbf{B},S)-\varepsilon)} && \text{(A65)} \\
&= 2^{n(H(\mathbf{B})-H(\mathbf{B},S)+H(B_1|Y)+H(B_2|Y)+\cdots+H(B_m|Y)+H(S|Y)+(12+2m)\varepsilon)}, && \text{(A66)}
\end{aligned}$$
where we used $\underline{\mathbf{b}}$ to denote $(\underline{b}_1, \underline{b}_2, \ldots, \underline{b}_m)$ and $\tilde{\underline{\mathbf{b}}}$ to denote $(\tilde{\underline{b}}_1, \tilde{\underline{b}}_2, \ldots, \tilde{\underline{b}}_m)$ in (A62). We also used $\mathbf{B}$ to denote $(B_1, B_2, \ldots, B_m)$ in (A64). Finally, we simplified the notation by replacing $\tilde{\underline{\mathbf{b}}} \in \{0,1\}^{mn}$ by $\tilde{\underline{\mathbf{b}}}$ in (A63). Then:

(A61) follows, for $n$ sufficiently large and for $\underline{\mathbf{b}} \in \mathcal{B}_{SY,\varepsilon}^n(\mathbf{B})$, from $1/M_a \leq 2^{3n\varepsilon}\, p(\underline{\mathbf{b}})$, which can be shown in the same way as (A31) was derived.

(A62) follows from summing over $\underline{\mathbf{b}} \in \{0,1\}^{mn}$ instead of over $\underline{\mathbf{b}}(m_a) \in \mathcal{B}_\varepsilon^n$, and over $\tilde{\underline{\mathbf{b}}} \in \{0,1\}^{mn}$ instead of over $\underline{\mathbf{b}}(k_a) \in \mathcal{B}_\varepsilon^n$ for $k_a \neq m_a$.

(A63) is obtained by working out the summations over $\underline{b}_1, \underline{b}_2, \ldots, \underline{b}_m$, and $\underline{s}$.

(A64) follows from $M_a = |\mathcal{B}_\varepsilon^n(\mathbf{B})| \leq 2^{n(H(\mathbf{B})+\varepsilon)}$, i.e., the $\mathcal{B}$-typicality property P3, from (12), and from (17).

(A65) follows from (11) and (19).
The conclusion from (A66) is that, for:
$$H(\mathbf{B}) < H(\mathbf{B},S) - H(S|Y) - \sum_{i=1}^{m} H(B_i|Y) - (12+2m)\varepsilon = R_{\mathrm{BMD}}\bigl(p(\mathbf{b}, s)\bigr) - (12+2m)\varepsilon,$$
the error probability of the second kind satisfies:
$$\bar{P}_e^{(2)} \leq \varepsilon \tag{A67}$$
for $n$ large enough. Using (A60) and (A67) in (A14), we find that the total error probability averaged over all possible sign-codes satisfies $\bar{P}_e \leq 2\varepsilon$ for $n$ large enough. This implies the existence of a sign-code with total error probability $P_e = \Pr\{\hat{M}_a \neq M_a\} \leq 2\varepsilon$. This holds for all $\varepsilon > 0$, and thus, the rate:
$$R = H(\mathbf{B}) \leq R_{\mathrm{BMD}} \tag{A68}$$
is achievable with sign-coding and BMD, which concludes the proof of Theorem 3.

Appendix B.4. Proof of Theorem 4

For the error of the first kind, we can write:
$$\begin{aligned}
\bar{P}_e^{(1)} &= \sum_{m_a} \frac{1}{M_a} \sum_{m_s} \frac{1}{2^{n_1}} \sum_{\underline{s}''} p(\underline{s}'') \sum_{\underline{y}} p(\underline{y}|\underline{\mathbf{b}}(m_a), \underline{s}'(m_s)\underline{s}'')\, \mathbb{1}\Bigl[\bigvee_{i=1}^{m} \bigl((\underline{b}_i(m_a), \underline{y}) \notin A_\varepsilon^n\bigr) \vee \bigl((\underline{s}'(m_s)\underline{s}'', \underline{y}) \notin A_\varepsilon^n\bigr)\Bigr] && \text{(A68)} \\
&= \sum_{m_a} \frac{1}{M_a} \sum_{m_s} \sum_{\underline{s}''} 2^{-n} \sum_{\underline{y}} p(\underline{y}|\underline{\mathbf{b}}(m_a), \underline{s}'(m_s)\underline{s}'')\, \mathbb{1}\Bigl[\bigvee_{i=1}^{m} \bigl((\underline{b}_i(m_a), \underline{y}) \notin A_\varepsilon^n\bigr) \vee \bigl((\underline{s}'(m_s)\underline{s}'', \underline{y}) \notin A_\varepsilon^n\bigr)\Bigr] && \text{(A69)} \\
&\leq \sum_{m_a} \frac{1}{M_a} \sum_{m_s} \sum_{\underline{s}''} \sum_{\underline{y}} p(\underline{s}'(m_s)\underline{s}'', \underline{y}|\underline{\mathbf{b}}(m_a))\, \mathbb{1}\bigl[(\underline{\mathbf{b}}(m_a), \underline{s}'(m_s)\underline{s}'', \underline{y}) \notin A_\varepsilon^n\bigr] && \text{(A70)} \\
&= \sum_{m_a} \frac{1}{M_a} \Pr\bigl\{(\underline{\mathbf{b}}(m_a), \underline{S}, \underline{Y}) \notin A_\varepsilon^n \mid \underline{\mathbf{B}} = \underline{\mathbf{b}}(m_a)\bigr\} \leq \sum_{m_a} \frac{\varepsilon}{M_a} && \text{(A71)} \\
&= \varepsilon. && \text{(A72)}
\end{aligned}$$
Here, to obtain (A69), we used the fact that $\underline{S}''$ is uniform; more precisely, $p(\underline{s}'') = 2^{-n_2}$. Then, we used $2^{-n}\, p(\underline{y}|\underline{\mathbf{b}}(m_a), \underline{s}'(m_s)\underline{s}'') = p(\underline{s}'(m_s)\underline{s}'', \underline{y}|\underline{\mathbf{b}}(m_a))$ in (A70). Furthermore, (A70) also follows from the fact that if at least one of $\underline{b}_1(m_a), \underline{b}_2(m_a), \ldots, \underline{b}_m(m_a)$ or $\underline{s}'(m_s)\underline{s}''$ is not jointly typical with $\underline{y}$, then $(\underline{\mathbf{b}}(m_a), \underline{s}'(m_s)\underline{s}'', \underline{y})$ is not jointly typical. Then, (A71) is a direct consequence of Definition 1, since $\underline{\mathbf{b}}(m_a) \in \mathcal{B}_{SY,\varepsilon}^n(B_1 B_2 \cdots B_m)$ for $m_a = 1, 2, \ldots, M_a$.
For the error of the second kind, we can write:
$$\begin{aligned}
\bar{P}_e^{(2)} &\leq \sum_{m_a} \frac{1}{M_a} \sum_{m_s} \frac{1}{2^{n_1}} \sum_{\underline{s}''} p(\underline{s}'') \sum_{\underline{y}} p(\underline{y}|\underline{\mathbf{b}}(m_a), \underline{s}'(m_s)\underline{s}'') \sum_{(k_a,k_s) \neq (m_a,m_s)} \sum_{\tilde{\underline{s}}''} p(\tilde{\underline{s}}'')\, \mathbb{1}\Bigl[\bigwedge_{i=1}^{m} \bigl((\underline{b}_i(k_a), \underline{y}) \in A_\varepsilon^n\bigr) \wedge \bigl((\underline{s}'(k_s)\tilde{\underline{s}}'', \underline{y}) \in A_\varepsilon^n\bigr)\Bigr] \\
&= M_a\, 2^{n_1} \sum_{m_a, m_s, \underline{s}''} \frac{2^{-n}}{M_a} \sum_{\underline{y}} p(\underline{y}|\underline{\mathbf{b}}(m_a), \underline{s}'(m_s)\underline{s}'') \sum_{(k_a,k_s) \neq (m_a,m_s)} \sum_{\tilde{\underline{s}}''} \frac{2^{-n}}{M_a}\, \mathbb{1}\Bigl[\bigwedge_{i=1}^{m} \bigl((\underline{b}_i(k_a), \underline{y}) \in A_\varepsilon^n\bigr) \wedge \bigl((\underline{s}'(k_s)\tilde{\underline{s}}'', \underline{y}) \in A_\varepsilon^n\bigr)\Bigr] && \text{(A73)} \\
&= M_a\, 2^{n_1} \sum_{m_a, m_s, \underline{s}''} \frac{2^{-n}}{M_a} \sum_{\underline{y}} p(\underline{y}|\underline{\mathbf{b}}(m_a), \underline{s}'(m_s)\underline{s}'') \sum_{k_a \neq m_a,\, k_s,\, \tilde{\underline{s}}''} \frac{2^{-n}}{M_a}\, \mathbb{1}\Bigl[\bigwedge_{i=1}^{m} \bigl((\underline{b}_i(k_a), \underline{y}) \in A_\varepsilon^n\bigr) \wedge \bigl((\underline{s}'(k_s)\tilde{\underline{s}}'', \underline{y}) \in A_\varepsilon^n\bigr)\Bigr] \\
&\quad + 2^{n_1} \sum_{m_a, m_s, \underline{s}''} \frac{2^{-n}}{M_a} \sum_{\underline{y}} p(\underline{y}|\underline{\mathbf{b}}(m_a), \underline{s}'(m_s)\underline{s}'') \sum_{k_s \neq m_s,\, \tilde{\underline{s}}''} 2^{-n}\, \mathbb{1}\Bigl[\bigwedge_{i=1}^{m} \bigl((\underline{b}_i(m_a), \underline{y}) \in A_\varepsilon^n\bigr) \wedge \bigl((\underline{s}'(k_s)\tilde{\underline{s}}'', \underline{y}) \in A_\varepsilon^n\bigr)\Bigr], && \text{(A74)}
\end{aligned}$$
where (A73) follows from $n = n_1 + n_2$ and from the fact that $\underline{S}''$ is uniform; more precisely, $p(\underline{s}'') = 2^{-n_2}$. Then, (A74) is obtained by splitting $(k_a, k_s) \neq (m_a, m_s)$ into $k_a \neq m_a,\, k_s$ and $k_a = m_a,\, k_s \neq m_s$.
From (A74), we obtain (abbreviating unchanged indicator arguments as $[\cdots]$):
$$\begin{aligned}
\bar{P}_e^{(2)} &\leq M_a\, 2^{n_1}\, 2^{6n\varepsilon} \sum_{m_a, m_s, \underline{s}''} p(\underline{\mathbf{b}}(m_a))\, p(\underline{s}'(m_s)\underline{s}'') \sum_{\underline{y}} p(\underline{y}|\underline{\mathbf{b}}(m_a), \underline{s}'(m_s)\underline{s}'') \sum_{k_a \neq m_a,\, k_s,\, \tilde{\underline{s}}''} p(\underline{\mathbf{b}}(k_a))\, p(\underline{s}'(k_s)\tilde{\underline{s}}'')\, \mathbb{1}\bigl[\cdots\bigr] \\
&\quad + 2^{n_1}\, 2^{3n\varepsilon} \sum_{m_a, m_s, \underline{s}''} p(\underline{\mathbf{b}}(m_a))\, p(\underline{s}'(m_s)\underline{s}'') \sum_{\underline{y}} p(\underline{y}|\underline{\mathbf{b}}(m_a), \underline{s}'(m_s)\underline{s}'') \sum_{k_s \neq m_s,\, \tilde{\underline{s}}''} p(\underline{s}'(k_s)\tilde{\underline{s}}'')\, \mathbb{1}\bigl[\cdots\bigr] && \text{(A75)} \\
&\leq M_a\, 2^{n_1}\, 2^{6n\varepsilon} \sum_{\underline{\mathbf{b}},\, \underline{s}'\underline{s}''} p(\underline{\mathbf{b}})\, p(\underline{s}'\underline{s}'') \sum_{\underline{y}} p(\underline{y}|\underline{\mathbf{b}}, \underline{s}'\underline{s}'') \sum_{\tilde{\underline{\mathbf{b}}},\, \tilde{\underline{s}}'\tilde{\underline{s}}''} p(\tilde{\underline{\mathbf{b}}})\, p(\tilde{\underline{s}}'\tilde{\underline{s}}'')\, \mathbb{1}\Bigl[\bigwedge_{i=1}^{m} \bigl((\tilde{\underline{b}}_i, \underline{y}) \in A_\varepsilon^n\bigr) \wedge \bigl((\tilde{\underline{s}}'\tilde{\underline{s}}'', \underline{y}) \in A_\varepsilon^n\bigr)\Bigr] \\
&\quad + 2^{n_1}\, 2^{3n\varepsilon} \sum_{\underline{\mathbf{b}},\, \underline{s}'\underline{s}''} p(\underline{\mathbf{b}})\, p(\underline{s}'\underline{s}'') \sum_{\underline{y}} p(\underline{y}|\underline{\mathbf{b}}, \underline{s}'\underline{s}'') \sum_{\tilde{\underline{s}}'\tilde{\underline{s}}''} p(\tilde{\underline{s}}'\tilde{\underline{s}}'')\, \mathbb{1}\Bigl[\bigwedge_{i=1}^{m} \bigl((\underline{b}_i, \underline{y}) \in A_\varepsilon^n\bigr) \wedge \bigl((\tilde{\underline{s}}'\tilde{\underline{s}}'', \underline{y}) \in A_\varepsilon^n\bigr)\Bigr] && \text{(A76)} \\
&= M_a\, 2^{n_1}\, 2^{6n\varepsilon} \sum_{\underline{\mathbf{b}}, \underline{s}} p(\underline{\mathbf{b}})\, p(\underline{s}) \sum_{\underline{y}} p(\underline{y}|\underline{\mathbf{b}}, \underline{s}) \sum_{\tilde{\underline{\mathbf{b}}}, \tilde{\underline{s}}} p(\tilde{\underline{\mathbf{b}}})\, p(\tilde{\underline{s}})\, \mathbb{1}\Bigl[\bigwedge_{i=1}^{m} \bigl((\tilde{\underline{b}}_i, \underline{y}) \in A_\varepsilon^n\bigr) \wedge \bigl((\tilde{\underline{s}}, \underline{y}) \in A_\varepsilon^n\bigr)\Bigr] \\
&\quad + 2^{n_1}\, 2^{3n\varepsilon} \sum_{\underline{\mathbf{b}}, \underline{s}} p(\underline{\mathbf{b}})\, p(\underline{s}) \sum_{\underline{y}} p(\underline{y}|\underline{\mathbf{b}}, \underline{s}) \sum_{\tilde{\underline{s}}} p(\tilde{\underline{s}})\, \mathbb{1}\Bigl[\bigwedge_{i=1}^{m} \bigl((\underline{b}_i, \underline{y}) \in A_\varepsilon^n\bigr) \wedge \bigl((\tilde{\underline{s}}, \underline{y}) \in A_\varepsilon^n\bigr)\Bigr], && \text{(A77)}
\end{aligned}$$
where:

(A75) follows, for $n$ sufficiently large and for $\underline{\mathbf{b}} \in \mathcal{B}_{SY,\varepsilon}^n(\mathbf{B})$, from $1/M_a \leq 2^{3n\varepsilon}\, p(\underline{\mathbf{b}})$ and from $p(\underline{s}'\underline{s}'') = 2^{-n}$;

(A76) follows from summing over $\underline{\mathbf{b}} \in \{0,1\}^{mn}$ instead of over $\underline{\mathbf{b}}(m_a) \in \mathcal{B}_\varepsilon^n$, and over $\tilde{\underline{\mathbf{b}}} \in \{0,1\}^{mn}$ instead of over $\underline{\mathbf{b}}(k_a) \in \mathcal{B}_\varepsilon^n$ for $k_a \neq m_a$. Moreover, it follows from summing over $\tilde{\underline{s}}' \in \mathcal{S}^{n_1}$ instead of over $\underline{s}'(k_s)$ for $k_s = 1, 2, \ldots, M_s$ with $k_s \neq m_s$;

(A77) follows from substituting $\underline{s}$ for $\underline{s}'\underline{s}''$ and $\tilde{\underline{s}}$ for $\tilde{\underline{s}}'\tilde{\underline{s}}''$.
Finally, from (A77), we obtain:
$$\begin{aligned}
\bar{P}_e^{(2)} &\leq M_a\, 2^{n_1}\, 2^{6n\varepsilon} \sum_{\underline{y}} p(\underline{y}) \sum_{\tilde{\underline{\mathbf{b}}}, \tilde{\underline{s}}} p(\tilde{\underline{\mathbf{b}}}, \tilde{\underline{s}})\, \mathbb{1}\Bigl[\bigwedge_{i=1}^{m} \bigl((\tilde{\underline{b}}_i, \underline{y}) \in A_\varepsilon^n\bigr) \wedge \bigl((\tilde{\underline{s}}, \underline{y}) \in A_\varepsilon^n\bigr)\Bigr] + 2^{n_1}\, 2^{3n\varepsilon} \sum_{\underline{\mathbf{b}}, \underline{y}} p(\underline{\mathbf{b}}, \underline{y}) \sum_{\tilde{\underline{s}}} p(\tilde{\underline{s}})\, \mathbb{1}\Bigl[\bigwedge_{i=1}^{m} \bigl((\underline{b}_i, \underline{y}) \in A_\varepsilon^n\bigr) \wedge \bigl((\tilde{\underline{s}}, \underline{y}) \in A_\varepsilon^n\bigr)\Bigr] && \text{(A78)} \\
&\leq 2^{n(H(\mathbf{B})+\varepsilon)}\, 2^{n\gamma}\, 2^{6n\varepsilon}\, |A_\varepsilon^n(Y)|\, 2^{-n(H(Y)-\varepsilon)} \prod_{i=1}^{m} |A_\varepsilon^n(B_i|\underline{y})|\, |A_\varepsilon^n(S|\underline{y})|\, 2^{-n(H(B_1 B_2 \cdots B_m S)-\varepsilon)} \\
&\quad + 2^{n\gamma}\, 2^{3n\varepsilon}\, |A_\varepsilon^n(Y)|\, 2^{-n(H(\mathbf{B},Y)-\varepsilon)}\, 2^{-n(H(S)-\varepsilon)} \prod_{i=1}^{m} |A_\varepsilon^n(B_i|\underline{y})|\, |A_\varepsilon^n(S|\underline{y})| && \text{(A79)} \\
&\leq 2^{n(H(\mathbf{B})+\varepsilon)}\, 2^{n\gamma}\, 2^{6n\varepsilon}\, 2^{n(H(Y)+\varepsilon)}\, 2^{-n(H(Y)-\varepsilon)} \prod_{i=1}^{m} 2^{n(H(B_i|Y)+2\varepsilon)}\, 2^{n(H(S|Y)+2\varepsilon)}\, 2^{-n(H(\mathbf{B},S)-\varepsilon)} \\
&\quad + 2^{n\gamma}\, 2^{3n\varepsilon}\, 2^{n(H(Y)+\varepsilon)}\, 2^{-n(H(\mathbf{B},Y)-\varepsilon)}\, 2^{-n(H(S)-\varepsilon)} \prod_{i=1}^{m} 2^{n(H(B_i|Y)+2\varepsilon)}\, 2^{n(H(S|Y)+2\varepsilon)} && \text{(A80)} \\
&= 2^{n\bigl(H(\mathbf{B})+\gamma+\sum_{i=1}^{m} H(B_i|Y)+H(S|Y)-H(\mathbf{B},S)+(12+2m)\varepsilon\bigr)} + 2^{n\bigl(\gamma+H(Y)-H(\mathbf{B},Y)-H(S)+\sum_{i=1}^{m} H(B_i|Y)+H(S|Y)+(8+2m)\varepsilon\bigr)}. && \text{(A81)}
\end{aligned}$$
Here, we substituted $n_1 = n\gamma$ in (A79). Then:

(A78) is obtained by working out the summations over $\underline{b}_1, \underline{b}_2, \ldots, \underline{b}_m$, $\underline{s}$ in the first part and over $\underline{s}$ in the second part.

(A79) is obtained using, for the first part, $M_a = |\mathcal{B}_\varepsilon^n(\mathbf{B})| \leq 2^{n(H(\mathbf{B})+\varepsilon)}$, i.e., the $\mathcal{B}$-typicality property P3, (12) for $p(\underline{y})$, and (17) for $p(\tilde{\underline{\mathbf{b}}}, \tilde{\underline{s}})$. For the second part, we used (12) for $p(\tilde{\underline{s}})$ and (17) for $p(\underline{\mathbf{b}}, \underline{y})$.

(A80) follows from (11) and (19).
The conclusion from (A81) is that, for:
$$H(\mathbf{B}) + \gamma \leq R_{\mathrm{BMD}} - (12+2m)\varepsilon \tag{A82}$$
and for:
$$\gamma \leq H(\mathbf{B},Y) + H(S) - H(Y) - \sum_{i=1}^{m} H(B_i|Y) - H(S|Y) - (8+2m)\varepsilon, \tag{A83}$$
the error probability of the second kind satisfies:
$$\bar{P}_e^{(2)} \leq \varepsilon \tag{A84}$$
for $n$ large enough. The second constraint (A83) is already implied by the first constraint (A82), since:
$$\begin{aligned}
\gamma &\leq H(\mathbf{B},Y) + H(S) - H(Y) - \sum_{i=1}^{m} H(B_i|Y) - H(S|Y) - (8+2m)\varepsilon && \text{(A85)} \\
&= H(\mathbf{B},Y) + H(S) - H(Y) - \sum_{i=1}^{m} H(B_i|Y) - H(S|Y) + H(\mathbf{B},S) - H(\mathbf{B},S) - (8+2m)\varepsilon && \text{(A86)} \\
&= H(\mathbf{B},Y) + H(S) - H(Y) + R_{\mathrm{BMD}} - H(\mathbf{B}) - H(S) - (8+2m)\varepsilon && \text{(A87)} \\
&= H(\mathbf{B}|Y) + R_{\mathrm{BMD}} - H(\mathbf{B}) - (8+2m)\varepsilon. && \text{(A88)}
\end{aligned}$$
Since $H(\mathbf{B}|Y) \geq 0$, the right-hand side of (A88) is not smaller than $R_{\mathrm{BMD}} - H(\mathbf{B}) - (12+2m)\varepsilon$, so (A82) indeed implies (A83).

Using (A72) and (A84) in (A14), we find that the total error probability averaged over all possible modified sign-codes satisfies $\bar{P}_e \leq 2\varepsilon$ for $n$ large enough. This implies the existence of a modified sign-code with total error probability $P_e = \Pr\{(\hat{M}_a, \hat{M}_s) \neq (M_a, M_s)\} \leq 2\varepsilon$. This holds for all $\varepsilon > 0$, and thus, the rate:
$$R = H(\mathbf{B}) + \gamma \leq R_{\mathrm{BMD}} \tag{A89}$$
is achievable with modified sign-coding, which concludes the proof of Theorem 4.

References

1. Imai, H.; Hirakawa, S. A new multilevel coding method using error-correcting codes. IEEE Trans. Inf. Theory 1977, 23, 371–377.
2. Wachsmann, U.; Fischer, R.F.H.; Huber, J.B. Multilevel codes: Theoretical concepts and practical design rules. IEEE Trans. Inf. Theory 1999, 45, 1361–1391.
3. Ungerboeck, G. Channel coding with multilevel/phase signals. IEEE Trans. Inf. Theory 1982, 28, 55–67.
4. Zehavi, E. 8-PSK trellis codes for a Rayleigh channel. IEEE Trans. Commun. 1992, 40, 873–884.
5. Caire, G.; Taricco, G.; Biglieri, E. Bit-interleaved coded modulation. IEEE Trans. Inf. Theory 1998, 44, 927–946.
6. IEEE Standard for Information Technology—Telecommunications and Information Exchange between Systems Local and Metropolitan Area Networks—Specific Requirements—Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications; IEEE Std 802.11-2016 (Revision of IEEE Std 802.11-2012); IEEE Standards Association: Piscataway, NJ, USA, 2016; pp. 1–3534.
7. Digital Video Broadcasting (DVB); 2nd Generation Framing Structure, Channel Coding and Modulation Systems for Broadcasting, Interactive Services, News Gathering and Other Broadband Satellite Applications (DVB-S2); ETSI Standard EN 302 307, Rev. 1.2.1; European Telecommunications Standards Institute: Valbonne, France, 2009.
8. Böcherer, G.; Steiner, F.; Schulte, P. Bandwidth efficient and rate-matched low-density parity-check coded modulation. IEEE Trans. Commun. 2015, 63, 4651–4665.
9. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2006.
10. Buchali, F.; Steiner, F.; Böcherer, G.; Schmalen, L.; Schulte, P.; Idler, W. Rate adaptation and reach increase by probabilistically shaped 64-QAM: An experimental demonstration. J. Lightw. Technol. 2016, 34, 1599–1609.
11. Idler, W.; Buchali, F.; Schmalen, L.; Lach, E.; Braun, R.; Böcherer, G.; Schulte, P.; Steiner, F. Field trial of a 1 Tb/s super-channel network using probabilistically shaped constellations. J. Lightw. Technol. 2017, 35, 1399–1406.
12. Böcherer, G. Achievable rates for probabilistic shaping. arXiv 2018, arXiv:1707.01134.
13. Böcherer, G. Principles of Coded Modulation. Habilitation Thesis, TUM Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany, 2018.
14. Amjad, R.A. Information rates and error exponents for probabilistic amplitude shaping. In Proceedings of the 2018 IEEE Information Theory Workshop (ITW), Guangzhou, China, 25–29 November 2018.
15. Gallager, R.G. Information Theory and Reliable Communication; John Wiley & Sons: New York, NY, USA, 1968.
16. Kramer, G. Topics in multi-user information theory. Found. Trends Commun. Inf. Theory 2008, 4, 265–444.
17. Kaplan, G.; Shamai, S. Information rates and error exponents of compound channels with application to antipodal signaling in a fading environment. AEÜ Archiv für Elektronik und Übertragungstechnik 1993, 47, 228–239.
18. Merhav, N.; Kaplan, G.; Lapidoth, A.; Shamai, S. On information rates for mismatched decoders. IEEE Trans. Inf. Theory 1994, 40, 1953–1967.
19. Szczecinski, L.; Alvarado, A. Bit-Interleaved Coded Modulation: Fundamentals, Analysis, and Design; John Wiley & Sons: Chichester, UK, 2015.
20. Martinez, A.; Guillén i Fàbregas, A.; Caire, G.; Willems, F.M.J. Bit-interleaved coded modulation revisited: A mismatched decoding perspective. IEEE Trans. Inf. Theory 2009, 55, 2756–2765.
21. Guillén i Fàbregas, A.; Martinez, A. Bit-interleaved coded modulation with shaping. In Proceedings of the 2010 IEEE Information Theory Workshop, Dublin, Ireland, 30 August–3 September 2010.
22. Alvarado, A.; Brännström, F.; Agrell, E. High SNR bounds for the BICM capacity. In Proceedings of the 2011 IEEE Information Theory Workshop, Paraty, Brazil, 16–20 October 2011.
23. Peng, L. Fundamentals of Bit-Interleaved Coded Modulation and Reliable Source Transmission. Ph.D. Thesis, University of Cambridge, Cambridge, UK, 2012.
24. Böcherer, G. Probabilistic signal shaping for bit-metric decoding. In Proceedings of the 2014 IEEE International Symposium on Information Theory, Honolulu, HI, USA, 29 June–4 July 2014.
25. Böcherer, G. Probabilistic signal shaping for bit-metric decoding. arXiv 2014, arXiv:1401.6190.
26. Böcherer, G. Achievable rates for shaped bit-metric decoding. arXiv 2016, arXiv:1410.8075.
27. Schulte, P.; Böcherer, G. Constant composition distribution matching. IEEE Trans. Inf. Theory 2016, 62, 430–434.
28. Fehenberger, T.; Millar, D.S.; Koike-Akino, T.; Kojima, K.; Parsons, K. Multiset-partition distribution matching. IEEE Trans. Commun. 2019, 67, 1885–1893.
29. Schulte, P.; Steiner, F. Divergence-optimal fixed-to-fixed length distribution matching with shell mapping. IEEE Wirel. Commun. Lett. 2019, 8, 620–623.
30. Gültekin, Y.C.; van Houtum, W.J.; Koppelaar, A.; Willems, F.M.J. Enumerative sphere shaping for wireless communications with short packets. IEEE Trans. Wirel. Commun. 2020, 19, 1098–1112.
31. Amjad, R.A. Information rates and error exponents for probabilistic amplitude shaping. arXiv 2018, arXiv:1802.05973.
32. Shulman, N.; Feder, M. Random coding techniques for nonrandom codes. IEEE Trans. Inf. Theory 1999, 45, 2101–2104.
33. Yeung, R. Information Theory and Network Coding; Springer: Boston, MA, USA, 2008.
Figure 1. Probabilistic amplitude shaping with transmission rate R = k/n + γ bit/1D.
Figure 2. The scope of the random coding experiments considered in this work and in [12,13,14].
Figure 3. Sign-coding structure: sign-coding (coder) is combined with amplitude shaping (shaper). SMD, symbol-metric decoding; BMD, bit-metric decoding.
Figure 4. Shaping layer of the random sign-coding setup with SMD.
Figure 5. Shaping layer of the random sign-coding setup with BMD for M-ASK.
Figure 6. Sign-coding with SMD for 4-ASK. All C_{4-ASK} ≤ 0.562 bit/1D can be achieved with sign-coding. AIR, achievable information rate.
Table 1. Input alphabet and mapping function for 8-ASK.

 A |  7   5   3   1   1   3   5   7
 S | −1  −1  −1  −1   1   1   1   1
 X | −7  −5  −3  −1   1   3   5   7
B1 |  0   0   1   1   1   1   0   0
B2 |  0   1   1   0   0   1   1   0
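As a cross-check, the snippet below (our sketch, not code from the paper) regenerates Table 1 from the decomposition X = A · S and the amplitude labels (B1, B2) read off the table.

```python
# Regenerate Table 1: 8-ASK symbols decomposed into amplitude A, sign S,
# and the amplitude bit labels (B1, B2) as listed in the table.
AMPLITUDE_LABELS = {7: (0, 0), 5: (0, 1), 3: (1, 1), 1: (1, 0)}

rows = {"A": [], "S": [], "X": [], "B1": [], "B2": []}
for x in (-7, -5, -3, -1, 1, 3, 5, 7):
    a, s = abs(x), (1 if x > 0 else -1)
    b1, b2 = AMPLITUDE_LABELS[a]
    for key, val in zip(rows, (a, s, x, b1, b2)):
        rows[key].append(val)

for name, vals in rows.items():
    print(f"{name:>2}: " + " ".join(f"{v:>2}" for v in vals))
```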
