Article

On the Optimality of Interference Decoding Schemes for K-User Gaussian Interference Channels †

by Ragini Chaluvadi 1, Madhuri Bolli 2 and Srikrishna Bhashyam 1,*

1 Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai 600036, India
2 Qualcomm India Pvt Limited, Hyderabad, Telangana 500081, India
* Author to whom correspondence should be addressed.
† This paper is an extended version of our paper published in IEEE ISIT 2019, Paris, France, 7–12 July 2019.
Entropy 2019, 21(11), 1053; https://doi.org/10.3390/e21111053
Submission received: 16 September 2019 / Revised: 23 October 2019 / Accepted: 24 October 2019 / Published: 28 October 2019
(This article belongs to the Special Issue Multiuser Information Theory II)

Abstract:
The sum capacity of the general K-user Gaussian Interference Channel (GIC) is known only when the channel coefficients are such that treating interference as noise (TIN) is optimal. The Han-Kobayashi (HK) scheme is an extensively studied coding scheme for the K-user interference channel (IC). Simple HK schemes are HK schemes with Gaussian signaling, no time sharing and no private-common power splitting. The class of simple HK (S-HK) schemes includes the TIN scheme and schemes that involve various levels of interference decoding and cancellation at each receiver. For the 2-user GIC, simple HK schemes are sufficient to achieve all known sum capacity results—sum capacity under mixed, strong and noisy interference conditions. We derive channel conditions under which simple HK schemes achieve sum capacity for general K-user Gaussian ICs. For the K-user GIC, these results generalize existing sum capacity results for the TIN scheme to the class of simple HK schemes.

1. Introduction

Wireless cellular networks have evolved significantly in terms of both channel-adaptive transmission and interference management. Early cellular systems were based on the interference avoidance approach and relied on static resource allocation. However, current cellular systems allocate resources dynamically based on short-term channel state feedback. Interference decoding and cancellation are also implementable in today’s systems. The K-user Gaussian Interference Channel (GIC) models a wireless network with K transmit-receive pairs. The optimal transmission scheme for the K-user GIC depends on the channel coefficients. Simultaneous channel-aware adaptation of multiple transmit-receive pairs requires a good understanding of the optimal scheme for each channel condition.
The capacity region and sum capacity of the general K-user Gaussian Interference Channel (GIC) are not known. The 2-user GIC is the most well understood special case [1,2,3,4,5,6,7]. The capacity region of the 2-user GIC under strong interference conditions was obtained in References [1,2]. The sum capacity when the interference can be treated as noise was obtained in References [3,4,5,6]. The sum capacity under mixed interference conditions was obtained in Reference [6]. The capacity region of the 2-user GIC was characterized to within one bit in Reference [7] using suitably chosen Han-Kobayashi (HK) schemes [8].
For the general K-user GIC, the channel conditions under which Treating Interference as Noise (TIN) achieves sum capacity were obtained in References [5] (Thm. 3) and [9] (Thm. 9). The sum capacity of some partially connected K-user GICs was derived in References [10,11,12,13] under some channel conditions. Z-like GICs, where the channel matrix is upper triangular with a specific structure, were studied in Reference [10], the cascade GIC was studied in Reference [11], and many-to-one and one-to-many GICs were studied in References [12,13]. Some new outer bounds on the capacity of the K-user GIC were recently derived in Reference [14]. Simple HK (S-HK) schemes, with Gaussian signaling, no time sharing and no common-private power splitting, achieve sum capacity under the channel conditions obtained in References [9,10,11,12,13]. S-HK schemes include the simple and practical TIN scheme, and schemes that involve various levels of interference decoding and cancellation at each receiver, as special cases. For the 2-user GIC, S-HK schemes are sufficient to achieve all known sum capacity results. For the K-user GIC, we will generalize the sum capacity optimality results for the TIN scheme in References [5,9] to S-HK schemes.
There are also some related results for the symmetric GIC and for GICs where the channel coefficients satisfy certain equality conditions [15,16]. For a K-user interference channel, the sum capacity under a strong interference condition was obtained in Reference [16] under conditions that include some equality conditions on the channel coefficients. Such equality conditions are satisfied with probability zero if the channel coefficients are drawn from continuous distributions. The symmetric K-user GIC and the many-to-one GIC have been considered in References [15,17], respectively. Unlike the other results discussed above based on S-HK schemes, in References [15,17], lattice coding and interference alignment are used to obtain the sum capacity when the interference is very strong. For the general asymmetric GIC, only approximate sum capacity and degrees of freedom results have been obtained using interference alignment. Other structured codes, such as coset codes, have also been studied in Reference [18] to show achievable sum rates better than those achieved by HK schemes for some 3-user interference channels. Interference alignment and structured codes are useful under channel conditions where HK schemes are not optimal. We identify channel conditions where S-HK schemes are sum capacity optimal for the K-user GIC.
In this paper, we generalize the sum capacity optimality results for the TIN scheme in References [5,9] to S-HK schemes. In particular, we derive two sets of channel conditions under which S-HK schemes are sum capacity optimal for general K-user GICs. For the first set of channel conditions, we consider schemes where interference is decoded and cancelled before decoding the desired message. For the second set of channel conditions, we consider schemes where one interference signal is jointly decoded with the message signal at one of the receivers. These two sets of channel conditions provide new sum capacity results for several channel conditions under which the sum capacity was not known earlier. Furthermore, existing results for the sum capacity of the 2-user GIC and of some partially connected K-user GICs in References [11,12,13] can be obtained as special cases of these results. To further understand the significance of the results, we evaluate, using Monte Carlo simulations, the probability that these channel conditions for sum capacity are satisfied for random wireless networks. Three different random network models are considered, and we observe that this probability is significant under all three models.

2. Channel Model and Simple HK Schemes

The K-user GIC in standard form [5] is given by
$$ y_i = x_i + \sum_{\substack{j=1 \\ j \ne i}}^{K} h_{ij} x_j + z_i, \qquad i \in [K] \triangleq \{1, \ldots, K\}, \tag{1} $$
where $x_i$ is transmitted by transmitter i, $y_i$ is received by receiver i, $h_{ij}$ is the real channel coefficient from transmitter j to receiver i, and $z_i \sim \mathcal{N}(0, 1)$ is the additive white Gaussian noise at receiver i. Let $P_i$ denote the transmit power constraint at transmitter i. For the 2-user GIC, the HK scheme [8] splits the message at each user into two parts: common and private. The common message is decoded at both receivers, and the private part is decoded only at its corresponding receiver. This scheme can be generalized to the K-user GIC in several ways [19] (Sec. 6.9). In Reference [20], the message at each user of a K-user GIC is split into two parts, a common and a private message, and the common message is decoded at all receivers. A more general scheme can split each message into more than two parts and specify the subset of receivers that decodes each part. We consider schemes where each transmitter has only one message, which is decoded by the intended receiver and by a subset of the remaining K−1 receivers. Equivalently, these schemes can also be described by specifying the messages that are decoded at each receiver. We call such HK schemes with Gaussian signaling, no time sharing and no common-private power splitting simple HK (S-HK) schemes. Each S-HK scheme is specified by the sets $\{I(1), I(2), \ldots, I(K)\}$, $I(i) \subseteq [K] \setminus \{i\}$, $\forall i$. In each such S-HK scheme, at receiver i, the interference from transmitters $j \in I(i)$ is treated as noise and the interference from transmitters $j \in D(i) \triangleq [K] \setminus \{\{i\} \cup I(i)\}$ is decoded. For the TIN scheme, $I(i) = [K] \setminus \{i\}$, $\forall i$.
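As a concrete illustration of this parameterization (not part of the original paper), the short Python sketch below encodes an S-HK scheme by its treat-as-noise sets $I(i)$ and computes the derived quantities $D(i)$, $Q_i$ and the candidate sum rate of Equation (5). The function names and the 3-user example values are ours; rates are in bits per channel use (use natural logarithms for nats).

```python
import numpy as np

def decoded_sets(K, I):
    """D(i) = [K] \\ ({i} U I(i)): interferers decoded (and cancelled) at receiver i."""
    return [set(range(K)) - {i} - set(I[i]) for i in range(K)]

def residual_interference(H, P, I):
    """Q_i = sum_{j in I(i)} h_ij^2 P_j: interference treated as noise at receiver i."""
    return np.array([sum(H[i, j] ** 2 * P[j] for j in I[i]) for i in range(len(P))])

def candidate_sum_rate(H, P, I):
    """Sum rate of Equation (5): sum_i (1/2) log2(1 + P_i / (1 + Q_i))."""
    Q = residual_interference(H, P, I)
    return 0.5 * np.sum(np.log2(1.0 + P / (1.0 + Q)))

# 3-user example (0-indexed): receiver 1 uses TIN, receiver 2 decodes transmitter 3,
# receiver 3 decodes transmitters 1 and 2. H is in standard form (unit diagonal).
H = np.array([[1.0, 0.3, 0.2],
              [0.4, 1.0, 1.1],
              [1.3, 1.2, 1.0]])
P = np.array([1.0, 1.0, 1.0])
I = [{1, 2}, {0}, set()]
print(decoded_sets(3, I), candidate_sum_rate(H, P, I))
```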

3. Sum Capacity Results

In this section, we derive two sets of channel conditions for the general K-user GIC under which sum capacity is achieved by S-HK schemes. The first set of channel conditions is given by Equations (2)–(4) of Theorem 1. The second set of channel conditions is given by Equations (6) and (7)–(10) in Theorems 2 and 3, respectively.
In the result in Theorem 1, we consider the strategy of decoding interference from transmitters in D ( i ) for each i before decoding the desired message. For such decoding to be possible, conditions in (4) need to be satisfied. For the optimality of treating the interference from transmitters in I ( i ) as noise for each i, we get conditions (2) and (3). These conditions correspond to the TIN optimality conditions for the modified GIC where all the links corresponding to decoded interference are removed.
Theorem 1.
For the K-user GIC, the S-HK scheme defined by $I(i) \subseteq [K] \setminus \{i\}$, $i \in [K]$, achieves sum capacity if there exist $\rho_i \in (0,1)$, $i \in [K]$, such that the following conditions are satisfied for all $i \in [K]$:
$$\sum_{j:\, i \in I(j)} \frac{h_{ji}^2}{1 + Q_j - \rho_j^2} \;\le\; \frac{1}{P_i + \frac{(1+Q_i)^2}{\rho_i^2}}, \tag{2}$$
$$\sum_{j \in I(i)} \frac{h_{ij}^2 (1+Q_j)^2}{\rho_j^2} \;\le\; 1 - \rho_i^2, \tag{3}$$
$$\prod_{j \in J} \left(1 + \frac{P_j}{1+Q_j}\right) \;\le\; 1 + \frac{\sum_{j \in J} h_{ij}^2 P_j}{1 + P_i + Q_i}, \quad \forall\, J \subseteq D(i), \tag{4}$$
where $Q_i = \sum_{j \in I(i)} h_{ij}^2 P_j$ and $D(i) = [K] \setminus \{\{i\} \cup I(i)\}$. The sum capacity is
$$C_{sum} = \sum_{i=1}^{K} \frac{1}{2} \log\left(1 + \frac{P_i}{1+Q_i}\right). \tag{5}$$
Proof. 
The detailed proof is given in Appendix A. For the converse, at each receiver $i \in [K]$, we use the genie signal $s_i^n = \{x_i^n + n_i^n,\; x_j^n,\, j \in D(i)\}$, where $n_i^n \sim \mathcal{N}(0, \sigma_i^2 I)$ and $E[n_i z_i] = \rho_i \sigma_i$, $0 < \rho_i < 1$. Here, for each i, we provide the signals $x_j^n$, $j \in D(i)$, in addition to the genie signal $x_i^n + n_i^n$ that is used in Reference [9]. Under (2) and (3), we get the required upper bound following steps similar to the proof in Reference [9] (Theorem 9) but with the above genie signals.
Combining the conditions (2) and (3) for the converse with the conditions (4) for achievability, we get the required result. □
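The following sketch (ours, not the authors' code) is a brute-force feasibility check of conditions (2)–(4) as stated above: condition (4) is checked directly, while conditions (2) and (3) are checked by a grid search over $\rho_1, \ldots, \rho_K$, which is only practical for small K and can miss feasible points lying between grid values.

```python
import itertools
import numpy as np

def thm1_conditions_hold(H, P, I, grid=np.linspace(0.05, 0.95, 19)):
    """Check conditions (2)-(4) of Theorem 1 for one S-HK scheme given by the
    0-indexed treat-as-noise sets I[i]; H must be in standard form (unit diagonal)."""
    K = len(P)
    D = [set(range(K)) - {i} - set(I[i]) for i in range(K)]
    Q = np.array([sum(H[i, j] ** 2 * P[j] for j in I[i]) for i in range(K)])

    # Condition (4): achievability of decoding the interferers in D(i) (rho-independent).
    for i in range(K):
        for r in range(1, len(D[i]) + 1):
            for J in itertools.combinations(sorted(D[i]), r):
                lhs = np.prod([1 + P[j] / (1 + Q[j]) for j in J])
                rhs = 1 + sum(H[i, j] ** 2 * P[j] for j in J) / (1 + P[i] + Q[i])
                if lhs > rhs:
                    return False

    # Conditions (2)-(3): existence of rho_i in (0, 1), checked on a grid.
    for rho in itertools.product(grid, repeat=K):
        rho = np.array(rho)
        ok = True
        for i in range(K):
            c2_lhs = sum(H[j, i] ** 2 / (1 + Q[j] - rho[j] ** 2)
                         for j in range(K) if i in I[j])
            c2_rhs = 1.0 / (P[i] + (1 + Q[i]) ** 2 / rho[i] ** 2)
            c3_lhs = sum(H[i, j] ** 2 * (1 + Q[j]) ** 2 / rho[j] ** 2 for j in I[i])
            if c2_lhs > c2_rhs or c3_lhs > 1 - rho[i] ** 2:
                ok = False
                break
        if ok:
            return True
    return False
```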
Now, we derive the second set of channel conditions under which S-HK schemes are optimal. We do this in two steps. First, we derive general bounds on the achievable sum rate of S-HK schemes in Theorem 2. Unlike Theorem 1, where the interference is decoded and cancelled before decoding the desired signal, here we determine more general bounds on the achievable sum rate for an S-HK scheme. Then, we show in Theorem 3 that one of the sum rate upper bounds in Theorem 2 is also a sum capacity bound under some channel conditions. Therefore, the channel conditions under which we get a sum capacity result comprise (i) the conditions (7)–(10) required to prove the sum capacity upper bound in Theorem 3 and (ii) the conditions (6) under which this sum capacity upper bound is achievable in Theorem 2.
Theorem 2.
For the K-user GIC, the S-HK scheme defined by $\{I(i)\}$ achieves the sum rates S satisfying the following conditions for each $l \in [K]$:
$$l \cdot S \;\le\; \frac{1}{2} \sum_{i \in [K]} \log\left(1 + \frac{\sum_{j \in J_i} h_{ij}^2 P_j}{1 + Q_i}\right) \tag{6}$$
for each choice of $J_i \subseteq [K] \setminus I(i)$ such that $\uplus_{i \in [K]} J_i = S_l$. Here, $S_l$ is the multiset containing l copies of each element of $[K]$, denoted $S_l = \{(a, l) : a \in [K]\}$, and $Q_i = \sum_{j \in I(i)} h_{ij}^2 P_j$.
Proof. 
At each receiver i, the users in $[K] \setminus I(i)$ form a Gaussian MAC with noise variance $1 + Q_i$. The achievable rates of the MAC at each receiver $i \in [K]$ satisfy
$$\sum_{j \in J_i} R_j \;\le\; \frac{1}{2} \log\left(1 + \frac{\sum_{j \in J_i} h_{ij}^2 P_j}{1 + Q_i}\right), \quad \forall\, J_i \subseteq [K] \setminus I(i).$$
Using Fourier-Motzkin elimination, we get the sum rate bounds in (6). □
The maximum sum rate achievable using an S-HK scheme is determined by the smallest of the upper bounds on S in (6). As an example of a bound in the above theorem, consider $l = 1$, $J_m = \{m, k\}$ for some $m, k \in [K]$, $J_k = \emptyset$ and $J_i = \{i\}$ for $i \in [K] \setminus \{m, k\}$. This gives the following bound on the sum rate:
$$\frac{1}{2} \log\left(1 + \frac{P_m + h_{mk}^2 P_k}{1 + Q_m}\right) + \sum_{\substack{i=1 \\ i \ne k, m}}^{K} \frac{1}{2} \log\left(1 + \frac{P_i}{1 + Q_i}\right).$$
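A direct, if exponential, way to evaluate the bounds in (6) is to enumerate all admissible choices of $(J_1, \ldots, J_K)$ and keep the tightest bound. The sketch below is our own illustration of this enumeration for small K; it assumes H is in standard form (unit diagonal) and uses 0-indexed sets.

```python
import itertools
import numpy as np

def shk_sum_rate(H, P, I):
    """Maximum sum rate of an S-HK scheme from the bounds in (6): enumerate all
    (J_1,...,J_K) with J_i a subset of [K]\\I(i) whose multiset union contains the
    same number l >= 1 of copies of every user, and keep the tightest bound."""
    K = len(P)
    Q = np.array([sum(H[i, j] ** 2 * P[j] for j in I[i]) for i in range(K)])
    allowed = [sorted(set(range(K)) - set(I[i])) for i in range(K)]
    subsets = [[frozenset(c) for r in range(len(a) + 1)
                for c in itertools.combinations(a, r)] for a in allowed]
    best = np.inf
    for Js in itertools.product(*subsets):
        counts = np.zeros(K, dtype=int)
        for J in Js:
            for j in J:
                counts[j] += 1
        l = counts[0]
        if l == 0 or np.any(counts != l):
            continue                          # not a valid multiset S_l
        bound = sum(0.5 * np.log2(1 + sum(H[i, j] ** 2 * P[j] for j in Js[i]) / (1 + Q[i]))
                    for i in range(K)) / l
        best = min(best, bound)
    return best
```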
Now, if we can show that one of these inequalities in (6) is also an upper bound on the sum capacity under some conditions, then we get a sum capacity result. In the following theorem, we show that the sum rate bound expression in the example above is a sum capacity upper bound under conditions (7)–(10) (for the choice G ( i ) = I ( i ) in the following theorem).
Theorem 3.
Let $G(i) \subseteq [K] \setminus \{i\}$, $i \in [K]$, and let there be some $m, k \in [K]$ such that $m, k \notin G(i)$, $\forall\, i \in [K] \setminus \{k\}$. For the K-user GIC, if there exist $\rho_i \in (0,1)$, $i \in [K] \setminus \{m\}$, such that the following conditions are satisfied
$$\frac{1}{P_r + \frac{(1+Q_r)^2}{\rho_r^2}} \;\ge\; \sum_{\substack{i:\, r \in G(i) \\ i \notin \{m,k\}}} \frac{h_{ir}^2}{1 + Q_i - \rho_i^2} + \delta_r \frac{h_{mr}^2}{1 + Q_m - \rho_k^2}, \quad \forall\, r \in [K] \setminus \{m, k\}, \tag{7}$$
$$\rho_k h_{mk} = 1 + Q_m, \tag{8}$$
$$\sum_{j \in G(i)} \frac{h_{ij}^2 (1+Q_j)^2}{\rho_j^2} \;\le\; 1 - \rho_i^2, \quad \forall\, i \in [K] \setminus \{m, k\}, \tag{9}$$
$$\sum_{j \in G(m)} \frac{h_{mj}^2 (1+Q_j)^2}{\rho_j^2} \;\le\; 1 - \rho_k^2, \tag{10}$$
where $\delta_r = 1$ if $r \in G(m)$ and $\delta_r = 0$ otherwise, and $Q_i = \sum_{j \in G(i)} h_{ij}^2 P_j$, then the sum capacity $C_{sum}$ is upper bounded by
$$C_{sum} \;\le\; \frac{1}{2} \log\left(1 + \frac{P_m + h_{mk}^2 P_k}{1 + Q_m}\right) + \sum_{\substack{i=1 \\ i \ne k, m}}^{K} \frac{1}{2} \log\left(1 + \frac{P_i}{1 + Q_i}\right).$$
Proof. 
The detailed proof is provided in Appendix B. Here, we present a brief outline and highlight some aspects of the proof. First, we consider a modified channel with no interference at receiver k. The sum capacity of the original channel is upper bounded by the sum capacity of the modified channel. Then, we derive a genie-aided upper bound for the modified channel using the genie signals s i n at receiver i for each i [ K ] as follows:
$$s_i^n = \{x_i^n + n_i^n,\; x_j^n,\, j \in \bar{G}(i)\}, \quad i \in [K] \setminus \{m, k\}, \qquad s_m^n = \{x_j^n,\, j \in \bar{G}(m) \setminus \{k\}\}, \qquad s_k^n = h_{mk} x_k^n + \sum_{j \in G(m)} h_{mj} x_j^n + n_k^n,$$
where $\bar{G}(i) = [K] \setminus \{\{i\} \cup G(i)\}$, $n_i \sim \mathcal{N}(0, \sigma_i^2)$, $E[n_i z_i] = \rho_i \sigma_i$ and $0 < \rho_i < 1$ for each $i \in [K] \setminus \{m\}$, and $\sigma_k = 1$. This choice of genie is then shown to be useful and smart under conditions (7)–(10) to obtain the upper bound in the theorem statement.
Here are some remarks about this proof.
  • The genie signal is different from Theorem 1 for receivers m and k. The genie at receiver k consists of the interference component at receiver m from transmitter k and from the other transmitters that are treated as noise. This choice ensures that $h(s_k^n) = h(y_m^n | s_m^n, x_m^n)$ and helps in cancelling one negative term in the sum capacity upper bound.
  • The assumption that $m, k \notin G(i)$, $\forall\, i \in [K] \setminus \{k\}$, is used as part of the argument that the genie is useful.
  • The first upper bounding step is with a modified channel with no interference at receiver k. It is interesting to note that the sum capacity result for the 2-user GIC under mixed interference in Reference [6] also uses the one-sided GIC as the first step, and we recover these results as special cases of our result.
  • This proof also generalizes the proof for the many-to-one GIC in Reference [13] (Theorem 4) to the general K-user GIC. □
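For completeness, a sketch analogous to the Theorem 1 checker can test whether the upper bound of Theorem 3 applies: condition (8) pins $\rho_k$, and the remaining $\rho_i$ are grid-searched against conditions (7), (9) and (10) as transcribed above. This is our own illustration (assuming $h_{mk} > 0$ and 0-indexed sets), not code from the paper.

```python
import itertools
import numpy as np

def thm3_upper_bound_holds(H, P, G, m, k, grid=np.linspace(0.05, 0.95, 19)):
    """Grid-search check of conditions (7)-(10) for given sets G[i] and receivers m, k.
    Returns True if some rho assignment on the grid satisfies them, in which case the
    sum capacity upper bound of Theorem 3 applies."""
    K = len(P)
    Q = np.array([sum(H[i, j] ** 2 * P[j] for j in G[i]) for i in range(K)])
    # Structural assumption of the theorem: m, k not in G(i) for every i != k.
    if any((m in G[i] or k in G[i]) for i in range(K) if i != k):
        return False
    rho_k = (1 + Q[m]) / H[m, k]          # condition (8) fixes rho_k
    if not 0 < rho_k < 1:
        return False
    others = [i for i in range(K) if i not in (m, k)]
    for vals in itertools.product(grid, repeat=len(others)):
        rho = dict(zip(others, vals))
        # Condition (10): links treated as noise at receiver m.
        ok = sum(H[m, j] ** 2 * (1 + Q[j]) ** 2 / rho[j] ** 2 for j in G[m]) <= 1 - rho_k ** 2
        for r in others:
            if not ok:
                break
            # Condition (7) at receiver r.
            rhs = sum(H[i, r] ** 2 / (1 + Q[i] - rho[i] ** 2) for i in others if r in G[i])
            if r in G[m]:
                rhs += H[m, r] ** 2 / (1 + Q[m] - rho_k ** 2)
            ok = ok and 1.0 / (P[r] + (1 + Q[r]) ** 2 / rho[r] ** 2) >= rhs
            # Condition (9) at receiver r.
            ok = ok and sum(H[r, j] ** 2 * (1 + Q[j]) ** 2 / rho[j] ** 2 for j in G[r]) <= 1 - rho[r] ** 2
        if ok:
            return True
    return False
```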
Some examples of the conditions obtained from Theorems 1–3 are presented in Appendix C.

Relation with Existing Sum Capacity Results

Applying Theorems 1–3 to the special case of 2-user channels, that is, K = 2 , we recover all known sum capacity results for the 2-user GIC in References [1,2,3,4,6]. As special cases, the first set of channel conditions in our paper gives the noisy interference result in References [3,4,6], the very strong interference result in Reference [1] and part of the mixed interference result in Reference [6] (Thm. 10) where the interference is decoded before decoding the message. The second set of channel conditions in our paper gives the remaining part of the mixed interference result in Reference [6] (Thm. 10) where the interference is jointly decoded with the desired message and the strong interference result in Reference [2]. The actual list of channel conditions and the corresponding sum capacity can be found in Appendix D.
Applying Theorems 1–3 to the special cases of partially connected Gaussian ICs, we can recover the sum capacity results in References [11,12,13]. We can also get some new results for the K-user cyclic and cascade GICs. The results corresponding to the two channel conditions for the cyclic, cascade and many-to-one GICs are presented in Appendix E.

4. Numerical Results

In this section, we numerically find the probability that the first set of channel conditions under which S-HK schemes achieve sum capacity, i.e., Equations (2)–(4), are satisfied for three different random wireless network topologies. Theoretical analysis of the probability of the event that the channel conditions (2)–(4) required for the sum capacity result are satisfied is difficult for the following reasons: (1) there are many conditions that describe the event; (2) each condition is a complicated function of the channel coefficients; (3) the variables $\rho_1, \rho_2, \ldots, \rho_K$ in the conditions are not available in closed form. Therefore, we resort to Monte Carlo simulations in this paper.
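A minimal Monte Carlo skeleton for this estimate is sketched below (our illustration): `draw_channel` is a placeholder returning one realization (H, P) for a chosen topology (see the sketches after the topology descriptions), `thm1_conditions_hold` is the checker sketched after Theorem 1, and `all_schemes` enumerates every S-HK scheme for small K.

```python
import itertools
import numpy as np

def all_schemes(K):
    """Every S-HK scheme for a K-user GIC: each receiver i picks any I(i) in [K]\\{i}."""
    per_rx = [[set(c) for r in range(K)
               for c in itertools.combinations(sorted(set(range(K)) - {i}), r)]
              for i in range(K)]
    return [list(choice) for choice in itertools.product(*per_rx)]

def success_probability(draw_channel, schemes, n_trials=1000, seed=0):
    """Monte Carlo estimate of P{conditions (2)-(4) hold for at least one scheme}."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_trials):
        H, P = draw_channel(rng)                 # one random network realization
        if any(thm1_conditions_hold(H, P, I) for I in schemes):
            hits += 1
    return hits / n_trials
```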
Topology 1: In this topology, all K transmitters are placed randomly and uniformly in a circular cell of radius 1 km. We assume that each transmitter has a nominal coverage radius of $r_1$ m. For each transmitter, we then place its receiver randomly and uniformly in its coverage area. This topology is illustrated in Figure 1 for K = 5.
Topology 2 (Motivated by the one-to-many channel): In this topology, the first transmitter is placed at the center of a circle of radius $r_2$ m and all the other transmitters are placed equally spaced on the perimeter of this circle. The nominal coverage radius of the first transmitter is $3r_2$ m and the nominal coverage radius of all other transmitters is $r_2$ m. For each transmitter, we place its receiver randomly and uniformly in its coverage area. This topology is illustrated in Figure 2 for K = 4. In topology 2, the first transmitter has a longer range and, therefore, there is a higher probability that its signal at the other receivers is strong enough to decode.
Topology 3 (Motivated by the cascade channel): In this topology, all transmitters are placed equidistantly along a line, with transmitter-to-transmitter distance $r_3$ m. For each transmitter, we place its corresponding receiver randomly and uniformly along the same line towards its right within $r_3$ m. We assume that the nominal coverage radius of each transmitter is $r_3$ m. This topology is illustrated in Figure 3 for K = 4. In topology 3, each receiver usually observes strong interference only from its adjacent transmitter.
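The three layouts could be generated as follows; this is our reading of the description (uniform sampling over the coverage discs and line segments), and a `draw_channel` wrapper would combine these positions with the Erceg path-loss sketch in Appendix F to produce (H, P).

```python
import numpy as np

def topology1(K, r1, rng, cell_radius=1000.0):
    """Topology 1: K transmitters uniform in a 1 km disc; each receiver uniform
    in a disc of radius r1 (metres) around its transmitter."""
    def uniform_disc(radius, center=(0.0, 0.0)):
        r = radius * np.sqrt(rng.uniform())
        theta = rng.uniform(0, 2 * np.pi)
        return np.array(center) + r * np.array([np.cos(theta), np.sin(theta)])
    tx = np.array([uniform_disc(cell_radius) for _ in range(K)])
    rx = np.array([uniform_disc(r1, t) for t in tx])
    return tx, rx

def topology2(K, r2, rng):
    """Topology 2: transmitter 1 at the centre (coverage 3*r2), the rest equally
    spaced on a circle of radius r2 (coverage r2); receivers uniform in coverage."""
    angles = 2 * np.pi * np.arange(K - 1) / (K - 1)
    tx = np.vstack([[0.0, 0.0], r2 * np.column_stack([np.cos(angles), np.sin(angles)])])
    coverage = np.array([3 * r2] + [r2] * (K - 1))
    rx = []
    for t, c in zip(tx, coverage):
        r = c * np.sqrt(rng.uniform())
        th = rng.uniform(0, 2 * np.pi)
        rx.append(t + r * np.array([np.cos(th), np.sin(th)]))
    return tx, np.array(rx)

def topology3(K, r3, rng):
    """Topology 3: transmitters on a line with spacing r3; each receiver uniform on
    the segment of length r3 to the right of its transmitter."""
    tx = np.column_stack([r3 * np.arange(K), np.zeros(K)])
    rx = tx + np.column_stack([rng.uniform(0, r3, K), np.zeros(K)])
    return tx, rx
```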
For channel fading, we use the Erceg model [21]. We consider two terrain categories, hilly/light tree density (terrain type 1) and hilly/moderate-to-heavy tree density (terrain type 2). The model parameters for the two terrain categories are given in Reference [21] (Table I). We have reproduced the parameter values in Appendix F for completeness. We used an operating frequency of 1.9 GHz, antenna height $h_b = 50$ m and close-in distance $d_0 = 100$ m. The noise floor is taken as -110 dBm, and the transmit power at each transmitter is chosen such that the expected value of the SNR at the boundary of its nominal coverage area is 0 dB.
For generating the plots, we consider 1000 realizations of the channel. With topology 1, for every realization, we randomly place the K transmitters inside the 1 km circular cell and also randomly place each receiver in its corresponding transmitter's coverage area. With topologies 2 and 3, we first fix the transmitter locations and, for every realization, randomly place each receiver in its corresponding transmitter's coverage area.
In Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9, we plot the probability that the conditions (2)–(4) are satisfied for (i) the TIN scheme, (ii) all S-HK schemes except the TIN scheme (denoted S-HK∖TIN) and (iii) all S-HK schemes. Figure 4, Figure 5 and Figure 6 are plotted for terrain type 1, and Figure 7, Figure 8 and Figure 9 are plotted for terrain type 2. From Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9, we observe that the probability that the conditions for optimality are satisfied is significant. In Figure 4 and Figure 7, this probability increases with increasing nominal coverage radius $r_1$, as expected for S-HK∖TIN. In Figure 5, Figure 6, Figure 8 and Figure 9, S-HK schemes have a much higher probability of being optimal compared to the TIN scheme.
In Figure 10, Figure 11 and Figure 12, we plot the expected rate of the TIN scheme, the expected rate of S-HK schemes given that the conditions (2)–(4) are satisfied, and the expected rate of the TIN scheme given that the conditions (2)–(4) are satisfied, for topologies 1, 2 and 3, respectively. Note that when (2)–(4) are satisfied, S-HK schemes are optimal. Therefore, the expected rate in this case is the expected capacity. As expected, from the plots, the expected rate of the TIN scheme is lower than the expected rate of S-HK schemes given that the conditions (2)–(4) are satisfied.
In Figure 13, Figure 14 and Figure 15, we plot the probability that the conditions (2)–(4) are satisfied for (i) all S-HK schemes except the TIN scheme and (ii) all S-HK schemes, except TIN, where at most one strong interference signal is decoded at each receiver (denoted S-HK1∖TIN). Figure 13, Figure 14 and Figure 15 are plotted for topologies 1, 2 and 3, respectively. In topologies 2 and 3, decoding at most one strong interference signal at each receiver is the most important class of S-HK schemes, as expected, since there is mainly one strongly interfering signal in these topologies.
In Figure 16, we plot the success probability of the achievability conditions (4) alone and compare it with the success probability of all the conditions (2)–(4) for all S-HK schemes except TIN, for topology 1 with K = 3 and K = 4. It can be observed that the probability that the achievability conditions are satisfied is much larger than the probability that all the conditions are satisfied. It is worth noting that, whenever the achievability conditions are satisfied, interference can be decoded and the resulting sum rate will be significantly better than the rate achieved by the TIN scheme. Therefore, even when the sum capacity conditions are not satisfied, there is a significant improvement in the sum rate of S-HK schemes with interference decoding compared to the TIN scheme. As K increases, the probability of at least one interference signal being decodable increases, as expected. Numerical results for the 2-user GIC are given in Appendix D.

5. Conclusions

We obtained new sum capacity results for the general K-user Gaussian IC. We derived two sets of channel conditions under which S-HK schemes are sum capacity optimal for the K-user Gaussian IC. This general result also allows us to obtain all existing sum capacity results for 2-user GICs and partially connected GICs like the cascade, many-to-one and one-to-many GICs as special cases. The first sum capacity result corresponds to the case where interference is decoded and cancelled before decoding the desired signal at each receiver. The second sum capacity result corresponds to the case where one interference signal is jointly decoded with the desired message at one of the receivers. At all other receivers, interference is decoded and cancelled before the desired message is decoded.
We also studied the probability that the channel conditions required for the sum capacity result are satisfied in random wireless networks using Monte Carlo simulations. Three different random network models were considered. The numerical results showed that S-HK schemes are optimal with significant probability in the topologies that are considered. By selecting the best S-HK scheme for each channel condition, these results can be used for dynamic interference management and sum rate maximization in wireless networks.

Author Contributions

All authors have contributed to the study and preparation of the article. All authors have read and approved the final manuscript.

Funding

This research was funded by Indian Institute of Technology, Madras.

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A. Proof of Theorem 1

(Achievability) Suppose that each receiver i decodes the interference from the transmitters in D(i) and then decodes the information from the ith transmitter, while treating the interference from the other transmitters in I(i) as noise. The multiple access channel (MAC) constraints for decoding the interference at each receiver i are
$$\sum_{j \in J} R_j \;\le\; \frac{1}{2} \log\left(1 + \frac{\sum_{j \in J} h_{ij}^2 P_j}{1 + P_i + Q_i}\right), \quad \forall\, J \subseteq D(i). \tag{A1}$$
The sum capacity in (5) is achieved if choosing
$$R_i = \frac{1}{2} \log\left(1 + \frac{P_i}{1 + Q_i}\right), \quad i \in [K],$$
satisfies (A1), thereby resulting in the conditions in (4).
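For a single decoded interferer, $J = \{j\}$, this substitution can be made explicit; the following step is our own expansion of the argument and matches the single-interferer conditions used later for the cyclic and cascade channels:
$$\frac{1}{2}\log\left(1 + \frac{P_j}{1+Q_j}\right) \le \frac{1}{2}\log\left(1 + \frac{h_{ij}^2 P_j}{1 + P_i + Q_i}\right) \iff \frac{P_j}{1+Q_j} \le \frac{h_{ij}^2 P_j}{1 + P_i + Q_i} \iff h_{ij}^2 (1 + Q_j) \ge 1 + P_i + Q_i.$$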
(Converse) Consider the genie-aided channel in Figure A1, where each receiver $i \in [K]$ is given the genie signal $s_i^n = \{x_i^n + n_i^n,\; x_j^n,\, j \in D(i)\}$, where $n_i^n \sim \mathcal{N}(0, \sigma_i^2 I)$ and $E[n_i z_i] = \rho_i \sigma_i$, $0 < \rho_i < 1$. For the TIN result in Reference [9], the special case of this genie-aided channel where D(i) is empty for all i was used. Now, the sum capacity can be upper bounded as
$$n C_{sum} \le \sum_{i=1}^{K} I(x_i^n; y_i^n, s_i^n) = \sum_{i=1}^{K} I(x_i^n; y_i^n, x_i^n + n_i^n \,|\, x_j^n, j \in D(i)) = \sum_{i=1}^{K} \left[ h(x_i^n + n_i^n) - h(n_i^n) \right] + \sum_{i=1}^{K} h\!\left(y_i^n \,\middle|\, x_i^n + n_i^n, x_j^n, j \in D(i)\right) - \sum_{i=1}^{K} h\!\left(\sum_{j \in I(i)} h_{ij} x_j^n + u_i^n\right),$$
where $u_i^n \sim \mathcal{N}(0, (1-\rho_i^2) I)$, $i \in [K]$. Assuming
$$1 - \rho_i^2 = \phi_i + \sum_{j \in I(i)} h_{ij}^2 \sigma_j^2, \quad i \in [K],$$
where $\phi_i \ge 0$, we can write
$$\mathrm{cov}(u_i^n) = \mathrm{cov}\!\left(\sqrt{\phi_i}\, n_0^n + \sum_{j \in I(i)} h_{ij} n_j^n\right),$$
where $n_0^n \sim \mathcal{N}(0, I)$ and is independent of $n_i^n$, $i \in [K]$. Now, we have
$$\exp\!\left(\tfrac{2}{n} h\!\left(\sum_{j \in I(i)} h_{ij} x_j^n + u_i^n\right)\right) = \exp\!\left(\tfrac{2}{n} h\!\left(\sqrt{\phi_i}\, n_0^n + \sum_{j \in I(i)} (h_{ij} x_j^n + h_{ij} n_j^n)\right)\right) \overset{(a)}{\ge} \exp\!\left(\tfrac{2}{n} h(\sqrt{\phi_i}\, n_0^n)\right) + \sum_{j \in I(i)} \exp\!\left(\tfrac{2}{n} h(h_{ij} x_j^n + h_{ij} n_j^n)\right) = 2\pi e \phi_i + \sum_{j \in I(i)} h_{ij}^2 \exp\!\left(\tfrac{2}{n} h(x_j^n + n_j^n)\right),$$
where (a) follows from the entropy power inequality (EPI) [19]. Therefore, we have
$$\sum_{i=1}^{K} \left[ h(x_i^n + n_i^n) - h\!\left(\sum_{j \in I(i)} h_{ij} x_j^n + u_i^n\right) \right] \;\le\; \frac{n}{2} \sum_{i=1}^{K} \left[ t_i - \log\!\left(2\pi e \phi_i + \sum_{j \in I(i)} h_{ij}^2 e^{t_j}\right) \right] \triangleq \frac{n}{2} f(\mathbf{t}),$$
where $t_i \triangleq \tfrac{2}{n} h(x_i^n + n_i^n)$, $i \in [K]$, and $\mathbf{t}$ is the vector of all $t_i$'s. From the power constraints, we have $t_i \le \log 2\pi e (P_i + \sigma_i^2)$, $i \in [K]$. Under these constraints on $t_i$, it can be shown as in Reference [9] that $f(\mathbf{t})$ is maximized at $t_k = \log 2\pi e (P_k + \sigma_k^2)$ provided $\partial f/\partial t_k$, evaluated at $t_i = \log 2\pi e (P_i + \sigma_i^2)$, $i \in [K]$, is greater than or equal to 0 for every k. Thus, we have the conditions
$$\sum_{i:\, k \in I(i)} \frac{h_{ik}^2}{1 + Q_i - \rho_i^2} \;\le\; \frac{1}{P_k + \sigma_k^2}, \quad \forall\, k \in [K].$$
Therefore, we now have
$$C_{sum} \le \sum_{i=1}^{K} I(x_i^G; y_i^G, s_i^G) = \sum_{i=1}^{K} I(x_i^G; y_i^G, x_j^G, j \in D(i)) + \sum_{i=1}^{K} I(x_i^G; x_i^G + n_i^G \,|\, y_i^G, x_j^G, j \in D(i)) \overset{(b)}{=} \sum_{i=1}^{K} I(x_i^G; y_i^G, x_j^G, j \in D(i)) = \sum_{i=1}^{K} \frac{1}{2} \log\left(1 + \frac{P_i}{1 + Q_i}\right),$$
where (b) is true if $I(x_i^G; x_i^G + n_i \,|\, y_i^G, x_j^G, j \in D(i)) = 0$, $\forall i$. From Reference [3] (Lemma 8), $I(x_i^G; x_i^G + n_i^G \,|\, y_i^G, x_j^G, j \in D(i)) = 0$ iff
$$\rho_i \sigma_i = 1 + Q_i, \quad i \in [K].$$
Also, $\phi_i \ge 0$, $i \in [K]$, which implies
$$\sum_{j \in I(i)} \frac{h_{ij}^2 (1+Q_j)^2}{\rho_j^2} \;\le\; 1 - \rho_i^2.$$
Using the conditions (A7), (A8) and (A4), we get the conditions (2) and (3) for the converse.
Figure A1. Genie aided channel for proving Theorem 1.

Appendix B. Proof of Theorem 3

First, we consider the modified channel in Figure A2 with no interference at receiver k. The sum capacity of the original channel is upper bounded by the sum capacity of the modified channel.
Figure A2. Modified channel.
Then, we derive a genie-aided upper bound for the modified channel in Figure A3 using the genie signal $s_i^n$ at receiver i for each $i \in [K]$ as follows:
$$s_i^n = \{x_i^n + n_i^n,\; x_j^n,\, j \in \bar{G}(i)\}, \quad i \in [K] \setminus \{m, k\}, \qquad s_m^n = \{x_j^n,\, j \in \bar{G}(m) \setminus \{k\}\}, \qquad s_k^n = h_{mk} x_k^n + \sum_{j \in G(m)} h_{mj} x_j^n + n_k^n,$$
where $\bar{G}(i) = [K] \setminus \{\{i\} \cup G(i)\}$, $n_i \sim \mathcal{N}(0, \sigma_i^2)$, $E[n_i z_i] = \rho_i \sigma_i$ and $0 < \rho_i < 1$ for each $i \in [K] \setminus \{m\}$, and $\sigma_k = 1$. This choice of genie can be shown to be useful and smart under conditions (7)–(10) to obtain the upper bound in the theorem statement. This proof generalizes the proof for the many-to-one GIC in Reference [13] (Theorem 4) to the general K-user GIC. Assume
$$1 - \rho_i^2 = \phi_i + \sum_{j \in G(i)} h_{ij}^2 \sigma_j^2, \quad i \in [K] \setminus \{m, k\},$$
$$1 - \rho_k^2 = \phi_k + \sum_{j \in G(m)} h_{mj}^2 \sigma_j^2,$$
where $\phi_i \ge 0$, and assume $h_{ij}^2 \le 1$, $\forall\, j \in G(i)$, $i \in [K]$. Now, we have
$$n C_{sum} \le \sum_{i=1}^{K} I(x_i^n; y_i^n, s_i^n) = I(x_m^n; y_m^n | s_m^n) + I(x_k^n; y_k^n, s_k^n) + \sum_{\substack{i=1 \\ i \notin \{m,k\}}}^{K} I(x_i^n; y_i^n, s_i^n) = h(y_m^n | s_m^n) - h(y_m^n | s_m^n, x_m^n) + h(s_k^n) + h(y_k^n | s_k^n) - h(z_k^n) - h(s_k^n | y_k^n, x_k^n) + \sum_{\substack{i=1 \\ i \notin \{m,k\}}}^{K} I(x_i^n; y_i^n, s_i^n) \overset{(a)}{=} h(y_m^n | s_m^n) + h(y_k^n | s_k^n) - h(z_k^n) - h\!\left(\sum_{j \in G(m)} h_{mj} x_j^n + u_k^n\right) + \sum_{\substack{i=1 \\ i \notin \{m,k\}}}^{K} h(x_i^n + n_i^n) + \sum_{\substack{i=1 \\ i \notin \{m,k\}}}^{K} \left[ h(y_i^n | s_i^n) - h(n_i^n) - h\!\left(\sum_{j \in G(i)} h_{ij} x_j^n + u_i^n\right) \right],$$
where (a) follows because $h(s_k^n) = h(y_m^n | s_m^n, x_m^n)$, and $u_i^n \sim \mathcal{N}(0, (1-\rho_i^2) I)$, $i \in [K] \setminus \{m\}$. Also note that, from (A9) and (A10), we have
$$\mathrm{cov}(u_i^n) = \mathrm{cov}\!\left(\sqrt{\phi_i}\, n_0^n + \sum_{j \in G(i)} h_{ij} n_j^n\right), \quad i \in [K] \setminus \{m, k\}, \qquad \mathrm{cov}(u_k^n) = \mathrm{cov}\!\left(\sqrt{\phi_k}\, n_0^n + \sum_{j \in G(m)} h_{mj} n_j^n\right),$$
where $n_0^n \sim \mathcal{N}(0, I)$ and $n_0^n$ is independent of $n_i^n$, $i \in [K] \setminus \{m\}$. From the EPI, we have
$$\exp\!\left(\tfrac{2}{n} h\!\left(\sum_{j \in G(i)} h_{ij} x_j^n + u_i^n\right)\right) = \exp\!\left(\tfrac{2}{n} h\!\left(\sqrt{\phi_i}\, n_0^n + \sum_{j \in G(i)} (h_{ij} x_j^n + h_{ij} n_j^n)\right)\right) \ge \exp\!\left(\tfrac{2}{n} h(\sqrt{\phi_i}\, n_0^n)\right) + \sum_{j \in G(i)} \exp\!\left(\tfrac{2}{n} h(h_{ij} x_j^n + h_{ij} n_j^n)\right) = 2\pi e \phi_i + \sum_{j \in G(i)} h_{ij}^2 \exp\!\left(\tfrac{2}{n} h(x_j^n + n_j^n)\right).$$
Considering the terms that are not directly maximized by Gaussian inputs, we get
$$\sum_{\substack{i=1 \\ i \ne m,k}}^{K} \left[ h(x_i^n + n_i^n) - h\!\left(\sum_{j \in G(i)} h_{ij} x_j^n + u_i^n\right) \right] - h\!\left(\sum_{j \in G(m)} h_{mj} x_j^n + u_k^n\right) \overset{(b)}{\le} \frac{n}{2} \left[ \sum_{\substack{i=1 \\ i \ne m,k}}^{K} \left( t_i - \log\!\left(2\pi e \phi_i + \sum_{j \in G(i)} h_{ij}^2 e^{t_j}\right) \right) - \log\!\left(2\pi e \phi_k + \sum_{j \in G(m)} h_{mj}^2 e^{t_j}\right) \right] \triangleq \frac{n}{2} f(\mathbf{t}),$$
where (b) follows from (A12) and $t_i \triangleq \tfrac{2}{n} h(x_i^n + n_i^n)$, $i \in [K]$. From the power constraints, we have
$$t_i \le \log 2\pi e (P_i + \sigma_i^2), \quad i = 1, \ldots, K.$$
Since $m, k \notin G(i)$, $\forall\, i \in [K] \setminus \{k\}$, the terms $t_m$ and $t_k$ do not appear in $f(\mathbf{t})$, and we consider the optimization problem
$$\max f(\mathbf{t}) \quad \text{s.t.} \quad t_i \le \log 2\pi e (P_i + \sigma_i^2), \quad i \in [K] \setminus \{m, k\}.$$
The Lagrangian for the above optimization problem is
$$L = f(\mathbf{t}) - \sum_{i \ne m,k} \mu_i \left[ t_i - \log 2\pi e (P_i + \sigma_i^2) \right],$$
where $\mu_i \ge 0$, $i \in [K]$. At the optimal $t_r$, $\partial L/\partial t_r = 0$. We want the optimal $t_r = \log 2\pi e (P_r + \sigma_r^2)$, so that Gaussian inputs are optimal for the genie-aided channel. From the KKT conditions, $\partial f/\partial t_r \ge 0$ at $t_r = \log 2\pi e (P_r + \sigma_r^2)$, $r \in [K] \setminus \{m, k\}$ (since $\mu_r \ge 0$). Thus, we have
$$\frac{1}{P_r + \sigma_r^2} \;\ge\; \sum_{\substack{i:\, r \in G(i) \\ i \notin \{m,k\}}} \frac{h_{ir}^2}{1 + Q_i - \rho_i^2} + \delta_r \frac{h_{mr}^2}{1 + Q_m - \rho_k^2}, \quad \forall\, r \in [K] \setminus \{m, k\},$$
where Q i , δ r are defined as in the theorem statement. Therefore, we now have
$$C_{sum} \le \sum_{i=1}^{K} I(x_i^G; y_i^G, s_i^G) = I(x_m^G; y_m^G | s_m^G) + I(x_k^G; y_k^G, s_k^G) + \sum_{\substack{i=1 \\ i \notin \{m,k\}}}^{K} I(x_i^G; y_i^G, s_i^G) \overset{(c)}{=} I(x_m^G; y_m^G | s_m^G) + I(x_k^G; s_k^G) + \sum_{\substack{i=1 \\ i \notin \{m,k\}}}^{K} I(x_i^G; y_i^G \,|\, x_j^G, j \in \bar{G}(i)) \overset{(d)}{=} I(x_m^G, x_k^G; y_m^G | s_m^G) + \sum_{\substack{i=1 \\ i \notin \{m,k\}}}^{K} I(x_i^G; y_i^G \,|\, x_j^G, j \in \bar{G}(i)),$$
where (c) is valid when $I(x_i^G; x_i^G + n_i \,|\, y_i^G, x_j^G, j \in \bar{G}(i)) = 0$, $\forall\, i \in [K] \setminus \{m, k\}$, and $I(x_k^G; y_k^G | s_k^G) = 0$. From Reference [3] (Lemma 8), $I(x_i^G; x_i^G + n_i \,|\, y_i^G, x_j^G, j \in \bar{G}(i)) = I\big(x_i^G; x_i^G + n_i \,\big|\, x_i^G + \sum_{j \in G(i)} h_{ij} x_j^G + z_i\big) = 0$, $\forall\, i \in [K] \setminus \{m, k\}$, and $I(x_k^G; y_k^G | s_k^G) = I\big(x_k^G; x_k^G + z_k \,\big|\, x_k^G + \tfrac{\sum_{j \in G(m)} h_{mj} x_j^G + n_k}{h_{mk}}\big) = 0$ iff
$$\rho_i \sigma_i = 1 + Q_i, \quad i \in [K] \setminus \{m, k\},$$
$$\rho_k h_{mk} = 1 + Q_m.$$
(d) is valid since the genie $s_k$ is chosen such that $h(s_k) = h(y_m | s_m, x_m)$, which implies $I(x_k^G; s_k^G) = I(x_k^G; y_m^G | s_m^G, x_m^G)$.
Using $\phi_i \ge 0$ and (A13) and (A14), we get conditions (7)–(10) for the upper bound on $C_{sum}$ to be valid.
Figure A3. Genie aided channel for proving Theorem 3.

Appendix C. Examples

In this section, we give some examples of finding the two sets of channel conditions under which sum capacity is achieved by an S-HK scheme using Theorems 1–3.
Example A1.
In this example, using Theorem 1, we find the first set of channel conditions under which sum capacity is achieved by an S-HK scheme. Consider a 3-user GIC with the S-HK scheme given by $I(1) = \{2\}$, $I(2) = \{3\}$, $I(3) = \{\}$. Inequalities (2) and (3) give the same set of conditions, given by
$$\frac{h_{12}^2 (1 + h_{23}^2 P_3)^2}{\rho_2^2} \le 1 - \rho_1^2,$$
$$h_{23}^2 \le 1 - \rho_2^2,$$
for some $\rho_1, \rho_2 \in (0,1)$. In inequality (4), for $i = 1$, $J = \{3\}$; for $i = 2$, $J = \{1\}$; and for $i = 3$, J can be $\{1\}$, $\{2\}$ or $\{1, 2\}$. Therefore, (4) gives the set of conditions
$$1 + P_1 + h_{12}^2 P_2 \le h_{13}^2,$$
$$1 + P_2 + h_{23}^2 P_3 \le h_{21}^2 (1 + h_{12}^2 P_2),$$
$$(1 + P_3) \le h_{31}^2 (1 + h_{12}^2 P_2),$$
$$(1 + P_3) \le h_{32}^2 (1 + h_{23}^2 P_3),$$
$$\left(1 + \frac{P_1}{1+Q_1}\right)\left(1 + \frac{P_2}{1+Q_2}\right) \le 1 + \frac{h_{31}^2 P_1 + h_{32}^2 P_2}{1 + P_3},$$
where $Q_1 = h_{12}^2 P_2$, $Q_2 = h_{23}^2 P_3$, $Q_3 = 0$. Under conditions (A15)–(A21), sum capacity is achieved by the S-HK scheme with $I(1) = \{2\}$, $I(2) = \{3\}$, $I(3) = \{\}$ and the sum capacity is given by
$$C_{sum} = \sum_{i=1}^{3} \frac{1}{2} \log\left(1 + \frac{P_i}{1+Q_i}\right).$$
Example A2.
In this example, we find the achievable sum rates in (6) for an S-HK scheme. Consider a 3-user GIC with the S-HK scheme given by $I(1) = \{\}$, $I(2) = \{1, 3\}$, $I(3) = \{2\}$, which implies $D(1) = \{2, 3\}$, $D(2) = \{\}$, $D(3) = \{1\}$. From Theorem 2, l can be 1, 2 or 3, and $J_1 \subseteq \{1, 2, 3\}$, i.e., $J_1$ can be {}, {1}, {2}, {3}, {1,2}, {1,3}, {2,3} or {1,2,3}; $J_2 \subseteq \{2\}$ and $J_3 \subseteq \{1, 3\}$. For $l = 1, 2, 3$, the possible sets of $J_1, J_2, J_3$ such that $\uplus_{i \in [K]} J_i = S_l$ are given in Table A1.
Table A1. Sets of $J_i$ such that $\uplus_{i \in [K]} J_i = S_l$ for the S-HK scheme with $I(1) = \{\}$, $I(2) = \{1, 3\}$, $I(3) = \{2\}$.
l J 1 J 2 J 3
l = 1 { } { 2 } { 1 , 3 }
{ 1 } { 2 } { 3 }
{ 2 } { } { 1 , 3 }
{ 3 } { 2 } { 1 }
{ 1 , 2 } { } { 3 }
{ 1 , 3 } { 2 } { }
{ 2 , 3 } { } { 1 }
{ 1 , 2 , 3 } { } { }
l = 2 { 1 , 2 , 3 } { 2 } { 1 , 3 }
Achievable sum rates are given by
$$S \le \tfrac{1}{2} \log\left(1 + \tfrac{P_2}{1+Q_2}\right) + \tfrac{1}{2} \log\left(1 + \tfrac{h_{31}^2 P_1 + P_3}{1+Q_3}\right)$$
$$S \le \tfrac{1}{2} \log(1 + P_1) + \tfrac{1}{2} \log\left(1 + \tfrac{P_2}{1+Q_2}\right) + \tfrac{1}{2} \log\left(1 + \tfrac{P_3}{1+Q_3}\right)$$
$$S \le \tfrac{1}{2} \log(1 + h_{12}^2 P_2) + \tfrac{1}{2} \log\left(1 + \tfrac{h_{31}^2 P_1 + P_3}{1+Q_3}\right)$$
$$S \le \tfrac{1}{2} \log(1 + h_{13}^2 P_3) + \tfrac{1}{2} \log\left(1 + \tfrac{P_2}{1+Q_2}\right) + \tfrac{1}{2} \log\left(1 + \tfrac{h_{31}^2 P_1}{1+Q_3}\right)$$
$$S \le \tfrac{1}{2} \log(1 + P_1 + h_{12}^2 P_2) + \tfrac{1}{2} \log\left(1 + \tfrac{P_3}{1+Q_3}\right)$$
$$S \le \tfrac{1}{2} \log(1 + P_1 + h_{13}^2 P_3) + \tfrac{1}{2} \log\left(1 + \tfrac{P_2}{1+Q_2}\right)$$
$$S \le \tfrac{1}{2} \log(1 + h_{12}^2 P_2 + h_{13}^2 P_3) + \tfrac{1}{2} \log\left(1 + \tfrac{h_{31}^2 P_1}{1+Q_3}\right)$$
$$S \le \tfrac{1}{2} \log(1 + P_1 + h_{12}^2 P_2 + h_{13}^2 P_3)$$
$$2S \le \tfrac{1}{2} \log(1 + P_1 + h_{12}^2 P_2 + h_{13}^2 P_3) + \tfrac{1}{2} \log\left(1 + \tfrac{P_2}{1+Q_2}\right) + \tfrac{1}{2} \log\left(1 + \tfrac{h_{31}^2 P_1 + P_3}{1+Q_3}\right),$$
where $Q_2 = h_{21}^2 P_1 + h_{23}^2 P_3$ and $Q_3 = h_{32}^2 P_2$. Depending on the channel coefficients and power constraints, one of the above inequalities will be dominant.
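As a purely numerical illustration (channel values chosen arbitrarily by us), the bounds above can be evaluated with the `shk_sum_rate` sketch that follows Theorem 2; the returned value corresponds to the smallest, i.e., the binding, inequality.

```python
import numpy as np
# Hypothetical example channel for the scheme of Example A2 (0-indexed sets):
# I(1) = {}, I(2) = {1, 3}, I(3) = {2}.
H = np.array([[1.0, 0.8, 0.6],
              [0.3, 1.0, 0.2],
              [0.9, 0.4, 1.0]])
P = np.array([2.0, 1.0, 1.5])
I = [set(), {0, 2}, {1}]
print(shk_sum_rate(H, P, I))   # minimum over the nine inequalities listed above
```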
Example A3.
In this example, using Theorems 2 and 3, we find the second set of channel conditions under which sum capacity is achieved by an S-HK scheme. Consider a 3-user GIC with the S-HK scheme given by $I(1) = \{3\}$, $I(2) = \{1, 3\}$, $I(3) = \{\}$. Let $m = 1$, $k = 2$. Here, $m, k \notin I(i)$, $\forall\, i \in \{1, 3\}$. First, we find the converse conditions, that is, the conditions under which the sum rate
$$S \le \frac{1}{2} \log\left(1 + \frac{P_1 + h_{12}^2 P_2}{1 + Q_1}\right) + \frac{1}{2} \log(1 + P_3)$$
is an upper bound. For $I(1) = \{3\}$, $I(2) = \{1, 3\}$, $I(3) = \{\}$, inequalities (7) and (10) give the same set of conditions, namely
$$\frac{h_{13}^2}{\rho_3^2} \le 1 - \rho_2^2.$$
(9) does not give any condition. (8) implies
$$\rho_2 h_{12} = 1 + h_{13}^2 P_3.$$
Combining (A23) and (A24), the sum rate in (A22) is an upper bound for all channels satisfying
$$h_{13}^2 + \frac{(1 + h_{13}^2 P_3)^2}{h_{12}^2} \le 1.$$
For the achievability conditions, we first find all achievable sum rates of the S-HK scheme using Theorem 2. Observe that $J_1 \subseteq \{1, 2\}$, $J_2 \subseteq \{2\}$ and $J_3 \subseteq \{1, 2, 3\}$. For $l = 1, 2, 3$, the possible sets of $J_1, J_2, J_3$ such that $\uplus_{i \in [K]} J_i = S_l$ are given in Table A2.
Table A2. Sets of $J_i$ such that $\uplus_{i \in [K]} J_i = S_l$ for the S-HK scheme with $I(1) = \{3\}$, $I(2) = \{1, 3\}$, $I(3) = \{\}$.
l J 1 J 2 J 3
l = 1 { } { } { 1 , 2 , 3 }
{ } { 2 } { 1 , 3 }
{ 1 } { } { 2 , 3 }
{ 1 } { 2 } { 3 }
{ 2 } { } { 1 , 3 }
{ 1 , 2 } { } { 3 }
Achievable sum rates for the S-HK scheme with $I(1) = \{3\}$, $I(2) = \{1, 3\}$, $I(3) = \{\}$ are given by
$$S \le \tfrac{1}{2} \log(1 + P_3 + h_{32}^2 P_2 + h_{31}^2 P_1)$$
$$S \le \tfrac{1}{2} \log\left(1 + \tfrac{P_2}{1+Q_2}\right) + \tfrac{1}{2} \log(1 + P_3 + h_{31}^2 P_1)$$
$$S \le \tfrac{1}{2} \log\left(1 + \tfrac{P_1}{1+Q_1}\right) + \tfrac{1}{2} \log(1 + P_3 + h_{32}^2 P_2)$$
$$S \le \tfrac{1}{2} \log\left(1 + \tfrac{P_1}{1+Q_1}\right) + \tfrac{1}{2} \log\left(1 + \tfrac{P_2}{1+Q_2}\right) + \tfrac{1}{2} \log(1 + P_3)$$
$$S \le \tfrac{1}{2} \log\left(1 + \tfrac{h_{12}^2 P_2}{1+Q_1}\right) + \tfrac{1}{2} \log(1 + P_3 + h_{31}^2 P_1)$$
$$S \le \tfrac{1}{2} \log\left(1 + \tfrac{P_1 + h_{12}^2 P_2}{1+Q_1}\right) + \tfrac{1}{2} \log(1 + P_3),$$
where $Q_1 = h_{13}^2 P_3$ and $Q_2 = h_{21}^2 P_1 + h_{23}^2 P_3$. We want (A31) to be dominant among the inequalities (A26)–(A31), which gives the conditions
$$(1 + P_3)(P_1 + h_{12}^2 P_2) \le (h_{32}^2 P_2 + h_{31}^2 P_1)(1 + Q_1)$$
$$(1 + P_3)\,\frac{P_1 + h_{12}^2 P_2}{1 + Q_1} \le (1 + P_3 + h_{31}^2 P_1)\,\frac{P_2}{1 + Q_2} + h_{31}^2 P_1$$
$$h_{12}^2 (1 + P_3) \le h_{32}^2 (1 + P_1 + Q_1)$$
$$h_{12}^2 (1 + Q_2) \le 1 + P_1 + Q_1$$
$$(1 + P_3) \le (1 + Q_1 + h_{12}^2 P_2)\, h_{31}^2.$$
Therefore, under conditions (A32)–(A36) and (A25), sum capacity is achieved by the S-HK scheme with $I(1) = \{3\}$, $I(2) = \{1, 3\}$, $I(3) = \{\}$ and the sum capacity is given by (A22).

Appendix D. 2-User GIC Results

For a 2-user GIC, there are only four possible S-HK schemes. The optimality conditions and sum capacities of the S-HK schemes obtained using Theorem 1 are given in Table A3.
Table A3. Optimality conditions and sum capacity of S-HK schemes for a 2-user GIC using Theorem 1.

Scheme 1: $I(1) = \{2\}$, $I(2) = \{1\}$. Conditions: $|h_{12}(1 + h_{21}^2 P_1)| + |h_{21}(1 + h_{12}^2 P_2)| \le 1$. Sum capacity: $\tfrac{1}{2}\log\big(1 + \tfrac{P_1}{1 + h_{12}^2 P_2}\big) + \tfrac{1}{2}\log\big(1 + \tfrac{P_2}{1 + h_{21}^2 P_1}\big)$. Matches [3,4,6].
Scheme 2: $I(1) = \{\}$, $I(2) = \{1\}$. Conditions: $h_{21}^2 \le 1$, $h_{12}^2 \ge \tfrac{1 + P_1}{1 + h_{21}^2 P_1}$. Sum capacity: $\tfrac{1}{2}\log(1 + P_1) + \tfrac{1}{2}\log\big(1 + \tfrac{P_2}{1 + h_{21}^2 P_1}\big)$. Matches Thm. 10 in [6].
Scheme 3: $I(1) = \{2\}$, $I(2) = \{\}$. Conditions: $h_{12}^2 \le 1$, $h_{21}^2 \ge \tfrac{1 + P_2}{1 + h_{12}^2 P_2}$. Sum capacity: $\tfrac{1}{2}\log(1 + P_2) + \tfrac{1}{2}\log\big(1 + \tfrac{P_1}{1 + h_{12}^2 P_2}\big)$. Matches Thm. 10 in [6].
Scheme 4: $I(1) = \{\}$, $I(2) = \{\}$. Conditions: $h_{12}^2 \ge 1 + P_1$, $h_{21}^2 \ge 1 + P_2$. Sum capacity: $\tfrac{1}{2}\log(1 + P_1) + \tfrac{1}{2}\log(1 + P_2)$. Matches [1].
Optimality conditions and sum capacity of S-HK schemes using Theorems 2 and 3 are given in Table A4. These conditions are plotted in Figure A4.
Figure A4. Channel conditions where sum capacity is obtained for the 2-user GIC, $P_1 = P_2 = 1$.
In Figure A4, $T1_i$ denotes the region given by scheme i (the ith row) in Table A3 and $T2_i$ denotes the region given by scheme i in Table A4. In Figure A5 and Figure A6, for Topology 1, we plot the success probability of the conditions in Table A3 and Table A4, respectively. For topology 1, as the transmitter range increases, the probability that decoding interference is optimal increases and the probability that treating interference as noise is optimal reduces. Therefore, as expected, the probability that $T1_1$ is optimal decreases with increasing range and the probability that the other schemes are optimal increases.
Table A4. Optimality conditions and sum capacity of S-HK schemes for a 2-user GIC using Theorems 2 and 3.

Scheme 1: $I(1) = \{\}$, $I(2) = \{1\}$. Conditions: $h_{21} \le 1$, $h_{12} \ge 1$, $h_{12}^2 (1 + h_{21}^2 P_1) \le 1 + P_1$. Sum capacity: $\tfrac{1}{2}\log(1 + P_1 + h_{12}^2 P_2)$. Matches Thm. 10 in Reference [6].
Scheme 2: $I(1) = \{2\}$, $I(2) = \{\}$. Conditions: $h_{12} \le 1$, $h_{21} \ge 1$, $h_{21}^2 (1 + h_{12}^2 P_2) \le 1 + P_2$. Sum capacity: $\tfrac{1}{2}\log(1 + P_2 + h_{21}^2 P_1)$. Matches Thm. 10 in Reference [6].
Scheme 3: $I(1) = \{\}$, $I(2) = \{\}$. Conditions: either $1 \le h_{12}^2 \le 1 + P_1$ and $P_1 + h_{12}^2 P_2 \le P_2 + h_{21}^2 P_1$, giving sum capacity $\tfrac{1}{2}\log(1 + P_1 + h_{12}^2 P_2)$; or $1 \le h_{21}^2 \le 1 + P_2$ and $P_2 + h_{21}^2 P_1 \le P_1 + h_{12}^2 P_2$, giving sum capacity $\tfrac{1}{2}\log(1 + P_2 + h_{21}^2 P_1)$. Matches [2].
Figure A5. Success probability of conditions in Table A3 for Topology 1.
Figure A6. Success probability of conditions in Table A4 for Topology 1.

Appendix E. Partially Connected GIC Results

We specialize our sum capacity results to cyclic, cascade and many-to-one GICs, which are special cases of the GIC. For cyclic channels, we specialize the two channel conditions for general S-HK schemes to get two new sum capacity results. For the 3-user cascade channel, sum capacity results were derived in Reference [11], but here we derive sum capacity results for general K-user cascade channels. For the many-to-one and one-to-many GICs, we can recover the results derived in Reference [13].

Appendix E.1. Cyclic GIC

We use the following channel model for cyclic GIC
$$y_k = x_k + h_{k+1} x_{k+1} + z_k, \quad k \in [K],$$
where the indices are modulo K.
Result A1.
For a cyclic GIC satisfying the following conditions for some sets $I_1, D_1 \subseteq [K]$ with $I_1 \cup D_1 = [K]$:
$$\frac{h_{i+1}^2 (1 + Q_{i+1})^2}{\rho_{i+1}^2} \le 1 - \rho_i^2, \quad \forall\, i \in I_1,$$
$$h_{i+1}^2 (1 + Q_{i+1}) \ge 1 + P_i, \quad \forall\, i \in D_1,$$
where
$$Q_i = \begin{cases} h_{i+1}^2 P_{i+1} & : i \in I_1 \\ 0 & : \text{else,} \end{cases}$$
the sum capacity is given by
$$C_{sum} = \sum_{i=1}^{K} \frac{1}{2} \log\left(1 + \frac{P_i}{1+Q_i}\right)$$
and the sum capacity is achieved by the S-HK scheme defined by $I(i) = \phi$, $i \in D_1$, and $I(i) = \{i+1\}$, $i \in I_1$.
Proof. 
Use Theorem 1. □
Corollary A1.
For the cyclic channel, if we treat interference as noise at receivers $i \in I_1$ and decode interference at receivers $i \in D_1 = [K] \setminus I_1$, then the sum rates given by
$$S \le \frac{1}{2} \sum_{j \in J_1} \log(1 + P_j + h_{j+1}^2 P_{j+1}) + \frac{1}{2} \sum_{j \in J_2} \log(1 + c_j P_j), \quad \forall\, J_1 \subseteq D_1 \text{ such that if } i \in J_1 \text{ then } i+1 \notin J_1,$$
$$J_2 = [K] \setminus \{i, i+1 : i \in J_1\},$$
$$2S \le \frac{1}{2} \sum_{j=1}^{K} \log(1 + P_j + h_{j+1}^2 P_{j+1}), \quad \text{if } D_1 = [K],$$
where, for all $i = 1, 2, \ldots, K$,
$$c_i = \begin{cases} \min\left\{h_i^2, \frac{1}{1 + h_{i+1}^2 P_{i+1}}\right\} & : i-1 \in D_1,\; i \in I_1 \\ \min\{h_i^2, 1\} & : i-1 \in D_1,\; i \in D_1 \\ \frac{1}{1 + h_{i+1}^2 P_{i+1}} & : i-1 \in I_1,\; i \in I_1 \\ 1 & : i-1 \in I_1,\; i \in D_1, \end{cases}$$
are achievable.
Proof. 
Use Theorem 2. □
Result A2.
For the cyclic GIC, let $I_1, D_1 \subseteq [K]$ such that $I_1 \cup D_1 = [K]$ and let some $k \in D_1$. If the channel satisfies the following conditions
$$\frac{h_{i+1}^2 (1 + Q_{i+1})^2}{\rho_{i+1}^2} \le 1 - \rho_i^2, \quad \forall\, i \in I_1,$$
$$h_{k+1} \ge 1,$$
$$\prod_{\substack{j=1 \\ j \notin \{k, k+1\}}}^{K} \left(1 + \frac{P_j}{1+Q_j}\right) \le \begin{cases} \dfrac{\left[\prod_{j \in [K]} (1 + P_j + h_{j+1}^2 P_{j+1})\right]^{1/2}}{1 + P_k + h_{k+1}^2 P_{k+1}}, & \text{if } D_1 = [K], \\[2ex] \dfrac{\prod_{j \in J_2} (1 + c_j P_j) \prod_{j \in J_1} (1 + P_j + h_{j+1}^2 P_{j+1})}{1 + P_k + h_{k+1}^2 P_{k+1}}, & \forall\, J_1 \subseteq D_1 \text{ s.t. if } i \in J_1 \text{ then } i+1 \notin J_1, \end{cases}$$
$$J_2 = [K] \setminus \{i, i+1 : i \in J_1\},$$
where
$$Q_i = \begin{cases} h_{i+1}^2 P_{i+1} & : i \in I_1 \\ 0 & : \text{else,} \end{cases}$$
then the sum capacity is given by
$$C_{sum} = \frac{1}{2} \log(1 + P_k + h_{k+1}^2 P_{k+1}) + \sum_{\substack{j=1 \\ j \notin \{k, k+1\}}}^{K} \frac{1}{2} \log\left(1 + \frac{P_j}{1+Q_j}\right),$$
where $c_i$, $i \in [K]$, is defined as in Corollary A1.
Proof. 
Use Theorem 3, to get the converse conditions (A42) and (A43) and use Corollary A1, to get the achievability conditions (A45). □

Appendix E.2. Cascade GIC

We use the following channel model for cascade GIC
$$y_k = x_k + h_{k+1} x_{k+1} + z_k, \quad k \in \{1, 2, \ldots, K-1\}, \qquad y_K = x_K + z_K.$$
Result A3.
For the cascade GIC satisfying the following conditions for some sets $I_1, D_1 \subseteq [K]$ with $I_1 \cup D_1 \cup \{K\} = [K]$ and $K \notin I_1, D_1$:
$$\frac{h_{i+1}^2 (1 + Q_{i+1})^2}{\rho_{i+1}^2} \le 1 - \rho_i^2, \quad \forall\, i \in I_1,$$
$$h_{i+1}^2 (1 + Q_{i+1}) \ge 1 + P_i, \quad \forall\, i \in D_1,$$
where
$$Q_i = \begin{cases} h_{i+1}^2 P_{i+1} & : i \in I_1 \\ 0 & : \text{else,} \end{cases}$$
the sum capacity is given by
$$C_{sum} = \sum_{i=1}^{K} \frac{1}{2} \log\left(1 + \frac{P_i}{1+Q_i}\right).$$
Proof. 
We get the result by taking $I(i) = \phi$, $i \in D_1$, and $I(i) = \{i+1\}$, $i \in I_1$, in Theorem 1. The noisy and mixed interference results in Reference [11] (Cor. 1) and Reference [11] (Thm. 3) can be obtained from this result. □
Corollary A2.
For the cascade channel, if we treat interference as noise at receivers $i \in I_1$ and decode interference at receivers $i \in D_1 = \{1, 2, \ldots, K-1\} \setminus I_1$, then the sum rates given by
$$S \le \frac{1}{2} \sum_{j \in J_1} \log(1 + P_j + h_{j+1}^2 P_{j+1}) + \frac{1}{2} \sum_{j \in J_2} \log(1 + e_j P_j), \quad \forall\, J_1 \subseteq D_1 \text{ such that if } i \in J_1 \text{ then } i+1 \notin J_1, \quad J_2 = [K] \setminus \{i, i+1 : i \in J_1\},$$
where, for all $i = 1, 2, \ldots, K-1$,
$$e_i = \begin{cases} \min\left\{h_i^2, \frac{1}{1 + h_{i+1}^2 P_{i+1}}\right\} & : i-1 \in D_1,\; i \in I_1 \\ \min\{h_i^2, 1\} & : i-1 \in D_1,\; i \in D_1 \\ \frac{1}{1 + h_{i+1}^2 P_{i+1}} & : i-1 \notin D_1,\; i \in I_1 \\ 1 & : i-1 \notin D_1,\; i \in D_1, \end{cases}$$
and $e_K = 1$, are achievable.
Result A4.
For the cascade GIC, treating interference as noise at receivers $i \in I_1$ and decoding interference at receivers $i \in D_1$ (assuming $K-1 \in D_1$) is optimal if the channel satisfies the following conditions:
$$\frac{h_{i+1}^2 (1 + Q_{i+1})^2}{\rho_{i+1}^2} \le 1 - \rho_i^2, \quad \forall\, i \in I_1,$$
$$h_K \ge 1, \qquad (1 + P_{K-1} + h_K^2 P_K) \prod_{j=1}^{K-2} \left(1 + \frac{P_j}{1+Q_j}\right) \le \prod_{j \in J_1} (1 + P_j + h_{j+1}^2 P_{j+1}) \prod_{j \in J_2} (1 + e_j P_j), \quad \forall\, J_1 \subseteq D_1 \text{ such that if } i \in J_1 \text{ then } i+1 \notin J_1,$$
$$J_2 = [K] \setminus \{i, i+1 : i \in J_1\},$$
where
$$Q_i = \begin{cases} h_{i+1}^2 P_{i+1} & : i \in I_1 \\ 0 & : \text{else,} \end{cases}$$
and the sum capacity is given by
$$C_{sum} = \frac{1}{2} \log(1 + P_{K-1} + h_K^2 P_K) + \sum_{j=1}^{K-2} \frac{1}{2} \log\left(1 + \frac{P_j}{1+Q_j}\right),$$
where $e_i$, $i \in [K]$, is defined as in Corollary A2.
Proof. 
Use Theorem 3 to get the converse conditions (A50) and (A51) and use Corollary A2 to get the achievability conditions (A52). Part of the strong interference result in Reference [11] (Cor. 2) can be obtained from this result. □

Appendix E.3. Many-to-One GIC

Channel model for many-to-one IC is given by
$$y_1 = x_1 + \sum_{j=2}^{K} h_j x_j + z_1, \qquad y_i = x_i + z_i, \quad i = 2, 3, \ldots, K.$$
Result A5.
For a many-to-one channel satisfying the following conditions:
$$\sum_{j=k+1}^{K} h_j^2 \le 1,$$
$$\prod_{i \in B \setminus N} (1 + P_i) \le 1 + \frac{\sum_{i \in B \setminus N} h_i^2 P_i}{1 + \sum_{j=k+1}^{K} h_j^2 P_j + P_1}, \quad \forall\, N \subseteq B,\; N \ne B,$$
where $B = \{2, 3, \ldots, k\}$, $k \in \{1, 2, \ldots, K\}$, the sum capacity is given by
$$C_{sum} = \frac{1}{2} \log\left(1 + \frac{P_1}{1 + \sum_{j=k+1}^{K} h_j^2 P_j}\right) + \sum_{i=2}^{K} \frac{1}{2} \log(1 + P_i).$$
Proof. 
From Theorem 1, taking $I(1) = \{k+1, \ldots, K\}$, we get the required sum capacity if the channel satisfies the conditions (A55) and also
$$\sum_{j=k+1}^{K} \frac{h_j^2}{\rho_j^2} \le 1 - \rho_1^2$$
for some $\rho_i \in [0, 1]$, $i = 1, 2, \ldots, K$. Choose $\rho_1 = 0$ and $\rho_j = 1$, $j = k+1, \ldots, K$, to get the condition (A55). Reference [13] (Thm. 4) can be obtained from this result. □
Result A6.
For the K-user Gaussian many-to-one IC satisfying the following channel conditions:
$$\prod_{i \in N} (1 + P_i) \left(1 + P_1 + \sum_{i=k+1}^{K} h_i^2 P_i + \sum_{i \in B \setminus N} h_i^2 P_i\right) \ge \prod_{i=2}^{k-1} (1 + P_i) \left(1 + P_1 + \sum_{j=k}^{K} h_j^2 P_j\right), \quad \forall\, N \subseteq B,\; N \ne \{2, 3, \ldots, k-1\},\; \text{where } B = \{2, 3, \ldots, k\},$$
$$\sum_{i=k+1}^{K} h_i^2 \le 1 - \rho^2, \qquad \rho h_k = 1 + \sum_{i=k+1}^{K} h_i^2 P_i,$$
the sum capacity is given by
$$C_{sum} = \sum_{\substack{i=2 \\ i \ne k}}^{K} \frac{1}{2} \log(1 + P_i) + \frac{1}{2} \log\left(1 + \frac{P_1 + h_k^2 P_k}{1 + \sum_{i=k+1}^{K} h_i^2 P_i}\right).$$
Proof. 
(Converse) From Theorem 3, taking $G(i) = \{k+1, \ldots, K\}$, we get the required outer bound when (A59) is satisfied.
(Achievability) Using Theorem 2 with $I(i) = \{k+1, \ldots, K\}$, we get the following achievable sum rates:
$$S \le \frac{1}{2} \sum_{i=k+1}^{K} \log(1 + P_i) + \frac{1}{2} \sum_{i \in M} \log(1 + m_i P_i) + \frac{1}{2} \log\left(1 + \frac{P_1 + \sum_{i \in B \setminus M} h_i^2 P_i}{1 + \sum_{i=k+1}^{K} h_i^2 P_i}\right), \quad \forall\, M \subseteq B,$$
where $B = \{2, 3, \ldots, k\}$ and $m_i = \min\left\{1, \frac{h_i^2}{1 + \sum_{j=k+1}^{K} h_j^2 P_j}\right\}$.
Among these sum rates, we want the sum rate with $M = B \setminus \{k\}$ and $m_i = 1$, $\forall\, i \in B \setminus \{k\}$, to be dominant. We get the conditions (A58) for the sum rate with $M = B \setminus \{k\}$ to be dominant assuming $m_i = 1$, $\forall\, i \in \{2, 3, \ldots, k\}$. Given the converse conditions (A59) and (A58), the conditions with $m_i = \frac{h_i^2}{1 + \sum_{j=k+1}^{K} h_j^2 P_j}$ are always redundant. Reference [13] (Thm. 5) can be obtained from this result. □

Appendix E.4. One-to-Many GIC

We use the following model for one-to-many channel
$$y_i = x_i + h_i x_K + z_i, \quad i = 1, 2, \ldots, K-1, \qquad y_K = x_K + z_K.$$
Result A7.
For the K-user Gaussian one-to-many channel satisfying the following conditions:
$$1 + P_i \le |h_i|^2, \quad 1 \le i \le k,$$
$$\sum_{j=k+1}^{K-1} \frac{|h_j|^2 P_K + |h_j|^2}{|h_j|^2 P_K + 1} \le 1,$$
the sum capacity is given by
$$C_{sum} = \frac{1}{2} \sum_{i=1}^{k} \log(1 + P_i) + \frac{1}{2} \log(1 + P_K) + \frac{1}{2} \sum_{j=k+1}^{K-1} \log\left(1 + \frac{P_j}{1 + |h_j|^2 P_K}\right).$$
Proof. 
From Theorem 1, taking $I(i) = \phi$, $i \in \{1, 2, \ldots, k\}$, and $I(i) = \{K\}$, $i \in \{k+1, \ldots, K-1\}$, we get the required sum capacity under the conditions (A63) and
$$\sum_{j=k+1}^{K-1} \frac{h_j^2}{1 + h_j^2 P_K - \rho_j^2} \le \frac{1}{P_K + 1/\rho_K^2}.$$
Choosing $\rho_K = 1$ and $\rho_j = 0$, $j = k+1, \ldots, K-1$, we get (A64). Reference [13] (Thm. 6) can be obtained from this result. □
Result A8.
For the K-user Gaussian one-to-many IC satisfying the following conditions:
$$1 \le h_l^2 \le 1 + P_l,$$
$$\frac{h_l^2}{1 + P_l} \le \frac{h_i^2}{1 + P_i}, \quad 1 \le i \le K-1,\; i \ne l,$$
for any $l \in \{1, 2, \ldots, K-1\}$, the sum capacity is
$$C_{sum} = \frac{1}{2} \sum_{\substack{j=1 \\ j \ne l}}^{K-1} \log(1 + P_j) + \frac{1}{2} \log(1 + P_l + h_l^2 P_K).$$
Proof. 
From Theorem 3, with $G(i) = \{\,\}$, $i \in \{1, 2, \ldots, K-1\}$, the sum rate is upper bounded by (A67).
From Theorem 2, we get the following achievable sum rates:
$$S \le \sum_{j=1}^{K} \frac{1}{2} \log(1 + P_j), \qquad S \le \sum_{\substack{j=1 \\ j \ne i}}^{K-1} \frac{1}{2} \log(1 + P_j) + \frac{1}{2} \log(1 + P_i + h_i^2 P_K), \quad 1 \le i \le K-1.$$
For the required sum rate to be dominant among all the achievable rates, the channel should satisfy (A65) and (A66). Reference [13] (Thm. 7) can be obtained from this result. □

Appendix F. Model Parameters Used in the Numerical Results

For channel fading, we use the Erceg model [21]. The path loss function is given by
$$PL = 20 \log_{10}\left(\frac{4\pi d_0}{\lambda}\right) + 10 \gamma \log_{10}\left(\frac{d}{d_0}\right) + s,$$
where $d_0$ is a close-in distance, $\lambda$ is the wavelength, $\gamma$ is the path loss exponent and s is the shadow fading component. $\gamma$ is characterized as follows:
$$\gamma = a - b h_b + \frac{c}{h_b} + x \sigma_\gamma,$$
where $h_b$ is the antenna height, $x \sim \mathcal{N}(0, 1)$, and $a, b, c, \sigma_\gamma$ are constants that depend on the terrain type.
The shadow fading component s is given by
$$s = y (\mu_\sigma + z \sigma_\sigma),$$
where $y, z \sim \mathcal{N}(0, 1)$ and $\mu_\sigma, \sigma_\sigma$ are constants that depend on the terrain type. We consider two terrain categories, hilly/light tree density (terrain type 1) and hilly/moderate-to-heavy tree density (terrain type 2). The model parameters for the two terrain categories are given in Table A5.
Table A5. Numerical values of model parameters.

Model Parameter | Hilly/Light Tree Density | Hilly/Moderate-to-Heavy Tree Density
a | 4.0 | 4.6
b (in m⁻¹) | 0.0065 | 0.0075
c (in m) | 17.1 | 12.6
σ_γ | 0.75 | 0.57
μ_σ | 9.6 | 10.6
σ_σ | 3.0 | 2.3
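A sketch of this path-loss computation (ours, using the terrain type 1 column of Table A5 as default parameters) is given below; transmit powers in the simulations would then be calibrated so that the expected SNR at the nominal coverage boundary is 0 dB against the -110 dBm noise floor mentioned in Section 4.

```python
import numpy as np

C_LIGHT = 3e8  # speed of light, m/s

def erceg_path_loss_db(d, rng, f=1.9e9, hb=50.0, d0=100.0,
                       a=4.0, b=0.0065, c=17.1, sigma_gamma=0.75,
                       mu_sigma=9.6, sigma_sigma=3.0):
    """One random sample of the Erceg path loss (dB) at distance d > d0 metres,
    including the random slope term x*sigma_gamma and the shadow fading
    s = y*(mu_sigma + z*sigma_sigma), as in the equations above."""
    lam = C_LIGHT / f
    x, y, z = rng.standard_normal(3)
    gamma = a - b * hb + c / hb + x * sigma_gamma
    s = y * (mu_sigma + z * sigma_sigma)
    return 20 * np.log10(4 * np.pi * d0 / lam) + 10 * gamma * np.log10(d / d0) + s

rng = np.random.default_rng(1)
print(erceg_path_loss_db(500.0, rng))   # one sample at d = 500 m
```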

References

1. Carleial, A. A case where interference does not reduce capacity (Corresp.). IEEE Trans. Inf. Theory 1975, 21, 569–570.
2. Sato, H. The capacity of the Gaussian interference channel under strong interference (Corresp.). IEEE Trans. Inf. Theory 1981, 27, 786–788.
3. Annapureddy, V.S.; Veeravalli, V.V. Gaussian Interference Networks: Sum Capacity in the Low-Interference Regime and New Outer Bounds on the Capacity Region. IEEE Trans. Inf. Theory 2009, 55, 3032–3050.
4. Shang, X.; Kramer, G.; Chen, B. A New Outer Bound and the Noisy-Interference Sum-Rate Capacity for Gaussian Interference Channels. IEEE Trans. Inf. Theory 2009, 55, 689–699.
5. Shang, X.; Kramer, G.; Chen, B. New outer bounds on the capacity region of Gaussian interference channels. In Proceedings of the 2008 IEEE International Symposium on Information Theory, Toronto, ON, Canada, 6–11 July 2008; pp. 245–249.
6. Motahari, A.S.; Khandani, A.K. Capacity Bounds for the Gaussian Interference Channel. IEEE Trans. Inf. Theory 2009, 55, 620–643.
7. Etkin, R.H.; Tse, D.N.C.; Wang, H. Gaussian Interference Channel Capacity to Within One Bit. IEEE Trans. Inf. Theory 2008, 54, 5534–5562.
8. Han, T.; Kobayashi, K. A new achievable rate region for the interference channel. IEEE Trans. Inf. Theory 1981, 27, 49–60.
9. Shang, X. On the Capacity of Gaussian Interference Channels; Syracuse University: Syracuse, NY, USA, 2008.
10. Tuninetti, D. K-user interference channels: General outer bound and sum-capacity for certain Gaussian channels. In Proceedings of the 2011 IEEE International Symposium on Information Theory Proceedings, Chicago, IL, USA, 31 July–1 August 2011; pp. 1166–1170.
11. Liu, Y.; Erkip, E. On the sum capacity of K-user cascade Gaussian Z-interference channel. In Proceedings of the 2011 IEEE International Symposium on Information Theory Proceedings, Chicago, IL, USA, 31 July–1 August 2011; pp. 1382–1386.
12. Prasad, R.; Bhashyam, S.; Chockalingam, A. On the Gaussian Many-to-One X Channel. IEEE Trans. Inf. Theory 2016, 62, 244–259.
13. Gnanasambandam, A.; Chaluvadi, R.; Bhashyam, S. On the sum capacity of many-to-one and one-to-many Gaussian interference channels. In Proceedings of the 2017 51st Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 29 October–1 November 2017; pp. 1842–1846.
14. Nam, J. Capacity Bounds for the K-User Gaussian Interference Channel. IEEE Trans. Inf. Theory 2017, 63, 6416–6439.
15. Sridharan, S.; Jafarian, A.; Vishwanath, S.; Jafar, S.A. Capacity of Symmetric K-User Gaussian Very Strong Interference Channels. In Proceedings of the IEEE GLOBECOM 2008, New Orleans, LA, USA, 30 November–4 December 2008; pp. 1–5.
16. Farsani, R.K. The K-user interference channel: Strong interference regime. In Proceedings of the 2013 IEEE International Symposium on Information Theory, Istanbul, Turkey, 7–12 July 2013; pp. 2029–2033.
17. Zhu, J.; Gastpar, M. Lattice Codes for Many-to-One Interference Channels With and Without Cognitive Messages. IEEE Trans. Inf. Theory 2015, 61, 1309–1324.
18. Padakandla, A.; Sahebi, A.G.; Pradhan, S.S. An Achievable Rate Region for the Three-User Interference Channel Based on Coset Codes. IEEE Trans. Inf. Theory 2016, 62, 1250–1279.
19. El Gamal, A.; Kim, Y.H. Network Information Theory; Cambridge University Press: Cambridge, UK, 2011.
20. Gou, T.; Jafar, S.A. Capacity of a class of symmetric SIMO Gaussian interference channels within O(1). In Proceedings of the 2009 IEEE International Symposium on Information Theory, Seoul, Korea, 28 June–3 July 2009; pp. 1924–1928.
21. Erceg, V.; Greenstein, L.J.; Tjandra, S.Y.; Parkoff, S.R.; Gupta, A.; Kulic, B.; Julius, A.A.; Bianchi, R. An empirically based path loss model for wireless channels in suburban environments. IEEE J. Sel. Areas Commun. 1999, 17, 1205–1211.
Figure 1. Topology 1 setup where triangles are transmitters and crosses are receivers.
Figure 2. Topology 2 setup where triangles are transmitters and crosses are receivers.
Figure 3. Topology 3 setup where triangles are transmitters and crosses are receivers.
Figure 4. Success probability of conditions (2)–(4) for the TIN scheme, S-HK schemes excluding TIN and all S-HK schemes with topology 1, terrain type 1, K = 3.
Figure 5. Success probability of conditions (2)–(4) for the TIN scheme, S-HK schemes excluding TIN and all S-HK schemes with topology 2, terrain type 1, K = 3.
Figure 6. Success probability of conditions (2)–(4) for the TIN scheme, S-HK schemes excluding TIN and all S-HK schemes with topology 3, terrain type 1, K = 3.
Figure 7. Success probability of conditions (2)–(4) for the TIN scheme, S-HK schemes excluding TIN and all S-HK schemes with topology 1, terrain type 2, K = 3.
Figure 8. Success probability of conditions (2)–(4) for the TIN scheme, S-HK schemes excluding TIN and all S-HK schemes with topology 2, terrain type 2, K = 3.
Figure 9. Success probability of conditions (2)–(4) for the TIN scheme, S-HK schemes excluding TIN and all S-HK schemes with topology 3, terrain type 2, K = 3.
Figure 10. Expected rate of the TIN scheme, expected rate of S-HK schemes given that the conditions (2)–(4) are satisfied and expected rate of the TIN scheme given that the conditions (2)–(4) are satisfied with topology 1, terrain type 1, K = 3.
Figure 11. Expected rate of the TIN scheme, expected rate of S-HK schemes given that the conditions (2)–(4) are satisfied and expected rate of the TIN scheme given that the conditions (2)–(4) are satisfied with topology 2, terrain type 1, K = 3.
Figure 12. Expected rate of the TIN scheme, expected rate of S-HK schemes given that the conditions (2)–(4) are satisfied and expected rate of the TIN scheme given that the conditions (2)–(4) are satisfied with topology 3, terrain type 1, K = 3.
Figure 13. Success probability of conditions (2)–(4) for topology 1, terrain type 1, K = 3.
Figure 14. Success probability of conditions (2)–(4) for topology 2, terrain type 1, K = 3.
Figure 15. Success probability of conditions (2)–(4) for topology 3, terrain type 1, K = 3.
Figure 16. Success probability of achievability conditions (4) and success probability of conditions (2)–(4) for all S-HK schemes except TIN with topology 1, terrain type 1, K = 3 and K = 4.
