Article

Interfering Relay Channels

1 Ericsson Research, Ericsson, 164 40 Stockholm, Sweden
2 School of Electrical Engineering, KTH Royal Institute of Technology, 114 28 Stockholm, Sweden
3 Department of Electrical & Computer Engineering, Tufts University, Medford, MA 02155, USA
* Author to whom correspondence should be addressed.
Entropy 2017, 19(9), 441; https://doi.org/10.3390/e19090441
Submission received: 8 July 2017 / Revised: 9 August 2017 / Accepted: 21 August 2017 / Published: 23 August 2017
(This article belongs to the Special Issue Network Information Theory)

Abstract

This paper introduces and studies a model in which two relay channels interfere with each other. Motivated by practical scenarios in heterogeneous wireless access networks, each relay is assumed to be connected to its intended receiver through a digital link with finite capacity. Inner and outer bounds on the achievable rates are derived and shown to be tight for new discrete memoryless classes, which generalize and unify several known cases involving interference and relay channels. The capacity region and sum capacity for multiple Gaussian scenarios are also characterized to within a constant gap. The results show the optimality or near-optimality of the quantize-bin-and-forward coding scheme for practically relevant relay-interference networks, which brings important engineering insight into the design of wireless communication systems.

1. Introduction

Two of the fundamental building blocks of network information theory are the interference channel (IC) and the relay channel (RC). Although the capacity of each channel is still unknown, the use of relays in interference networks is an active research topic and has long been part of wireless network design, e.g., [1] (Chapter 18). In this paper, we study a specific channel model that captures a particular interaction between the interference channel and the relay channel, and establish the capacity region or approximate capacity region for several classes of these channels.
Consider a network as illustrated in Figure 1. In this network, a relay channel consisting of transmitter $\mathrm{Tx}_1$, relay $\mathrm{R}_1$, and receiver $\mathrm{Rx}_1$ interferes with a neighboring relay channel consisting of transmitter $\mathrm{Tx}_2$, relay $\mathrm{R}_2$, and receiver $\mathrm{Rx}_2$. Specifically, the signal sent by $\mathrm{Tx}_i$ is also received by $\mathrm{R}_j$ and $\mathrm{Rx}_j$, for $i, j \in \{1,2\}$, $i \neq j$. Further, each relay $\mathrm{R}_i$ is connected to its intended receiver $\mathrm{Rx}_i$ via a digital link of finite capacity $C_i$. This network model captures certain key components of heterogeneous wireless access networks, an important part of current and future wireless architectures. For example, in the context of dual connectivity [2], $\mathrm{Tx}_i$, $\mathrm{R}_i$, and $\mathrm{Rx}_i$ can act, respectively, as a mobile terminal, a pico base station, and the macro base station in a cell $i$ neighboring a cell $j$ within a multi-cell cellular network. The pico base station and the macro base station are connected over a dedicated finite-rate backhaul link. In such a heterogeneous network setting, multiple groups of relay channels can be active at the same time within the wireless transmission range, causing interference to each other, e.g., inter-cell interference. As a basis for understanding these effects, we study the specific case of two interfering relay channels. We call the network in Figure 1 the interfering relay channels (IRC) to emphasize the fact that each relay is meant to help only one receiver. The IRC can also serve as an abstraction for many other settings in wireless ad hoc networks, sensor networks, and device-to-device (D2D) communications. The digital link between each relay and its intended receiver models the scenario in which the relay-receiver link is a wireless link operating on a frequency orthogonal to the underlying interference channel (e.g., a microwave link), or the scenario in which the relay-receiver link is a wireline link.

1.1. Related Work

The channel under investigation is clearly a special case of the general interference channel with multiple relays. Each relay can not only forward the intended signal but also forward the unintended signal for the purpose of interference mitigation. The IRC is also an extension of the interference channel with a single relay [3], which has been thoroughly studied in many contexts, e.g., [4,5,6,7,8,9]. The most relevant setting among these references is the interference channel with a degraded broadcasting relay [8]. Specifically, if $\mathrm{R}_1$ coincides with $\mathrm{R}_2$ in Figure 1 we recover the channel studied in [8]. Another closely related problem is the interference channel with limited receiver cooperation in [10], where each receiver also acts as a relay. In particular, in the special case when $\mathrm{R}_i$ coincides with receiver $\mathrm{Rx}_j$, $i, j \in \{1,2\}$, $i \neq j$, we recover the channel in [10]. In subsequent sections we will draw connections between our results and the results in [8,10]. At the same time, it is easy to recognize that the IRC is a generalization of the primitive relay channel studied by Cover and Kim in [11]. A key difference is the presence of interference, which may require new ingredients in the optimal coding strategy. We will later show that an extended version of the scheme proposed in [11] can achieve the capacity region of several types of IRC in which the relationship between the channel output at the relay, the channel input at the transmitter, and the channel output at the receiver is similar to that of the primitive relay channel. We also note that the relays considered in the IRC (as well as in [8,9,11]) belong to the class of in-band reception and out-of-band transmission relays categorized in [5].
Although the capacities of the general interference channel and relay channel are unknown, their practical relevance has motivated several recent studies of fundamental performance bounds for specific settings combining interference and relaying. In particular, for a Gaussian interference channel with a causal relay, outer bounds have been derived for the strong and very strong interference cases [12]. For an interference channel with a relay, a layered quantize-and-forward scheme is shown to achieve a constant gap to the capacity region under certain conditions [13]. New cut-set bounds for causal discrete memoryless relay networks are derived and achieved by a simple amplify-and-forward scheme in a causal vector Gaussian relay channel and the two-way relay channel [14]. For the multi-antenna Gaussian interference channel with a relay, inner and outer bounds are established on the degrees of freedom [15]. A novel relay-aided interference management strategy can achieve the optimal degrees-of-freedom performance even with limited channel knowledge in a massive antenna setting [16]. The degrees of freedom of interference channels with a cognitive relay under delayed feedback are considered in [17]. Lastly, a new cooperative transmit scheme is proposed for the multi-way relay channel, building upon a distributed compute-and-forward strategy [18].

1.2. Summary of Results and Contributions

In this paper, we study several classes of discrete memoryless and Gaussian IRC. The main results are summarized as follows.
  • We propose an inner bound for the capacity region of the discrete memoryless IRC. The coding technique is based on rate splitting at the transmitters and quantize-bin-forward [10] (note that in the relay channel literature, the terms binning and hashing often have the same meaning; see, e.g., [19]) or extended hash-forward [20].
  • We characterize the capacity region of a class of discrete memoryless channels, namely the semi-deterministic IRC which includes several known interference and relay channels as special cases.
  • We derive an outer bound for the Gaussian IRC and use it to show constant gaps to the capacity region or to the sum capacity in multiple scenarios: When the interference is strong, we characterize the capacity region of the real-valued Gaussian IRC to within 1/2 bit. For the Gaussian IRC with high-capacity relay-receiver links, we characterize the capacity region to within $\log(7)$ bits. For other regimes of channel parameters, we show that the inner and outer bounds are within a bounded gap as long as the interference that each transmitter induces at the neighboring relay is not unboundedly stronger than the interference induced at the neighboring receiver. Moreover, for the Gaussian IRC in the so-called weak interference-relay regime, we characterize the sum capacity to within 1 bit.
  • We also study a closely related channel model, the compound multiple access relay channel. We characterize the capacity region of a class of semi-deterministic compound multiple access relay channels, and the capacity region of a class of Gaussian compound multiple access relay channels to within 1/2 bit.
The achievable scheme in this paper is a combination of common-private rate splitting [21,22] and quantize-bin-forward [10]. As noted in Section 1.1, related problems have also been studied by combining rate splitting with generalized hash-forward [8,9] or with noisy network coding [23,24]. By focusing on the particular model of the IRC, and based on the insights gained from proving the inner bound, we are able to prove capacity and approximate capacity results for new types of discrete memoryless and Gaussian channels. For the discrete memoryless channel, the newly established capacity regions in this paper generalize and unify the capacity results of several relay and interference channels. The main technical contribution of the paper is the various outer bounds that help characterize the exact or approximate capacity regions. Further, unlike [8,10], which study only Gaussian channels, we study both discrete memoryless and Gaussian channels. The results of our study show that a “simple” combination of rate splitting with the quantize-bin (or hash)-forward protocol is optimal or nearly optimal for a complex interference network like the IRC. This has an important engineering implication, because such a relaying scheme, which does not require intermediate nodes in the network to decode messages, is highly appealing in real-world systems.

1.3. Organization

The paper is organized as follows: Section 2 presents an achievable rate region for the discrete memoryless IRC. Section 3 presents examples of discrete memoryless channels for which the achievable rate region in Section 2 is the capacity region. Section 4 studies the Gaussian IRC, wherein a general outer bound to the capacity region is derived and used to show the capacity region or sum capacity to within a constant number of bits for different types of channels. This section also characterizes the capacity region of a class of Gaussian compound multiple access relay channels to within a constant number of bits. Conclusions are presented in Section 5, and long proofs are relegated to the appendices.

1.4. Notation and Definition

We follow the notation of [25]. In particular, we use $\mathcal{T}_\epsilon^{(n)}$ to denote the set of $\epsilon$-typical $n$-sequences as defined in [25] (Chapter 2), and use $[1:2^r]$ to denote the set $\{1, 2, \ldots, 2^{\lceil r \rceil}\}$, where $\lceil r \rceil$ is the smallest integer greater than or equal to $r$. We define $C(x) = \frac{1}{2}\log(1+x)$ and $(x)^+ = \max(0, x)$ for a real number $x$. All logarithms in this paper are to base 2. $a := b$ means $a$ equals $b$ by definition. Throughout the paper the random variable $Q$ denotes the time-sharing variable, with a finite support set $\mathcal{Q}$ having cardinality $|\mathcal{Q}|$. We will make use of the notion of the gap between certain achievable rate regions and the capacity regions of different channels, whose definition is given below.
Definition 1 (k-bit gap).
Let $\mathcal{S}_1$ and $\mathcal{S}_2$ denote two sets of nonnegative rate pairs $(R_1, R_2)$. The set $\mathcal{S}_1$ is said to be within $k$ bits of the set $\mathcal{S}_2$ if for any $(R_1, R_2) \in \mathcal{S}_2$ we have $((R_1 - k)^+, (R_2 - k)^+) \in \mathcal{S}_1$.
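For concreteness, these conventions translate directly into code. The following is a minimal sketch we add for illustration (the names `C`, `pos_part`, and `within_k_bits` are ours, and testing a finite sample of rate pairs is a simplification of the definition):

```python
import math

def C(x):
    # C(x) = (1/2) * log2(1 + x), the capacity function of Section 1.4
    return 0.5 * math.log2(1 + x)

def pos_part(x):
    # (x)^+ = max(0, x)
    return max(0.0, x)

def within_k_bits(S1_contains, S2_points, k):
    # Definition 1: S1 is within k bits of S2 if for every (R1, R2) in S2,
    # ((R1 - k)^+, (R2 - k)^+) lies in S1. Here S1_contains is a membership
    # predicate for S1 and S2_points is a finite sample of rate pairs from S2.
    return all(S1_contains(pos_part(R1 - k), pos_part(R2 - k))
               for (R1, R2) in S2_points)
```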

2. Discrete Memoryless IRC and A General Achievability

2.1. Problem Formulation

Refer again to the IRC as depicted in Figure 1. Transmitter $\mathrm{Tx}_i$ wants to send a message $M_i$ to receiver $\mathrm{Rx}_i$, $i \in \{1,2\}$. Relay $\mathrm{R}_i$ helps receiver $\mathrm{Rx}_i$ decode its desired message by sending auxiliary signals via a digital link of finite capacity $C_i$ bits. The channel is memoryless in the sense that
$p(y_{1i}, y_{2i}, y_{r1,i}, y_{r2,i} \mid m_1, m_2, x_1^i, x_2^i) = p_{Y_1, Y_2, Y_{r1}, Y_{r2} \mid X_1, X_2}(y_{1i}, y_{2i}, y_{r1,i}, y_{r2,i} \mid x_{1i}, x_{2i}).$
A $(2^{nR_1}, 2^{nR_2}, n)$ code, where $n$ denotes the codeword length, for the discrete memoryless IRC consists of the following elements, for each $k \in \{1,2\}$:
  • a message set $\mathcal{M}_k = [1:2^{nR_k}]$;
  • an encoding function $f_k: \mathcal{M}_k \to \mathcal{X}_k^n$ that assigns a sequence $x_k^n$ to each message $m_k \in \mathcal{M}_k$;
  • a relaying function $r_k: \mathcal{Y}_{rk}^{i-1} \times [1:2^{(i-1)C_k}] \to [1:2^{C_k}]$ that maps $(y_{rk}^{i-1}, v_k^{i-1})$ to a symbol $v_{ki}$ for all $i \in [1:n]$, where $y_{rk}^{i-1}$ denotes the received sequence at relay $k$ up to and including the $(i-1)$th symbol and $v_{ki}$ denotes the $i$th symbol sent via the digital link of capacity $C_k$;
  • a decoding function $g_k: \mathcal{Y}_k^n \times [1:2^{nC_k}] \to \mathcal{M}_k$ that produces a guess $\hat{m}_k$ from the received sequence $y_k^n$ and an index from the set $[1:2^{nC_k}]$.
The average probability of error is defined as
$P_e^{(n)} := \frac{1}{2^{nR_1} 2^{nR_2}} \sum_{m_1 \in [1:2^{nR_1}],\, m_2 \in [1:2^{nR_2}]} P\{(\hat{m}_1, \hat{m}_2) \neq (m_1, m_2)\}.$
A rate pair $(R_1, R_2)$ is said to be achievable if there exists a sequence of $(2^{nR_1}, 2^{nR_2}, n)$ codes, indexed by $n$, such that $P_e^{(n)} \to 0$ as $n \to \infty$. The capacity region of the network is the closure of the set of achievable rate pairs.

2.2. An Achievable Rate Region for the Discrete Memoryless IRC

Motivation of the achievable scheme: as noted in Section 1, the IRC is related to the primitive relay channel [11], the interference channel with limited receiver cooperation [10], and the interference channel with a degraded broadcasting relay [8]. The hash-forward coding scheme that achieves the capacity of the primitive relay channel [11] under a constraint on the channel input and outputs was generalized to include a quantization stage at the relay in [20] and coined extended hash-forward. Interestingly, in [8,10] a similar relaying scheme combined with rate-splitting encoding at the transmitters, called quantize-bin-forward or generalized hash-forward, was shown to achieve the capacity region of the respective channels to within a constant number of bits. These facts have motivated us to apply a coding strategy based on the quantize-bin-forward scheme to our current problem, which leads to the following inner bound.
Theorem 1 (Inner bound for DM-IRC).
Let us define
$a_1 = I(X_1; Y_1 \mid X_{1c}, X_{2c}, Q)$ (3)
$a_1' = I(X_1; Y_1, \hat{Y}_{r1} \mid X_{1c}, X_{2c}, Q)$ (4)
$b_1 = I(X_1, X_{2c}; Y_1 \mid X_{1c}, Q)$ (5)
$b_1' = I(X_1, X_{2c}; Y_1, \hat{Y}_{r1} \mid X_{1c}, Q)$ (6)
$c_1 = I(X_1; Y_1 \mid X_{2c}, Q)$ (7)
$c_1' = I(X_1; Y_1, \hat{Y}_{r1} \mid X_{2c}, Q)$ (8)
$d_1 = I(X_1, X_{2c}; Y_1 \mid Q)$ (9)
$d_1' = I(X_1, X_{2c}; Y_1, \hat{Y}_{r1} \mid Q)$ (10)
$\xi_1 = I(\hat{Y}_{r1}; Y_{r1} \mid X_1, X_{2c}, Y_1, Q),$ (11)
and define $a_2, a_2', b_2, b_2', c_2, c_2', d_2, d_2', \xi_2$ by exchanging indices $1 \leftrightarrow 2$ everywhere in (3)–(11). $Q$ is drawn from some finite set $\mathcal{Q}$. Let $\mathcal{P}$ denote the collection of joint distributions $P$ of the form
$p(q)\, p(x_1, x_{1c} \mid q)\, p(x_2, x_{2c} \mid q)\, p(\hat{y}_{r1} \mid y_{r1}, q)\, p(\hat{y}_{r2} \mid y_{r2}, q)\, p(y_1, y_2, y_{r1}, y_{r2} \mid x_1, x_2).$ (12)
For a fixed $P \in \mathcal{P}$, let $\mathcal{R}(P)$ denote the set of non-negative rate pairs $(R_1, R_2)$ satisfying
$R_1 \leq \min\{c_1 + (C_1 - \xi_1)^+,\ c_1'\}$ (13)
$R_1 \leq \min\{a_1 + (C_1 - \xi_1)^+,\ a_1'\} + \min\{b_2 + (C_2 - \xi_2)^+,\ b_2'\}$ (14)
$R_2 \leq \min\{c_2 + (C_2 - \xi_2)^+,\ c_2'\}$ (15)
$R_2 \leq \min\{a_2 + (C_2 - \xi_2)^+,\ a_2'\} + \min\{b_1 + (C_1 - \xi_1)^+,\ b_1'\}$ (16)
$R_1 + R_2 \leq \min\{a_1 + (C_1 - \xi_1)^+,\ a_1'\} + \min\{d_2 + (C_2 - \xi_2)^+,\ d_2'\}$ (17)
$R_1 + R_2 \leq \min\{b_1 + (C_1 - \xi_1)^+,\ b_1'\} + \min\{b_2 + (C_2 - \xi_2)^+,\ b_2'\}$ (18)
$R_1 + R_2 \leq \min\{d_1 + (C_1 - \xi_1)^+,\ d_1'\} + \min\{a_2 + (C_2 - \xi_2)^+,\ a_2'\}$ (19)
$2R_1 + R_2 \leq \min\{a_1 + (C_1 - \xi_1)^+,\ a_1'\} + \min\{d_1 + (C_1 - \xi_1)^+,\ d_1'\} + \min\{b_2 + (C_2 - \xi_2)^+,\ b_2'\}$ (20)
$R_1 + 2R_2 \leq \min\{b_1 + (C_1 - \xi_1)^+,\ b_1'\} + \min\{a_2 + (C_2 - \xi_2)^+,\ a_2'\} + \min\{d_2 + (C_2 - \xi_2)^+,\ d_2'\}.$ (21)
The convex hull of the set $\bigcup_{P \in \mathcal{P}} \mathcal{R}(P)$ is an achievable region for the DM-IRC.
Sketch of proof.
(Details are presented in Appendix A.) Consider $i, j \in \{1,2\}$, $i \neq j$. We employ Han–Kobayashi common-private rate splitting at the transmitters and the quantize-bin-forward scheme of [10] at the relays, as follows: Transmitter $i$ splits its message $m_i$ into a common part $m_{ic}$ of rate $R_{ic}$ and a private part $m_{ip}$ of rate $R_{ip}$. A superposition codebook is generated, with codewords $x_{ic}^n(m_{ic})$ and $x_i^n(m_{ic}, m_{ip})$ generated independently with distributions $p_{X_{ic}}$ and $p_{X_i \mid X_{ic}}$, respectively. Relay $i$'s codebook has $2^{n\hat{R}_i}$ codewords $\hat{y}_{ri}^n$, each generated independently according to the marginal distribution $p_{\hat{Y}_{ri}}$. Each codeword in the quantization codebook is assigned to one of $\{1, \ldots, 2^{nC_i}\}$ bins by a uniformly generated random mapping.
Encoding: Transmitter $i$ sends out a codeword $x_i^n$ corresponding to its message index. Relay $i$ quantizes its received sequence $y_{ri}^n$ by choosing a quantization codeword $\hat{y}_{ri}^n$ jointly typical with $y_{ri}^n$, and then sends the corresponding bin index to receiver $i$ via the digital link of rate $C_i$.
Decoding: Receiver $i$ finds a unique message pair $(m_{ic}, m_{ip})$ such that the sequences $x_{jc}^n(m_{jc})$, $x_{ic}^n(m_{ic})$, $x_i^n(m_{ic}, m_{ip})$, $\hat{y}_{ri}^n$ ($i \neq j$), and $y_i^n$ are jointly typical for some $m_{jc} \in \{1, \ldots, 2^{nR_{jc}}\}$ and some $\hat{y}_{ri}^n$ whose bin index matches the index that receiver $i$ received from relay $i$.
Error analysis: the error analysis follows standard techniques; see, e.g., [10]. We obtain the following constraints on the partial rates so that the error probability vanishes as $n \to \infty$:
$R_{1p} \leq \min\{a_1 + (C_1 - \xi_1)^+,\ a_1'\}$ (22)
$R_{1p} + R_{2c} \leq \min\{b_1 + (C_1 - \xi_1)^+,\ b_1'\}$ (23)
$R_{1p} + R_{1c} \leq \min\{c_1 + (C_1 - \xi_1)^+,\ c_1'\}$ (24)
$R_{1p} + R_{1c} + R_{2c} \leq \min\{d_1 + (C_1 - \xi_1)^+,\ d_1'\}$ (25)
$R_{2p} \leq \min\{a_2 + (C_2 - \xi_2)^+,\ a_2'\}$ (26)
$R_{2p} + R_{1c} \leq \min\{b_2 + (C_2 - \xi_2)^+,\ b_2'\}$ (27)
$R_{2p} + R_{2c} \leq \min\{c_2 + (C_2 - \xi_2)^+,\ c_2'\}$ (28)
$R_{2p} + R_{2c} + R_{1c} \leq \min\{d_2 + (C_2 - \xi_2)^+,\ d_2'\}.$ (29)
Applying Fourier–Motzkin elimination to the above constraints and removing redundant inequalities we obtain (13)–(21). ☐
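To make the structure of the resulting region concrete, the following sketch (our own illustration; representing each constraint as a coefficient triple is a presentational choice) evaluates the right-hand sides of (13)–(21) from numerical values of the quantities in (3)–(11):

```python
def irc_inner_bound(a, ap, b, bp, c, cp, d, dp, xi, C):
    """Evaluate the right-hand sides of (13)-(21) for the DM-IRC inner bound.

    a, ap, ..., xi are dicts {1: ..., 2: ...} holding the quantities
    a_i, a_i', ..., xi_i of (3)-(11), computed elsewhere for a fixed input
    distribution; C holds the digital link capacities C_1, C_2. Returns the
    constraints as (coeff_R1, coeff_R2, bound) triples.
    """
    def m(q, qp, i):  # min{ q_i + (C_i - xi_i)^+ , q_i' }
        return min(q[i] + max(0.0, C[i] - xi[i]), qp[i])

    return [
        (1, 0, m(c, cp, 1)),                               # (13)
        (1, 0, m(a, ap, 1) + m(b, bp, 2)),                 # (14)
        (0, 1, m(c, cp, 2)),                               # (15)
        (0, 1, m(a, ap, 2) + m(b, bp, 1)),                 # (16)
        (1, 1, m(a, ap, 1) + m(d, dp, 2)),                 # (17)
        (1, 1, m(b, bp, 1) + m(b, bp, 2)),                 # (18)
        (1, 1, m(d, dp, 1) + m(a, ap, 2)),                 # (19)
        (2, 1, m(a, ap, 1) + m(d, dp, 1) + m(b, bp, 2)),   # (20)
        (1, 2, m(b, bp, 1) + m(a, ap, 2) + m(d, dp, 2)),   # (21)
    ]
```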
Remark 1.
Following the same techniques as [23,24], one can show that the above rate region can also be achieved by noisy network coding [26], which does not use explicit binning, and by generalized hash-forward [27]. However, for the channel under consideration, which has digital links, we find it easier to gain insight into the meaning of each term in the rate constraints by employing the quantize-bin-forward strategy; see Remark 2 below. Such insights are useful in establishing approximate capacity results, as will be shown in Section 4.
Remark 2.
Observe that each constraint in (22)–(29) is the minimum of two terms. The second term corresponds to the case when $C_i$, $i \in \{1,2\}$, is large enough to convey the quantization $\hat{y}_{ri}^n$ correctly to receiver $i$. The first term corresponds to the case when the limited $C_i$ allows receiver $i$ to identify only a list of candidates for $\hat{y}_{ri}^n$. In this case, $\xi_i$ plays the role of a “rate loss”, similar to $\xi_i$ in [10], whose value depends on the quantization distortion at relay $\mathrm{R}_i$.
Remark 3.
If we replace $Y_{ri}$ (resp. $\hat{Y}_{ri}$) in (3)–(11) by $Y_j$ (resp. $\hat{Y}_j$), for $i, j \in \{1,2\}$, $i \neq j$, and plug them into (22)–(29), we recover the achievable rates for the interference channel with limited receiver cooperation [10] (one-round conferencing). On the other hand, by setting $\hat{Y}_{r1} = \hat{Y}_{r2} = \hat{Y}_r$ in (3)–(11) (and symmetrically for $a_2, a_2', b_2, b_2', c_2, c_2', d_2, d_2', \xi_2$) we recover the achievable rate region for the IC with one degraded broadcasting relay in [8].

3. Capacity Region of Some Classes of Discrete Memoryless IRCs

In this section, we give some examples of discrete memoryless IRCs for which the inner bound in Theorem 1 is the capacity region. These channels share the feature that the output at relay $\mathrm{R}_i$ is a function of the input at transmitter $\mathrm{Tx}_i$ and the output at receiver $\mathrm{Rx}_i$. To the best of our knowledge, these are the first known capacity regions of interference channels with two relays. The capacity results in this section generalize and unify the capacity regions of a class of semi-deterministic relay channels [11,28], the strong interference channel [29], and a class of deterministic interference channels [30].

3.1. Semi-Deterministic IRC

Consider the semi-deterministic discrete memoryless IRC as depicted in Figure 2, which has the following properties:
  • $(i)$: There exist deterministic functions $t_1(\cdot)$, $t_2(\cdot)$, $f_1(\cdot,\cdot)$, $f_2(\cdot,\cdot)$ (not necessarily invertible) such that:
    $(i.a)$: $T_1 = t_1(X_1)$, $T_2 = t_2(X_2)$,
    $(i.b)$: $Y_{r1} = f_1(X_1, Y_1)$, $Y_{r2} = f_2(X_2, Y_2)$.
  • $(ii)$: $Y_1$ depends only on $X_1$ and $T_2$, via the conditional distribution $p(y_1 \mid x_1, t_2)$. Similarly for $Y_2$ and $X_2, T_1$.
  • $(iii)$: For all positive integers $N$ and all distributions of the form $p(w)\, p(x_1^N \mid w)\, p(x_2^N \mid w)$, the following conditions hold:
    $I(T_1^N; Y_2^N, Y_{r2}^N \mid X_2^N, W) \geq I(T_1^N; Y_1^N, Y_{r1}^N \mid X_2^N, W)$ (30a)
    $I(T_2^N; Y_1^N, Y_{r1}^N \mid X_1^N, W) \geq I(T_2^N; Y_2^N, Y_{r2}^N \mid X_1^N, W).$ (30b)
The main result of this section is in the following theorem.
Theorem 2.
The capacity region of the semi-deterministic IRC is the union of the sets of non-negative rate pairs $(R_1, R_2)$ satisfying
$R_1 \leq \min\{I(X_1; Y_1 \mid T_2, Q) + C_1,\ I(X_1; Y_1, Y_{r1} \mid T_2, Q)\}$
$R_2 \leq \min\{I(X_2; Y_2 \mid T_1, Q) + C_2,\ I(X_2; Y_2, Y_{r2} \mid T_1, Q)\}$
$R_1 + R_2 \leq \min\{I(X_1; Y_1 \mid T_1, T_2, Q) + C_1,\ I(X_1; Y_1, Y_{r1} \mid T_1, T_2, Q)\} + \min\{I(X_2, T_1; Y_2 \mid Q) + C_2,\ I(X_2, T_1; Y_2, Y_{r2} \mid Q)\}$
$R_1 + R_2 \leq \min\{I(X_1, T_2; Y_1 \mid T_1, Q) + C_1,\ I(X_1, T_2; Y_1, Y_{r1} \mid T_1, Q)\} + \min\{I(X_2, T_1; Y_2 \mid T_2, Q) + C_2,\ I(X_2, T_1; Y_2, Y_{r2} \mid T_2, Q)\}$
$R_1 + R_2 \leq \min\{I(X_1, T_2; Y_1 \mid Q) + C_1,\ I(X_1, T_2; Y_1, Y_{r1} \mid Q)\} + \min\{I(X_2; Y_2 \mid T_1, T_2, Q) + C_2,\ I(X_2; Y_2, Y_{r2} \mid T_1, T_2, Q)\}$
$2R_1 + R_2 \leq \min\{I(X_1; Y_1 \mid T_1, T_2, Q) + C_1,\ I(X_1; Y_1, Y_{r1} \mid T_1, T_2, Q)\} + \min\{I(X_1, T_2; Y_1 \mid Q) + C_1,\ I(X_1, T_2; Y_1, Y_{r1} \mid Q)\} + \min\{I(X_2, T_1; Y_2 \mid T_2, Q) + C_2,\ I(X_2, T_1; Y_2, Y_{r2} \mid T_2, Q)\}$
$R_1 + 2R_2 \leq \min\{I(X_1, T_2; Y_1 \mid T_1, Q) + C_1,\ I(X_1, T_2; Y_1, Y_{r1} \mid T_1, Q)\} + \min\{I(X_2; Y_2 \mid T_1, T_2, Q) + C_2,\ I(X_2; Y_2, Y_{r2} \mid T_1, T_2, Q)\} + \min\{I(X_2, T_1; Y_2 \mid Q) + C_2,\ I(X_2, T_1; Y_2, Y_{r2} \mid Q)\}$
over all probability distributions of the form $p(q)\, p(x_1 \mid q)\, p(x_2 \mid q)$, with $|\mathcal{Q}| \leq 16$.
Proof. 
The achievability and converse proofs are deferred to Appendix B. ☐
Remark 4.
Property $(iii)$ in the definition of the semi-deterministic IRC resembles a property of the semi-deterministic interference channel with common information [31]. One can interpret (30a, 30b) as a partially strong interference condition: knowing $X_k^N$, receiver $k$ together with relay $k$ can deduce more information about one part of the message sent by transmitter $l$ (conveyed by $T_l^N$) than receiver $l$ and relay $l$ can, $k, l \in \{1,2\}$, $k \neq l$. In Appendix B we show that choosing $T_k$ to represent the common message of transmitter $k$ is capacity achieving.
Remark 5.
We can easily obtain the capacity region of the semi-deterministic IRC when only one transmitter causes interference to its unintended relay and receiver, simply by setting $T_1 = 0$ or $T_2 = 0$ in Theorem 2. In the more extreme case when there is no interference in the channel, namely $T_1 = T_2 = 0$, the capacity region reduces to that of two parallel deterministic relay channels of the type in [11]:
$R_1 \leq \min\{I(X_1; Y_1) + C_1,\ I(X_1; Y_1, Y_{r1})\}$
$R_2 \leq \min\{I(X_2; Y_2) + C_2,\ I(X_2; Y_2, Y_{r2})\}.$
This is because in this case we are employing the capacity-achieving hash-forward coding technique of [11] in two separate channels.
To illustrate further the connection of the semi-deterministic IRC to previously known channels we will focus on two special cases in the sequel.

3.1.1. Semi-Deterministic IRC with Strong Interference

We observe that in the case when $T_k^N = X_k^N$, $k \in \{1,2\}$, the condition (30a, 30b) is an immediate generalization of the strong interference condition of [29]. As such, let us consider a discrete memoryless IRC which satisfies the following conditions:
  • $(c1.)$ $Y_{r1} = f_1(X_1, Y_1)$ and $Y_{r2} = f_2(X_2, Y_2)$ for some functions $f_1(\cdot,\cdot)$ and $f_2(\cdot,\cdot)$;
  • $(c2.)$ $I(X_1; Y_1, Y_{r1} \mid X_2) \leq I(X_1; Y_2, Y_{r2} \mid X_2)$ and $I(X_2; Y_2, Y_{r2} \mid X_1) \leq I(X_2; Y_1, Y_{r1} \mid X_1)$ for all product distributions on $\mathcal{X}_1 \times \mathcal{X}_2$. (Note that due to $(c1.)$ we can rewrite $(c2.)$ as $I(X_1; Y_1, Y_{r1} \mid X_2) \leq I(X_1; Y_2 \mid X_2)$ and $I(X_2; Y_2, Y_{r2} \mid X_1) \leq I(X_2; Y_1 \mid X_1)$.)
Note that, by the lemma in [29], the second condition $(c2.)$ above implies
$I(X_1^n; Y_1^n, Y_{r1}^n \mid X_2^n) \leq I(X_1^n; Y_2^n, Y_{r2}^n \mid X_2^n)$
$I(X_2^n; Y_2^n, Y_{r2}^n \mid X_1^n) \leq I(X_2^n; Y_1^n, Y_{r1}^n \mid X_1^n).$
Consequently, it can be readily verified that the channel under consideration belongs to the class of semi-deterministic IRCs. Specifically, the former channel is the latter channel with $T_1 = X_1$ and $T_2 = X_2$. Therefore we call the channel under consideration the semi-deterministic IRC under strong interference. Naturally, the capacity region of this channel can be deduced from Theorem 2, as follows.
Corollary 1.
The capacity region of the semi-deterministic strong IRC is formed by the union of the sets of nonnegative rate pairs $(R_1, R_2)$ satisfying
$R_1 \leq \min\{I(X_1; Y_1 \mid X_2, Q) + C_1,\ I(X_1; Y_1, Y_{r1} \mid X_2, Q)\}$
$R_2 \leq \min\{I(X_2; Y_2 \mid X_1, Q) + C_2,\ I(X_2; Y_2, Y_{r2} \mid X_1, Q)\}$
$R_1 + R_2 \leq \min\{I(X_1, X_2; Y_1 \mid Q) + C_1,\ I(X_1, X_2; Y_1, Y_{r1} \mid Q)\}$
$R_1 + R_2 \leq \min\{I(X_1, X_2; Y_2 \mid Q) + C_2,\ I(X_1, X_2; Y_2, Y_{r2} \mid Q)\},$
over all probability distributions of the form $p(q)\, p(x_1 \mid q)\, p(x_2 \mid q)$, with $|\mathcal{Q}| \leq 8$.
Obviously, in the special case when there is no relay, Corollary 1 reduces to the capacity region of the interference channel with strong interference [29].

3.1.2. Deterministic IRC

Consider a discrete memoryless IRC which has the following properties:
  • $(c1.)$: There exist deterministic functions $t_1(\cdot)$, $t_2(\cdot)$, $f_1(\cdot,\cdot)$, $f_2(\cdot,\cdot)$ (not necessarily invertible) such that:
    $(c1.a)$: $T_1 = t_1(X_1)$, $T_2 = t_2(X_2)$,
    $(c1.b)$: $Y_{r1} = f_1(X_1, Y_1)$, $Y_{r2} = f_2(X_2, Y_2)$.
  • $(c2.)$: $Y_1$ depends only on $X_1$ and $T_2$, via a deterministic function $y_1(X_1, T_2)$. Similarly, $Y_2 = y_2(X_2, T_1)$. Moreover, there exist functions $k_1(\cdot,\cdot)$ and $k_2(\cdot,\cdot)$ such that $T_2 = k_1(X_1, Y_1)$ and $T_1 = k_2(X_2, Y_2)$. In other words, the function $y_i(\cdot,\cdot)$ is invertible given $X_i$, $i = 1, 2$.
We name this channel the deterministic IRC of El Gamal–Costa type, depicted in Figure 3. It can be verified that the deterministic IRC satisfies all the properties of the semi-deterministic IRC in Figure 2. In particular, property $(iii)$ of the semi-deterministic IRC is satisfied because $T_2$ is a function of $X_1$ and $Y_1$ (and $T_1$ is a function of $X_2$ and $Y_2$). As a result, the capacity region of the deterministic channel can be deduced from Theorem 2. What is more, due to the existence of $k_1(\cdot,\cdot)$ and $k_2(\cdot,\cdot)$, the capacity region of the deterministic IRC remains unchanged if we replace the condition $y_{r1} = f_1(x_1, y_1)$ (respectively $y_{r2} = f_2(x_2, y_2)$) by $y_{r1} = f_1'(x_1, t_2)$ (respectively $y_{r2} = f_2'(x_2, t_1)$), for some functions $f_1'(\cdot,\cdot)$, $f_2'(\cdot,\cdot)$. It is also easy to recognize that the deterministic IRC is in turn a generalization of the El Gamal–Costa deterministic interference channel [30]. In this sense the deterministic IRC includes the channel studied in [32], in which the two relays coincide, as a special case. A toy instance of this channel class is sketched in code below.
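To make the class concrete, here is a toy binary instance we construct purely for illustration (our own example, not one from the paper): all maps are deterministic, $y_i$ is invertible given $X_i$, and $Y_{ri}$ is a function of $(X_i, Y_i)$.

```python
# A toy binary deterministic IRC of El Gamal-Costa type (our own illustrative
# instance). All alphabets are {0, 1} and the channel maps are mod-2 sums,
# so every required function t_i, y_i, k_i, f_i is explicit.
def t1(x1): return x1            # T1 = t1(X1): the part of X1 seen at Rx2
def t2(x2): return x2            # T2 = t2(X2)

def y1(x1, t2_): return (x1 + t2_) % 2   # Y1 = y1(X1, T2)
def y2(x2, t1_): return (x2 + t1_) % 2   # Y2 = y2(X2, T1)

def k1(x1, y1_): return (y1_ - x1) % 2   # T2 = k1(X1, Y1): y1 invertible given X1
def k2(x2, y2_): return (y2_ - x2) % 2   # T1 = k2(X2, Y2)

def f1(x1, y1_): return (x1 * y1_) % 2   # Yr1 = f1(X1, Y1): relay 1's observation
def f2(x2, y2_): return (x2 * y2_) % 2   # Yr2 = f2(X2, Y2)

# Check property (c2.): T2 is recoverable from (X1, Y1) for every input pair.
assert all(k1(x1, y1(x1, t2(x2))) == t2(x2)
           for x1 in (0, 1) for x2 in (0, 1))
```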

3.2. Compound Semi-Deterministic Multiple Access Relay Channel

Note that in the proof of Theorem 1 each receiver can decode (non-uniquely) the common message sent by the interfering transmitter. This motivates us to study a kindred channel which has the same physical setting as in Figure 1, but in which each receiver is required to decode both messages from the transmitters uniquely. We name this new channel setup the compound semi-deterministic multiple access relay channel (CS-MARC). We further assume that the relations $Y_{r1} = f_1(X_1, Y_1)$ and $Y_{r2} = f_2(X_2, Y_2)$ hold for some deterministic functions $f_1(\cdot,\cdot)$ and $f_2(\cdot,\cdot)$, similar to a property of the semi-deterministic channels studied in Section 3.1. It can be shown that the achievability in Theorem 1, adapted to the special case of no private messages, is capacity achieving for the CS-MARC. Moreover, by removing either $Y_1$ and $Y_{r1}$ or $Y_2$ and $Y_{r2}$ in the CS-MARC one can recover the capacity region of a class of MARCs with orthogonal receive components established in [33]. Details are omitted for brevity.

4. Gaussian Interfering Relay Channels

The focus of this section is on the Gaussian IRC. We aim at giving some non-trivial examples where the inner bound in Theorem 1 can characterize the capacity region or sum capacity to within a bounded number of bits. The key to this task is deriving tight outer bounds and tuning the right parameters affecting the inner bound.
Consider the Gaussian IRC as depicted in Figure 4. The received signals at receiver $\mathrm{Rx}_i$ (denoted by $Y_i$) and at relay $\mathrm{R}_i$ (denoted by $Y_{ri}$), $i \in \{1,2\}$, are given by
$Y_1 = h_{11} X_1 + h_{21} X_2 + Z_1$ (46)
$Y_2 = h_{12} X_1 + h_{22} X_2 + Z_2$ (47)
$Y_{r1} = h_{1r} X_1 + h_{2c} X_2 + Z_{r1}$ (48)
$Y_{r2} = h_{1c} X_1 + h_{2r} X_2 + Z_{r2}.$ (49)
The noise processes $Z_i$ are independent and identically distributed $\sim \mathcal{N}(0,1)$, $i \in \{1, 2, r1, r2\}$. Without loss of generality we assume an average unit power constraint at each transmitter, namely $\frac{1}{n}\sum_{i=1}^{n} |x_i|^2 \leq 1$, where $n$ is the block length. For simplicity we only consider real-valued channel gains and input/output symbols, and assume that the channel gains are known to all nodes in the network. For the convenience of notation, we define the following quantities, which will be used extensively hereafter:
$\mathrm{SNR}_1 = h_{11}^2$, $\mathrm{INR}_1 = h_{21}^2$, $\mathrm{SNR}_{r1} = h_{1r}^2$, $\mathrm{INR}_{r1} = h_{2c}^2$,
$\mathrm{SNR}_2 = h_{22}^2$, $\mathrm{INR}_2 = h_{12}^2$, $\mathrm{SNR}_{r2} = h_{2r}^2$, $\mathrm{INR}_{r2} = h_{1c}^2$,
$\eta_1^2 = \left(1 - \frac{h_{1r} h_{21}}{h_{2c} h_{11}}\right)^2$, $\eta_2^2 = \left(1 - \frac{h_{2r} h_{12}}{h_{1c} h_{22}}\right)^2$.
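These quantities map directly to code. The following small helper is a sketch with our own naming; the grouping inside $\eta_i^2$ follows our reconstruction of the formula and should be checked against the published version (nonzero cross gains are assumed):

```python
def channel_params(h11, h22, h12, h21, h1r, h2r, h1c, h2c):
    # Translate the channel gains of (46)-(49) into the SNR/INR quantities
    # defined above (unit transmit power and unit noise variance assumed).
    p = {
        "SNR1": h11**2, "INR1": h21**2, "SNRr1": h1r**2, "INRr1": h2c**2,
        "SNR2": h22**2, "INR2": h12**2, "SNRr2": h2r**2, "INRr2": h1c**2,
    }
    # eta_i^2 as reconstructed above; requires h2c, h1c, h11, h22 != 0.
    p["eta1_sq"] = (1 - (h1r * h21) / (h2c * h11)) ** 2
    p["eta2_sq"] = (1 - (h2r * h12) / (h1c * h22)) ** 2
    return p
```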
In the following, we first derive an outer bound to the capacity region, then adapt the inner bound for the discrete memoryless IRC in Theorem 1 to the Gaussian channel, and finally quantify the gaps between the bounds. Connections to previously known results are drawn where appropriate.

4.1. An Outer Bound to the Capacity Region

An outer bound to the capacity region of the Gaussian IRC is stated below.
Theorem 3.
The following set of non-negative rate pairs $(R_1, R_2)$ forms an outer bound to the capacity region of the Gaussian IRC:
$R_1 \leq \min\{C(\mathrm{SNR}_1) + C_1,\ C(\mathrm{SNR}_1 + \mathrm{SNR}_{r1})\}$ (50)
$R_2 \leq \min\{C(\mathrm{SNR}_2) + C_2,\ C(\mathrm{SNR}_2 + \mathrm{SNR}_{r2})\}$ (51)
$R_1 + R_2 \leq C(\mathrm{SNR}_1 + \mathrm{INR}_1) + C\!\left(\frac{\mathrm{SNR}_2}{1 + \mathrm{INR}_1}\right) + C_1 + C_2$ (52)
$R_1 + R_2 \leq C(\mathrm{SNR}_2 + \mathrm{INR}_2) + C\!\left(\frac{\mathrm{SNR}_1}{1 + \mathrm{INR}_2}\right) + C_1 + C_2$ (53)
$R_1 + R_2 \leq C\!\left(\mathrm{INR}_1 + \frac{\mathrm{SNR}_1}{1 + \mathrm{INR}_2}\right) + C\!\left(\mathrm{INR}_2 + \frac{\mathrm{SNR}_2}{1 + \mathrm{INR}_1}\right) + C_1 + C_2$ (54)
$2R_1 + R_2 \leq C(\mathrm{SNR}_1 + \mathrm{INR}_1) + C\!\left(\mathrm{INR}_2 + \frac{\mathrm{SNR}_2}{1 + \mathrm{INR}_1}\right) + C\!\left(\frac{\mathrm{SNR}_1}{1 + \mathrm{INR}_2}\right) + 2C_1 + C_2$ (55)
$R_1 + R_2 \leq C\!\left(\frac{\mathrm{SNR}_1 + \mathrm{SNR}_{r1}}{1 + \mathrm{INR}_2 + \mathrm{INR}_{r2}}\right) + C\!\left(\mathrm{INR}_2 + \mathrm{SNR}_{r2} + \mathrm{INR}_{r2} + \mathrm{SNR}_2(1 + \eta_2^2 \mathrm{INR}_{r2})\right)$ (56)
$R_1 + R_2 \leq C\!\left(\frac{\mathrm{SNR}_2 + \mathrm{SNR}_{r2}}{1 + \mathrm{INR}_1 + \mathrm{INR}_{r1}}\right) + C\!\left(\mathrm{INR}_1 + \mathrm{SNR}_{r1} + \mathrm{INR}_{r1} + \mathrm{SNR}_1(1 + \eta_1^2 \mathrm{INR}_{r1})\right)$ (57)
$R_1 + R_2 \leq C\!\left(\mathrm{INR}_1 + \mathrm{INR}_{r1} + \frac{\mathrm{SNR}_{r1} + \mathrm{SNR}_1(1 + \eta_1^2 \mathrm{INR}_{r1})}{1 + \mathrm{INR}_2 + \mathrm{INR}_{r2}}\right) + C\!\left(\mathrm{INR}_2 + \mathrm{INR}_{r2} + \frac{\mathrm{SNR}_{r2} + \mathrm{SNR}_2(1 + \eta_2^2 \mathrm{INR}_{r2})}{1 + \mathrm{INR}_1 + \mathrm{INR}_{r1}}\right)$ (58)
$2R_1 + R_2 \leq C\!\left(\frac{\mathrm{SNR}_1 + \mathrm{SNR}_{r1}}{1 + \mathrm{INR}_2 + \mathrm{INR}_{r2}}\right) + C\!\left(\mathrm{INR}_2 + \mathrm{INR}_{r2} + \frac{\mathrm{SNR}_{r2} + \mathrm{SNR}_2(1 + \eta_2^2 \mathrm{INR}_{r2})}{1 + \mathrm{INR}_1 + \mathrm{INR}_{r1}}\right) + C\!\left(\mathrm{INR}_1 + \mathrm{SNR}_{r1} + \mathrm{INR}_{r1} + \mathrm{SNR}_1(1 + \eta_1^2 \mathrm{INR}_{r1})\right)$ (59)
$R_1 + R_2 \leq C\!\left(\frac{\mathrm{SNR}_1}{1 + \mathrm{INR}_2 + \mathrm{INR}_{r2}}\right) + C\!\left(\mathrm{SNR}_2(1 + \eta_2^2 \mathrm{INR}_{r2}) + \mathrm{SNR}_{r2} + \mathrm{INR}_2 + \mathrm{INR}_{r2}\right) + C_1$ (60)
$R_1 + R_2 \leq C\!\left(\frac{\mathrm{SNR}_2 + \mathrm{SNR}_{r2}}{1 + \mathrm{INR}_1}\right) + C(\mathrm{SNR}_1 + \mathrm{INR}_1) + C_1$ (61)
$R_1 + R_2 \leq C\!\left(\mathrm{INR}_1 + \frac{\mathrm{SNR}_1}{1 + \mathrm{INR}_2 + \mathrm{INR}_{r2}}\right) + C\!\left(\mathrm{INR}_2 + \mathrm{INR}_{r2} + \frac{\mathrm{SNR}_2(1 + \eta_2^2 \mathrm{INR}_{r2}) + \mathrm{SNR}_{r2}}{1 + \mathrm{INR}_1}\right) + C_1$ (62)
$2R_1 + R_2 \leq C(\mathrm{SNR}_1 + \mathrm{INR}_1) + C\!\left(\frac{\mathrm{SNR}_1}{1 + \mathrm{INR}_2 + \mathrm{INR}_{r2}}\right) + C\!\left(\mathrm{INR}_2 + \mathrm{INR}_{r2} + \frac{\mathrm{SNR}_2(1 + \eta_2^2 \mathrm{INR}_{r2}) + \mathrm{SNR}_{r2}}{1 + \mathrm{INR}_1}\right) + 2C_1$ (63)
$R_1 + R_2 \leq C\!\left(\frac{\mathrm{SNR}_2}{1 + \mathrm{INR}_1 + \mathrm{INR}_{r1}}\right) + C\!\left(\mathrm{SNR}_1(1 + \eta_1^2 \mathrm{INR}_{r1}) + \mathrm{SNR}_{r1} + \mathrm{INR}_1 + \mathrm{INR}_{r1}\right) + C_2$ (64)
$R_1 + R_2 \leq C(\mathrm{SNR}_2 + \mathrm{INR}_2) + C\!\left(\frac{\mathrm{SNR}_1 + \mathrm{SNR}_{r1}}{1 + \mathrm{INR}_2}\right) + C_2$ (65)
$R_1 + R_2 \leq C\!\left(\mathrm{INR}_2 + \frac{\mathrm{SNR}_2}{1 + \mathrm{INR}_1 + \mathrm{INR}_{r1}}\right) + C\!\left(\mathrm{INR}_1 + \mathrm{INR}_{r1} + \frac{\mathrm{SNR}_1(1 + \eta_1^2 \mathrm{INR}_{r1}) + \mathrm{SNR}_{r1}}{1 + \mathrm{INR}_2}\right) + C_2$ (66)
$2R_1 + R_2 \leq C\!\left(\mathrm{INR}_2 + \frac{\mathrm{SNR}_2}{1 + \mathrm{INR}_1 + \mathrm{INR}_{r1}}\right) + C\!\left(\mathrm{SNR}_1(1 + \eta_1^2 \mathrm{INR}_{r1}) + \mathrm{SNR}_{r1} + \mathrm{INR}_1 + \mathrm{INR}_{r1}\right) + C\!\left(\frac{\mathrm{SNR}_1 + \mathrm{SNR}_{r1}}{1 + \mathrm{INR}_2}\right) + C_2$ (67)
$2R_1 + R_2 \leq C\!\left(\frac{\mathrm{SNR}_1 + \mathrm{SNR}_{r1}}{1 + \mathrm{INR}_2}\right) + C(\mathrm{SNR}_1 + \mathrm{INR}_1) + C\!\left(\mathrm{INR}_2 + \frac{\mathrm{SNR}_2}{1 + \mathrm{INR}_1}\right) + C_1 + C_2$ (68)
$2R_1 + R_2 \leq C\!\left(\frac{\mathrm{SNR}_1 + \mathrm{SNR}_{r1}}{1 + \mathrm{INR}_2}\right) + C(\mathrm{SNR}_1 + \mathrm{INR}_1) + C\!\left(\mathrm{INR}_2 + \mathrm{INR}_{r2} + \frac{\mathrm{SNR}_2(1 + \eta_2^2 \mathrm{INR}_{r2}) + \mathrm{SNR}_{r2}}{1 + \mathrm{INR}_1}\right) + C_1,$ (69)
together with six constraints on $R_1 + 2R_2$ obtained by exchanging indices $1 \leftrightarrow 2$ in the six constraints on $2R_1 + R_2$.
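As a small numerical aid (ours, not part of the paper), the two cut-set constraints (50) and (51) can be evaluated directly with the capacity function $C(\cdot)$ from Section 1.4:

```python
import math

def C(x):
    return 0.5 * math.log2(1 + x)

def cutset_individual_bounds(p, C1, C2):
    # Group 1 of Theorem 3: the cut-set bounds (50)-(51) on R1 and R2.
    # p is a dict of SNR/INR values such as the one built in Section 4.
    R1_max = min(C(p["SNR1"]) + C1, C(p["SNR1"] + p["SNRr1"]))
    R2_max = min(C(p["SNR2"]) + C2, C(p["SNR2"] + p["SNRr2"]))
    return R1_max, R2_max
```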
Remark 6.
By setting $\mathrm{INR}_{r1} = \mathrm{SNR}_{r2}$ and $\mathrm{INR}_{r2} = \mathrm{SNR}_{r1}$, which, in the SNR and INR sense, is similar to letting the two relays in Figure 4 coincide, the above outer bound reduces to the outer bound for the IC with one degraded broadcasting relay in [8].
Sketch of proof of the outer bound.
(The detailed proof is given in Appendix C.) The bounds are proven by first giving different types of side information to the decoders (genie-aided bounds). We then leverage the outer bounds for the multiple input multiple output interference channel (MIMO IC) established by Telatar and Tse in [34], and the outer bound for the Gaussian MIMO IC [35] (which is equivalent to [34] for the current channel), to simplify the derivation. In fact, the outer bound in Theorem 3 is the intersection of 7 regions, each of which corresponds to a group of constraints whose interpretations and proof outlines are given below.
  • Group 1: The bounds on the individual rates in (50) and (51) are cut-set bounds.
  • Group 2: The bounds in (52)–(55) are obtained by first upper bounding the sum capacity gain due to the relays by $C_1 + C_2$. The remaining part is derived by optimizing the bounds established in [34] or directly applying [35] (Lemma 1).
  • Group 3: The bounds in (56)–(59) are genie-aided bounds. First we give $Y_{ri}^n$ to decoder $i$, $i \in \{1,2\}$, to turn the channel into a single input multiple output (SIMO) IC with two antennas at each receiver. Then we apply the bounds for the MIMO IC in [34,35].
  • Group 4: The bounds in (60)–(63) are genie-aided bounds. We upper bound the sum capacity gain due to relay $\mathrm{R}_1$ by $C_1$ and give $Y_{r2}^n$ to decoder 2 to turn the channel into a SIMO IC. Then we apply the bounds for the MIMO IC in [34,35].
  • Group 5: The bounds in (64)–(67) are derived similarly to the bounds in Group 4, by symmetry.
  • Group 6: The bound in (68): A genie gives $(Y_{r1}^n, S_1^n, X_2^n)$ to one of the two copies of decoder 1, and gives $S_2^n$ to decoder 2, where $S_1^n = h_{12} X_1^n + Z_2^n$ and $S_2^n = h_{21} X_2^n + Z_1^n$. We also bound the sum capacity gain due to the relays by $C_1 + C_2$. The rest follows from the facts that conditioning does not increase entropy, the chain rule, and that the Gaussian distribution maximizes differential entropy and conditional differential entropy subject to a covariance constraint.
  • Group 7: The bound in (69): A genie gives $(Y_{r1}^n, S_1^n, X_2^n)$ to one of the two copies of decoder 1, and gives $(Y_{r2}^n, S_2^n)$ to decoder 2. We also bound the rate gain due to relay $\mathrm{R}_1$ by $C_1$.

4.2. Achievability

We can extend the inner bound for the DM-IRC in Theorem 1 to obtain an inner bound for the Gaussian channel. Specifically, we choose the following method to generate codebooks: First we set $Q = \mathrm{const}$. Transmitter $i$, $i \in \{1,2\}$, generates superposition Gaussian codebooks $X_{ic}^n(m_{ic})$ and $X_i^n(m_{ic}, m_{ip})$ with $X_{ic} \sim \mathcal{N}(0, P_{ic})$ and $X_i = X_{ic} + X_{ip}$, $X_{ip} \sim \mathcal{N}(0, P_{ip})$, $P_{ip} + P_{ic} = P_i = 1$. The random variable used to generate the quantization codebook at relay $\mathrm{R}_i$ is specified by $\hat{Y}_{ri} = Y_{ri} + Z_{qi}$, where $Z_{qi} \sim \mathcal{N}(0, \Delta_i)$ is independent of everything else and $Y_{ri}$ is given in (48) or (49). The quantization distortion $\Delta_i$ will be specified later, depending on which interference regime the channel belongs to. We also define the following quantities:
$\mathrm{SNR}_{1p} = h_{11}^2 P_{1p}$, $\mathrm{SNR}_{2p} = h_{22}^2 P_{2p}$,
$\mathrm{INR}_{1p} = h_{21}^2 P_{2p}$, $\mathrm{INR}_{2p} = h_{12}^2 P_{1p}$,
$\mathrm{SNR}_{r1p} = h_{1r}^2 P_{1p}$, $\mathrm{SNR}_{r2p} = h_{2r}^2 P_{2p}$,
$\mathrm{INR}_{r1p} = h_{2c}^2 P_{2p}$, $\mathrm{INR}_{r2p} = h_{1c}^2 P_{1p}$.
As demonstrated in [10], the key parameters that determine the tightness of the inner bound for the Gaussian IC with limited receiver cooperation are: (i) the power allocation for common and private messages at each transmitter, and (ii) the quantization level at each relay. Reference [10] shows that the power allocation rule of [36], together with a new rule for designing the quantization distortion, achieves the capacity region within a constant gap. In the remaining subsections we will show the approximate optimality of such rules for several classes of Gaussian IRC. In particular, the results in Section 4.5 are obtained from the insights gained from the proof of the general achievability in Theorem 1. Before that, in the next subsection we show that a slightly different method for allocating power achieves a bounded gap to the capacity region of the Gaussian IRC under a certain condition. A sketch of the code construction described above is given below.
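The following is a rough illustration of this construction (a sketch under the stated power split, with our own function names; a real construction draws exponentially many codewords, and the typicality-based quantizer is stood in for by simple additive noise):

```python
import numpy as np

def gaussian_codebooks_and_quantizer(n, P1p, P2p, rng=None):
    # Superposition Gaussian codewords X_i = X_ic + X_ip with P_ip + P_ic = 1,
    # and a relay quantizer modeled as Y_hat = Y_r + Z_q, Z_q ~ N(0, Delta).
    # One codeword per user is drawn here for illustration only.
    rng = rng or np.random.default_rng(0)
    x1c = rng.normal(0.0, np.sqrt(1 - P1p), n)   # common layer, power P_1c
    x1p = rng.normal(0.0, np.sqrt(P1p), n)       # private layer, power P_1p
    x2c = rng.normal(0.0, np.sqrt(1 - P2p), n)
    x2p = rng.normal(0.0, np.sqrt(P2p), n)
    quantize = lambda yr, Delta: yr + rng.normal(0.0, np.sqrt(Delta), n)
    return (x1c + x1p, x2c + x2p), quantize
```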

4.3. Capacity Region within a Bounded Gap for General Interference Condition

In this subsection we look for a power allocation and quantization scheme that can guarantee a bounded gap to the capacity region. We make no assumption on the interference regime the channel belongs to, except for an assumption of the finiteness of two ratios of channel gains. In later subsections we will present stronger results, namely constant gaps to the capacity region or the sum capacity, for several interference conditions.
To begin with, instead of directly applying the power allocation and quantization scheme of [10], we use a strategy similar to that in [8]: we fix the power allocation and optimize the quantization level at each relay. Specifically, following the insights of [10,36], each transmitter allocates power to its private message such that the corresponding interference induced at the unintended receiver is below the noise level, i.e.,
$P_{1p} = \min\{1,\ 1/h_{12}^2\}, \qquad P_{2p} = \min\{1,\ 1/h_{21}^2\}.$ (74)
The distortion $\Delta_i$, $i \in \{1,2\}$, is then optimized to minimize the gap between the inner and outer bounds.
With the above choice of power allocation we have the following bounded-gap result.
Theorem 4 (Bounded gap to the capacity region).
For a given Gaussian IRC satisfying
$\theta := \max\left\{\frac{|h_{1c}|^2}{|h_{12}|^2},\ \frac{|h_{2c}|^2}{|h_{21}|^2}\right\} < \infty,$
the inner bound in Theorem 1, when extended to the Gaussian channel and with the power allocation specified in (74), is within a bounded gap of the capacity region. The gap is given by
$\delta_R = \log\left(2 + \tfrac{1}{2}\left(\theta + \sqrt{\theta^2 + 16\theta + 16}\right)\right) \text{ bits},$
and is obtained with quantization distortion $\Delta_1 = \Delta_2 = \frac{1}{4}\left(\sqrt{\theta^2 + 16\theta + 16} - \theta\right)$ at both relays.
Proof. 
See Appendix D. ☐
The astute reader may notice that this gap is similar to the gap derived in [8] (Theorem 3) for the IC with a degraded broadcasting relay. However, our gap depends only on how much interference each transmitter causes to the neighboring relay channel (via the parameter θ), not on the channel gain between the transmitter and its intended relay. We emphasize that this is in strong contrast to [8], where each interference-relay link carries both desired signal (with respect to one receiver) and interference (with respect to the other receiver). The bounded gap established above is useful in understanding the capacity behaviour of the channel in asymptotic regimes, e.g., via generalized degrees-of-freedom analysis. Albeit bounded, the gap in the stated form can be arbitrarily large and is therefore not very informative. In the next subsections we characterize different interference regimes of the channel within which the gap to the capacity region or the sum capacity can be shown to be a constant number of bits.
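Numerically, the quantities in Theorem 4 are straightforward to evaluate. The sketch below is our own; the square-root placement reflects our reconstruction of the garbled formulas and should be checked against the published version:

```python
import math

def theorem4_quantities(h12, h21, h1c, h2c):
    # theta of Theorem 4, the bounded gap delta_R, the common distortion
    # Delta_1 = Delta_2, and the private power split (74).
    theta = max(h1c**2 / h12**2, h2c**2 / h21**2)
    root = math.sqrt(theta**2 + 16 * theta + 16)
    delta_R = math.log2(2 + 0.5 * (theta + root))   # gap in bits
    Delta = 0.25 * (root - theta)                   # quantization distortion
    P1p = min(1.0, 1.0 / h12**2)                    # private power, rule (74)
    P2p = min(1.0, 1.0 / h21**2)
    return theta, delta_R, Delta, P1p, P2p
```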

4.4. Capacity Region within 1 / 2 Bit for Strong Interference

Let us focus on a class of Gaussian IRCs in the strong interference regime. Recall that for the semi-deterministic IRC under strong interference in Section 3 we had the following conditions:
$I(X_1; Y_1, Y_{r1} \mid X_2) \leq I(X_1; Y_2 \mid X_2), \qquad I(X_2; Y_2, Y_{r2} \mid X_1) \leq I(X_2; Y_1 \mid X_1).$
We now consider the Gaussian counterpart. The Gaussian IRC is said to be in the strong interference regime if
$\mathrm{SNR}_1 + \mathrm{SNR}_{r1} \leq \mathrm{INR}_2, \qquad \mathrm{SNR}_2 + \mathrm{SNR}_{r2} \leq \mathrm{INR}_1.$
Let $\mathcal{R}_{SIRC}^i$ denote the achievable region obtained by specializing the general inner bound in Theorem 1 to the Gaussian IRC as specified in Section 4.2, setting the whole message at each transmitter to be the common message, and setting the quantization distortion at each relay to 1. Then we have the following result.
Proposition 1.
$\mathcal{R}_{SIRC}^i$ is within 1/2 bit of the capacity region of the Gaussian IRC with strong interference.
Proof. 
The proof consists of deriving $\mathcal{R}_{SIRC}^i$ and comparing it with the outer bound in Theorem 3. Details are in Appendix E. ☐
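A quick membership test for this regime might look as follows (a sketch; the inequality directions follow our reconstruction of the condition above):

```python
def is_strong_interference(p):
    # Strong-interference condition of Section 4.4 (as reconstructed):
    # SNR_1 + SNR_r1 <= INR_2 and SNR_2 + SNR_r2 <= INR_1,
    # where p is a dict of SNR/INR values as built in Section 4.
    return (p["SNR1"] + p["SNRr1"] <= p["INR2"]
            and p["SNR2"] + p["SNRr2"] <= p["INR1"])
```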

4.5. Capacity Region within log ( 7 ) Bits for IRC with High-Capacity Relay-Receiver Links

As noted in Remark 2 in Section 2, each constraint in (22)–(29) is the minimum of two terms. The second term corresponds to the situation when the digital link has a high enough rate to describe the quantization codeword precisely to the receiver. This raises the question of whether the coding strategy can guarantee a constant gap to the capacity region of the Gaussian channel in such a situation. The answer is positive, as we show below.
Definition 2 (IRC with high-capacity relay-receiver links).
Consider the Gaussian IRC in Figure 4. The channel is said to have high-capacity relay-receiver links if the following conditions are satisfied:
$C_1 \geq \frac{1}{2} \log \frac{2\left(\mathrm{SNR}_{r1} + \mathrm{INR}_{r1} + \mathrm{SNR}_1 \mathrm{INR}_{r1} \eta_1^2\right)\left(1 + \mathrm{INR}_1 + \mathrm{INR}_{r1}\right) + 2\left(4 + 4\,\mathrm{INR}_1 + 6\,\mathrm{INR}_{r1}\right)\left(1 + \mathrm{SNR}_1 + \mathrm{INR}_1\right)}{\left(4 + 4\,\mathrm{INR}_1 + 7\,\mathrm{INR}_{r1}\right)\left(1 + \mathrm{SNR}_1 + \mathrm{INR}_1\right)}$
$C_2 \geq \frac{1}{2} \log \frac{2\left(\mathrm{SNR}_{r2} + \mathrm{INR}_{r2} + \mathrm{SNR}_2 \mathrm{INR}_{r2} \eta_2^2\right)\left(1 + \mathrm{INR}_2 + \mathrm{INR}_{r2}\right) + 2\left(4 + 4\,\mathrm{INR}_2 + 6\,\mathrm{INR}_{r2}\right)\left(1 + \mathrm{SNR}_2 + \mathrm{INR}_2\right)}{\left(4 + 4\,\mathrm{INR}_2 + 7\,\mathrm{INR}_{r2}\right)\left(1 + \mathrm{SNR}_2 + \mathrm{INR}_2\right)}.$
Proposition 2.
For the Gaussian IRC with high-capacity relay-receiver links, the quantize-bin-forward scheme achieves within $\log(7)$ bits of the capacity region.
Proof. 
The detailed proof is in Appendix F. The proof also includes the derivation of the high-capacity relay-receiver links condition as stated in Definition 2. The key elements of the proof are as follows:
  • Choosing a quantization distortion at each relay such that the rate-loss terms $\xi_1, \xi_2$ are bounded, so that one can characterize the conditions on $C_1, C_2$ under which the second terms in the min on the RHS of (22)–(29) are active.
  • Choosing the suitable outer bounds among those in Theorem 3.
  • Choosing a proper power allocation scheme at the transmitters such that the resulting achievable rates are within constant gaps of the corresponding chosen outer bounds.
In order to achieve the above goals we use the following intuition: in the regime we are considering, we can think of receiver $\mathrm{Rx}_i$ and relay $\mathrm{R}_i$ collectively as a single receiver with output $(Y_i, \hat{Y}_{ri})$ for the inner bound and output $(Y_i, Y_{ri})$ for the outer bound. ☐

4.6. Sum Capacity within 1 Bit in the Weak Interference-Relay Regime

In Section 4.4 we saw that when the interference is strong, it is nearly optimal to set the whole message at each transmitter to be common. A relevant question is: when is it optimal or nearly optimal to treat interference as noise? In this section, we characterize conditions under which one can achieve within 1 bit of the sum capacity of the Gaussian IRC by treating interference as noise, i.e., by setting the whole message at each transmitter to be private. First, we need two lemmas, whose proofs are given in Appendix G.
Lemma 1.
For the Gaussian IRC, if
$C_1 \leq \frac{1}{2} \log\left(1 + \frac{1 + \mathrm{INR}_1 + \mathrm{INR}_{r1} + \mathrm{SNR}_{r1} + \mathrm{SNR}_1(1 + \mathrm{INR}_{r1}\eta_1^2)}{(1 + \mathrm{INR}_{r1})(1 + \mathrm{SNR}_1 + \mathrm{INR}_1)}\right),$
$C_2 \leq \frac{1}{2} \log\left(1 + \frac{1 + \mathrm{INR}_2 + \mathrm{INR}_{r2} + \mathrm{SNR}_{r2} + \mathrm{SNR}_2(1 + \mathrm{INR}_{r2}\eta_2^2)}{(1 + \mathrm{INR}_{r2})(1 + \mathrm{SNR}_2 + \mathrm{INR}_2)}\right),$ (79)
then the sum capacity is lower bounded by
$C_{lb}^w = \frac{1}{2}\log\left(1 + \frac{\mathrm{SNR}_1}{1 + \mathrm{INR}_1}\right) + \left(C_1 - \frac{1}{2}\log\left(1 + \frac{1}{1 + \mathrm{INR}_{r1}} \cdot \frac{1 + \mathrm{INR}_{r1}}{1 + \mathrm{INR}_1}\right)\right)^+ + \frac{1}{2}\log\left(1 + \frac{\mathrm{SNR}_2}{1 + \mathrm{INR}_2}\right) + \left(C_2 - \frac{1}{2}\log\left(1 + \frac{1}{1 + \mathrm{INR}_{r2}} \cdot \frac{1 + \mathrm{INR}_{r2}}{1 + \mathrm{INR}_2}\right)\right)^+.$ (80)
Further, $C_{lb}^w$ is achieved by treating interference as noise at each decoder.
The next lemma establishes an upper bound on the sum rate.
Lemma 2.
For the Gaussian IRC that satisfies
$\sqrt{\frac{\mathrm{INR}_1}{\mathrm{SNR}_2}}\,(1 + \mathrm{INR}_2) + \sqrt{\frac{\mathrm{INR}_2}{\mathrm{SNR}_1}}\,(1 + \mathrm{INR}_1) \leq 1,$ (81)
the sum capacity is upper bounded by
$C_{ub}^w = \frac{1}{2}\log\left(1 + \frac{\mathrm{SNR}_1}{1 + \mathrm{INR}_1}\right) + \frac{1}{2}\log\left(1 + \frac{\mathrm{SNR}_2}{1 + \mathrm{INR}_2}\right) + C_1 + C_2.$ (82)
The above two lemmas lead us to the main result.
Proposition 3 (Sum capacity within 1 bit).
For the Gaussian IRC satisfying (79) and (81), called the weak interference-relay regime, the sum capacity $C_{sum}^w$ is bounded as
$C_{ub}^w - 1 \leq C_{sum}^w \leq C_{ub}^w,$
where $C_{ub}^w$ is given in (82).
Proof. 
The proof follows directly from Lemma 1 and Lemma 2, together with the facts that
$\frac{1}{2}\log\left(1 + \frac{1}{1 + \mathrm{INR}_{r1}} \cdot \frac{1 + \mathrm{INR}_{r1}}{1 + \mathrm{INR}_1}\right) \leq \frac{1}{2}, \qquad \frac{1}{2}\log\left(1 + \frac{1}{1 + \mathrm{INR}_{r2}} \cdot \frac{1 + \mathrm{INR}_{r2}}{1 + \mathrm{INR}_2}\right) \leq \frac{1}{2}.$ ☐
Our characterization of the weak interference-relay regime can be considered a generalization of the characterization of the low-interference or noisy-interference regime of the Gaussian IC [37,38,39]. In particular, in the extreme case of $C_1 = C_2 = 0$ the lower and upper bounds in (80) and (82) meet, and we therefore recover the exact sum capacity of the Gaussian interference channel in the low-interference regime, established in [37,38,39]. Proposition 3 shows that when the interference is weak and the capacities of the digital links are limited, a very simple coding scheme, in which each relay quantizes its received signal and sends the bin index and each decoder treats the unintended signal as noise, is approximately sum-capacity optimal. This is a reasonable setup in multicell communication where the interference from neighboring base stations is weak and the intra-cell backhaul link between a pico base station and the main base station is capacity-limited.
Figure 5 shows numerical examples for a Gaussian IRC under the weak interference-relay condition. The plots use the channel parameters $(h_{11}, h_{22}, h_{12}, h_{21}, h_{1r}, h_{2r}, h_{1c}, h_{2c}) = \alpha \times (200, 150, 1, 1, 350, 330, 0.05, 0.05)$, where $\alpha$ is a parameter called the channel scaling factor. We plot the achievable sum rate in (80) and the sum rate upper bound in (82) for different values of $\alpha$ and $C_1, C_2$ that satisfy conditions (79) and (81). We also plot the sum capacity of the IC without relays. As expected, the gap between the sum rate upper bound and the lower bound (achievable sum rate) is always less than one bit for each fixed $(C_1, C_2)$. When $C_1$ and $C_2$ decrease, the two bounds move closer to the sum capacity without relays; when $C_1 = C_2 = 0$ they both coincide with the sum capacity of the IC without relays.
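The following sketch (ours) reproduces this setup numerically using the reconstructed bounds (80) and (82). The values of $C_1, C_2$ are example choices, and conditions (79) and (81) are not verified by the code:

```python
import math

def C(x):
    return 0.5 * math.log2(1 + x)

def weak_regime_bounds(p, C1, C2):
    # Sum-rate lower bound (80) and upper bound (82), as reconstructed above.
    # Conditions (79) and (81) are assumed to hold and are not checked here.
    def branch(i, Ci):
        SNR, INR, INRr = p["SNR%d" % i], p["INR%d" % i], p["INRr%d" % i]
        xi = 0.5 * math.log2(1 + (1.0 / (1 + INRr)) * ((1 + INRr) / (1 + INR)))
        return C(SNR / (1 + INR)) + max(0.0, Ci - xi)
    lb = branch(1, C1) + branch(2, C2)
    ub = C(p["SNR1"] / (1 + p["INR1"])) + C(p["SNR2"] / (1 + p["INR2"])) + C1 + C2
    return lb, ub

# The setup of Figure 5: gains alpha * (200, 150, 1, 1, 350, 330, 0.05, 0.05).
alpha = 1.0
h11, h22, h12, h21, h1r, h2r, h1c, h2c = (alpha * g for g in
                                          (200, 150, 1, 1, 350, 330, 0.05, 0.05))
p = {"SNR1": h11**2, "INR1": h21**2, "INRr1": h2c**2,
     "SNR2": h22**2, "INR2": h12**2, "INRr2": h1c**2}
lb, ub = weak_regime_bounds(p, C1=1.0, C2=1.0)
assert ub - lb <= 1.0 + 1e-9   # the one-bit gap of Proposition 3
```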

4.7. Capacity Region within 1 / 2 Bit for the Compound Multiple Access Relay Channel

We now turn to a Gaussian channel that does not belong to the family of IRCs but is very closely related. As seen in Section 3.2, the coding strategy for the discrete memoryless IRC can be adapted to achieve the capacity region of the discrete memoryless compound semi-deterministic MARC. We draw the same analogy here for the Gaussian channel. Consider the same physical channel as the Gaussian IRC, but where each decoder is required to recover both messages. We call this model the compound multiple access relay channel (C-MARC). Using the cut-set bound we can establish the following outer bound to the capacity region of the Gaussian C-MARC.
Lemma 3 (Outer Bound for the Gaussian C-MARC).
The capacity region of the Gaussian C-MARC is contained in the region $\mathcal{R}_{C\text{-}MARC}^o$ defined as the set of non-negative rate pairs $(R_1, R_2)$ satisfying
$R_1 \leq \min\{C(\mathrm{SNR}_1) + C_1,\ C(\mathrm{SNR}_1 + \mathrm{SNR}_{r1})\}$
$R_1 \leq \min\{C(\mathrm{INR}_2) + C_2,\ C(\mathrm{INR}_2 + \mathrm{INR}_{r2})\}$
$R_2 \leq \min\{C(\mathrm{SNR}_2) + C_2,\ C(\mathrm{SNR}_2 + \mathrm{SNR}_{r2})\}$
$R_2 \leq \min\{C(\mathrm{INR}_1) + C_1,\ C(\mathrm{INR}_1 + \mathrm{INR}_{r1})\}$
$R_1 + R_2 \leq C(\mathrm{SNR}_1 + \mathrm{INR}_1) + C_1$
$R_1 + R_2 \leq C(\mathrm{SNR}_2 + \mathrm{INR}_2) + C_2$
$R_1 + R_2 \leq C\left(\mathrm{SNR}_1 + \mathrm{INR}_1 + \mathrm{SNR}_{r1} + \mathrm{INR}_{r1} + \mathrm{SNR}_1 \mathrm{INR}_1 \eta_1^2\right)$
$R_1 + R_2 \leq C\left(\mathrm{SNR}_2 + \mathrm{INR}_2 + \mathrm{SNR}_{r2} + \mathrm{INR}_{r2} + \mathrm{SNR}_2 \mathrm{INR}_2 \eta_2^2\right).$
We can derive an inner bound $\mathcal{R}_{C\text{-}MARC}^i$ for the Gaussian C-MARC from Theorem 1 by assigning the whole message at each transmitter as the common message and then following the procedure in Section 4.2. Comparing $\mathcal{R}_{C\text{-}MARC}^i$ with $\mathcal{R}_{C\text{-}MARC}^o$ yields the following constant-gap result.
Proposition 4.
The inner bound $\mathcal{R}_{C\text{-}MARC}^i$ for the Gaussian C-MARC is within 1/2 bit of the capacity region of the channel.
Proof. 
See Appendix H. ☐

5. Conclusions

In this paper, we proposed and studied an important setup stemming from practical heterogeneous wireless systems, in which a relay can be used as a means of both relaying the desired signal and mitigating the interference. A general inner bound and multiple outer bounds were derived and shown to be tight or approximately tight for different classes of channels. The obtained results generalize and unify the capacity results for a number of relay and interference channels. The results establish the optimality or near optimality of the quantize-bin/hash-forward relaying scheme, in combination with the rate-splitting encoding technique, for a new type of interference network with multiple relays. This in turn has important engineering value, because such a coding strategy, which does not require the intermediate nodes in the network to decode any messages, is appealing for implementation in real-world systems. For future work, it would be interesting to investigate further the optimality of the scheme for other types of interference networks. Another direction would be to characterize the tradeoff between rate improvements and the cost of having the relays, e.g., in terms of computational and transmission resources.

Acknowledgments

This work was done while Hieu Do was with KTH Royal Institute of Technology. Parts of this work were presented at the IEEE Information Theory Workshop (ITW), Seville, Spain, September 2013, and at the 47th Asilomar Conference on Signals, Systems, and Computers, November 2013. This work was supported in part by the Swedish Research Council (VR).

Author Contributions

Hieu Do developed this work through discussion with Tobias Oechtering, Mikael Skoglund and Mai Vu. Hieu Do wrote the manuscript with inputs from Tobias Oechtering and with comments from Mikael Skoglund and Mai Vu. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
C-MARC    Compound multiple access relay channel
CS-MARC   Compound semi-deterministic multiple access relay channel
DM-IRC    Discrete memoryless interfering relay channels
IRC       Interfering relay channels
MARC      Multiple access relay channel
MIMO IC   Multiple input multiple output interference channel

Appendix A. Proof of Theorem 1.

Codebook generation: Codebook for transmitter $k$, $k \in \{1,2\}$: Each message $m_k$ is split into a common part $m_{kc}$ and a private part $m_{kp}$: $m_k = (m_{kc}, m_{kp})$, $m_{kc} \in [1:2^{nR_{kc}}]$, $m_{kp} \in [1:2^{nR_{kp}}]$. Randomly and independently generate $2^{nR_{kc}}$ codewords $x_{kc}^n(m_{kc})$, each according to $\prod_{i=1}^n p_{X_{kc}}(x_{kci}(m_{kc}))$. For each $x_{kc}^n(m_{kc})$, randomly and conditionally independently generate $2^{nR_{kp}}$ codewords $x_k^n(m_{kc}, m_{kp})$, each according to $\prod_{i=1}^n p_{X_k \mid X_{kc}}(x_{ki} \mid x_{kci}(m_{kc}))$. Codebook for relay $k$, $k \in \{1,2\}$: independently generate $2^{n\hat{R}_k}$ quantization codewords $\hat{y}_{rk}^n$, each according to the distribution $\prod_{i=1}^n p_{\hat{Y}_{rk}}(\hat{y}_{rki})$, where the marginal distribution $p_{\hat{Y}_{rk}}$ is obtained by marginalization of (12). Uniformly partition the quantization codewords into $2^{nC_k}$ bins.
Encoding: Transmitter $k$ sends out a codeword $x_k^n$ corresponding to its message index $m_k = (m_{kc}, m_{kp})$. Relay $k$ chooses a quantization codeword $\hat{y}_{rk}^n$ which is jointly typical with its received sequence $y_{rk}^n$, and then sends out the bin index $l_k$ corresponding to the chosen quantization codeword, i.e., $l_k = B(\hat{y}_{rk}^n)$, where $B(\cdot)$ is the binning function.
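The uniform binning step is simple to mimic in code; the following is a minimal sketch (ours, with hypothetical sizes standing in for $2^{n\hat{R}_k}$ and $2^{nC_k}$):

```python
import numpy as np

def make_binning(num_codewords, num_bins, rng=None):
    # Uniform random binning: each quantization codeword index is assigned
    # independently and uniformly to one of the bins, mirroring the partition
    # of the 2^{n R_hat_k} codewords into 2^{n C_k} bins.
    rng = rng or np.random.default_rng(0)
    bins = rng.integers(0, num_bins, size=num_codewords)
    return lambda idx: int(bins[idx])   # B(.): codeword index -> bin index

# Relay-side sketch: having picked the quantization codeword index k jointly
# typical with y_rk^n, the relay forwards only the bin index l_k = B(k).
B = make_binning(num_codewords=2**12, num_bins=2**4)
l_k = B(123)   # the index sent over the digital link of capacity C_k
```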
Decoding: We describe the decoding at receiver $\mathrm{Rx}_1$. Decoder 1 looks for a unique index pair $(m_{1c}, m_{1p})$ such that
$\left(x_1^n(m_{1c}, m_{1p}),\ x_{1c}^n(m_{1c}),\ x_{2c}^n(m_{2c}),\ \hat{y}_{r1}^n,\ y_1^n\right) \in \mathcal{T}_\epsilon^{(n)}$ (A1)
for some $m_{2c} \in [1:2^{nR_{2c}}]$ and some $\hat{y}_{r1}^n$ with $B(\hat{y}_{r1}^n) = l$, where $l$ is the bin index received via the digital link from relay 1. If such a unique pair $(m_{1c}, m_{1p})$ is found, the decoder declares it as the transmitted message; otherwise it declares an error.
Error analysis: We analyze receiver 1; the analysis for receiver 2 follows immediately by symmetry. In the following we use $\underline{m}$ to denote a message triple $(m_{1c}, m_{1p}, m_{2c})$ and $x^n(\underline{m})$ to denote the corresponding codeword triple. $\underline{1}$ denotes a vector whose entries are all 1, with appropriate length.
Without loss of generality, assume that $(m_{1c}, m_{1p}, m_{2c}) = (1, 1, 1)$, i.e., $\underline{m} = \underline{1}$, was sent.
By the covering lemma [25], the encoding at the relay succeeds with high probability if $\hat{R}_1 \geq I(Y_{r1}; \hat{Y}_{r1})$. By the law of large numbers, the transmitted codeword, the chosen quantization codeword, and the received sequence are jointly typical with high probability as $n \to \infty$.
Let $K$ denote the index of the quantization codeword actually chosen by relay 1, and let $L$ denote its bin index. We identify the following error events:
$\mathcal{E}_1 := \{\exists\, (m_{1c}, m_{1p}) \neq (1,1),\ m_{2c} \in [1:2^{nR_{2c}}]$, and $\hat{y}_{r1}^n(k)$, $k \in [1:2^{n\hat{R}_1}]$, $k \neq K$, such that $B(\hat{y}_{r1}^n(k)) = L$ and the corresponding sequences are jointly typical with $y_1^n$, as specified in (A1)$\}$;
$\mathcal{E}_2 := \{\exists\, (m_{1c}, m_{1p}) \neq (1,1),\ m_{2c} \in [1:2^{nR_{2c}}]$, such that the corresponding sequences and $\hat{y}_{r1}^n(K)$ are jointly typical with $y_1^n$, as specified in (A1)$\}$.
Conditioned on successful encoding at the relay, the decoding error probability at decoder 1 can be bounded as
$P_{e1} \leq P(\mathcal{E}_1) + P(\mathcal{E}_2).$ (A2)
Next we bound $P(\mathcal{E}_1)$ and $P(\mathcal{E}_2)$.
First note that decoder 1 is not required to decode $m_{2c}$ correctly. Hence, following the same argument as in [40] we have
$P(\mathcal{E}_1) \leq \sum_{S} P(\mathcal{E}_1(S)),$ (A3)
where $P(\mathcal{E}_1(S))$ is defined and bounded in the sequel, and $S \in \{\{1p\}, \{1c, 1p\}, \{1p, 2c\}, \{1c, 1p, 2c\}\}$. We also define $\tilde{S} := \{1c, 1p, 2c\} \setminus S$. Further, let $\underline{m}_S$ denote the messages whose indices are given by $S$, and $R_S = \sum_{i \in S} R_i$.
$P(\mathcal{E}_1(S)) := \sum_{\underline{m}:\, \underline{m}_S \neq \underline{1},\, \underline{m}_{\tilde{S}} = \underline{1}}\ \sum_{k \neq K} P\left\{\left(X^n(\underline{m}), \hat{Y}_{r1}^n(k), Y_1^n\right) \in \mathcal{T}_\epsilon^{(n)},\ B(\hat{Y}_{r1}^n(k)) = L\right\}$
$\stackrel{(a)}{=} 2^{-nC_1} \sum_{\underline{m}:\, \underline{m}_S \neq \underline{1},\, \underline{m}_{\tilde{S}} = \underline{1}}\ \sum_{k \neq K} P\left\{\left(X^n(\underline{m}), \hat{Y}_{r1}^n(k), Y_1^n\right) \in \mathcal{T}_\epsilon^{(n)}\right\}$
$\leq 2^{-nC_1}\, 2^{nR_S} \sum_{k \neq K} P\left\{\left(X^n(\underline{m}), \hat{Y}_{r1}^n(k), Y_1^n\right) \in \mathcal{T}_\epsilon^{(n)}\right\},$ (A4)
where $(a)$ follows from the uniform binning. As an example, let us consider the case $S = \{1p, 2c\}$, for which we have
Σ_{k ≠ K} P{ (X^n(m̲), Ŷ_r1^n(k), Y_1^n) ∈ T_ε^(n) } = Σ_{k ≠ K} P{ (X_1c^n(1), X_1^n(1, m_1p), X_2c^n, Ŷ_r1^n(k), Y_1^n) ∈ T_ε^(n) }
  = Σ_{k ≠ K} Σ_{(x_1c^n, x_1^n, x_2c^n, ŷ_r1^n, y_1^n) ∈ T_ε^(n)} p(x_1c^n, x_1^n, x_2c^n, ŷ_r1^n, y_1^n)
  ≤(a) 2^{nR̂_1} Σ_{(x_1c^n, y_1^n) ∈ T_ε^(n)} p(x_1c^n, y_1^n) Σ_{x_1^n ∈ T_ε^(n)(X_1 | x_1c^n, y_1^n)} p(x_1^n | x_1c^n) · Σ_{x_2c^n ∈ T_ε^(n)(X_2c | x_1c^n, x_1^n, y_1^n)} p(x_2c^n) Σ_{ŷ_r1^n ∈ T_ε^(n)(Ŷ_r1 | x_1c^n, x_1^n, x_2c^n, y_1^n)} p(ŷ_r1^n)
  ≤(b) 2^{nR̂_1} Σ_{(x_1c^n, y_1^n) ∈ T_ε^(n)} p(x_1c^n, y_1^n) 2^{−n(I(X_1, X_2c; Y_1 | X_1c) + I(Ŷ_r1; X_1, X_2c, Y_1) − δ(ε))}
  ≤ 2^{nR̂_1} 2^{−n(I(X_1, X_2c; Y_1 | X_1c) + I(Ŷ_r1; X_1, X_2c, Y_1) − δ(ε))},   (A5)
where δ(ε) → 0 as ε → 0. Step (a) follows since Ŷ_r1^n(k) and X_2c^n(m_2c) are independent of (X_1c^n, Y_1^n) for k ≠ K and m_2c ≠ 1; (b) follows from the properties of jointly typical sequences [25]. Substituting (A5) into (A4), we conclude that for S = {1p, 2c}, P(E_1(S)) → 0 as n → ∞ if
R_1p + R_2c < I(X_1, X_2c; Y_1 | X_1c) + I(Ŷ_r1; X_1, X_2c, Y_1) + C_1 − R̂_1.   (A6)
On the other hand, we can also bound P ( E 1 ( S ) ) as follows:
P(E_1(S)) = Σ_{m̲: m̲_S ≠ 1̲, m̲_S̃ = 1̲} Σ_{k ≠ K} P{ (X^n(m̲), Ŷ_r1^n(k), Y_1^n) ∈ T_ε^(n), B(Ŷ_r1^n(k)) = L }
  ≤ Σ_{m̲: m̲_S ≠ 1̲, m̲_S̃ = 1̲} P{ (X^n(m̲), Y_1^n) ∈ T_ε^(n) } · P{ ∃ k ≠ K: B(Ŷ_r1^n(k)) = L, (X^n(m̲), Ŷ_r1^n(k), Y_1^n) ∈ T_ε^(n) | (X^n(m̲), Y_1^n) ∈ T_ε^(n) }
  ≤ 2^{nR_S} P{ (X^n(m̲), Y_1^n) ∈ T_ε^(n) }.   (A7)
Similarly to the derivation of (A5), we can bound the probability in (A7) and show that for S = {1p, 2c}, P(E_1(S)) → 0 as n → ∞ if
R_1p + R_2c < I(X_1, X_2c; Y_1 | X_1c).   (A8)
Combining (A6) and (A8), we conclude that for S = {1p, 2c}, P(E_1(S)) → 0 as n → ∞ if
R_1p + R_2c < I(X_1, X_2c; Y_1 | X_1c) + ( I(Ŷ_r1; X_1, X_2c, Y_1) + C_1 − R̂_1 )^+
  =(a) I(X_1, X_2c; Y_1 | X_1c) + ( C_1 − I(Ŷ_r1; Y_r1 | X_1, X_2c, Y_1) )^+
  = I(X_1, X_2c; Y_1 | X_1c) + ( C_1 − ξ_1 )^+,   (A9)
where (a) follows by setting R̂_1 = I(Y_r1; Ŷ_r1) and from the Markov chain Ŷ_r1 − Y_r1 − (X_1, X_1c, X_2c, Y_1).
For P(E_2): similarly to (A3) we have
P(E_2) ≤ Σ_S P(E_2(S)),
where
P(E_2(S)) := Σ_{m̲: m̲_S ≠ 1̲, m̲_S̃ = 1̲} P{ (X^n(m̲), Ŷ_r1^n(K), Y_1^n) ∈ T_ε^(n) }.
Notice that in this case the quantization codeword is the true one chosen by the relay. Hence, to bound the P(E_2(S))'s we can treat (Y_1^n, Ŷ_r1^n) as a single channel output sequence. The bounds then follow directly from the properties of jointly typical sequences [25]. In particular, for S = {1p, 2c}, P(E_2(S)) → 0 as n → ∞ if
R_1p + R_2c < I(X_1, X_2c; Y_1, Ŷ_r1 | X_1c).   (A10)
Combining (A9) and (A10): for S = {1p, 2c}, P(E_1(S)) → 0 and P(E_2(S)) → 0 as n → ∞ if (23) is satisfied. Applying the same analysis to the other choices of S ∈ {{1p}, {1c,1p}, {1p,2c}, {1c,1p,2c}}, we conclude that P(E_1) and P(E_2), and therefore P_e1 in (A2), vanish as n → ∞ if the constraints in (22)–(25) are satisfied. ☐

Appendix B. Proof of Theorem 2.

Appendix B.1. Achievability

Since T_k = t_k(X_k), k ∈ {1, 2}, we can choose X_kc = T_k in the general inner bound in Theorem 1. Furthermore, we set Ŷ_rk = Y_rk, i.e., the relays perform hash-and-forward only. Using the properties of the channel we have:
a_1 = I(X_1; Y_1 | T_1, T_2, Q),   a_1′ = I(X_1; Y_1, Y_r1 | T_1, T_2, Q),
b_1 = I(X_1, T_2; Y_1 | T_1, Q),   b_1′ = I(X_1, T_2; Y_1, Y_r1 | T_1, Q),
c_1 = I(X_1; Y_1 | T_2, Q),   c_1′ = I(X_1; Y_1, Y_r1 | T_2, Q),
d_1 = I(X_1, T_2; Y_1 | Q),   d_1′ = I(X_1, T_2; Y_1, Y_r1 | Q),
ξ_1 = I(Y_r1; Y_r1 | X_1, T_2, Y_1, Q) =(a) 0,
where (a) follows because Y_r1 is a function of (X_1, Y_1). By symmetry we obtain a_2, a_2′, b_2, b_2′, c_2, c_2′, d_2, d_2′, ξ_2. Substituting the above quantities into the inner bound in Theorem 1 yields an inner bound for the semi-deterministic IRC, which consists of the inequalities in Theorem 2 plus the following two inequalities, emanating from (14) and (16):
R_1 ≤ min{ I(X_1; Y_1 | T_1, T_2, Q) + C_1, I(X_1; Y_1, Y_r1 | T_1, T_2, Q) } + min{ I(X_2, T_1; Y_2 | T_2, Q) + C_2, I(X_2, T_1; Y_2, Y_r2 | T_2, Q) }   (A11)
R_2 ≤ min{ I(X_2; Y_2 | T_1, T_2, Q) + C_2, I(X_2; Y_2, Y_r2 | T_1, T_2, Q) } + min{ I(X_1, T_2; Y_1 | T_1, Q) + C_1, I(X_1, T_2; Y_1, Y_r1 | T_1, Q) }.   (A12)
In the sequel we prove that (A11) and (A12) are redundant. First note that
I(T_2; Y_2, Y_r2 | X_1, Q) = H(T_2 | X_1, Q) − H(T_2 | Y_2, Y_r2, X_1, Q)
  =(a) H(T_2 | Q) − H(T_2 | Y_2, Y_r2, X_1, T_1, Q)
  ≥ H(T_2 | T_1, Q) − H(T_2 | Y_2, Y_r2, T_1, Q)
  = I(T_2; Y_2, Y_r2 | T_1, Q),   (A13)
where (a) follows because T_1 is a function of X_1, and X_1 is independent of T_2 given Q. By symmetry we have
I(T_1; Y_1, Y_r1 | X_2, Q) ≥ I(T_1; Y_1, Y_r1 | T_2, Q).   (A14)
Next, we have
min{ I(X_2, T_1; Y_2 | T_2, Q) + C_2, I(X_2, T_1; Y_2, Y_r2 | T_2, Q) } ≥ I(X_2, T_1; Y_2 | T_2, Q)
  ≥ I(T_1; Y_2 | X_2, T_2, Q)
  =(c) I(T_1; Y_2 | X_2, Q)
  =(d) I(T_1; Y_2, Y_r2 | X_2, Q)
  ≥(e) I(T_1; Y_1, Y_r1 | X_2, Q)
  ≥(f) I(T_1; Y_1, Y_r1 | T_2, Q),   (A15)
where:
  • ( c ) is because T 2 is a function of X 2 , and Y 2 depends only on ( X 2 , T 1 ) ;
  • ( d ) is because Y r 2 = f 2 ( X 2 , Y 2 ) ;
  • ( e ) is due to (30a) with N = 1 , W = Q ;
  • ( f ) is due to (A14).
Inserting (A15) into the right-hand side (RHS) of (A11) gives:
RHS of (A11) ≥ min{ I(X_1; Y_1 | T_1, T_2, Q) + C_1, I(X_1; Y_1, Y_r1 | T_1, T_2, Q) } + I(T_1; Y_1, Y_r1 | T_2, Q)
  ≥ min{ I(X_1; Y_1 | T_2, Q) + C_1, I(X_1; Y_1, Y_r1 | T_2, Q) }
  = RHS of (31).
Therefore (A11) is redundant, and by symmetry so is (A12). ☐

Appendix B.2. Converse

Before upper-bounding the achievable rates we prove some useful inequalities. Consider a sequence of (2^{nR_1}, 2^{nR_2}, n) codes with vanishing error probability. Since the messages M_1 and M_2 are independent, X_k^n(M_k) is independent of X_l^n(M_l) and T_l^n, for k, l ∈ {1, 2}, k ≠ l. Hence we have
I(T_2^n; Y_2^n, Y_r2^n | X_1^n) ≥ I(T_2^n; Y_2^n, Y_r2^n | T_1^n),   (A16)
whose proof is similar to that of (A13), replacing each random variable with the corresponding random n-sequence and setting W = ∅.
Let us denote the sequence that receiver k receives from relay k (via the digital link) by V k n , k { 1 , 2 } . We have
I(X_1^n; Y_1^n, V_1^n) = I(X_1^n; Y_1^n) + I(X_1^n; V_1^n | Y_1^n)   (A17)
  ≤(a1) I(X_1^n; Y_1^n) + H(V_1^n)   (A18)
  ≤(a2) I(X_1^n; Y_1^n | T_2^n) + H(V_1^n)   (A19)
  ≤ I(X_1^n, T_1^n; Y_1^n | T_2^n) + H(V_1^n)   (A20)
  = I(X_1^n; Y_1^n | T_1^n, T_2^n) + I(T_1^n; Y_1^n | T_2^n) + H(V_1^n)   (A21)
  ≤ I(X_1^n; Y_1^n | T_1^n, T_2^n) + I(T_1^n; Y_1^n, Y_r1^n | T_2^n) + H(V_1^n),   (A22)
where (a1) follows from the nonnegativity of entropy and the fact that conditioning does not increase entropy, and (a2) holds because X_1^n is independent of T_2^n. We also have
I(X_1^n; Y_1^n, V_1^n) ≤ I(X_1^n; Y_1^n, Y_r1^n, V_1^n) =(b1) I(X_1^n; Y_1^n, Y_r1^n)   (A23)
  ≤(b2) I(X_1^n; Y_1^n, Y_r1^n | T_2^n) ≤ I(X_1^n, T_1^n; Y_1^n, Y_r1^n | T_2^n)   (A24)
  = I(X_1^n; Y_1^n, Y_r1^n | T_1^n, T_2^n) + I(T_1^n; Y_1^n, Y_r1^n | T_2^n),   (A25)
where (b1) holds because V_1^n is a function of Y_r1^n, and (b2) holds because X_1^n is independent of T_2^n. On the other hand, from (A18),
I(X_1^n; Y_1^n, V_1^n) ≤ I(X_1^n; Y_1^n) + H(V_1^n)
  = I(X_1^n, T_2^n; Y_1^n) − I(T_2^n; Y_1^n | X_1^n) + H(V_1^n)
  =(c1) I(X_1^n, T_2^n; Y_1^n) − I(T_2^n; Y_1^n, Y_r1^n | X_1^n) + H(V_1^n)
  ≤(c2) I(X_1^n, T_2^n; Y_1^n) − I(T_2^n; Y_2^n, Y_r2^n | X_1^n) + H(V_1^n)
  ≤(c3) I(X_1^n, T_2^n; Y_1^n) − I(T_2^n; Y_2^n, Y_r2^n | T_1^n) + H(V_1^n),   (A26)
where (c1) holds because Y_r1^n is a function of (X_1^n, Y_1^n); (c2) is due to (30a) and (30b) with N = n, W = ∅; (c3) follows from (A16). Furthermore, using (A23),
I(X_1^n; Y_1^n, V_1^n) ≤ I(X_1^n; Y_1^n, Y_r1^n) = I(X_1^n, T_2^n; Y_1^n, Y_r1^n) − I(T_2^n; Y_1^n, Y_r1^n | X_1^n)
  ≤(d1) I(X_1^n, T_2^n; Y_1^n, Y_r1^n) − I(T_2^n; Y_2^n, Y_r2^n | X_1^n)
  ≤(d2) I(X_1^n, T_2^n; Y_1^n, Y_r1^n) − I(T_2^n; Y_2^n, Y_r2^n | T_1^n),   (A27)
where (d1) is due to (30a) and (30b), and (d2) follows from (A16).
By symmetry we obtain the following inequalities, which are the duals of (A26) and (A27):
I(X_2^n; Y_2^n, V_2^n) ≤ I(X_2^n, T_1^n; Y_2^n) − I(T_1^n; Y_1^n, Y_r1^n | T_2^n) + H(V_2^n),   (A28)
I(X_2^n; Y_2^n, V_2^n) ≤ I(X_2^n, T_1^n; Y_2^n, Y_r2^n) − I(T_1^n; Y_1^n, Y_r1^n | T_2^n).   (A29)
We are now ready to bound the achievable rates.
Proof of (31): by Fano’s inequality,
n(R_1 − ε_n) ≤ I(M_1; Y_1^n, V_1^n) ≤ I(X_1^n; Y_1^n, V_1^n)
  ≤(e1) I(X_1^n; Y_1^n | T_2^n) + H(V_1^n)
  ≤(e2) Σ_{i=1}^n [ I(X_1i; Y_1i | T_2i) + H(V_1i) ]
  ≤ Σ_{i=1}^n [ I(X_1i; Y_1i | T_2i) + C_1 ],   (A30)
where (e1) is due to (A19), and (e2) holds because conditioning does not increase entropy and because Y_1i depends only on (X_1i, T_2i).
Next
n(R_1 − ε_n) ≤ I(X_1^n; Y_1^n, V_1^n) ≤(f1) I(X_1^n, T_1^n; Y_1^n, Y_r1^n | T_2^n) ≤(f2) Σ_{i=1}^n I(X_1i, T_1i; Y_1i, Y_r1i | T_2i),   (A31)
where (f1) follows from (A24), and (f2) holds because Y_r1i is a function of (X_1i, Y_1i) and Y_1i depends only on (X_1i, T_2i).
We define a random variable Q, uniformly distributed over { 1 , , n } and independent of everything else. Further, for k { 1 , 2 } we define X k = X k Q , T k = T k Q , Y k = Y k Q , Y r k = Y r k Q . Then (A30) and (A31) become
R_1 − ε_n ≤ I(X_1; Y_1 | T_2) + C_1,
R_1 − ε_n ≤ I(X_1, T_1; Y_1, Y_r1 | T_2),
where ε_n → 0 as n → ∞, which proves (31).
The bound on R 2 in (32) follows from (31) by symmetry.
Proof of (33): again, starting with Fano’s inequality,
n(R_1 + R_2 − 2ε_n) ≤ I(X_1^n; Y_1^n, V_1^n) + I(X_2^n; Y_2^n, V_2^n)   (A32)
  ≤(g1) min{ I(X_1^n; Y_1^n | T_1^n, T_2^n) + I(T_1^n; Y_1^n, Y_r1^n | T_2^n) + H(V_1^n), I(X_1^n; Y_1^n, Y_r1^n | T_1^n, T_2^n) + I(T_1^n; Y_1^n, Y_r1^n | T_2^n) }
   + min{ I(X_2^n, T_1^n; Y_2^n) − I(T_1^n; Y_1^n, Y_r1^n | T_2^n) + H(V_2^n), I(X_2^n, T_1^n; Y_2^n, Y_r2^n) − I(T_1^n; Y_1^n, Y_r1^n | T_2^n) }
  = min{ I(X_1^n; Y_1^n | T_1^n, T_2^n) + H(V_1^n), I(X_1^n; Y_1^n, Y_r1^n | T_1^n, T_2^n) } + min{ I(X_2^n, T_1^n; Y_2^n) + H(V_2^n), I(X_2^n, T_1^n; Y_2^n, Y_r2^n) }
  ≤(g2) min{ Σ_{i=1}^n [ I(X_1i; Y_1i | T_1i, T_2i) + H(V_1i) ], Σ_{i=1}^n I(X_1i; Y_1i, Y_r1i | T_1i, T_2i) } + min{ Σ_{i=1}^n [ I(X_2i, T_1i; Y_2i) + H(V_2i) ], Σ_{i=1}^n I(X_2i, T_1i; Y_2i, Y_r2i) }
  ≤ min{ Σ_{i=1}^n [ I(X_1i; Y_1i | T_1i, T_2i) + C_1 ], Σ_{i=1}^n I(X_1i; Y_1i, Y_r1i | T_1i, T_2i) } + min{ Σ_{i=1}^n [ I(X_2i, T_1i; Y_2i) + C_2 ], Σ_{i=1}^n I(X_2i, T_1i; Y_2i, Y_r2i) },   (A33)
where (g1) uses (A22), (A25), (A28), and (A29); (g2) holds because conditioning does not increase entropy, Y_1i depends only on (X_1i, T_2i), Y_r1i is a function of (X_1i, Y_1i), and similarly for Y_2i and Y_r2i.
The bound on R 1 + R 2 in (35) follows from (33) by symmetry.
Proof of (34): by Fano’s inequality,
n(R_1 + R_2 − 2ε_n) ≤ I(X_1^n; Y_1^n, V_1^n) + I(X_2^n; Y_2^n, V_2^n)
  ≤(h1) min{ I(X_1^n, T_2^n; Y_1^n) − I(T_2^n; Y_2^n, Y_r2^n | T_1^n) + H(V_1^n), I(X_1^n, T_2^n; Y_1^n, Y_r1^n) − I(T_2^n; Y_2^n, Y_r2^n | T_1^n) }
   + min{ I(X_2^n, T_1^n; Y_2^n) − I(T_1^n; Y_1^n, Y_r1^n | T_2^n) + H(V_2^n), I(X_2^n, T_1^n; Y_2^n, Y_r2^n) − I(T_1^n; Y_1^n, Y_r1^n | T_2^n) }
  = min{ I(X_1^n, T_2^n; Y_1^n) − I(T_1^n; Y_1^n, Y_r1^n | T_2^n) + H(V_1^n), I(X_1^n, T_2^n; Y_1^n, Y_r1^n) − I(T_1^n; Y_1^n, Y_r1^n | T_2^n) }
   + min{ I(X_2^n, T_1^n; Y_2^n) − I(T_2^n; Y_2^n, Y_r2^n | T_1^n) + H(V_2^n), I(X_2^n, T_1^n; Y_2^n, Y_r2^n) − I(T_2^n; Y_2^n, Y_r2^n | T_1^n) }
  ≤(h2) min{ I(X_1^n, T_2^n; Y_1^n) − I(T_1^n; Y_1^n, Y_r1^n) + H(V_1^n), I(X_1^n, T_2^n; Y_1^n, Y_r1^n) − I(T_1^n; Y_1^n, Y_r1^n) }
   + min{ I(X_2^n, T_1^n; Y_2^n) − I(T_2^n; Y_2^n, Y_r2^n) + H(V_2^n), I(X_2^n, T_1^n; Y_2^n, Y_r2^n) − I(T_2^n; Y_2^n, Y_r2^n) }
  ≤ min{ I(X_1^n, T_2^n; Y_1^n) − I(T_1^n; Y_1^n) + H(V_1^n), I(X_1^n, T_2^n; Y_1^n, Y_r1^n) − I(T_1^n; Y_1^n, Y_r1^n) }
   + min{ I(X_2^n, T_1^n; Y_2^n) − I(T_2^n; Y_2^n) + H(V_2^n), I(X_2^n, T_1^n; Y_2^n, Y_r2^n) − I(T_2^n; Y_2^n, Y_r2^n) }
  =(h3) min{ I(X_1^n, T_1^n, T_2^n; Y_1^n) − I(T_1^n; Y_1^n) + H(V_1^n), I(X_1^n, T_1^n, T_2^n; Y_1^n, Y_r1^n) − I(T_1^n; Y_1^n, Y_r1^n) }
   + min{ I(X_2^n, T_1^n, T_2^n; Y_2^n) − I(T_2^n; Y_2^n) + H(V_2^n), I(X_2^n, T_1^n, T_2^n; Y_2^n, Y_r2^n) − I(T_2^n; Y_2^n, Y_r2^n) }
  = min{ I(X_1^n, T_2^n; Y_1^n | T_1^n) + H(V_1^n), I(X_1^n, T_2^n; Y_1^n, Y_r1^n | T_1^n) } + min{ I(X_2^n, T_1^n; Y_2^n | T_2^n) + H(V_2^n), I(X_2^n, T_1^n; Y_2^n, Y_r2^n | T_2^n) }
  ≤(h4) min{ Σ_{i=1}^n [ I(X_1i, T_2i; Y_1i | T_1i) + C_1 ], Σ_{i=1}^n I(X_1i, T_2i; Y_1i, Y_r1i | T_1i) } + min{ Σ_{i=1}^n [ I(X_2i, T_1i; Y_2i | T_2i) + C_2 ], Σ_{i=1}^n I(X_2i, T_1i; Y_2i, Y_r2i | T_2i) },
where (h1) uses (A26), (A27), (A28), and (A29); (h2) holds because T_1^n and T_2^n are independent; (h3) holds because T_1^n is a function of X_1^n and T_2^n is a function of X_2^n; (h4) holds for the same reasons as (g2).
Proof of (36):
n(2R_1 + R_2 − 3ε_n) ≤ I(X_1^n; Y_1^n, V_1^n) + I(X_1^n; Y_1^n, V_1^n) + I(X_2^n; Y_2^n, V_2^n)
  ≤(t1) min{ I(X_1^n; Y_1^n | T_1^n, T_2^n) + I(T_1^n; Y_1^n, Y_r1^n | T_2^n) + H(V_1^n), I(X_1^n; Y_1^n, Y_r1^n | T_1^n, T_2^n) + I(T_1^n; Y_1^n, Y_r1^n | T_2^n) }
   + min{ I(X_1^n, T_2^n; Y_1^n) − I(T_2^n; Y_2^n, Y_r2^n | T_1^n) + H(V_1^n), I(X_1^n, T_2^n; Y_1^n, Y_r1^n) − I(T_2^n; Y_2^n, Y_r2^n | T_1^n) }
   + min{ I(X_2^n, T_1^n; Y_2^n) − I(T_1^n; Y_1^n, Y_r1^n | T_2^n) + H(V_2^n), I(X_2^n, T_1^n; Y_2^n, Y_r2^n) − I(T_1^n; Y_1^n, Y_r1^n | T_2^n) }
  = min{ I(X_1^n; Y_1^n | T_1^n, T_2^n) + H(V_1^n), I(X_1^n; Y_1^n, Y_r1^n | T_1^n, T_2^n) } + min{ I(X_1^n, T_2^n; Y_1^n) + H(V_1^n), I(X_1^n, T_2^n; Y_1^n, Y_r1^n) }
   + min{ I(X_2^n, T_1^n; Y_2^n) − I(T_2^n; Y_2^n, Y_r2^n | T_1^n) + H(V_2^n), I(X_2^n, T_1^n; Y_2^n, Y_r2^n) − I(T_2^n; Y_2^n, Y_r2^n | T_1^n) }
  ≤(t2) min{ I(X_1^n; Y_1^n | T_1^n, T_2^n) + H(V_1^n), I(X_1^n; Y_1^n, Y_r1^n | T_1^n, T_2^n) } + min{ I(X_1^n, T_2^n; Y_1^n) + H(V_1^n), I(X_1^n, T_2^n; Y_1^n, Y_r1^n) }
   + min{ I(X_2^n, T_2^n, T_1^n; Y_2^n) − I(T_2^n; Y_2^n) + H(V_2^n), I(X_2^n, T_2^n, T_1^n; Y_2^n, Y_r2^n) − I(T_2^n; Y_2^n, Y_r2^n) }
  = min{ I(X_1^n; Y_1^n | T_1^n, T_2^n) + H(V_1^n), I(X_1^n; Y_1^n, Y_r1^n | T_1^n, T_2^n) } + min{ I(X_1^n, T_2^n; Y_1^n) + H(V_1^n), I(X_1^n, T_2^n; Y_1^n, Y_r1^n) }
   + min{ I(X_2^n, T_1^n; Y_2^n | T_2^n) + H(V_2^n), I(X_2^n, T_1^n; Y_2^n, Y_r2^n | T_2^n) }
  ≤(t3) min{ Σ_{i=1}^n [ I(X_1i; Y_1i | T_1i, T_2i) + C_1 ], Σ_{i=1}^n I(X_1i; Y_1i, Y_r1i | T_1i, T_2i) } + min{ Σ_{i=1}^n [ I(X_1i, T_2i; Y_1i) + C_1 ], Σ_{i=1}^n I(X_1i, T_2i; Y_1i, Y_r1i) }
   + min{ Σ_{i=1}^n [ I(X_2i, T_1i; Y_2i | T_2i) + C_2 ], Σ_{i=1}^n I(X_2i, T_1i; Y_2i, Y_r2i | T_2i) },
where (t1) uses (A22), (A25), (A26), (A27), (A28), and (A29); (t2) holds since T_2^n is a function of X_2^n, T_1^n and T_2^n are independent, and by the chain rule and the nonnegativity of mutual information; (t3) holds for the same reasons as (g2).
The bound on R 1 + 2 R 2 in (37) follows from (36) by symmetry.
The bound on the cardinality of Q follows from the application of the Support Lemma [41] (Lemma 3.4, p. 310).

Appendix C. Proof of Theorem 3.

Throughout the proofs we use V_i^n to denote the output of the digital relay link at receiver i, i ∈ {1, 2}. In the inequalities below we use the following facts: (a) X_1^n is independent of X_2^n; (b) V_k^n is a function of Y_rk^n, k ∈ {1, 2}; (c) the i.i.d. Gaussian distribution maximizes differential entropy and conditional differential entropy subject to covariance constraints.
Bounds on individual rates in (50) and (51):
By Fano’s inequality we have
n(R_1 − ε_n) ≤ I(X_1^n; V_1^n, Y_1^n) ≤ I(X_1^n; V_1^n, Y_1^n, X_2^n) =(a) I(X_1^n; V_1^n, Y_1^n | X_2^n)
  = I(X_1^n; Y_1^n | X_2^n) + I(X_1^n; V_1^n | Y_1^n, X_2^n)
  ≤ I(X_1^n; Y_1^n | X_2^n) + H(V_1^n).   (A36)
We have
I(X_1^n; Y_1^n | X_2^n) = h(Y_1^n | X_2^n) − h(Y_1^n | X_1^n, X_2^n) = h(h_11 X_1^n + Z_1^n) − h(Z_1^n) ≤(c) (n/2) log(1 + SNR_1),   (A37)
and
H(V_1^n) ≤ n H(V_1) ≤ n C_1.   (A38)
Substituting (A37) and (A38) into (A36), we have
R_1 − ε_n ≤ (1/2) log(1 + SNR_1) + C_1,   (A39)
where ε_n → 0 as n → ∞. We also have
n(R_1 − ε_n) ≤ I(X_1^n; V_1^n, Y_1^n) ≤(b) I(X_1^n; Y_r1^n, Y_1^n) =(a) I(X_1^n; Y_r1^n, Y_1^n | X_2^n)
  = I(X_1^n; Y_1^n | X_2^n) + I(X_1^n; Y_r1^n | Y_1^n, X_2^n)
  = h(h_11 X_1^n + Z_1^n) − h(Z_1^n) + h(h_1r X_1^n + Z_r1^n | h_11 X_1^n + Z_1^n) − h(Z_r1^n)
  ≤(c) (n/2) log(1 + SNR_1 + SNR_r1).   (A40)
Next, to prove the outer bounds on the sum rate and the weighted sum rates, we first transform the genie-aided channel into a MIMO IC plus some bounded terms. We then leverage the outer bounds for the MIMO IC developed by Karmakar and Varanasi in [35] (Lemma 1), which can be shown to coincide, for the channel under consideration, with the bounds introduced by Telatar and Tse in [34].
Bounds on R 1 + R 2 in (52)–(54).
By Fano’s inequality we have
n(R_1 + R_2 − 2ε_n) ≤ I(X_1^n; Y_1^n, V_1^n) + I(X_2^n; Y_2^n, V_2^n)
  = I(X_1^n; Y_1^n) + I(X_1^n; V_1^n | Y_1^n) + I(X_2^n; Y_2^n) + I(X_2^n; V_2^n | Y_2^n)
  ≤ I(X_1^n; Y_1^n) + I(X_2^n; Y_2^n) + H(V_1^n) + H(V_2^n)
  ≤ I(X_1^n; Y_1^n) + I(X_2^n; Y_2^n) + nC_1 + nC_2.
At this point we can use the bounds on I(X_1^n; Y_1^n) + I(X_2^n; Y_2^n) established in [34] or [35] (Lemma 1) to arrive at the bounds in (52)–(54).
Bounds on 2 R 1 + R 2 in (55). By Fano’s inequality we have
n(2R_1 + R_2 − 3ε_n) ≤ 2I(X_1^n; Y_1^n, V_1^n) + I(X_2^n; Y_2^n, V_2^n) ≤ 2I(X_1^n; Y_1^n) + I(X_2^n; Y_2^n) + 2H(V_1^n) + H(V_2^n) ≤ 2I(X_1^n; Y_1^n) + I(X_2^n; Y_2^n) + 2nC_1 + nC_2.
Applying the bounds on 2I(X_1^n; Y_1^n) + I(X_2^n; Y_2^n) in [34,35] completes the proof.
Bounds on R 1 + R 2 in (56)–(58). By Fano’s inequality we have
n(R_1 + R_2 − 2ε_n) ≤ I(X_1^n; Y_1^n, V_1^n) + I(X_2^n; Y_2^n, V_2^n) ≤(b) I(X_1^n; Y_1^n, Y_r1^n) + I(X_2^n; Y_2^n, Y_r2^n).
By defining Y ¯ i : = ( Y i , Y r i ) , i { 1 , 2 } , we can apply the bounds in [34,35] to obtain (56)–(58).
Bounds on 2 R 1 + R 2 in (59). By Fano’s inequality we have
n(2R_1 + R_2 − 3ε_n) ≤ 2I(X_1^n; Y_1^n, V_1^n) + I(X_2^n; Y_2^n, V_2^n) ≤(b) 2I(X_1^n; Y_1^n, Y_r1^n) + I(X_2^n; Y_2^n, Y_r2^n).
Then we define Y ¯ i : = ( Y i , Y r i ) , i { 1 , 2 } , and apply the bounds on 2 I ( X 1 n ; Y ¯ 1 n ) + I ( X 2 n ; Y ¯ 2 n ) in [34,35].
Bounds on R 1 + R 2 in (60)–(62). By Fano’s inequality we have
n(R_1 + R_2 − 2ε_n) ≤ I(X_1^n; Y_1^n, V_1^n) + I(X_2^n; Y_2^n, V_2^n) ≤(b) I(X_1^n; Y_1^n) + H(V_1^n) + I(X_2^n; Y_2^n, Y_r2^n) ≤ I(X_1^n; Y_1^n) + I(X_2^n; Y_2^n, Y_r2^n) + nC_1.
At this point we can use the bounds on I ( X 1 n ; Y 1 n ) + I ( X 2 n ; Y 2 n , Y r 2 n ) in [34,35] to end up with the bounds in (60)–(62).
Bound on 2R_1 + R_2 in (63): by Fano's inequality,
n(2R_1 + R_2 − 3ε_n) ≤ 2I(X_1^n; Y_1^n, V_1^n) + I(X_2^n; Y_2^n, V_2^n) ≤(b) 2I(X_1^n; Y_1^n) + 2H(V_1^n) + I(X_2^n; Y_2^n, Y_r2^n) ≤ 2I(X_1^n; Y_1^n) + I(X_2^n; Y_2^n, Y_r2^n) + 2nC_1.
Then we can use the bound on 2 I ( X 1 n ; Y 1 n ) + I ( X 2 n ; Y 2 n , Y r 2 n ) in [34,35] to obtain (63).
Bounds on R 1 + R 2 in (64)–(66). These bounds follow readily from the bounds in (60)–(62) by symmetry.
Bound on 2R_1 + R_2 in (67): by Fano's inequality,
n(2R_1 + R_2 − 3ε_n) ≤ 2I(X_1^n; Y_1^n, V_1^n) + I(X_2^n; Y_2^n, V_2^n) ≤(b) 2I(X_1^n; Y_1^n, Y_r1^n) + I(X_2^n; Y_2^n) + H(V_2^n) ≤ 2I(X_1^n; Y_1^n, Y_r1^n) + I(X_2^n; Y_2^n) + nC_2.
Then we can use the bound on 2 I ( X 1 n ; Y 1 n , Y r 1 n ) + I ( X 2 n ; Y 2 n ) in [34,35].
Bounds on 2R_1 + R_2 in (68): let S_1^n = h_12 X_1^n + Z_2^n and S_2^n = h_21 X_2^n + Z_1^n, so that Y_1^n = h_11 X_1^n + S_2^n and Y_2^n = h_22 X_2^n + S_1^n. Using Fano's inequality, the fact that conditioning does not increase entropy, and the chain rule, we have that if a rate pair (R_1, R_2) is achievable, then
n(2R_1 + R_2 − 3ε_n) ≤ 2I(X_1^n; Y_1^n, V_1^n) + I(X_2^n; Y_2^n, V_2^n)
  ≤(b) I(X_1^n; Y_1^n, Y_r1^n) + I(X_1^n; Y_1^n) + I(X_1^n; V_1^n | Y_1^n) + I(X_2^n; Y_2^n) + I(X_2^n; V_2^n | Y_2^n)
  ≤ I(X_1^n; Y_1^n, Y_r1^n, S_1^n, X_2^n) + I(X_1^n; Y_1^n) + H(V_1^n) + I(X_2^n; Y_2^n, S_2^n) + H(V_2^n)
  =(a) I(X_1^n; Y_1^n, Y_r1^n, S_1^n | X_2^n) + I(X_1^n; Y_1^n) + H(V_1^n) + I(X_2^n; Y_2^n, S_2^n) + H(V_2^n)
  ≤ I(X_1^n; S_1^n | X_2^n) + I(X_1^n; Y_1^n, Y_r1^n | S_1^n, X_2^n) + I(X_1^n; Y_1^n) + h(Y_2^n, S_2^n) − h(Y_2^n, S_2^n | X_2^n) + nC_1 + nC_2
  = h(S_1^n) − h(Z_2^n) + h(Y_1^n, Y_r1^n | S_1^n, X_2^n) − h(Z_1^n, Z_r1^n) + h(Y_1^n) − h(S_2^n) + h(Y_2^n, S_2^n) − h(S_1^n, Z_1^n) + nC_1 + nC_2
  = h(S_1^n) − h(Z_2^n) + h(Y_1^n, Y_r1^n | S_1^n, X_2^n) − h(Z_1^n, Z_r1^n) + h(Y_1^n) + h(Y_2^n | S_2^n) − h(S_1^n) − h(Z_1^n) + nC_1 + nC_2
  = h(Y_1^n, Y_r1^n | S_1^n, X_2^n) − h(Z_1^n, Z_r1^n) + h(Y_1^n) − h(Z_1^n) + h(Y_2^n | S_2^n) − h(Z_2^n) + nC_1 + nC_2
  = h(h_11 X_1^n + Z_1^n, h_1r X_1^n + Z_r1^n | h_12 X_1^n + Z_2^n) − h(Z_1^n, Z_r1^n) + h(h_11 X_1^n + h_21 X_2^n + Z_1^n) − h(Z_1^n) + h(h_22 X_2^n + h_12 X_1^n + Z_2^n | h_21 X_2^n + Z_1^n) − h(Z_2^n) + nC_1 + nC_2
  ≤(c) n · (RHS of (68)).
Bounds on 2R_1 + R_2 in (69): we define S_1^n, S_2^n as above. Again, using Fano's inequality, the fact that conditioning does not increase entropy, and the chain rule, we have that if a rate pair (R_1, R_2) is achievable, then
n(2R_1 + R_2 − 3ε_n) ≤ 2I(X_1^n; Y_1^n, V_1^n) + I(X_2^n; Y_2^n, V_2^n)
  ≤(b) I(X_1^n; Y_1^n, Y_r1^n) + I(X_1^n; Y_1^n) + I(X_1^n; V_1^n | Y_1^n) + I(X_2^n; Y_2^n, Y_r2^n)
  ≤ I(X_1^n; Y_1^n, Y_r1^n, S_1^n, X_2^n) + I(X_1^n; Y_1^n) + H(V_1^n) + I(X_2^n; Y_2^n, Y_r2^n, S_2^n)
  ≤ h(S_1^n) − h(Z_2^n) + h(Y_1^n, Y_r1^n | S_1^n, X_2^n) − h(Z_1^n, Z_r1^n) + h(Y_1^n) − h(S_2^n) + nC_1 + I(X_2^n; Y_2^n, Y_r2^n, S_2^n).   (A41)
The last term in (A41) can be bounded as follows:
I(X_2^n; Y_2^n, Y_r2^n, S_2^n) = I(X_2^n; S_2^n) + I(X_2^n; Y_2^n, Y_r2^n | S_2^n)
  = h(S_2^n) − h(Z_1^n) + h(Y_2^n, Y_r2^n | S_2^n) − h(Y_2^n, Y_r2^n | X_2^n, S_2^n)
  ≤ h(S_2^n) − h(Z_1^n) + h(Y_2^n, Y_r2^n | S_2^n) − h(Y_2^n | X_2^n, S_2^n) − h(Y_r2^n | X_2^n, S_2^n, Y_2^n, X_1^n)
  = h(S_2^n) − h(Z_1^n) + h(Y_2^n, Y_r2^n | S_2^n) − h(S_1^n) − h(Z_r2^n).   (A42)
Substituting (A42) into (A41), we have
n(2R_1 + R_2 − 3ε_n) ≤ h(Y_1^n, Y_r1^n | S_1^n, X_2^n) − h(Z_1^n, Z_r1^n) + h(Y_1^n) − h(Z_1^n) + h(Y_2^n, Y_r2^n | S_2^n) − h(Z_2^n) − h(Z_r2^n) + nC_1 ≤(c) n · (RHS of (69)).

Appendix D. Proof of Theorem 4.

Before proving Theorem 4 we need two lemmas, whose proofs follow in the sequel.
Lemma A1.
Given the power allocation in (74), we have:
SNR_1p ≥ SNR_1/(1 + INR_2),   SNR_r1p ≥ SNR_r1/(1 + INR_2),
INR_1p ≤ 1,   INR_r1p ≤ |h_2c|²/|h_21|².
Proof. 
Since we choose P_1p = min{1, 1/|h_12|²} and P_2p = min{1, 1/|h_21|²},
SNR_1p = |h_11|² P_1p = |h_11|² min{1, 1/|h_12|²} ≥ |h_11|²/(1 + |h_12|²) = SNR_1/(1 + INR_2).
Similarly we can show that SNR_r1p ≥ SNR_r1/(1 + INR_2). Next,
INR_1p = |h_21|² P_2p = |h_21|² min{1, 1/|h_21|²} ≤ 1,
and
INR_r1p = |h_2c|² P_2p = |h_2c|² min{1, 1/|h_21|²} ≤ |h_2c|²/|h_21|². ☐
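The inequalities of Lemma A1 are elementary to verify numerically. The following is a quick sanity check (not part of the proof), assuming unit transmit power so that each squared gain equals the corresponding SNR or INR; the random draws are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    # Squared channel gains |h|^2, drawn arbitrarily for illustration.
    h11, h12, h21, h1r, h2c = rng.exponential(1.0, 5)
    SNR1, INR2, SNRr1 = h11, h12, h1r          # unit transmit power assumed
    P1p = min(1.0, 1.0 / h12)                  # power allocation (74)
    P2p = min(1.0, 1.0 / h21)
    assert h11 * P1p >= SNR1 / (1 + INR2) - 1e-12     # SNR_1p bound
    assert h1r * P1p >= SNRr1 / (1 + INR2) - 1e-12    # SNR_r1p bound
    assert h21 * P2p <= 1 + 1e-12                     # INR_1p <= 1
    assert h2c * P2p <= h2c / h21 + 1e-12             # INR_r1p <= |h2c|^2/|h21|^2
print("Lemma A1 verified on 1000 random draws")
```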
Lemma A1 enables us to prove Lemma A2 below.
Lemma A2.
Let us define:
f_1(Δ_1) = (1/2) log( 2 + 2Δ_1 + |h_2c|²/|h_21|² )   (A46)
f_2(Δ_2) = (1/2) log( 2 + 2Δ_2 + |h_1c|²/|h_12|² ).   (A47)
Given the power allocation in (74) we have
ξ_1 ≤ (1/2) log( 1 + (1/Δ_1)(1 + |h_2c|²/|h_21|²) )   (A48)
a_1 ≥ C( SNR_1/(1 + INR_2) ) − 1/2
a_1′ ≥ C( (SNR_1 + SNR_r1)/(1 + INR_2) ) − f_1(Δ_1)
b_1 ≥ C( INR_1 + SNR_1/(1 + INR_2) ) − 1/2
b_1′ ≥ C( (INR_1 + INR_r1 + SNR_1(1 + η_1² INR_r1) + SNR_r1)/(1 + INR_2) ) − f_1(Δ_1)
c_1 ≥ C( SNR_1 ) − 1/2
c_1′ ≥ C( SNR_1 + SNR_r1 ) − f_1(Δ_1)
d_1 ≥ C( SNR_1 + INR_1 ) − 1/2
d_1′ ≥ C( INR_1 + INR_r1 + SNR_1(1 + η_1² INR_r1) + SNR_r1 ) − f_1(Δ_1).
Similar inequalities hold for a_2, a_2′, b_2, b_2′, c_2, c_2′, d_2, d_2′ by symmetry.
Proof. 
The proof is similar to that of [8] (Lemma 1). We will use the results of Lemma A1, indexed as follows: (a) SNR_1p ≥ SNR_1/(1 + INR_2); (b) SNR_r1p ≥ SNR_r1/(1 + INR_2); (c) INR_1p ≤ 1; (d) INR_r1p ≤ |h_2c|²/|h_21|².
First, for the term related to the quantization at each relay:
ξ_1 = I(Ŷ_r1; Y_r1 | X_1, X_2c, Y_1) = (1/2) log( 1 + (1/Δ_1) · (1 + INR_r1p)/(1 + INR_1p) ) ≤ (1/2) log( 1 + (1/Δ_1)(1 + INR_r1p/INR_1p) ) = (1/2) log( 1 + (1/Δ_1)(1 + |h_2c|²/|h_21|²) ).
For the other quantities in Lemma A2:
a_1 = I(X_1; Y_1 | X_1c, X_2c) = (1/2) log( 1 + SNR_1p/(1 + INR_1p) ) ≥(c) (1/2) log(1 + SNR_1p) − 1/2 ≥(a) (1/2) log( 1 + SNR_1/(1 + INR_2) ) − 1/2.
a_1′ = I(X_1; Y_1, Ŷ_r1 | X_1c, X_2c) = (1/2) log( [(1 + Δ_1)(1 + SNR_1p + INR_1p) + SNR_r1p + INR_r1p(1 + η_1² SNR_1p)] / [(1 + Δ_1)(1 + INR_1p) + INR_r1p] )
  = (1/2) log( (1 + Δ_1)(1 + SNR_1p + INR_1p) + SNR_r1p + INR_r1p(1 + η_1² SNR_1p) ) − (1/2) log( (1 + Δ_1)(1 + INR_1p) + INR_r1p )
  ≥(c,d) (1/2) log( 1 + SNR_1p + SNR_r1p ) − f_1(Δ_1)
  ≥(a,b) (1/2) log( 1 + (SNR_1 + SNR_r1)/(1 + INR_2) ) − f_1(Δ_1).
In the same way we can show that
b_1 = I(X_1, X_2c; Y_1 | X_1c) = (1/2) log( 1 + (SNR_1p + INR_1)/(1 + INR_1p) ) ≥ (1/2) log( 1 + INR_1 + SNR_1/(1 + INR_2) ) − 1/2.
b_1′ = I(X_1, X_2c; Y_1, Ŷ_r1 | X_1c) = (1/2) log( [(1 + Δ_1)(1 + SNR_1p + INR_1) + SNR_r1p + INR_r1(1 + η_1² SNR_1p)] / [(1 + Δ_1)(1 + INR_1p) + INR_r1p] )
  ≥ (1/2) log( 1 + (INR_1 + INR_r1 + SNR_1(1 + η_1² INR_r1) + SNR_r1)/(1 + INR_2) ) − f_1(Δ_1).
c_1 = I(X_1; Y_1 | X_2c) = (1/2) log( 1 + SNR_1/(1 + INR_1p) ) ≥ (1/2) log(1 + SNR_1) − 1/2.
c_1′ = I(X_1; Y_1, Ŷ_r1 | X_2c) = (1/2) log( [(1 + Δ_1)(1 + SNR_1 + INR_1p) + SNR_r1 + INR_r1p(1 + η_1² SNR_1)] / [(1 + Δ_1)(1 + INR_1p) + INR_r1p] )
  ≥ (1/2) log( 1 + SNR_1 + SNR_r1 ) − f_1(Δ_1).
d_1 = I(X_1, X_2c; Y_1) = (1/2) log( 1 + (SNR_1 + INR_1)/(1 + INR_1p) ) ≥ (1/2) log(1 + SNR_1 + INR_1) − 1/2.
d_1′ = I(X_1, X_2c; Y_1, Ŷ_r1) = (1/2) log( [(1 + Δ_1)(1 + SNR_1 + INR_1) + SNR_r1 + INR_r1(1 + η_1² SNR_1)] / [(1 + Δ_1)(1 + INR_1p) + INR_r1p] )
  ≥ (1/2) log( 1 + INR_1 + INR_r1 + SNR_1(1 + η_1² INR_r1) + SNR_r1 ) − f_1(Δ_1).
Hence Lemma A2 is proven. ☐
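As a sanity check, the bounds on a_1 and ξ_1 in Lemma A2 can also be tested numerically. The sketch below is illustrative only, with randomly drawn squared gains, unit transmit power, and Δ_1 = 1; C(x) denotes (1/2) log2(1 + x):

```python
import numpy as np

C = lambda x: 0.5 * np.log2(1.0 + x)       # Gaussian capacity function, bits

rng = np.random.default_rng(2)
for _ in range(1000):
    g11, g12, g21, g2c = rng.exponential(1.0, 4)   # squared gains
    SNR1, INR2 = g11, g12
    P1p, P2p = min(1.0, 1.0 / g12), min(1.0, 1.0 / g21)
    SNR1p, INR1p, INRr1p = g11 * P1p, g21 * P2p, g2c * P2p

    a1 = C(SNR1p / (1.0 + INR1p))                  # I(X1; Y1 | X1c, X2c)
    assert a1 >= C(SNR1 / (1.0 + INR2)) - 0.5 - 1e-9       # Lemma A2

    Delta1 = 1.0                                   # any Delta1 > 0 works here
    xi1 = C((1.0 / Delta1) * (1.0 + INRr1p) / (1.0 + INR1p))
    assert xi1 <= C((1.0 + g2c / g21) / Delta1) + 1e-9     # bound (A48)
print("Lemma A2 bounds on a_1 and xi_1 verified")
```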
Now we are ready to prove Theorem 4. First, let us define δ_Ci := (C_i − ξ_i)^+ and g_i(ξ_i) := 1/2 + ξ_i, i ∈ {1, 2}. Comparing each constraint of the inner bound (13)–(21) with the corresponding constraint of the outer bound (50)–(69), and utilizing the inequalities in Lemma A2 (marked with (∗) below), we have:
For individual rate constraints:
  • min{ c_1 + δ_C1, c_1′ } in the RHS of (13) is within max{ g_1(ξ_1), f_1(Δ_1) } of the RHS of (50).
    Proof. 
    We have:
    c_1 + (C_1 − ξ_1)^+ ≥(∗) (1/2) log(1 + SNR_1) − 1/2 + C_1 − ξ_1 = (1/2) log(1 + SNR_1) + C_1 − g_1(ξ_1),
    which is within g_1(ξ_1) of the outer bound (1/2) log(1 + SNR_1) + C_1 in the RHS of (50). The remaining gap follows directly from the lower bound on c_1′ in Lemma A2. ☐
  • a_1 + δ_C1 + b_2 + δ_C2 (resp. a_1 + δ_C1 + b_2′) in the RHS of (14) is within g_1(ξ_1) + g_2(ξ_2) (resp. g_1(ξ_1) + f_2(Δ_2)) of the first term in the RHS of (50).
    Proof. 
    We have:
    a_1 + (C_1 − ξ_1)^+ + b_2 + (C_2 − ξ_2)^+ ≥ a_1 + C_1 − ξ_1 + b_2 + C_2 − ξ_2
      ≥(∗) (1/2) log( 1 + SNR_1/(1 + INR_2) ) − 1/2 + C_1 − ξ_1 + (1/2) log( 1 + INR_2 + SNR_2/(1 + INR_1) ) − 1/2 + C_2 − ξ_2
      ≥ (1/2) log(1 + SNR_1) + C_1 − (ξ_1 + 1/2) − (ξ_2 + 1/2) = (1/2) log(1 + SNR_1) + C_1 − g_1(ξ_1) − g_2(ξ_2),
    which is within g_1(ξ_1) + g_2(ξ_2) of the outer bound (1/2) log(1 + SNR_1) + C_1. Further,
    a_1 + (C_1 − ξ_1)^+ + b_2′ ≥(∗) (1/2) log( 1 + SNR_1/(1 + INR_2) ) − 1/2 + C_1 − ξ_1 + (1/2) log( 1 + (INR_2 + INR_r2 + SNR_2(1 + η_2² INR_r2) + SNR_r2)/(1 + INR_1) ) − f_2(Δ_2)
      ≥ (1/2) log(1 + SNR_1) + C_1 − g_1(ξ_1) − f_2(Δ_2),
    which is within g_1(ξ_1) + f_2(Δ_2) of the outer bound (1/2) log(1 + SNR_1) + C_1. ☐
  • a_1′ + b_2 + δ_C2 (resp. a_1′ + b_2′) in the RHS of (14) is within f_1(Δ_1) + g_2(ξ_2) (resp. f_1(Δ_1) + f_2(Δ_2)) of the second term in the RHS of (50).
    Proof. 
    We have:
    a_1′ + b_2 + (C_2 − ξ_2)^+ ≥(∗) (1/2) log( 1 + (SNR_1 + SNR_r1)/(1 + INR_2) ) − f_1(Δ_1) + (1/2) log( 1 + INR_2 + SNR_2/(1 + INR_1) ) − 1/2 + C_2 − ξ_2
      ≥ (1/2) log(1 + SNR_1 + SNR_r1) − f_1(Δ_1) − (1/2 + ξ_2) = (1/2) log(1 + SNR_1 + SNR_r1) − f_1(Δ_1) − g_2(ξ_2),
    which is within f_1(Δ_1) + g_2(ξ_2) of the outer bound (1/2) log(1 + SNR_1 + SNR_r1). Next,
    a_1′ + b_2′ ≥(∗) (1/2) log( 1 + (SNR_1 + SNR_r1)/(1 + INR_2) ) − f_1(Δ_1) + (1/2) log( 1 + (INR_2 + INR_r2 + SNR_2(1 + η_2² INR_r2) + SNR_r2)/(1 + INR_1) ) − f_2(Δ_2)
      ≥ (1/2) log(1 + SNR_1 + SNR_r1) − f_1(Δ_1) − f_2(Δ_2),
    which is within f_1(Δ_1) + f_2(Δ_2) of the outer bound (1/2) log(1 + SNR_1 + SNR_r1). ☐
Similar gaps for R 2 follow by symmetry.
For the sum rate constraints: following the same lines as the proofs above, we can show that, of the constraints on R_1 + R_2 in (17)–(19):
  • d_1 + δ_C1 + a_2 + δ_C2, a_1 + δ_C1 + d_2 + δ_C2, b_1 + δ_C1 + b_2 + δ_C2 are within g_1(ξ_1) + g_2(ξ_2) of the RHSs of (52)–(54), respectively.
  • a_1′ + d_2′, d_1′ + a_2′, b_1′ + b_2′ are within f_1(Δ_1) + f_2(Δ_2) of the RHSs of (56)–(58), respectively.
  • a_1 + δ_C1 + d_2′, d_1 + δ_C1 + a_2′, b_1 + δ_C1 + b_2′ are within g_1(ξ_1) + f_2(Δ_2) of the RHSs of (60)–(62), respectively.
  • b_1′ + b_2 + δ_C2, a_1′ + d_2 + δ_C2, d_1′ + a_2 + δ_C2 are within f_1(Δ_1) + g_2(ξ_2) of the RHSs of (64)–(66), respectively.
For the weighted sum rate constraints: of the inner bounds on 2R_1 + R_2 in (20), it can be shown that the bounds a_1 + δ_C1 + d_1′ + b_2 + δ_C2 and a_1 + δ_C1 + d_1′ + b_2′ are redundant. For the remaining bounds:
  • a_1 + δ_C1 + d_1 + δ_C1 + b_2 + δ_C2 is within 2g_1(ξ_1) + g_2(ξ_2) of the RHS of (55).
  • a_1′ + d_1′ + b_2′ is within 2f_1(Δ_1) + f_2(Δ_2) of the RHS of (59).
  • a_1 + δ_C1 + d_1 + δ_C1 + b_2′ is within 2g_1(ξ_1) + f_2(Δ_2) of the RHS of (63).
  • a_1′ + d_1′ + b_2 + δ_C2 is within 2f_1(Δ_1) + g_2(ξ_2) of the RHS of (67).
  • a_1′ + d_1 + δ_C1 + b_2 + δ_C2 is within f_1(Δ_1) + g_1(ξ_1) + g_2(ξ_2) of the RHS of (68).
  • a_1′ + d_1 + δ_C1 + b_2′ is within f_1(Δ_1) + g_1(ξ_1) + f_2(Δ_2) of the RHS of (69).
The gaps for R 1 + 2 R 2 follow directly from the gaps for 2 R 1 + R 2 by symmetry.
Finally, we loosen the gaps obtained from the above comparisons by replacing |h_2c|²/|h_21|² in the definitions of Δ_1 and ξ_1 in (A46) and (A48) with its upper bound θ defined in (75) (and similarly for Δ_2 and ξ_2). The final step is to optimize Δ_1 and Δ_2 so as to minimize the new (loosened) gaps. At this point we can apply the result in [8] and conclude that the optimal values are Δ_1 = Δ_2 = ( √(θ² + 16θ + 16) − θ )/4. Direct calculation then leads to the optimized gap in (76).
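A short numerical sketch can confirm this optimizing distortion. After the loosening above, the two per-user gap ingredients reduce to f(Δ) = (1/2) log(2 + 2Δ + θ) (from (A46)) and g(Δ) = 1/2 + (1/2) log(1 + (1 + θ)/Δ) (from (A48) plus the 1/2 term); f increases and g decreases in Δ, so their maximum is minimized where they cross, which reproduces the closed form. The grid search below is purely illustrative:

```python
import numpy as np

def f(delta, theta):                 # quantization-loss term, from (A46)
    return 0.5 * np.log2(2 + 2 * delta + theta)

def g(delta, theta):                 # rate-loss term, from (A48), plus 1/2
    return 0.5 + 0.5 * np.log2(1 + (1 + theta) / delta)

theta = 1.0
deltas = np.logspace(-3, 3, 600001)             # grid search over Delta
worst = np.maximum(f(deltas, theta), g(deltas, theta))
d_numeric = deltas[np.argmin(worst)]
d_closed = (np.sqrt(theta**2 + 16 * theta + 16) - theta) / 4

print(f"numeric Delta* = {d_numeric:.4f}, closed form = {d_closed:.4f}")
# For theta = 0 both give Delta* = 1, i.e., quantization at the noise level.
```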

Appendix E. Proof of Proposition 1.

First, by setting X_kc = X_k, R_kp = 0, k ∈ {1, 2}, and Q = 1 in Theorem 1, i.e., the whole message at each transmitter is set to be the common message, and noticing that each decoder is not required to decode the common message of the unpaired transmitter uniquely, we obtain the following achievable rate region R^i_SIRC:
R_1 ≤ min{ c_1 + (C_1 − ξ_1)^+, c_1′ }   (A57)
R_2 ≤ min{ c_2 + (C_2 − ξ_2)^+, c_2′ }   (A58)
R_1 + R_2 ≤ min{ d_1 + (C_1 − ξ_1)^+, d_1′ }   (A59)
R_1 + R_2 ≤ min{ d_2 + (C_2 − ξ_2)^+, d_2′ }.   (A60)
From the expressions for the received signals Y_1, Y_2, Y_r1, Y_r2 in (46)–(49), and by choosing the distribution of the quantization codebooks as Ŷ_rk = Y_rk + Z_qk with Z_qk ~ N(0, Δ_k), k ∈ {1, 2}, we have:
c_1 = I(X_1; Y_1 | X_2) = (1/2) log(1 + SNR_1)   (A61a)
c_1′ = I(X_1; Y_1, Ŷ_r1 | X_2) = (1/2) log( 1 + SNR_1 + SNR_r1/(1 + Δ_1) )   (A61b)
d_1 = I(X_1, X_2; Y_1) = (1/2) log(1 + SNR_1 + INR_1)   (A61c)
d_1′ = I(X_1, X_2; Y_1, Ŷ_r1) = (1/2) log( 1 + SNR_1 + INR_1 + (SNR_r1 + INR_r1 + SNR_1 INR_1 η_1²)/(1 + Δ_1) )   (A61d)
ξ_1 = I(Ŷ_r1; Y_r1 | X_1, X_2, Y_1) = (1/2) log(1 + 1/Δ_1),   (A61e)
and c_2, c_2′, d_2, d_2′, ξ_2 follow by symmetry. Next, we choose the quantization level Δ_1 = 1. This reflects the fact that in many networks with relays it is near-optimal for each relay to quantize its received signal at the noise level [10,42]. It then follows from (A61e) that ξ_1 = 1/2. We proceed to compare the inner and outer bounds, keeping in mind that Δ_1 = Δ_2 = 1, ξ_1 = ξ_2 = 1/2, SNR_1 + SNR_r1 ≤ INR_2, and SNR_2 + SNR_r2 ≤ INR_1.
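Before the itemized comparison, a two-line numerical check may help; the SNR values below are hypothetical, and C(x) denotes (1/2) log2(1 + x):

```python
import numpy as np

C = lambda x: 0.5 * np.log2(1.0 + x)

# With the noise-level quantizer Delta_1 = 1, (A61e) gives
# xi_1 = C(1/Delta_1) = 1/2 bit exactly.
Delta1 = 1.0
print(f"xi_1 = {C(1.0 / Delta1)} bit")          # 0.5

SNR1, SNRr1 = 100.0, 50.0                       # hypothetical values
inner = C(SNR1 + SNRr1 / (1.0 + Delta1))        # c_1' from (A61b)
outer = C(SNR1 + SNRr1)                         # second term in (50)
print(f"gap = {outer - inner:.3f} bit (<= 0.5)")
```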
Individual rate constraints. Direct comparison reveals that
  • c_1 + (C_1 − ξ_1)^+ = (1/2) log(1 + SNR_1) + (C_1 − ξ_1)^+ in (A57) is within 1/2 bit of the outer bound (1/2) log(1 + SNR_1) + C_1 in (50).
  • c_1′ = (1/2) log( 1 + SNR_1 + SNR_r1/(1 + Δ_1) ) in (A57) is within 1/2 bit of the outer bound (1/2) log(1 + SNR_1 + SNR_r1) in (50).
Therefore, the inner bound on R 1 is within 1 / 2 bit of the outer bound. The same gap for R 2 follows by symmetry.
Sum rate constraints. Direct comparison reveals that
  • d_1 + (C_1 − ξ_1)^+ = (1/2) log(1 + SNR_1 + INR_1) + (C_1 − ξ_1)^+ in (A59) is within 1 bit of the outer bound
    (1/2) log( 1 + (SNR_2 + SNR_r2)/(1 + INR_1) ) + (1/2) log(1 + SNR_1 + INR_1) + C_1
    in (61).
  • d_1′ = (1/2) log( 1 + SNR_1 + INR_1 + (SNR_r1 + INR_r1 + SNR_1 INR_1 η_1²)/(1 + Δ_1) ) in (A59) is within 1 bit of the outer bound
    (1/2) log( 1 + (SNR_2 + SNR_r2)/(1 + INR_1 + INR_r1) ) + (1/2) log( 1 + INR_1 + SNR_r1 + INR_r1 + SNR_1(1 + η_1² INR_r1) )
    in (57).
By symmetry, it follows that min{ d_2 + (C_2 − ξ_2)^+, d_2′ } is within 1 bit of the outer bounds in (65) and (56). ☐

Appendix F. Proof of Proposition 2.

We prove the proposition in the reverse direction: we fix the power allocation at the transmitters and the quantization distortion at the relays, and then find conditions on C_1, C_2 under which the second terms in the min in the RHSs of (22)–(29) are active.
Since we are focusing on the case where the quantization codewords can be recovered correctly at the destination, i.e., Ŷ_rk is effectively a channel output at receiver Rx_k, k ∈ {1, 2}, it is reasonable to think of each relay and its receiver as a single receiver. As such, we expect the rule of thumb for power allocation in [10,36] to perform well. Accordingly, we allocate the transmit power of each private signal so that it arrives at the unintended receiver below the noise level. Specifically, we choose
P_1p = min{ 1, 1/(INR_2 + INR_r2) },
P_2p = min{ 1, 1/(INR_1 + INR_r1) }.
Moreover, as learned from the linear deterministic model in [10], it is reasonable for each relay to choose the quantization distortion at the level of undesired signals plus noise perceived by the associated receiver. In particular we choose
Δ_1 = 1 + INR_r1p,   Δ_2 = 1 + INR_r2p.
Then the "rate-loss" terms can be bounded as
ξ_1 = I(Ŷ_r1; Y_r1 | X_1, X_2c, Y_1) = (1/2) log( 1 + (1/Δ_1) · (1 + INR_r1p)/(1 + INR_1p) ) ≤ 1/2.
Similarly, ξ_2 ≤ 1/2. Along the same lines as the proof of Lemma A1 in Appendix D, we can show the following bounds:
INR_r1/(1 + INR_1 + INR_r1) ≤ INR_r1p ≤ 1   (A66a)
INR_1/(1 + INR_1 + INR_r1) ≤ INR_1p ≤ 1   (A66b)
SNR_r1/(1 + INR_2 + INR_r2) ≤ SNR_r1p   (A66c)
SNR_1/(1 + INR_2 + INR_r2) ≤ SNR_1p.   (A66d)
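The bounds (A66a)–(A66d) can again be checked numerically. The sketch below is illustrative only, with random squared gains and unit transmit power assumed:

```python
import numpy as np

rng = np.random.default_rng(3)
for _ in range(1000):
    g11, g1r, g21, g2c, g12, g1c = rng.exponential(1.0, 6)  # squared gains
    SNR1, SNRr1 = g11, g1r
    INR1, INRr1 = g21, g2c            # interference caused by transmitter 2
    INR2, INRr2 = g12, g1c            # interference caused by transmitter 1
    P1p = min(1.0, 1.0 / (INR2 + INRr2))
    P2p = min(1.0, 1.0 / (INR1 + INRr1))
    INRr1p, INR1p = INRr1 * P2p, INR1 * P2p
    SNRr1p, SNR1p = SNRr1 * P1p, SNR1 * P1p
    assert INRr1 / (1 + INR1 + INRr1) - 1e-12 <= INRr1p <= 1 + 1e-12  # (A66a)
    assert INR1 / (1 + INR1 + INRr1) - 1e-12 <= INR1p <= 1 + 1e-12    # (A66b)
    assert SNRr1p >= SNRr1 / (1 + INR2 + INRr2) - 1e-12               # (A66c)
    assert SNR1p >= SNR1 / (1 + INR2 + INRr2) - 1e-12                 # (A66d)
print("(A66a)-(A66d) hold on all draws")
```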
Now, in order for the second terms in the min in the RHSs of (22)–(29) to be active we need
(C_1 − ξ_1)^+ ≥ max{ a_1′ − a_1, b_1′ − b_1, c_1′ − c_1, d_1′ − d_1 } = d_1′ − d_1.
Since ξ_1 ≤ 1/2, a sufficient condition is
C_1 ≥ 1/2 + d_1′ − d_1 = 1/2 + I(X_1, X_2c; Ŷ_r1 | Y_1).
Now we have
I(X_1, X_2c; Ŷ_r1 | Y_1) = (1/2) log( [1 + Δ_1 + (SNR_r1 + INR_r1 + SNR_1 INR_r1 η_1²)/(1 + SNR_1 + INR_1)] / [1 + Δ_1 + INR_r1p/(1 + INR_1p)] )
  ≤(a) (1/2) log( [1 + Δ_1 + (SNR_r1 + INR_r1 + SNR_1 INR_r1 η_1²)/(1 + SNR_1 + INR_1)] / [1 + Δ_1 + INR_r1p/2] )
  ≤(b) (1/2) log( [ (SNR_r1 + INR_r1 + SNR_1 INR_r1 η_1²)(1 + INR_1 + INR_r1) + (4 + 4 INR_1 + 6 INR_r1)(1 + SNR_1 + INR_1) ] / [ (4 + 4 INR_1 + 7 INR_r1)(1 + SNR_1 + INR_1) ] ),
where (a) follows from the upper bound on INR_1p in (A66b), and (b) follows from the lower bound on INR_r1p in (A66a) together with Δ_1 = 1 + INR_r1p. Hence, a sufficient condition on C_1 is
C_1 ≥ (1/2) log( [ 2(SNR_r1 + INR_r1 + SNR_1 INR_r1 η_1²)(1 + INR_1 + INR_r1) + 2(4 + 4 INR_1 + 6 INR_r1)(1 + SNR_1 + INR_1) ] / [ (4 + 4 INR_1 + 7 INR_r1)(1 + SNR_1 + INR_1) ] ).
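For concreteness, this sufficient condition can be evaluated numerically; the operating point below is hypothetical, and η_1 is the parameter of the Gaussian model from Section 4 with an arbitrary value:

```python
import numpy as np

# Hypothetical operating point (unit-power squared gains).
SNR1, INR1, SNRr1, INRr1, eta1 = 100.0, 5.0, 50.0, 3.0, 0.5

num = (2 * (SNRr1 + INRr1 + SNR1 * INRr1 * eta1**2) * (1 + INR1 + INRr1)
       + 2 * (4 + 4 * INR1 + 6 * INRr1) * (1 + SNR1 + INR1))
den = (4 + 4 * INR1 + 7 * INRr1) * (1 + SNR1 + INR1)
C1_min = 0.5 * np.log2(num / den)
print(f"second terms in (22)-(29) active for C1 >= {C1_min:.2f} bits")
```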
Under this condition on C_1 (and the analogous condition on C_2), the achievable rate region in Theorem 1 reduces to
R_1 ≤ c_1′   (A69)
R_1 ≤ a_1′ + b_2′   (A70)
R_2 ≤ c_2′   (A71)
R_2 ≤ a_2′ + b_1′   (A72)
R_1 + R_2 ≤ a_1′ + d_2′   (A73)
R_1 + R_2 ≤ b_1′ + b_2′   (A74)
R_1 + R_2 ≤ d_1′ + a_2′   (A75)
2R_1 + R_2 ≤ a_1′ + d_1′ + b_2′   (A76)
R_1 + 2R_2 ≤ b_1′ + a_2′ + d_2′.   (A77)
Let us define
g_1(Δ_1) := (1/2) log( (1 + Δ_1)(1 + INR_1p) + INR_r1p ),   g_2(Δ_2) := (1/2) log( (1 + Δ_2)(1 + INR_2p) + INR_r2p ).
Using the bounds in (A66a)–(A66d) we have
g_1(Δ_1) ≤ (1/2) log(7)
g_2(Δ_2) ≤ (1/2) log(7),   (A78)
and
a_1′ = I(X_1; Y_1, Ŷ_r1 | X_1c, X_2c) = (1/2) log( [(1 + Δ_1)(1 + SNR_1p + INR_1p) + SNR_r1p + INR_r1p(1 + η_1² SNR_1p)] / [(1 + Δ_1)(1 + INR_1p) + INR_r1p] )
  ≥ (1/2) log( 1 + SNR_1p + SNR_r1p ) − g_1(Δ_1)
  ≥ (1/2) log( 1 + (SNR_1 + SNR_r1)/(1 + INR_2 + INR_r2) ) − g_1(Δ_1).
In the same way we can show that
b_1′ = I(X_1, X_2c; Y_1, Ŷ_r1 | X_1c) = (1/2) log( [(1 + Δ_1)(1 + SNR_1p + INR_1) + SNR_r1p + INR_r1(1 + η_1² SNR_1p)] / [(1 + Δ_1)(1 + INR_1p) + INR_r1p] )
  ≥ (1/2) log( 1 + (INR_1 + INR_r1 + SNR_1(1 + η_1² INR_r1) + SNR_r1)/(1 + INR_2 + INR_r2) ) − g_1(Δ_1).
c_1′ = I(X_1; Y_1, Ŷ_r1 | X_2c) = (1/2) log( [(1 + Δ_1)(1 + SNR_1 + INR_1p) + SNR_r1 + INR_r1p(1 + η_1² SNR_1)] / [(1 + Δ_1)(1 + INR_1p) + INR_r1p] )
  ≥ (1/2) log( 1 + SNR_1 + SNR_r1 ) − g_1(Δ_1).
d_1′ = I(X_1, X_2c; Y_1, Ŷ_r1) = (1/2) log( [(1 + Δ_1)(1 + SNR_1 + INR_1) + SNR_r1 + INR_r1(1 + η_1² SNR_1)] / [(1 + Δ_1)(1 + INR_1p) + INR_r1p] )
  ≥ (1/2) log( 1 + INR_1 + INR_r1 + SNR_1(1 + η_1² INR_r1) + SNR_r1 ) − g_1(Δ_1).
The preceding inequalities lead to the following gaps between the inner and outer bounds:
  • Constraints on R_1:
    The RHS of (A69) is within g_1(Δ_1) of the RHS of (50).
    The RHS of (A70) is within g_1(Δ_1) + g_2(Δ_2) of the RHS of (50).
  • Constraints on R_2 follow by symmetry.
  • Constraints on R_1 + R_2:
    The RHS of (A73) is within g_1(Δ_1) + g_2(Δ_2) of the RHS of (56).
    Similarly for the RHS of (A75).
    The RHS of (A74) is within g_1(Δ_1) + g_2(Δ_2) of the RHS of (58).
  • Constraint on 2R_1 + R_2:
    The RHS of (A76) is within 2g_1(Δ_1) + g_2(Δ_2) of the RHS of (59).
  • The constraint on R_1 + 2R_2 follows by symmetry.
In summary, we conclude that the inner bounds on the individual rates are within g_1(Δ_1) + g_2(Δ_2) bits of the outer bounds, the inner bound on R_1 + R_2 is within g_1(Δ_1) + g_2(Δ_2), the inner bound on 2R_1 + R_2 is within 2g_1(Δ_1) + g_2(Δ_2), and the inner bound on R_1 + 2R_2 is within g_1(Δ_1) + 2g_2(Δ_2) of the respective outer bounds. Using (A78) we obtain the desired gap. Finally, recall that the outer bounds used here are derived by assuming that each receiver cooperates with its associated relay. ☐

Appendix G. Proofs of Lemma 1 and Lemma 2.

Proof of Lemma 1: by setting the whole message at each transmitter to be private, i.e., setting X_kc = ∅, R_kc = 0, k ∈ {1, 2}, and Q = const in Theorem 1, the achievable rate region for the IRC reduces to a rectangle given by
R_1 ≤ min{ I(X_1; Y_1) + (C_1 − I(Ŷ_r1; Y_r1 | X_1, Y_1))^+, I(X_1; Y_1, Ŷ_r1) },
R_2 ≤ min{ I(X_2; Y_2) + (C_2 − I(Ŷ_r2; Y_r2 | X_2, Y_2))^+, I(X_2; Y_2, Ŷ_r2) }.
Extending the above achievable region to the Gaussian IRC, as described in Section 4.2, with the quantization distortion Δ_k = 1 + INR_rk, k ∈ {1, 2}, we can easily show that I(X_k; Y_k) + (C_k − I(Ŷ_rk; Y_rk | X_k, Y_k))^+ ≤ I(X_k; Y_k, Ŷ_rk) if the conditions in (79) are satisfied. Hence, the resulting achievable region is given by
R_1 ≤ I(X_1; Y_1) + (C_1 − I(Ŷ_r1; Y_r1 | X_1, Y_1))^+,
R_2 ≤ I(X_2; Y_2) + (C_2 − I(Ŷ_r2; Y_r2 | X_2, Y_2))^+,
which gives rise to the achievable sum rate C l b w = R 1 + R 2 in explicit form in (80). ☐
Proof of Lemma 2: recall that we denote the sequences sent over the digital links by V_1^n and V_2^n. By Fano's inequality,
n(R_1 + R_2 − 2ε_n) ≤ I(X_1^n; Y_1^n, V_1^n) + I(X_2^n; Y_2^n, V_2^n)
  ≤ I(X_1^n; Y_1^n) + H(V_1^n) + I(X_2^n; Y_2^n) + H(V_2^n)
  ≤ I(X_1^n; Y_1^n) + I(X_2^n; Y_2^n) + nC_1 + nC_2,   (A79)
where ε_n → 0 as n → ∞. Under the condition (81) we can proceed to bound I(X_1^n; Y_1^n) + I(X_2^n; Y_2^n) in the same way as done in [37,38,39]. Specifically, we can find a useful and smart genie, which gives
I(X_1^n; Y_1^n) + I(X_2^n; Y_2^n) ≤ (n/2) log( 1 + SNR_1/(1 + INR_1) ) + (n/2) log( 1 + SNR_2/(1 + INR_2) ).   (A80)
Combining (A79) and (A80) completes the proof of Lemma 2. ☐
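To illustrate Lemmas 1 and 2 together (cf. Figure 5), the sketch below evaluates a conservative version of the Lemma 1 lower bound, using ξ_k ≤ 1/2 (which holds for Δ_k = 1 + INR_rk), against the Lemma 2 upper bound. All parameter values are hypothetical, and the sketch assumes the conditions in (79) and (81) hold at this operating point:

```python
import numpy as np

C = lambda x: 0.5 * np.log2(1.0 + x)

SNR1 = SNR2 = 100.0          # hypothetical weak-regime operating point
INR1 = INR2 = 2.0
C1 = C2 = 1.0

# Lemma 1 (treat interference as noise at each receiver, xi_k <= 1/2):
lb = sum(C(S / (1 + I)) + max(Ck - 0.5, 0.0)
         for S, I, Ck in ((SNR1, INR1, C1), (SNR2, INR2, C2)))
# Lemma 2 (genie-aided upper bound):
ub = C(SNR1 / (1 + INR1)) + C(SNR2 / (1 + INR2)) + C1 + C2

print(f"sum-rate: lower bound >= {lb:.2f} bits, upper bound = {ub:.2f} bits")
```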

Appendix H. Proof of Proposition 4.

As for the Gaussian IRC in the strong interference regime, we set the whole message at each transmitter to be the common message, i.e., X_kc = X_k, R_kp = 0, k ∈ {1, 2}, and Q = 1 in (22)–(29). As a result, the achievable rate region R^i_CMARC consists of (A57)–(A60) plus the two extra conditions given below, which follow from (27) and (23), due to the fact that each decoder must decode both messages:
R_1 ≤ min{ b_2 + (C_2 − ξ_2)^+, b_2′ }   (A81)
R_2 ≤ min{ b_1 + (C_1 − ξ_1)^+, b_1′ }.   (A82)
By the same quantization procedure as for the Gaussian IRC with strong interference in Appendix E, we obtain c_1, c_1′, d_1, d_1′, ξ_1 as in (A61), plus
b_1 = I(X_2; Y_1 | X_1) = (1/2) log(1 + INR_1),
b_1′ = I(X_2; Y_1, Ŷ_r1 | X_1) = (1/2) log( 1 + INR_1 + INR_r1/(1 + Δ_1) ),
and symmetrically for b_2, b_2′, c_2, c_2′, d_2, d_2′, ξ_2. Note that we again choose Δ_1 = Δ_2 = 1, which makes ξ_1 = ξ_2 = 1/2 (see (A61e)). Keeping these in mind, let us compare the inner and outer bounds.
Individual rate constraints.
  • c_1 + (C_1 − ξ_1)^+ = (1/2) log(1 + SNR_1) + (C_1 − ξ_1)^+ in (A57) is within 1/2 bit of the outer bound (1/2) log(1 + SNR_1) + C_1 in (84).
  • c_1′ = (1/2) log( 1 + SNR_1 + SNR_r1/(1 + Δ_1) ) in (A57) is within 1/2 bit of the outer bound (1/2) log(1 + SNR_1 + SNR_r1) in (84).
  • b_2 + (C_2 − ξ_2)^+ = (1/2) log(1 + INR_2) + (C_2 − ξ_2)^+ in (A81) is within 1/2 bit of the outer bound (1/2) log(1 + INR_2) + C_2 in (85).
  • b_2′ = (1/2) log( 1 + INR_2 + INR_r2/(1 + Δ_2) ) in (A81) is within 1/2 bit of the outer bound (1/2) log(1 + INR_2 + INR_r2) in (85).
Therefore we conclude that the active inner bound on R_1 is within 1/2 bit of some outer bound. By symmetry we obtain the same result for R_2.
Sum rate constraints.
  • d_1 + (C_1 − ξ_1)^+ = (1/2) log(1 + SNR_1 + INR_1) + (C_1 − ξ_1)^+ in (A59) is within 1/2 bit of the outer bound
    (1/2) log(1 + SNR_1 + INR_1) + C_1
    in (88).
  • d_1′ = (1/2) log( 1 + SNR_1 + INR_1 + (SNR_r1 + INR_r1 + SNR_1 INR_1 η_1²)/(1 + Δ_1) ) in (A59) is within 1/2 bit of the outer bound
    (1/2) log( 1 + SNR_1 + INR_1 + SNR_r1 + INR_r1 + SNR_1 INR_1 η_1² )
    in (90).
Similarly we obtain the same gaps for the two remaining bounds on R_1 + R_2. Accordingly, we conclude that the active inner bound on the sum rate is within 1/2 bit of some outer bound.
Finally, since the inner bound is within 1 / 2 bit of the outer bound, the inner bound is within 1 / 2 bit of the capacity region of the channel. ☐

References

  1. Dahlman, E.; Parkvall, S.; Skold, J. 4G: LTE/LTE-Advanced for Mobile Broadband; Academic Press: Cambridge, MA, USA, 2013. [Google Scholar]
  2. Jha, S.C.; Sivanesan, K.; Vannithamby, R.; Koc, A.T. Dual Connectivity in LTE small cell networks. In Proceedings of the IEEE Globecom Workshops, Austin, TX, USA, 8–12 December 2014; pp. 1205–1210. [Google Scholar]
  3. Sahin, O.; Erkip, E. Achievable Rates for the Gaussian Interference Relay Channel. In Proceedings of the IEEE Global Telecommunications Conference, Washington, DC, USA, 26–30 November 2007; pp. 1627–1631. [Google Scholar]
  4. Marić, I.; Dabora, R.; Goldsmith, A. On the capacity of the interference channel with a relay. In Proceedings of the IEEE International Symposium on Information Theory, Toronto, ON, Canada, 6–11 July 2008; pp. 554–558. [Google Scholar]
  5. Sahin, O.; Erkip, E.; Simeone, O. Interference channel with a relay: Models, relaying strategies, bounds. In Proceedings of the Information Theory and Applications Workshop (ITA), San Diego, CA, USA, 8–13 February 2009; pp. 90–95. [Google Scholar]
  6. Tian, Y.; Yener, A. The Gaussian Interference Relay Channel: Improved Achievable Rates and Sum Rate Upperbounds Using a Potent Relay. IEEE Trans. Inf. Theory 2011, 57, 2865–2879. [Google Scholar] [CrossRef]
  7. Simeone, O.; Erkip, E.; Shamai, S. On Codebook Information for Interference Relay Channels with Out-of-Band Relaying. IEEE Trans. Inf. Theory 2011, 57, 2880–2888. [Google Scholar] [CrossRef]
  8. Zhou, L.; Yu, W. Incremental Relaying for the Gaussian Interference Channel With a Degraded Broadcasting Relay. IEEE Trans. Inf. Theory 2013, 59, 2794–2815. [Google Scholar] [CrossRef]
  9. Razaghi, P.; Hong, S.N.; Zhou, L.; Yu, W.; Caire, G. Two Birds and One Stone: Gaussian Interference Channel With a Shared Out-of-Band Relay of Limited Rate. IEEE Trans. Inf. Theory 2013, 59, 4192–4212. [Google Scholar] [CrossRef]
  10. Wang, I.H.; Tse, D. Interference Mitigation Through Limited Receiver Cooperation. IEEE Trans. Inf. Theory 2011, 57, 2913–2940. [Google Scholar] [CrossRef]
  11. Cover, T.M.; Kim, Y.H. Capacity of a Class of Deterministic Relay Channels. In Proceedings of the IEEE International Symposium on Information Theory, Nice, France, 24–29 June 2007; pp. 591–595. [Google Scholar]
  12. Chang, H.; Chung, S.Y.; Kim, S. Interference Channel With a Causal Relay Under Strong and Very Strong Interference. IEEE Trans. Inf. Theory 2014, 60, 859–865. [Google Scholar] [CrossRef]
  13. Do, H.T.; Oechtering, T.J.; Skoglund, M. Layered Coding for the Interference Channel With a Relay. IEEE Trans. Inf. Theory 2014, 60, 6154–6180. [Google Scholar] [CrossRef]
  14. Baik, I.J.; Chung, S.Y. Causal Relay Networks. IEEE Trans. Inf. Theory 2015, 61, 5432–5440. [Google Scholar] [CrossRef]
  15. Liu, T.; Tuninetti, D.; Chung, S.Y. On the DoF Region of the MIMO Gaussian Two-User Interference Channel With an Instantaneous Relay. IEEE Trans. Inf. Theory 2017, 63, 4453–4471. [Google Scholar] [CrossRef]
  16. Shin, W.; Lee, N.; Lee, J.; Poor, H.V. Guiding blind transmitters for K-user MISO interference relay channels with Imperfect channel knowledge. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, 10–15 July 2016; pp. 545–549. [Google Scholar]
  17. Kang, H.S.; Kang, M.G.; Nosratinia, A.; Choi, W. The Degrees of Freedom of the Interference Channel with a Cognitive Relay under Delayed Feedback. IEEE Trans. Inf. Theory 2017, 63, 5299–5313. [Google Scholar] [CrossRef]
  18. Nokleby, M.; Aazhang, B. Cooperative Compute-and-Forward. IEEE Trans. Wirel. Commun. 2016, 15, 14–27. [Google Scholar] [CrossRef]
  19. Kramer, G.; Hou, J. Short-message quantize-forward network coding. In Proceedings of the 8th International Workshop on Multi-Carrier Systems Solutions, Herrsching, Germany, 3–4 May 2011; pp. 1–3. [Google Scholar]
  20. Kim, Y.H. Coding techniques for primitive relay channels. In Proceedings of the Forty-Fifth Annual Allerton Conference Allerton House, UIUC, Monticello, IL, USA, 26–28 September 2007. [Google Scholar]
  21. Carleial, A. Interference Channels. IEEE Trans. Inf. Theory 1978, 24, 60–70. [Google Scholar] [CrossRef]
  22. Han, T.; Kobayashi, K. A New Achievable Rate Region for the Interference Channel. IEEE Trans. Inf. Theory 1981, 27, 49–60. [Google Scholar] [CrossRef]
  23. Do, H.T.; Oechtering, T.J.; Skoglund, M. Noisy Network Coding Approach to the Interference Channel with Receiver Cooperation. In Proceedings of the 49th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 28–30 September 2011. [Google Scholar]
  24. Do, H.T.; Oechtering, T.J.; Skoglund, M. A new inner bound for the interference relay channel. In Proceedings of the 46th Annual Conference on Information Sciences and Systems (CISS), Princeton, NJ, USA, 21–23 March 2012. [Google Scholar]
  25. El Gamal, A.; Kim, Y.H. Network Information Theory; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
  26. Lim, S.H.; Kim, Y.H.; El Gamal, A.; Chung, S.Y. Noisy Network Coding. IEEE Trans. Inf. Theory 2011, 57, 3132–3152. [Google Scholar] [CrossRef]
  27. Razaghi, P.; Yu, W. Universal relaying for the interference channel. In Proceedings of the Information Theory and Applications Workshop (ITA), San Diego, CA, USA, 31 January–5 February 2010. [Google Scholar]
  28. Kim, Y.H. Capacity of a Class of Deterministic Relay Channels. IEEE Trans. Inf. Theory 2008, 54, 1328–1329. [Google Scholar] [CrossRef]
  29. Costa, M.H.M.; Gamal, A. The capacity region of the discrete memoryless interference channel with strong interference (Corresp.). IEEE Trans. Inf. Theory 1987, 33, 710–711. [Google Scholar] [CrossRef]
  30. Gamal, A.; Costa, M. The capacity region of a class of deterministic interference channels (Corresp.). IEEE Trans. Inf. Theory 1982, 28, 343–346. [Google Scholar] [CrossRef]
  31. Chong, H.F.; Motani, M. The Capacity Region of a Class of Semideterministic Interference Channels. IEEE Trans. Inf. Theory 2009, 55, 598–603. [Google Scholar] [CrossRef]
  32. Kang, B.; Lee, S.H.; Chung, S.Y.; Suh, C. A new achievable scheme for interference relay channels. In Proceedings of the IEEE International Symposium on Information Theory Proceedings (ISIT), Istanbul, Turkey, 7–12 July 2013. [Google Scholar]
  33. Khormuji, M.; Skoglund, M. Capacity of Two Semideterministic Classes of Multiple-Access Relay Channels. IEEE Commun. Lett. 2012, 16, 1529–1531. [Google Scholar] [CrossRef]
  34. Telatar, E.; Tse, D. Bounds on the capacity region of a class of interference channels. In Proceedings of the IEEE International Symposium on Information Theory, Nice, France, 24–29 June 2007; pp. 2871–2874. [Google Scholar]
  35. Karmakar, S.; Varanasi, M.K. The Capacity Region of the MIMO Interference Channel and Its Reciprocity to Within a Constant Gap. IEEE Trans. Inf. Theory 2013, 59, 4781–4797. [Google Scholar] [CrossRef]
  36. Etkin, R.H.; Tse, D.N.C.; Wang, H. Gaussian Interference Channel Capacity to Within One Bit. IEEE Trans. Inf. Theory 2008, 54, 5534–5562. [Google Scholar] [CrossRef]
  37. Shang, X.; Kramer, G.; Chen, B. A New Outer Bound and the Noisy-Interference Sum-Rate Capacity for Gaussian Interference Channels. IEEE Trans. Inf. Theory 2009, 55, 689–699. [Google Scholar] [CrossRef]
  38. Motahari, A.; Khandani, A. Capacity Bounds for the Gaussian Interference Channel. IEEE Trans. Inf. Theory 2009, 55, 620–643. [Google Scholar] [CrossRef]
  39. Annapureddy, V.S.; Veeravalli, V.V. Gaussian Interference Networks: Sum Capacity in the Low-Interference Regime and New Outer Bounds on the Capacity Region. IEEE Trans. Inf. Theory 2009, 55, 3032–3050. [Google Scholar] [CrossRef]
  40. Chong, H.F.; Motani, M.; Garg, H.; El Gamal, H. On The Han-Kobayashi Region for the Interference Channel. IEEE Trans. Inf. Theory 2008, 54, 3188–3195. [Google Scholar] [CrossRef]
  41. Csiszár, I.; Körner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems; Academic Press: Cambridge, MA, USA, 1981. [Google Scholar]
  42. Avestimehr, A.; Diggavi, S.; Tse, D. Wireless Network Information Flow: A Deterministic Approach. IEEE Trans. Inf. Theory 2011, 57, 1872–1905. [Google Scholar] [CrossRef]
Figure 1. Interfering relay channels (IRC). Dashed lines depict interference. M_i and M̂_i, i ∈ {1, 2}, denote a message and its estimate, respectively.
Figure 2. Semi-deterministic interfering relay channels.
Figure 3. Deterministic IRC of El Gamal–Costa type.
Figure 4. Gaussian interfering relay channels.
Figure 5. Sum capacity upper and lower bounds in the weak interference-relay regime.
