Article

Network Coding for Line Networks with Broadcast Channels

by
Gerhard Kramer
1 and
Seyed Mohammadsadegh Tabatabaei Yazdi
2
1
Institute for Communications Engineering, Technische Universität München, 80333 Munich, Germany
2
Corporate R&D, Qualcomm, San Diego, CA 92121, USA
Entropy 2012, 14(10), 1813-1828; https://doi.org/10.3390/e14101813
Submission received: 26 July 2012 / Revised: 7 September 2012 / Accepted: 18 September 2012 / Published: 28 September 2012
(This article belongs to the Special Issue Information Theory Applied to Communications and Networking)

Abstract:
An achievable rate region for line networks with edge and node capacity constraints and broadcast channels (BCs) is derived. The region is shown to be the capacity region if the BCs are orthogonal, deterministic, physically degraded, or packet erasure with one-bit feedback. If the BCs are physically degraded with additive Gaussian noise then independent Gaussian inputs achieve capacity.


1. Introduction

Consider a line network with edge and node capacity constraints as shown in Figure 1. "Supernode" $u$, $u = 1, 2, 3, 4$, consists of two nodes $u_i, u_o$, where "i" stands for "input" and "o" for "output". More generally, for $N$ supernodes $\mathcal{V} = \{1, 2, \ldots, N\}$ we have $2N$ nodes
$$\mathcal{V}_{io} = \{1_i, 1_o, 2_i, 2_o, \ldots, N_i, N_o\}$$
and $2(N-1) + N$ directed edges
$$\mathcal{E}_{io} = \{(1_o, 2_i), (2_o, 3_i), \ldots, ((N-1)_o, N_i)\} \cup \{(N_o, (N-1)_i), \ldots, (3_o, 2_i), (2_o, 1_i)\} \cup \{(1_i, 1_o), (2_i, 2_o), \ldots, (N_i, N_o)\}$$
Every edge $(a, b)$ is labeled with a capacity constraint $C_{a,b}$ and for simplicity we write $C_{u_i, u_o}$ as $C_u$.
Figure 1. A line network with edge and node capacity constraints.
Let
$$u \to D(u) = \{v(1), v(2), \ldots, v(L)\}$$
denote a multicast traffic session, where $u, v(1), \ldots, v(L)$ are supernodes. The meaning is that a source message is available at supernode $u$ and is destined for the supernodes in the set $D(u)$. Since $u$ takes on $N$ values and $D(u)$ can take on $2^{N-1} - 1$ values, there are up to $N(2^{N-1} - 1)$ multicast sessions. We associate sources with input nodes $u_i$ and sinks with output nodes $u_o$. Such line networks are special cases of discrete memoryless networks (DMNs) and we use the capacity definition from [1] (Section III.D). The capacity region was recently established in [2]. A binary linear network code achieves capacity and progressive d-separating edge-cut (PdE) bounds [3] provide the converse.
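The session count above is easy to check numerically. The following is a minimal sketch (the function name `multicast_sessions` is ours, not from [2]) that enumerates all sessions $u \to D(u)$ and confirms the count $N(2^{N-1} - 1)$:

```python
from itertools import combinations

def multicast_sessions(N):
    """Enumerate all multicast sessions u -> D(u) for supernodes 1..N:
    each source u targets a non-empty subset D(u) of the other supernodes."""
    sessions = []
    for u in range(1, N + 1):
        others = [v for v in range(1, N + 1) if v != u]
        for r in range(1, len(others) + 1):
            for D in combinations(others, r):
                sessions.append((u, frozenset(D)))
    return sessions

# N = 3 gives the 9 sessions of Example 1; N = 4 gives the 28 of Example 2.
assert len(multicast_sessions(3)) == 3 * (2**2 - 1)
assert len(multicast_sessions(4)) == 4 * (2**3 - 1)
```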
The goal of this work is to extend the results of [2] to wireless line networks by using insights from two-way relaying [4], broadcasting with cooperation [5], and broadcasting with side-information [6]. The model is shown in Figure 2; the difference to Figure 1 is that node $u_o$ transmits over a two-receiver broadcast channel (BC) $P(y_{u-1}, y_{u+1} \mid x_u)$ to nodes $(u-1)_i$ and $(u+1)_i$ (see [7]). The channel outputs at node $u_i$ are
$$Y_{u-1,u} = f_{u-1,u}(X_{u-1}, Z_{u-1})$$
$$Y_{u+1,u} = f_{u+1,u}(X_{u+1}, Z_{u+1})$$
for some functions $f_{u-1,u}(\cdot)$ and $f_{u+1,u}(\cdot)$, where the $Z_u$, $u = 1, 2, \ldots, N$, are statistically independent. For generality, we permit the noise random variable $Z_u$ to be common to $Y_{u,u-1}$ and $Y_{u,u+1}$. The edges $(u_i, u_o)$ are the usual links with capacity $C_u$. Such line networks are again special cases of DMNs and we use the capacity definition from [1] (Section III.D).
Figure 2. A line network with broadcasting and node capacity constraints.
The paper is organized as follows. Section 2 reviews the capacity region for line networks derived in [2]. Section 3 gives our main result: an achievable rate region for line networks with BCs. Section 4 shows that this region is the capacity region for orthogonal, deterministic, and physically degraded BCs, and packet erasure BCs with feedback. We further show that for physically degraded Gaussian BCs the best input distributions are Gaussian. Section 5 relates our work to recent work on relaying and concludes the paper.

2. Review of Wireline Capacity

We review the main result of [2]. Let $m(u \to D(u))$ and $R(u \to D(u))$ denote the message bits and rate, respectively, of traffic session $u \to D(u)$. We collect the bits passing through supernode $u$ into the following 8 sets:
$$m_{LR}(u) = \{m(i \to D(i)) : 1 \le i \le u-1,\ D(i) \cap \{u+1, \ldots, N\} \ne \emptyset,\ u \notin D(i)\}$$
$$m_{RL}(u) = \{m(i \to D(i)) : u+1 \le i \le N,\ D(i) \cap \{1, \ldots, u-1\} \ne \emptyset,\ u \notin D(i)\}$$
$$m_{LR}^u = \{m(i \to D(i)) : 1 \le i \le u-1,\ D(i) \cap \{u+1, \ldots, N\} \ne \emptyset,\ u \in D(i)\}$$
$$m_{RL}^u = \{m(i \to D(i)) : u+1 \le i \le N,\ D(i) \cap \{1, \ldots, u-1\} \ne \emptyset,\ u \in D(i)\}$$
$$m_u = \{m(i \to D(i)) : 1 \le i \le u-1,\ D(i) \cap \{u+1, \ldots, N\} = \emptyset,\ u \in D(i)\} \cup \{m(i \to D(i)) : u+1 \le i \le N,\ D(i) \cap \{1, \ldots, u-1\} = \emptyset,\ u \in D(i)\}$$
$$m_{u,LR} = \{m(u \to D(u)) : D(u) \cap \{1, \ldots, u-1\} \ne \emptyset,\ D(u) \cap \{u+1, \ldots, N\} \ne \emptyset\}$$
$$m_{u,R} = \{m(u \to D(u)) : D(u) \cap \{1, \ldots, u-1\} = \emptyset,\ D(u) \cap \{u+1, \ldots, N\} \ne \emptyset\}$$
$$m_{u,L} = \{m(u \to D(u)) : D(u) \cap \{1, \ldots, u-1\} \ne \emptyset,\ D(u) \cap \{u+1, \ldots, N\} = \emptyset\}$$
The idea is that
  • $m_{LR}(u)$ and $m_{RL}(u)$ represent traffic flowing from left-to-right and right-to-left, respectively, through supernode $u$ without being required at supernode $u$;
  • $m_{LR}^u$ and $m_{RL}^u$ represent traffic flowing from left-to-right and right-to-left, respectively, through supernode $u$ but also required at supernode $u$;
  • $m_u$ represents traffic from the left and right, respectively, that is destined for supernode $u$ but not for any nodes on the right and left (this traffic "stops" at supernode $u$ on its way from the left or right);
  • $m_{u,LR}$, $m_{u,R}$, and $m_{u,L}$ represent traffic originating at supernode $u$ and destined for nodes on both the left and right, the right only, and the left only, respectively.
The non-negative message rates are denoted $R_{LR}(u)$, $R_{RL}(u)$, $R_{LR}^u$, $R_{RL}^u$, $R_u$, $R_{u,LR}$, $R_{u,R}$, and $R_{u,L}$.
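The 8 sets can be generated mechanically. The sketch below (our own helper, not code from [2]) classifies a session $i \to D$ relative to a supernode $u$ according to the definitions above:

```python
def classify(u, i, D, N):
    """Assign session i -> D to one of the 8 message sets at supernode u;
    None means the session does not involve supernode u at all."""
    D = set(D)
    left, right = set(range(1, u)), set(range(u + 1, N + 1))
    if i < u:                          # traffic arriving from the left
        if D & right:
            return "LR^u" if u in D else "LR(u)"
        return "u" if u in D else None
    if i > u:                          # traffic arriving from the right
        if D & left:
            return "RL^u" if u in D else "RL(u)"
        return "u" if u in D else None
    if D & left and D & right:         # traffic originating at supernode u
        return "u,LR"
    return "u,R" if D & right else "u,L"

# Grouping at supernode u = 2 for N = 3 (compare Example 1 below):
assert classify(2, 1, {3}, 3) == "LR(u)"      # session 1 -> 3
assert classify(2, 3, {1, 2}, 3) == "RL^u"    # session 3 -> {1,2}
assert classify(2, 1, {2}, 3) == "u"          # session 1 -> 2 stops at 2
assert classify(2, 2, {1, 3}, 3) == "u,LR"    # session 2 -> {1,3}
```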
Theorem 1 (Theorem 1 in [2]) 
The capacity region of a line network with supernodes $u = 1, 2, \ldots, N$ is specified by the inequalities
$$\max(R_{LR}(u), R_{RL}(u)) + R_{LR}^u + R_{RL}^u + R_u + R_{u,LR} + R_{u,R} + R_{u,L} \le C_u$$
$$R_{RL}(u) + R_{RL}^u + R_{u,LR} + R_{u,L} \le C_{u,u-1}$$
$$R_{LR}(u) + R_{LR}^u + R_{u,LR} + R_{u,R} \le C_{u,u+1}$$
Remark 1 The converse in [2] follows by PdE arguments [3] and achievability follows by using rate-splitting, routing, copying, and “butterfly” binary linear network coding. We review both the PdE bound and the coding method after Examples 1 and 2 below.
Remark 2 Inequalities (15) and (16) are classic cut bounds [8] (Section 14.10). If there are no node constraints ($C_u = \infty$) then (15) and (16) are routing bounds, so routing is optimal for this case (see [9]).
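Theorem 1 is easy to apply numerically. The following sketch checks a rate point against (14)–(16); the dictionary layout is hypothetical, chosen only for this illustration:

```python
KEYS = ['LR(u)', 'RL(u)', 'LR^u', 'RL^u', 'u', 'u,LR', 'u,R', 'u,L']

def in_capacity_region(R, C, C_edge, N):
    """Check a rate point against the inequalities (14)-(16) of Theorem 1.
    R[u] maps the 8 rate names to values, C[u] is the node constraint C_u,
    and C_edge[(a, b)] is the edge constraint C_{a,b}."""
    for u in range(1, N + 1):
        r = R[u]
        node = (max(r['LR(u)'], r['RL(u)']) + r['LR^u'] + r['RL^u'] + r['u']
                + r['u,LR'] + r['u,R'] + r['u,L'])
        if node > C[u]:
            return False
        if u > 1 and r['RL(u)'] + r['RL^u'] + r['u,LR'] + r['u,L'] > C_edge[(u, u - 1)]:
            return False
        if u < N and r['LR(u)'] + r['LR^u'] + r['u,LR'] + r['u,R'] > C_edge[(u, u + 1)]:
            return False
    return True

# Two unit-rate sessions 1 -> 3 and 3 -> 1 crossing supernode 2 (N = 3):
R = {u: {k: 0 for k in KEYS} for u in (1, 2, 3)}
R[1]['u,R'] = 1; R[2]['LR(u)'] = 1; R[3]['u'] = 1     # session 1 -> 3
R[3]['u,L'] = 1; R[2]['RL(u)'] = 1; R[1]['u'] = 1     # session 3 -> 1
C = {1: 2, 2: 1, 3: 2}
C_edge = {(1, 2): 1, (2, 3): 1, (2, 1): 1, (3, 2): 1}
assert in_capacity_region(R, C, C_edge, 3)
```

Note how the max term in (14) lets the two crossing flows share the node capacity $C_2 = 1$; a pure routing bound would need $C_2 = 2$.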
Example 1 Consider $N = 3$, for which we have 9 possible multicast sessions. The network is as in Figure 1 but with the nodes $4_i$ and $4_o$ removed, as well as the edges touching them. For supernode $u = 1$ we collect 7 of these sessions into 2 sets as follows. (We abuse notation and write $u \to \{v\}$ as $u \to v$.)
$$m_1: \quad 2 \to 1,\ 2 \to \{1,3\},\ 3 \to 1,\ 3 \to \{1,2\}$$
$$m_{1,R}: \quad 1 \to 2,\ 1 \to 3,\ 1 \to \{2,3\}$$
Sessions $2 \to 3$ and $3 \to 2$ are missing from (17) and (18) because they do not involve supernode 1. Similarly, for supernode 2 we collect the 9 sessions into 8 sets:
$$m_{LR}(2): 1 \to 3, \qquad m_{RL}(2): 3 \to 1, \qquad m_{LR}^2: 1 \to \{2,3\}, \qquad m_{RL}^2: 3 \to \{1,2\}$$
$$m_2: 1 \to 2,\ 3 \to 2, \qquad m_{2,LR}: 2 \to \{1,3\}, \qquad m_{2,R}: 2 \to 3, \qquad m_{2,L}: 2 \to 1$$
Finally, for supernode 3 we have the 2 sets
$$m_3: \quad 1 \to 3,\ 1 \to \{2,3\},\ 2 \to 3,\ 2 \to \{1,3\}$$
$$m_{3,L}: \quad 3 \to 1,\ 3 \to 2,\ 3 \to \{1,2\}$$
The inequalities of Theorem 1 are
$$u = 1: \quad R_1 + R_{1,R} \le C_1, \qquad R_{1,R} \le C_{1,2}$$
$$u = 2: \quad \max(R_{LR}(2), R_{RL}(2)) + R_{LR}^2 + R_{RL}^2 + R_2 + R_{2,LR} + R_{2,R} + R_{2,L} \le C_2,$$
$$\qquad\quad R_{RL}(2) + R_{RL}^2 + R_{2,LR} + R_{2,L} \le C_{2,1}, \qquad R_{LR}(2) + R_{LR}^2 + R_{2,LR} + R_{2,R} \le C_{2,3}$$
$$u = 3: \quad R_3 + R_{3,L} \le C_3, \qquad R_{3,L} \le C_{3,2}$$
We discuss the 7 inequalities (23)–(25) in more detail. Consider first the converse. We write a classic cut as $(\mathcal{S}, \mathcal{S}^c)$, where $\mathcal{S}$ is the set of nodes on one side of the cut and $\mathcal{S}^c$ is the set of nodes on the other side. The inequalities with the edge capacities $C_{1,2}, C_{2,1}, C_{2,3}, C_{3,2}$ are classic cut bounds. For example, the cut $(\mathcal{S}, \mathcal{S}^c) = (\{1_i, 1_o\}, \{2_i, 2_o, 3_i, 3_o\})$ gives the bound $R_{1,R} \le C_{1,2}$.
The inequalities with the "node" capacities $C_1, C_2, C_3$ in (23)–(25) are not classic cut bounds. To see this, consider the bound $R_1 + R_{1,R} \le C_1$. A classic cut bound would require us to choose $\{1_o, 2_o, 3_o\} \subseteq \mathcal{S}^c$ because $m_1$ and $m_{1,R}$ generally include messages with positive rates for all supernodes. But then the only way to isolate the edge $(1_i, 1_o)$ is to choose $\mathcal{S} = \{1_i\}$, which gives the too-weak bound $R_{1,R} \le C_1$.
We require a stronger method and use PdE bounds. We use the notation of [3]: $E_d$ is the edge cut, $S_d$ is the set of sources whose sum-rate we bound, and $\pi(\cdot)$ is the permutation that defines the order in which we test the sources. Consider the edge cut $E_d = \{(1_i, 1_o)\}$, the source set $S_d$ with the traffic sessions (17) and (18), and any permutation $\pi(\cdot)$ for which the sessions (17) appear before the sessions (18). After removing edge $(1_i, 1_o)$, the PdE algorithm removes edge $(1_o, 2_i)$ because node $1_o$ has no incoming edges. We next test whether the sessions (17) are disconnected from one of their destinations; indeed they are, because one of these destinations is node $1_o$. The PdE algorithm now removes the remaining edges in the graph because the nodes $2_i, 2_o, 3_i, 3_o$ are not the sources of messages in (18). As a result, the remaining sessions (18) are disconnected from their destinations and the PdE bound gives $R_1 + R_{1,R} \le C_1$. The bound on $C_3$ follows similarly. The bound on $C_2$ is more subtle and we develop it in a more general context below (see the text after Example 2).
For achievability, note that all 7 inequalities are routing bounds except for the bound on $C_2$ in (24). To approach this bound, we use a classic method and XOR the bits of sessions $1 \to 3$ and $3 \to 1$ before sending them through edge $(2_i, 2_o)$. More precisely, we combine $m(1 \to 3)$ and $m(3 \to 1)$ to form
$$m(1 \to 3) \oplus m(3 \to 1)$$
by which we mean the bits formed when the smaller-rate message bits are XORed with a corresponding number of bits of the larger-rate message. The remaining larger-rate message bits are appended so that $m(1 \to 3) \oplus m(3 \to 1)$ has rate $\max(R(1 \to 3), R(3 \to 1))$. The message (26) is sent to node $2_o$ together with the remaining messages received at node $2_i$. We must thus satisfy the bound on $C_2$ in (24).
For the routing bounds there are two subtleties. First, node $2_o$ forwards to the left the uncoded bits $m(3 \to \{1,2\})$, $m(2 \to \{1,3\})$, and $m(2 \to 1)$. However, it must treat $m(1 \to 3) \oplus m(3 \to 1)$ specially because it cannot necessarily determine $m(3 \to 1)$ and $m(1 \to 3)$. But if $R(1 \to 3) > R(3 \to 1)$ then node $2_o$ can remove the appended bits in (26) and communicate $m(3 \to 1)$ to node $1_i$ at rate $R(3 \to 1)$, rather than $\max(R(1 \to 3), R(3 \to 1))$. If $R(1 \to 3) \le R(3 \to 1)$ then no bits should be removed and node $2_o$ again communicates $m(3 \to 1)$ to node $1_i$ at rate $R(3 \to 1)$. The bits that node $2_o$ forwards to the right are treated similarly. In summary, the rates for messages $m(3 \to 1)$ and $m(1 \to 3)$ on edges $(2_o, 1_i)$ and $(2_o, 3_i)$, respectively, are simply the classic routing rates.
The second routing subtlety is more straightforward: after node $1_i$ receives the XORed bits, it can recover $m(3 \to 1)$ by subtracting the bits $m(1 \to 3)$ that it knows. Finally, node $1_i$ transmits $m(3 \to 1)$ to node $1_o$. Node $3_i$ operates similarly.
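The XOR combining in (26), and the recovery at a node that knows one of the two messages, can be sketched in a few lines (a toy model with bit tuples; the function names are ours):

```python
def xor_combine(a, b):
    """Combine two bit tuples as in (26): XOR the shorter message onto a
    prefix of the longer one and append the longer message's leftover bits,
    so the result has length max(len(a), len(b))."""
    if len(a) < len(b):
        a, b = b, a                    # a is now the longer message
    return tuple(x ^ y for x, y in zip(a, b)) + a[len(b):]

def xor_recover(combined, known, other_len):
    """A node that knows one message (and the other's rate) strips it off."""
    n = min(len(known), other_len)
    other = tuple(combined[i] ^ known[i] for i in range(n))
    if other_len > len(known):         # the unknown message was the longer one
        other += combined[len(known):]
    return other

m13 = (1, 0, 1, 1)                     # m(1 -> 3), 4 bits
m31 = (0, 1)                           # m(3 -> 1), 2 bits
c = xor_combine(m13, m31)
assert len(c) == max(len(m13), len(m31))
assert xor_recover(c, m31, len(m13)) == m13   # at node 3_i, which knows m(3 -> 1)
assert xor_recover(c, m13, len(m31)) == m31   # at node 1_i, which knows m(1 -> 3)
```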
Example 2 Consider Figure 1 with $N = 4$, for which there are 28 possible multicast sessions. For supernode 1 we collect 19 of these sessions into 2 sets as follows.
$$m_1: \quad 2 \to 1,\ 2 \to \{1,3\},\ 2 \to \{1,4\},\ 2 \to \{1,3,4\},$$
$$\qquad 3 \to 1,\ 3 \to \{1,2\},\ 3 \to \{1,4\},\ 3 \to \{1,2,4\},$$
$$\qquad 4 \to 1,\ 4 \to \{1,2\},\ 4 \to \{1,3\},\ 4 \to \{1,2,3\}$$
$$m_{1,R}: \quad 1 \to 2,\ 1 \to 3,\ 1 \to 4,\ 1 \to \{2,3\},\ 1 \to \{2,4\},\ 1 \to \{3,4\},\ 1 \to \{2,3,4\}$$
The 9 sessions not involving supernode 1 are missing. The rate bounds for supernode 1 are given by (14)–(16) with $u = 1$. The messages and rate bounds for supernode 4 are similar.
Similarly, for supernode 2 we collect 26 of the 28 sessions into 8 sets as follows.
$$m_{LR}(2): \quad 1 \to 3,\ 1 \to 4,\ 1 \to \{3,4\}$$
$$m_{RL}(2): \quad 3 \to 1,\ 3 \to \{1,4\},\ 4 \to 1,\ 4 \to \{1,3\}$$
$$m_{LR}^2: \quad 1 \to \{2,3\},\ 1 \to \{2,4\},\ 1 \to \{2,3,4\}$$
$$m_{RL}^2: \quad 3 \to \{1,2\},\ 3 \to \{1,2,4\},\ 4 \to \{1,2\},\ 4 \to \{1,2,3\}$$
$$m_2: \quad 1 \to 2,\ 3 \to 2,\ 3 \to \{2,4\},\ 4 \to 2,\ 4 \to \{2,3\}$$
$$m_{2,LR}: \quad 2 \to \{1,3\},\ 2 \to \{1,4\},\ 2 \to \{1,3,4\}$$
$$m_{2,R}: \quad 2 \to 3,\ 2 \to 4,\ 2 \to \{3,4\}$$
$$m_{2,L}: \quad 2 \to 1$$
Sessions $3 \to 4$ and $4 \to 3$ are missing. The rate bounds for supernode 2 are given by (14)–(16) with $u = 2$. The messages and rate bounds for supernode 3 are similar.
The converse and coding method for $N > 3$ are entirely similar to the case $N = 3$. However, we have not yet developed the PdE bound for $N = 3$ and the edge cut $E_d = \{(2_i, 2_o)\}$. We do this now, but in the more general context of $N \ge 2$ and $E_d = \{(u_i, u_o)\}$ for any $u$.
So consider the PdE bound with $E_d = \{(u_i, u_o)\}$ and $S_d$ having all the traffic sessions (6)–(13) except for (7). We choose $\pi(\cdot)$ so that the sessions (8)–(10) appear first, the sessions (6) and (11)–(12) appear second, and the sessions (13) appear last. The PdE algorithm performs the following steps.
  • Remove $(u_i, u_o)$ and then remove $(u_o, (u-1)_i)$ and $(u_o, (u+1)_i)$ because node $u_o$ has no incoming edges. The resulting graph at supernode $u$ is shown in Figure 3.
  • Test whether the sessions (8)–(10) (sessions $m_{LR}^u$, $m_{RL}^u$, $m_u$) are disconnected from one of their destinations, which they are because one of these destinations is node $u_o$.
  • Remove all edges to the right of supernode $u$ because the nodes to the right are not the sources of the remaining sessions (6), (11)–(13) (sessions $m_{LR}(u)$, $m_{u,LR}$, $m_{u,R}$, and $m_{u,L}$).
  • Test whether the sessions (6), (11) and (12) (sessions $m_{LR}(u)$, $m_{u,LR}$, $m_{u,R}$) are disconnected from one of their destinations, which they are because one of these destinations is to the right of supernode $u$.
  • Remove all edges to the left of supernode $u$ because the nodes to the left are not the sources of the sessions (13) (sessions $m_{u,L}$).
  • Test whether the sessions (13) are disconnected from one of their destinations, which they are.
Since the algorithm completes successfully, the PdE bound (almost) gives inequality (14), but with $R_{LR}(u)$ replacing $\max(R_{LR}(u), R_{RL}(u))$. The other inequality, i.e., the one with $R_{RL}(u)$ replacing $\max(R_{LR}(u), R_{RL}(u))$, follows by choosing $S_d$ with all the traffic sessions (6)–(13) except for (6), and by modifying $\pi(\cdot)$ so that the edges to the left of supernode $u$ are removed first, and then the edges to the right.
Figure 3. Network at supernode $u$ after the PdE bound has removed the edges $(u_i, u_o)$, $(u_o, (u-1)_i)$, and $(u_o, (u+1)_i)$. The session messages are tested in the order: $m_{LR}^u$, $m_{RL}^u$, $m_u$, then $m_{LR}(u)$, $m_{u,LR}$, $m_{u,R}$, and finally $m_{u,L}$.

3. Achievable Rates with Broadcast

We separate channel and network coding, which sounds simple enough. However, every BC receiver has side information about some of the messages being transmitted, so we will need the methods of [6]. We further use the theory in [5] to describe our achievable rate region.
We begin by having each node $u_i$ combine $m_{LR}(u)$ and $m_{RL}(u)$ into the message
$$m_{LR}(u) \oplus m_{RL}(u)$$
by which we mean the same operation as in (26): the smaller-rate message bits are XORed with a corresponding number of bits of the larger-rate message. The remaining larger-rate message bits are appended so that $m_{LR}(u) \oplus m_{RL}(u)$ has rate $\max(R_{LR}(u), R_{RL}(u))$. The message (37) is sent to node $u_o$ together with the remaining messages received at node $u_i$. As a result, we must satisfy the bound (14).
The bits arriving at node $u_o$ are (37) and (8)–(13). The bits $m_u$ are removed at node $u_o$ since this node is their final destination. The bits (37), (8)–(9) and (11) must be broadcast to both nodes $(u-1)_i$ and $(u+1)_i$. The remaining bits $m_{u,R}$ and $m_{u,L}$ are destined (or dedicated) for the right and left only, respectively. However, we know from information theory for broadcast channels [7] that it can help to broadcast parts of these dedicated messages to both receivers. So we split $m_{u,R}$ and $m_{u,L}$ into two parts each, namely the respective $(m_{u,R}', m_{u,R}'')$ and $(m_{u,L}', m_{u,L}'')$, where $m_{u,R}''$ and $m_{u,L}''$ are broadcast to both nodes $(u-1)_i$ and $(u+1)_i$. The rates of $m_{u,R}'$ and $m_{u,R}''$ are the respective $R_{u,R}'$ and $R_{u,R}''$, and similarly for $R_{u,L}'$ and $R_{u,L}''$. We choose a joint distribution $P_{S_u T_u W_u X_u}(\cdot)$ and generate a codebook of size
$$2^{n\left(\max(R_{LR}(u), R_{RL}(u)) + R_{LR}^u + R_{RL}^u + R_{u,LR} + R_{u,R}'' + R_{u,L}''\right)}$$
with length-$n$ codewords
$$\underline{w}_u\left(m_{LR}(u) \oplus m_{RL}(u),\ m_{LR}^u,\ m_{RL}^u,\ m_{u,LR},\ m_{u,R}'',\ m_{u,L}''\right)$$
by choosing every letter of every codeword independently using $P_{W_u}(\cdot)$.
We next choose "binning" rates $R_{T_u}$ and $R_{S_u}$. For every $\underline{w}_u$, we choose $2^{n(R_{u,R}' + R_{T_u})}$ length-$n$ codewords $\underline{t}_u$ by choosing the $i$th letter $t_{u,i}$ of $\underline{t}_u$ via the distribution $P_{T_u|W_u}(\cdot|w_{u,i})$, where $w_{u,i}$ is the $i$th letter of $\underline{w}_u$. We label $\underline{t}_u$ with the arguments of $\underline{w}_u$, the message $m_{u,R}'$, and a "bin" index from $\{1, 2, \ldots, 2^{nR_{T_u}}\}$. Similarly, for every $\underline{w}_u$ we generate $2^{n(R_{u,L}' + R_{S_u})}$ length-$n$ codewords $\underline{s}_u$ via $P_{S_u|W_u}(\cdot|w_{u,i})$ and label $\underline{s}_u$ with the arguments of $\underline{w}_u$, the message $m_{u,L}'$, and a "bin" index from $\{1, 2, \ldots, 2^{nR_{S_u}}\}$.
Next, the encoder tries to find a pair of bin indices such that $(\underline{w}_u, \underline{t}_u, \underline{s}_u)$ is jointly typical according to one's favorite flavor of typicality. Using standard typicality arguments (see, e.g., [5]), a typical triple exists with high probability if $n$ is large and
$$R_{S_u} + R_{T_u} > I(S_u; T_u | W_u)$$
Once this triple is found, we transmit a length-$n$ signal $\underline{x}_u$ that is generated via $P_{X_u|S_u T_u W_u}(\cdot|s_{u,i}, t_{u,i}, w_{u,i})$ for $i = 1, 2, \ldots, n$.
The receivers use joint typicality decoders to recover their messages. They further use their knowledge (or side-information) of some of the messages. The result is that decoding is reliable if $n$ is large and if the following rate constraints are satisfied (see [5,6]):
$$R_{u,L}' + R_{S_u} < I(S_u; Y_{u,u-1} | W_u)$$
$$R_{RL}(u) + R_{RL}^u + R_{u,LR} + R_{u,R}'' + R_{u,L} + R_{S_u} < I(S_u W_u; Y_{u,u-1})$$
$$R_{u,R}' + R_{T_u} < I(T_u; Y_{u,u+1} | W_u)$$
$$R_{LR}(u) + R_{LR}^u + R_{u,LR} + R_{u,R} + R_{u,L}'' + R_{T_u} < I(T_u W_u; Y_{u,u+1})$$
Finally, we use Fourier–Motzkin elimination (see [5]) to remove $R_{S_u}$, $R_{T_u}$, $R_{u,L}'$, $R_{u,R}'$, $R_{u,L}''$, and $R_{u,R}''$ from the above expressions and obtain the following result.
Theorem 2 
An achievable rate region for a line network with broadcast channels is given by the bounds
$$\max(R_{LR}(u), R_{RL}(u)) + R_{LR}^u + R_{RL}^u + R_u + R_{u,LR} + R_{u,R} + R_{u,L} \le C_u$$
$$R_{RL}(u) + R_{RL}^u + R_{u,LR} + R_{u,L} \le I(S_u W_u; Y_{u,u-1})$$
$$R_{LR}(u) + R_{LR}^u + R_{u,LR} + R_{u,R} \le I(T_u W_u; Y_{u,u+1})$$
$$R_{RL}(u) + R_{RL}^u + R_{u,LR} + R_{u,R} + R_{u,L} \le I(S_u W_u; Y_{u,u-1}) + I(T_u; Y_{u,u+1} | W_u) - I(S_u; T_u | W_u)$$
$$R_{LR}(u) + R_{LR}^u + R_{u,LR} + R_{u,R} + R_{u,L} \le I(T_u W_u; Y_{u,u+1}) + I(S_u; Y_{u,u-1} | W_u) - I(S_u; T_u | W_u)$$
$$R_{LR}(u) + R_{RL}(u) + R_{LR}^u + R_{RL}^u + 2R_{u,LR} + R_{u,R} + R_{u,L} \le I(S_u W_u; Y_{u,u-1}) + I(T_u W_u; Y_{u,u+1}) - I(S_u; T_u | W_u)$$
for any choice of $P(s_u, t_u, w_u, x_u)$ and for all $u$, where $S_u T_u W_u - X_u - Y_{u,u-1} Y_{u,u+1}$ forms a Markov chain for all $u$.
Remark 3 The bound (43) is the same as (14).
Remark 4 The bounds (44)–(48) are similar to the bounds of [5] (Theorem 5). A few rates are "missing" because nodes $(u-1)_i$ and $(u+1)_i$ know $(m_{LR}(u), m_{LR}^u)$ and $(m_{RL}(u), m_{RL}^u)$, respectively, when decoding.
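Given the mutual information values of a chosen distribution $P(s_u, t_u, w_u, x_u)$, membership in the region of Theorem 2 at one supernode reduces to a handful of comparisons. A minimal sketch (the dictionary keys are a hypothetical shorthand, e.g. 'SW;Y-' stands for $I(S_u W_u; Y_{u,u-1})$):

```python
def theorem2_bounds(R, C_u, I):
    """Check the six bounds (43)-(48) of Theorem 2 at one supernode."""
    node = (max(R['LR(u)'], R['RL(u)']) + R['LR^u'] + R['RL^u'] + R['u']
            + R['u,LR'] + R['u,R'] + R['u,L'])
    left = R['RL(u)'] + R['RL^u'] + R['u,LR'] + R['u,L']    # toward (u-1)_i
    right = R['LR(u)'] + R['LR^u'] + R['u,LR'] + R['u,R']   # toward (u+1)_i
    return (node <= C_u
            and left <= I['SW;Y-']
            and right <= I['TW;Y+']
            and left + R['u,R'] <= I['SW;Y-'] + I['T;Y+|W'] - I['S;T|W']
            and right + R['u,L'] <= I['TW;Y+'] + I['S;Y-|W'] - I['S;T|W']
            and left + right <= I['SW;Y-'] + I['TW;Y+'] - I['S;T|W'])

# Orthogonal-like sanity check: unit capacity toward each neighbor,
# independent inputs (so I(S;T|W) = 0), and two unit-rate crossing flows.
KEYS = ['LR(u)', 'RL(u)', 'LR^u', 'RL^u', 'u', 'u,LR', 'u,R', 'u,L']
R = {k: 0 for k in KEYS}
R['LR(u)'] = 1; R['RL(u)'] = 1
I = {'SW;Y-': 1, 'TW;Y+': 1, 'S;Y-|W': 1, 'T;Y+|W': 1, 'S;T|W': 0}
assert theorem2_bounds(R, 1, I)
```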
Example 3 Consider $N = 3$, for which we have the sessions (17)–(22). The inequalities of Theorem 2 are
$$u = 1: \quad R_1 + R_{1,R} \le C_1, \qquad R_{1,R} \le I(T_1; Y_{1,2} | W_1) - I(S_1; T_1 | W_1)$$
$$u = 2: \quad \max(R_{LR}(2), R_{RL}(2)) + R_{LR}^2 + R_{RL}^2 + R_2 + R_{2,LR} + R_{2,R} + R_{2,L} \le C_2, \quad \text{plus the five inequalities (44)–(48) with } u = 2$$
$$u = 3: \quad R_3 + R_{3,L} \le C_3, \qquad R_{3,L} \le I(S_3; Y_{3,2} | W_3) - I(S_3; T_3 | W_3)$$
Observe that the channels from node $1_o$ to node $2_i$, and from node $3_o$ to node $2_i$, are memoryless channels with capacities $C_{1,2}$ and $C_{3,2}$, respectively. In fact, from (49) and (51) it is easy to see that we may as well choose $W_1$, $S_1$, $W_3$, and $T_3$ to be constants. Moreover, we should choose $T_1 = X_1$ and $S_3 = X_3$, and then choose the input distributions so that $I(X_1; Y_{1,2}) = C_{1,2}$ and $I(X_3; Y_{3,2}) = C_{3,2}$. The inequalities (44)–(48) at node $u = 2$ correspond to Marton's region [10] (Section 7.8) for broadcast channels including a common rate. We will see in the next section that if we specialize to the model of [2], then only the bounds (43)–(45) remain at node 2 because the bounds (46)–(48) are redundant.

4. Special Channels

4.1. Orthogonal Channels

A BC $P_{Y_1 Y_2|X}$ is orthogonal if $X = (X_1, X_2)$ and $P_{Y_1 Y_2|X} = P_{Y_1|X_1} P_{Y_2|X_2}$ (see [8] (p. 419)). In fact, if all BCs in Figure 2 are orthogonal then the model reduces to that of Figure 1, so hopefully we recover Theorem 1 from Theorem 2.
Let $X_u = (X_{u,u-1}, X_{u,u+1})$ and $Y_u = (Y_{u-1,u}, Y_{u+1,u})$. Suppose $C_{u,u-1}$ and $C_{u,u+1}$ are the respective capacities of the memoryless channels $P_{Y_{u,u-1}|X_{u,u-1}}$ and $P_{Y_{u,u+1}|X_{u,u+1}}$. We choose $S_u = X_{u,u-1}$, $T_u = X_{u,u+1}$, $W_u = 0$, and $X_{u,u-1}$, $X_{u,u+1}$ to be independent and capacity-achieving. Inequalities (44)–(48) then reduce to
$$R_{RL}(u) + R_{RL}^u + R_{u,LR} + R_{u,L} \le C_{u,u-1}$$
$$R_{LR}(u) + R_{LR}^u + R_{u,LR} + R_{u,R} \le C_{u,u+1}$$
The region of Theorem 1 is therefore achievable. The converse follows by the same steps as in the converse of Theorem 1.

4.2. Deterministic Channels

A BC $P_{Y_1 Y_2|X}$ is deterministic if $Y_1 = f_1(X)$ and $Y_2 = f_2(X)$ for some functions $f_1(\cdot)$ and $f_2(\cdot)$. We show that Theorem 2 gives the capacity region if all BCs in Figure 2 are deterministic.
Theorem 3 
The capacity region of a line network with deterministic BCs is the union over all $P(w_u, x_u)$, $u = 1, 2, \ldots, N$, of the (non-negative) rates satisfying
$$\max(R_{LR}(u), R_{RL}(u)) + R_{LR}^u + R_{RL}^u + R_u + R_{u,LR} + R_{u,R} + R_{u,L} \le C_u$$
$$R_{RL}(u) + R_{RL}^u + R_{u,LR} + R_{u,L} \le H(Y_{u,u-1})$$
$$R_{LR}(u) + R_{LR}^u + R_{u,LR} + R_{u,R} \le H(Y_{u,u+1})$$
$$R_{RL}(u) + R_{RL}^u + R_{u,LR} + R_{u,R} + R_{u,L} \le I(W_u; Y_{u,u-1}) + H(Y_{u,u-1} Y_{u,u+1} | W_u)$$
$$R_{LR}(u) + R_{LR}^u + R_{u,LR} + R_{u,R} + R_{u,L} \le I(W_u; Y_{u,u+1}) + H(Y_{u,u-1} Y_{u,u+1} | W_u)$$
$$R_{LR}(u) + R_{RL}(u) + R_{LR}^u + R_{RL}^u + 2R_{u,LR} + R_{u,R} + R_{u,L} \le I(W_u; Y_{u,u-1}) + I(W_u; Y_{u,u+1}) + H(Y_{u,u-1} Y_{u,u+1} | W_u)$$
Proof. 
Achievability follows from Theorem 2 with $S_u = Y_{u,u-1}$ and $T_u = Y_{u,u+1}$. For the converse, the constraint (54) is the PdE bound of [11] (Section III.A). The bounds (55) and (56) are cut bounds. For the remaining steps, let $\mathcal{S}^c$ be the complement of $\mathcal{S}$ in $\mathcal{V}$. We define
$$Y_{\mathcal{S},\mathcal{T}} = \{Y_{u,v} : u \in \mathcal{S},\ v \in \mathcal{T}\}$$
Let $M_{u,L}$ be the random message corresponding to $m_{u,L}$, and similarly for the other messages. The messages are independent and have entropy equal to $n$ times their rate, where $n$ is the number of times we use each BC. Let $M(\mathcal{S})$ be the set of messages originating at supernodes in $\mathcal{S}$. Let $M_{u,L}^c$ be the set of all network messages except for $M_{u,L}$, and similarly for the other messages. We use the notation
$$Y_{u,u-1}^{i-1} = (Y_{u,u-1,1}, Y_{u,u-1,2}, \ldots, Y_{u,u-1,i-1})$$
$$\tilde{W}_{u,i} = \left((M_{u,L} M_{u,R})^c,\ Y_{u,u-1}^{i-1},\ Y_{u,u+1}^{i-1}\right)$$
In the following, let $\mathcal{S} = \{u, u+1, \ldots, N\}$ and $\tilde{\mathcal{S}} = \{1, 2, \ldots, u\}$. We bound
$$I\left(M_{RL}(u) M_{RL}^u M_{u,LR};\ Y_{\mathcal{S},\mathcal{S}^c}^n \mid M(\mathcal{S}^c)\right) = I\left(M_{RL}(u) M_{RL}^u M_{u,LR};\ Y_{u,u-1}^n \mid M(\mathcal{S}^c)\right)$$
$$\stackrel{(a)}{\le} I\left((M_{u,L} M_{u,R})^c;\ Y_{u,u-1}^n\right)$$
$$= \sum_{i=1}^n I\left((M_{u,L} M_{u,R})^c;\ Y_{u,u-1,i} \mid Y_{u,u-1}^{i-1}\right)$$
$$\stackrel{(a)}{\le} \sum_{i=1}^n I\left(\tilde{W}_{u,i};\ Y_{u,u-1,i}\right)$$
$$\stackrel{(b)}{=} n\, I\left(\tilde{W}_{u,Q};\ Y_{u,u-1,Q} \mid Q\right)$$
$$\stackrel{(c)}{\le} n\, I\left(W_u;\ Y_{u,u-1}\right)$$
where the steps $(a)$ follow from $I(A; B | C) \le I(ACD; B)$, step $(b)$ follows by defining $Q$ to be a time-sharing random variable that is uniform over $\{1, 2, \ldots, n\}$, and $(c)$ follows by defining $Y_{u,u-1} = Y_{u,u-1,Q}$ and $W_u = (\tilde{W}_{u,Q}, Q)$. We similarly have
$$I\left(M_{LR}(u) M_{LR}^u M_{u,LR};\ Y_{\tilde{\mathcal{S}},\tilde{\mathcal{S}}^c}^n \mid M(\tilde{\mathcal{S}}^c)\right) \le n\, I\left(W_u; Y_{u,u+1}\right)$$
where $Y_{u,u+1} = Y_{u,u+1,Q}$. Note that our choices for $Y_{u,u-1}$ and $Y_{u,u+1}$ are appropriate for the cut bounds (55) and (56). Finally, we have
$$I\left(M_{u,L} M_{u,R};\ Y_{\{u\},\mathcal{V}}^n \mid (M_{u,L} M_{u,R})^c\right) = I\left(M_{u,L} M_{u,R};\ Y_{u,u-1}^n Y_{u,u+1}^n \mid (M_{u,L} M_{u,R})^c\right)$$
$$= \sum_{i=1}^n H\left(Y_{u,u-1,i} Y_{u,u+1,i} \mid \tilde{W}_{u,i}\right)$$
$$= n\, H\left(Y_{u,u-1} Y_{u,u+1} \mid W_u\right)$$
Consider the bound (57). We have
$$n\left(R_{RL}(u) + R_{RL}^u + R_{u,LR} + R_{u,R} + R_{u,L}\right)$$
$$\stackrel{(a)}{\le} I\left(M_{RL}(u) M_{RL}^u M_{u,LR};\ Y_{\mathcal{S},\mathcal{S}^c}^n \mid M(\mathcal{S}^c)\right) + I\left(M_{u,L} M_{u,R};\ Y_{\{u\},\mathcal{V}}^n \mid (M_{u,L} M_{u,R})^c\right)$$
$$\stackrel{(b)}{\le} n\, I\left(W_u; Y_{u,u-1}\right) + n\, H\left(Y_{u,u-1} Y_{u,u+1} \mid W_u\right)$$
where $(a)$ follows by Fano's inequality [8] (p. 38) when the block error probability tends to zero, and $(b)$ follows from (61) and (63). This proves (57), and (58) follows in the same way.
Finally, for (59) we use Fano's inequality to bound
$$n\left(R_{LR}(u) + R_{RL}(u) + R_{LR}^u + R_{RL}^u + 2R_{u,LR} + R_{u,R} + R_{u,L}\right)$$
$$\le I\left(M_{RL}(u) M_{RL}^u M_{u,LR};\ Y_{\mathcal{S},\mathcal{S}^c}^n \mid M(\mathcal{S}^c)\right) + I\left(M_{LR}(u) M_{LR}^u M_{u,LR};\ Y_{\tilde{\mathcal{S}},\tilde{\mathcal{S}}^c}^n \mid M(\tilde{\mathcal{S}}^c)\right) + I\left(M_{u,L} M_{u,R};\ Y_{\{u\},\mathcal{V}}^n \mid (M_{u,L} M_{u,R})^c\right)$$
$$\le n\, I\left(W_u; Y_{u,u-1}\right) + n\, I\left(W_u; Y_{u,u+1}\right) + n\, H\left(Y_{u,u-1} Y_{u,u+1} \mid W_u\right)$$
This proves Theorem 3. ■

4.3. Physically Degraded Channels

A BC $P_{Y_1 Y_2|X}$ is said to be physically degraded if either
$$X - Y_1 - Y_2 \quad \text{or} \quad X - Y_2 - Y_1$$
forms a Markov chain (see [8] (p. 422)). For the following theorem, we suppose that $X_u - Y_{u,u-1} - Y_{u,u+1}$ forms a Markov chain for all $u$. However, the direction of degradation can be adjusted either way for any supernode $u$.
Theorem 4 
The capacity region of a line network with physically degraded BCs is the union over all $P(w_u, x_u)$, $u = 1, 2, \ldots, N$, of the (non-negative) rates satisfying
$$\max(R_{LR}(u), R_{RL}(u)) + R_{LR}^u + R_{RL}^u + R_u + R_{u,LR} + R_{u,R} + R_{u,L} \le C_u$$
$$R_{LR}(u) + R_{LR}^u + R_{u,LR} + R_{u,R} \le I(W_u; Y_{u,u+1})$$
$$R_{RL}(u) + R_{RL}^u + R_{u,LR} + R_{u,R} + R_{u,L} \le I(X_u; Y_{u,u-1})$$
$$R_{LR}(u) + R_{LR}^u + R_{u,LR} + R_{u,R} + R_{u,L} \le I(W_u; Y_{u,u+1}) + I(X_u; Y_{u,u-1} | W_u)$$
where $W_u - X_u - Y_{u,u-1} - Y_{u,u+1}$ forms a Markov chain.
Proof. 
For achievability, Theorem 2 with $S_u = X_u$ and $T_u = 0$ gives the region specified by (66)–(69). For the converse, the bound (66) is based on an extension of PdE bounds to mixed wireline/wireless networks (see [11]). The bound (68) is a cut bound. The other two bounds follow by modifying the steps of [12] as follows.
Consider $\tilde{W}_{u,i} = \left(M_{u,L}^c,\ Y_{u,u-1}^{i-1},\ Y_{u,u+1}^{i-1}\right)$ and let $\tilde{\mathcal{S}} = \{1, 2, \ldots, u\}$. We then have
$$n\left(R_{LR}(u) + R_{LR}^u + R_{u,LR} + R_{u,R}\right) \stackrel{(a)}{\le} I\left(M_{LR}(u) M_{LR}^u M_{u,LR} M_{u,R};\ Y_{\tilde{\mathcal{S}},\tilde{\mathcal{S}}^c}^n \mid M(\tilde{\mathcal{S}}^c)\right)$$
$$= I\left(M_{LR}(u) M_{LR}^u M_{u,LR} M_{u,R};\ Y_{u,u+1}^n \mid M(\tilde{\mathcal{S}}^c)\right)$$
$$= \sum_{i=1}^n H\left(Y_{u,u+1,i} \mid M(\tilde{\mathcal{S}}^c)\, Y_{u,u+1}^{i-1}\right) - H\left(Y_{u,u+1,i} \mid M_{LR}(u) M_{LR}^u M_{u,LR} M_{u,R}\, M(\tilde{\mathcal{S}}^c)\, Y_{u,u+1}^{i-1}\right)$$
$$\le \sum_{i=1}^n H\left(Y_{u,u+1,i}\right) - H\left(Y_{u,u+1,i} \mid \tilde{W}_{u,i}\right)$$
$$= n\, I\left(\tilde{W}_{u,Q};\ Y_{u,u+1,Q} \mid Q\right)$$
$$\stackrel{(b)}{\le} n\, I\left(W_u;\ Y_{u,u+1}\right)$$
where $(a)$ follows by Fano's inequality and $(b)$ follows by defining $W_u = (\tilde{W}_{u,Q}, Q)$ and $Y_{u,u+1} = Y_{u,u+1,Q}$. We similarly define $X_u = X_{u,Q}$ and $Y_{u,u-1} = Y_{u,u-1,Q}$. Note that our choices for $X_u$ and $Y_{u,u-1}$ are appropriate for the cut bound (68).
Finally, for (69) we use (70) to bound
$$n\left(R_{LR}(u) + R_{LR}^u + R_{u,LR} + R_{u,R} + R_{u,L}\right) \le n\, I\left(W_u; Y_{u,u+1}\right) + I\left(M_{u,L};\ Y_{\{u\},\mathcal{V}}^n \mid M_{u,L}^c\right)$$
$$= n\, I\left(W_u; Y_{u,u+1}\right) + I\left(M_{u,L};\ Y_{u,u-1}^n Y_{u,u+1}^n \mid M_{u,L}^c\right)$$
and
$$I\left(M_{u,L};\ Y_{u,u-1}^n Y_{u,u+1}^n \mid M_{u,L}^c\right) \stackrel{(a)}{=} \sum_{i=1}^n I\left(M_{u,L} X_{u,i};\ Y_{u,u-1,i} Y_{u,u+1,i} \mid \tilde{W}_{u,i}\right)$$
$$\stackrel{(b)}{=} n\, I\left(X_{u,Q};\ Y_{u,u-1,Q} Y_{u,u+1,Q} \mid \tilde{W}_{u,Q}\, Q\right)$$
$$= n\, I\left(X_u;\ Y_{u,u-1} Y_{u,u+1} \mid W_u\right)$$
$$\stackrel{(b)}{=} n\, I\left(X_u;\ Y_{u,u-1} \mid W_u\right)$$
where $(a)$ follows because $X_{u,i}$ is defined by the messages at supernode $u$ and the past channel outputs at supernode $u$, and the steps $(b)$ follow because
$$\left(Y_{u,u-1}^{i-1} Y_{u,u+1}^{i-1} M(\mathcal{V})\right) - X_{u,i} - Y_{u,u-1,i} - Y_{u,u+1,i}$$
forms a (long) Markov chain. Collecting the bounds (70)–(72) proves Theorem 4. ■

4.4. Physically Degraded Gaussian Channels

The additive white Gaussian noise (AWGN) physically degraded BC has (see [13])
$$Y_{u,u-1} = X_u + Z_{u,u-1}$$
$$Y_{u,u+1} = Y_{u,u-1} + Z_{u,u+1}$$
where $X_u$ is real with power constraint $\sum_{i=1}^n X_{u,i}^2 \le n P_u$ for all $u$, and $Z_{u,u-1}$ and $Z_{u,u+1}$ are independent Gaussian random variables with variances $N_{u,u-1}$ and $N_{u,u+1}$, respectively (again, the direction of degradation can be swapped for any $u$ without changing the results conceptually).
The capacity region is given by Theorem 4 and it remains to optimize $P(w_u, x_u)$. The variances of $Y_{u,u-1}$ and $Y_{u,u+1}$ are at most $P_u + N_{u,u-1}$ and $P_u + N_{u,u-1} + N_{u,u+1}$, respectively, so the maximum entropy theorem (see [8] (p. 234)) gives
$$I(W_u; Y_{u,u+1}) \le \tfrac{1}{2} \log\left(2\pi e (P_u + N_{u,u+1}')\right) - h(Y_{u,u+1} | W_u)$$
$$I(X_u; Y_{u,u-1}) \le \tfrac{1}{2} \log\left(1 + P_u / N_{u,u-1}\right)$$
$$I(X_u; Y_{u,u-1} | W_u) \le h(Y_{u,u-1} | W_u) - \tfrac{1}{2} \log\left(2\pi e N_{u,u-1}\right)$$
where $N_{u,u+1}' = N_{u,u-1} + N_{u,u+1}$ and $h(Y|W)$ is the differential entropy of $Y$ conditioned on $W$. Observe that
$$\tfrac{1}{2} \log\left(2\pi e N_{u,u-1}\right) \le h(Y_{u,u-1} | W_u) \le \tfrac{1}{2} \log\left(2\pi e (P_u + N_{u,u-1})\right)$$
so there is an $\alpha_u$, $0 \le \alpha_u \le 1$, such that
$$\frac{1}{2\pi e}\, e^{2 h(Y_{u,u-1} | W_u)} = \alpha_u P_u + N_{u,u-1}$$
Furthermore, a conditional version of the entropy power inequality (see [8] (p. 496)) gives
$$h(Y_{u,u+1} | W_u) = h(Y_{u,u-1} + Z_{u,u+1} | W_u) \ge \tfrac{1}{2} \log\left(e^{2 h(Y_{u,u-1} | W_u)} + 2\pi e N_{u,u+1}\right)$$
Collecting the bounds, and inserting (79) and (80) into (75)–(77), we have

$$I(W_u; Y_{u,u+1}) \le \tfrac{1}{2}\log\!\left(1 + \frac{(1-\alpha_u)\, P_u}{\alpha_u P_u + \tilde{N}_{u,u+1}}\right) \tag{81}$$

$$I(X_u; Y_{u,u-1}) \le \tfrac{1}{2}\log(1 + P_u / N_{u,u-1}) \tag{82}$$

$$I(X_u; Y_{u,u-1} \mid W_u) \le \tfrac{1}{2}\log(1 + \alpha_u P_u / N_{u,u-1}) \tag{83}$$
But we achieve equality in (81)–(83) by choosing

$$X_u = V_u + W_u$$

where $V_u$ and $W_u$ are independent Gaussian random variables with zero mean and variances $\alpha_u P_u$ and $(1-\alpha_u) P_u$, respectively. The optimal $P(w_u, x_u)$ is therefore zero-mean Gaussian, and the capacity region is given by inserting (81)–(83) with equality into (67)–(69) and taking the union over the rates permitted by varying $\alpha_u$.
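The trade-off among (81)–(83) is easy to evaluate numerically. The following sketch computes the three right-hand sides for one supernode as $\alpha_u$ varies; the parameter values ($P_u = 4$, $N_{u,u-1} = 1$, $N_{u,u+1} = 2$) are illustrative assumptions, not values from the paper.

```python
import math

def gaussian_bc_bounds(P, N_near, N_far, alpha):
    """Evaluate the right-hand sides of (81)-(83) for one supernode u.

    P      -- power constraint P_u
    N_near -- noise variance N_{u,u-1} of the statistically better receiver
    N_far  -- extra noise variance N_{u,u+1} added on the degraded path
    alpha  -- power split 0 <= alpha <= 1 defined by (79)
    """
    N_tilde = N_near + N_far  # \tilde{N}_{u,u+1} = N_{u,u-1} + N_{u,u+1}
    C = lambda snr: 0.5 * math.log2(1.0 + snr)
    r_far = C((1.0 - alpha) * P / (alpha * P + N_tilde))  # (81): I(W_u; Y_{u,u+1})
    r_sum = C(P / N_near)                                 # (82): I(X_u; Y_{u,u-1})
    r_near = C(alpha * P / N_near)                        # (83): I(X_u; Y_{u,u-1} | W_u)
    return r_far, r_sum, r_near

# Sweep alpha to trace the rate trade-off for one supernode.
for alpha in (0.0, 0.5, 1.0):
    r_far, r_sum, r_near = gaussian_bc_bounds(P=4.0, N_near=1.0, N_far=2.0, alpha=alpha)
    print(f"alpha={alpha:.1f}: (81)={r_far:.3f}, (82)={r_sum:.3f}, (83)={r_near:.3f}")
```

As expected, $\alpha_u = 1$ devotes all power to the better receiver's private message and drives (81) to zero, while $\alpha_u = 0$ does the opposite; (82) is independent of the split.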

4.5. Packet Erasure Channels with Feedback

A BC $P_{Y_1 Y_2 \mid X}$ is called packet erasure with feedback if X is an L-bit vector and

$$P(y_1, y_2 \mid x) = \begin{cases} (1-p_1)(1-p_2), & y_1 = y_2 = x \\ p_1\,(1-p_2), & y_1 = \Delta,\ y_2 = x \\ (1-p_1)\, p_2, & y_1 = x,\ y_2 = \Delta \\ p_1\, p_2, & y_1 = y_2 = \Delta \end{cases}$$
and all supernodes receive one bit of feedback from each receiver telling them whether the receiver has seen an erasure or not.
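As a sanity check, the transition law above can be expressed in a few lines. The helper names below are our own, and the erasure symbol $\Delta$ is represented by `None`; this is a sketch of the channel model, not part of the paper's coding construction.

```python
import random

ERASED = None  # stands in for the erasure symbol Δ

def erasure_bc(x, p1, p2, rng=random):
    """One use of the two-receiver packet erasure BC: each receiver
    independently erases the packet x with its own probability."""
    y1 = ERASED if rng.random() < p1 else x
    y2 = ERASED if rng.random() < p2 else x
    return y1, y2

def transition_prob(y1_erased, y2_erased, p1, p2):
    """P(y1, y2 | x) for the four cases of the law above."""
    return (p1 if y1_erased else 1.0 - p1) * (p2 if y2_erased else 1.0 - p2)

# The four cases sum to 1 for any p1, p2, so the law is a valid channel.
p1, p2 = 0.3, 0.6
total = sum(transition_prob(e1, e2, p1, p2)
            for e1 in (False, True) for e2 in (False, True))
print(f"sum over the four cases: {total:.6f}")
```

The one bit of feedback per receiver is precisely the indicator of whether that receiver's output equals $\Delta$.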
Suppose we give receiver 1 both $Y_1$ and $Y_2$, which means that the channel is physically degraded. Let $\mathcal{R}_1$ be the resulting capacity region. Similarly, let $\mathcal{R}_2$ be the capacity region if we (instead) give receiver 2 both $Y_1$ and $Y_2$. The authors of [14] (see also [15]) showed that the capacity region of the original BC is $\mathcal{R}_1 \cap \mathcal{R}_2$. The following theorem slightly generalizes the main result of [14] and gives the capacity of line networks with broadcast erasure channels and feedback. The input $X_u$ has $L_u$ bits and we denote the erasure probabilities for $Y_{u,u-1}$ and $Y_{u,u+1}$ as $p_{u,u-1}$ and $p_{u,u+1}$, respectively.
Theorem 5 
The capacity region of a line network with broadcast erasure channels and feedback is the union of the (non-negative) rates satisfying (14) and
$$\frac{R_{RL}(u) + R_{RL}^{u} + R_{u,LR} + R_{u,L}}{1 - p_{u,u-1}} + \frac{R_{u,R}}{1 - p_{u,u-1}\, p_{u,u+1}} \le L_u \tag{86}$$

$$\frac{R_{LR}(u) + R_{LR}^{u} + R_{u,LR} + R_{u,R}}{1 - p_{u,u+1}} + \frac{R_{u,L}}{1 - p_{u,u-1}\, p_{u,u+1}} \le L_u \tag{87}$$
Proof. 
(Sketch) Achievability follows by using the network codes of [14] and [2]. For the converse, the constraint (14) again follows from PdE bounds. For the constraints (86) and (87), we make every BC physically degraded by giving one of the receivers both channel outputs (see [14,16]). Theorem 4 gives a collection of outer bounds for each degradation choice. Finally, we optimize the coding to obtain (86) and (87). ■
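To make the role of (86) and (87) concrete, the following sketch checks whether a candidate rate point satisfies both constraints at one supernode. The dictionary keys mirror the rate symbols in the theorem but are our own labels, and all numeric values are illustrative assumptions.

```python
def erasure_constraints_ok(rates, p_left, p_right, L_u, tol=1e-12):
    """Check constraints (86) and (87) at one supernode u.

    rates   -- dict keyed by the rate symbols of Theorem 5 (key names are ours)
    p_left  -- p_{u,u-1}, erasure probability toward receiver u-1
    p_right -- p_{u,u+1}, erasure probability toward receiver u+1
    L_u     -- packet length in bits
    """
    r = rates
    lhs_86 = ((r["R_RL(u)"] + r["R_RL^u"] + r["R_u,LR"] + r["R_u,L"]) / (1.0 - p_left)
              + r["R_u,R"] / (1.0 - p_left * p_right))
    lhs_87 = ((r["R_LR(u)"] + r["R_LR^u"] + r["R_u,LR"] + r["R_u,R"]) / (1.0 - p_right)
              + r["R_u,L"] / (1.0 - p_left * p_right))
    return lhs_86 <= L_u + tol and lhs_87 <= L_u + tol

# Illustrative numbers only: a 10-bit packet and asymmetric erasure probabilities.
rates = {"R_RL(u)": 0.0, "R_RL^u": 0.0, "R_LR(u)": 0.0, "R_LR^u": 0.0,
         "R_u,LR": 0.0, "R_u,L": 4.0, "R_u,R": 2.0}
print(erasure_constraints_ok(rates, p_left=0.2, p_right=0.5, L_u=10.0))  # True
```

Note how a rate serves the left receiver at cost $1/(1 - p_{u,u-1})$ channel uses per bit, while the cross terms are discounted by $1/(1 - p_{u,u-1} p_{u,u+1})$, reflecting that feedback lets the code exploit packets erased at one receiver but not the other.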

5. Discussion

The capacity results in Section 4.1, Section 4.2, Section 4.3, Section 4.4, Section 4.5 imply that decode-forward (DF) relaying suffices, i.e., amplify-forward (AF) and compress-forward (CF) do not improve rates (see also [19] (Chapter 4)). Quantize-map-forward [17] and noisy network coding [18] also do not improve on DF. In fact, the non-DF methods are suboptimal in general because they do not use superposition coding or binning to treat broadcasting. However, we have found capacity only for BCs that are orthogonal, deterministic, physically degraded, or packet erasure with one-bit feedback. AF and CF strategies are useful for other classes of BCs, as shown in [20] and many further papers.
Finally, our model applies to wireless problems where every node has a dedicated tone and/or time slot for transmission. If nodes use the same tone at the same time, then one must consider the effects of interference. For example, scheduling transmissions with half-duplex protocols is an interesting problem for further study.

Acknowledgments

G. Kramer was supported by an Alexander von Humboldt Professorship endowed by the German Federal Ministry of Education and Research. He was also supported by the Board of Trustees of the University of Illinois Subaward No. 04-217 under NSF Grant No. CCR-0325673, by ARO Grant W911NF-06-1-0182, and by NSF Grant CCF-0905235. S. M. Sadegh Tabatabaei Yazdi was supported by NSF Grant No. CCF-0430201. The material in this paper was presented in part at the 2008 Conference on Information Sciences and Systems, Princeton, NJ, USA, and at the 2009 IEEE Information Theory Workshop, Taormina, Italy. The former paper was co-authored with Serap Savari from Texas A&M University. She declined to be included as a co-author and we are grateful for her contributions.

References

  1. Kramer, G. Capacity results for the discrete memoryless network. IEEE Trans. Inform. Theor. 2003, 49, 4–21. [Google Scholar] [CrossRef]
  2. Tabatabaei Yazdi, S.M.S.; Savari, S.A.; Kramer, G. Network coding in node-constrained line and star networks. IEEE Trans. Inform. Theor. 2011, 57, 4452–4468. [Google Scholar] [CrossRef]
  3. Kramer, G.; Savari, S.A. Edge-cut bounds on network coding rates. J. Netw. Syst. Manag. 2006, 14, 49–67. [Google Scholar] [CrossRef]
  4. Rankov, B.; Wittneben, A. Spectral efficient protocols for half-duplex fading relay channels. IEEE J. Sel. Area. Comm. 2007, 25, 379–389. [Google Scholar] [CrossRef]
  5. Liang, Y.; Kramer, G. Rate regions for relay broadcast channels. IEEE Trans. Inform. Theor. 2007, 53, 3517–3535. [Google Scholar] [CrossRef]
  6. Kramer, G.; Shamai, S. Capacity for classes of broadcast channels with receiver side information. In Proceedings of the IEEE Information Theory Workshop, Tahoe City, CA, USA, 2–6 September 2007; pp. 313–318.
  7. Cover, T.M. Broadcast channels. IEEE Trans. Inform. Theor. 1972, 18, 2–14. [Google Scholar] [CrossRef]
  8. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: New York, NY, USA, 1991. [Google Scholar]
  9. Bakshi, M.; Effros, M.; Gu, W.; Koetter, R. On network coding of independent and dependent sources in line networks. In Proceedings of the IEEE International Symposium on Information Theory, Nice, France, 24–29 June 2007.
  10. Kramer, G. Topics in multi-user information theory. Found. Trends Commun. Inf. Theory 2007, 4, 265–444. [Google Scholar] [CrossRef]
  11. Kramer, G.; Savari, S.A. Capacity bounds for relay networks. Presented at the Workshop on Information Theory and Applications, UCSD Campus, La Jolla, CA, USA, 6–10 February 2006.
  12. Gamal, A.E. The feedback capacity of degraded broadcast channels. IEEE Trans. Inform. Theor. 1978, 24, 379–381. [Google Scholar] [CrossRef]
  13. Gamal, A.E.; van der Meulen, E.C. A proof of Marton’s coding theorem for the discrete memoryless broadcast channel. IEEE Trans. Inform. Theor. 1981, 27, 120–122. [Google Scholar] [CrossRef]
  14. Georgiadis, L.; Tassiulas, L. Broadcast erasure channel with feedback—Capacity and algorithms. In Proceedings of the Workshop on Network Coding, Theory, and Applications, Lausanne, Switzerland, 15–16 June 2009.
  15. Wang, C.C. On the capacity of 1-to-K broadcast packet erasure channels with channel output feedback. IEEE Trans. Inform. Theor. 2012, 58, 931–956. [Google Scholar] [CrossRef]
  16. Ozarow, L. The capacity of the white Gaussian multiple access channel with feedback. IEEE Trans. Inform. Theor. 1984, 30, 623–629. [Google Scholar] [CrossRef]
  17. Avestimehr, A.S.; Diggavi, S.N.; Tse, D.N.C. Wireless network information flow: A deterministic approach. IEEE Trans. Inform. Theor. 2011, 57, 1872–1905. [Google Scholar] [CrossRef]
  18. Lim, S.H.; Kim, Y.H.; Gamal, A.E.; Chung, S.Y. Noisy network coding. IEEE Trans. Inform. Theor. 2011, 57, 3132–3152. [Google Scholar] [CrossRef]
  19. Kramer, G.; Marić, I.; Yates, R.D. Cooperative communications. Found. Trends Netw. 2006, 1, 271–425. [Google Scholar] [CrossRef]
  20. Knopp, R. Two-way radio networks with a star topology. In Proceedings of the 2006 International Zurich Seminar on Communications, Zurich, Switzerland, February 2006; pp. 154–157.
