Article

How Can We Fully Use Noiseless Feedback to Enhance the Security of the Broadcast Channel with Confidential Messages

1 School of Information Science and Technology, Southwest JiaoTong University, Chengdu 611756, China
2 The State Key Laboratory of Integrated Services Networks, Xidian University, Xi’an 710071, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Entropy 2017, 19(10), 529; https://doi.org/10.3390/e19100529
Submission received: 1 August 2017 / Revised: 1 October 2017 / Accepted: 2 October 2017 / Published: 6 October 2017
(This article belongs to the Special Issue Network Information Theory)

Abstract

The model of the broadcast channel with confidential messages (BC-CM) plays an important role in the physical-layer security of modern communication systems. In recent years, it has been shown that a noiseless feedback channel from the legitimate receiver to the transmitter increases the secrecy capacity region of the BC-CM. However, the existing feedback coding scheme for the BC-CM only uses the noiseless feedback to produce secret keys, and other uses of the feedback remain to be explored. In this paper, we propose a new feedback coding scheme for the BC-CM. The noiseless feedback in this new scheme is not only used to produce secret keys shared by the legitimate receiver and the transmitter but is also used to generate update information that allows both receivers (the legitimate receiver and the wiretapper) to improve their channel outputs. A binary example shows that this fuller use of the noiseless feedback achieves a higher secrecy level than the previous feedback scheme for the BC-CM.

1. Introduction

Wyner, in his outstanding paper on the degraded wiretap channel [1], first studied secure transmission over a physically degraded broadcast channel in the presence of an additional wiretapper. Wyner showed that the secrecy capacity (the maximum transmission rate with perfect secrecy constraint) of the degraded wiretap channel model was given by
C_s^d = max_{P(x)} ( I(X;Y) − I(X;Z) ),    (1)
where X, Y and Z are the channel input, the channel output at the legitimate receiver and the channel output at the wiretapper, respectively, and they satisfy the Markov chain X → Y → Z. Note here that the secrecy capacity defined in (1) can be viewed as the difference between the main channel capacity I(X;Y) (the channel from the transmitter to the legitimate receiver) and the wiretap channel capacity I(X;Z) (the channel from the transmitter to the wiretapper). Later, Csiszar and Korner [2] extended Wyner’s work [1] to a more general case: the broadcast channel with confidential messages (BC-CM), where common and confidential messages are transmitted through a discrete memoryless general broadcast channel (without the degradedness assumption X → Y → Z), the common message is intended to be decoded by both the legitimate receiver and the wiretapper, and the confidential message is only allowed to be decoded by the legitimate receiver. The secrecy capacity region (the capacity region under the perfect secrecy constraint) of this generalized model is determined in [2], and it is given by
C_s = { (R_0, R_1) :  0 ≤ R_0 ≤ min{ I(U;Y), I(U;Z) },
                      0 ≤ R_1 ≤ I(V;Y|U) − I(V;Z|U) },    (2)
where the auxiliary random variables U and V represent the common message and the confidential message, respectively, and R_0 and R_1 are the transmission rates of the common message and the confidential message, respectively. Note that from (2), it is not difficult to show that the secrecy capacity C_s (the maximum transmission rate of the confidential message under the perfect secrecy constraint) of the BC-CM is given by
C_s = max_{P(v,x)} [ I(V;Y) − I(V;Z) ]^+,    (3)
where [x]^+ = x if x ≥ 0 and [x]^+ = 0 otherwise, and C_s is also called the secrecy capacity of the general wiretap channel. The works in [1,2] lay the foundation of physical-layer security in modern communication systems.
Recently, Ahlswede and Cai [3] found that if the legitimate receiver sends his own channel output Y back to the transmitter through a noiseless feedback channel, the secrecy capacity region C_s of the BC-CM can be expanded to the achievable secrecy rate region
C_s^{f-cai} = { (R_0, R_1) :  0 ≤ R_0 ≤ min{ I(U;Y), I(U;Z) },
                              0 ≤ R_1 ≤ min{ [ I(V;Y|U) − I(V;Z|U) ]^+ + H(Y|Z,U,V), I(V;Y|U) } },    (4)
where the auxiliary random variables U and V are defined similarly to those in (2). The coding scheme achieving the region C_s^{f-cai} combines Csiszar and Korner’s coding scheme for the BC-CM [2] with the idea of using a secret key, generated from the noiseless feedback, to encrypt the transmitted message. Note here that the region C_s^{f-cai} is an inner bound on the secrecy capacity region C_s^f of the BC-CM with noiseless feedback, and, to the best of the authors’ knowledge, C_s^f remains unknown. Similarly to the work of [2], using (4), Ahlswede and Cai also provided an achievable secrecy rate R_s^{f-cai} (a lower bound on the secrecy capacity) of the general wiretap channel with noiseless feedback, given by
R_s^{f-cai} = max_{P(v,x)} min{ [ I(V;Y) − I(V;Z) ]^+ + H(Y|V,Z), I(V;Y) },    (5)
where V is defined in the same way as in (2). In [3], Ahlswede and Cai further pointed out that for the degraded wiretap channel with noiseless feedback (where the Markov chain X → Y → Z holds), the secrecy capacity C_s^{df} is given by
C_s^{df} = max_{P(x)} min{ I(X;Y) − I(X;Z) + H(Y|X,Z), I(X;Y) }.    (6)
Here, note that the rates in (5) and (6) can be viewed as a combination of two parts: the first part is the difference between the main channel capacity (I(V;Y) or I(X;Y)) and the wiretap channel capacity (I(V;Z) or I(X;Z)), and the second part is the rate H(Y|V,Z) (respectively H(Y|X,Z)) of a secret key that is generated from the noiseless feedback and shared by the legitimate receiver and the transmitter. Comparing (6) with (1) and (5) with (3), it is easy to see that by using the noiseless feedback to generate a secret key that encrypts the transmitted message, the secrecy capacity of the wiretap channel can be enhanced. Besides [3], other related works on the BC-CM or the wiretap channel in the presence of noiseless feedback can be found in [4,5,6,7].
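As a quick numerical illustration of the gain promised by (6) over (1), the sketch below evaluates both expressions for a physically degraded binary symmetric setup; the noise levels p and q2, the modulo-2 noise model and the uniform channel input are our own illustrative assumptions, not choices made in [1] or [3].

```python
import numpy as np

def h(x):
    """Binary entropy in bits."""
    return 0.0 if x <= 0 or x >= 1 else -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def conv(a, b):
    """Binary convolution a*b = a(1-b) + (1-a)b."""
    return a * (1 - b) + (1 - a) * b

# Hypothetical degraded setup: Y = X xor N1, Z = Y xor N2, N1 ~ Bern(p), N2 ~ Bern(q2),
# evaluated at a uniform input (an assumption, not necessarily the maximizing P(x)).
p, q2 = 0.1, 0.15

C_no_feedback = h(conv(p, q2)) - h(p)                 # Equation (1): I(X;Y) - I(X;Z)
key_rate = h(p) + h(q2) - h(conv(p, q2))              # H(Y|X,Z) for this particular channel
C_feedback = min(C_no_feedback + key_rate, 1 - h(p))  # Equation (6)

print(f"secrecy rate without feedback: {C_no_feedback:.4f} bits/use")
print(f"secrecy rate with feedback   : {C_feedback:.4f} bits/use")
```

For this particular channel the first term of (6) simplifies to h(q2); since h(q2) exceeds 1 − h(p) for these numbers, the feedback-assisted rate is limited only by the main channel capacity 1 − h(p).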
In this paper, we revisit the BC-CM with noiseless feedback investigated by Ahlswede and Cai [3] (see Figure 1), and we propose a new achievable secrecy rate region for this feedback model. The coding scheme for this achievable region combines Ahlswede and Cai’s scheme [3] with the Wyner-Ziv scheme for lossy source coding with side information [8]; that is, compared with Ahlswede and Cai’s scheme, in our new scheme the noiseless feedback is not only used to produce the secret key but is also used to generate update information that allows the legitimate receiver to improve his channel output. From a binary example, we show that this fuller use of the noiseless feedback yields a larger achievable secrecy rate for the confidential message.
The remainder of this paper is organized as follows. Section 2 presents the problem formulation and the main result of this paper. A binary example is provided in Section 3, and final conclusions are drawn in Section 4.

2. Problem Formulation and New Result

Notations: In this paper, random variables are written in upper-case letters (e.g., V), their values in lower-case letters (e.g., v), and their alphabets in calligraphic letters (e.g., 𝒱). Random vectors and their values are written in a similar way. The probability Pr{V = v} is abbreviated to P(v). In addition, throughout this paper the logarithm is taken to base 2.
Model description: Suppose that the common message W_0 is chosen to be transmitted and that it is uniformly distributed over its alphabet 𝒲_0 = {1, 2, …, M_0}. Analogously, the confidential message W_1 is chosen to be transmitted and is uniformly distributed over its alphabet 𝒲_1 = {1, 2, …, M_1}. The channel is discrete and memoryless with input X^N, outputs Y^N and Z^N, and transition probability P(y, z | x). At time i (1 ≤ i ≤ N), the legitimate receiver receives the channel output Y_i, and he sends the previous channel outputs Y_1, …, Y_{i−1} back to the transmitter via a noiseless feedback channel. Hence, at time i, the channel encoder f_i is given by
X_i = f_i(W_0, W_1) for i = 1, and X_i = f_i(W_0, W_1, Y^{i−1}) for 2 ≤ i ≤ N.    (7)
Here we should note that f_i does not need to be deterministic; stochastic encoding is also allowed. For the legitimate receiver, after receiving Y^N, he uses a decoding mapping ψ_1: 𝒴^N → 𝒲_0 × 𝒲_1 to obtain Ŵ_0 and Ŵ_1, which are estimates of the transmitted messages W_0 and W_1, respectively. The legitimate receiver’s decoding error probability P_{e1} is defined by
P_{e1} = (1 / (M_0 M_1)) Σ_{i=1}^{M_0} Σ_{j=1}^{M_1} Pr{ ψ_1(y^N) ≠ (i, j) | (i, j) sent }.    (8)
For the wiretapper, after receiving Z^N, he uses a decoding mapping ψ_2: 𝒵^N → 𝒲_0 to obtain W̌_0, which is an estimate of the transmitted message W_0. Moreover, the wiretapper also tries to decode the transmitted message W_1 from his own channel output Z^N, and his equivocation (uncertainty) about W_1 is denoted by
Δ = (1/N) H(W_1 | Z^N).    (9)
The wiretapper’s decoding error probability P_{e2} is defined by
P_{e2} = (1 / M_0) Σ_{i=1}^{M_0} Pr{ ψ_2(z^N) ≠ i | i sent }.    (10)
Finally, using criteria similar to those in [1,2], if for any small positive number ε there exists an encoding-decoding scheme with parameters M_0, M_1, N, P_{e1} and P_{e2} such that
(log M_0)/N ≥ R_0 − ε,  (log M_1)/N ≥ R_1 − ε,  Δ ≥ R_1 − ε,  P_{e1} ≤ ε,  P_{e2} ≤ ε,    (11)
we say that the rate pair (R_0, R_1) is achievable with perfect secrecy. The secrecy capacity region C_s^f is composed of all achievable secrecy rate pairs satisfying (11), and the following Theorem 1 provides an inner bound on C_s^f.
Theorem 1.
The secrecy capacity region C_s^f of the discrete memoryless BC-CM with noiseless feedback satisfies
C_s^f ⊇ C_s^{f-new},    (12)
where
C_s^{f-new} = { (R_0, R_1) :
    0 ≤ R_0 ≤ min{ I(U; Y, V_1), I(U; Z, V_2) } − I(U, V, Y; V_0, V_2 | Z),
    0 ≤ R_1 ≤ min{ [ I(V; Y, V_1 | U) − I(V; Z, V_2 | U) ]^+ + H(Y | Z, U, V, V_2), I(V; Y, V_1 | U) },
    0 ≤ R_0 + R_1 ≤ min{ I(U; Y, V_1), I(U; Z, V_2) } + I(V; Y, V_1 | U) − I(V_1; U, V, Y | V_0, Y)
                    − I(V_2; U, V, Y | V_0, Z) − max{ I(V_0; U, V, Y | Y), I(V_0; U, V, Y | Z) } },    (13)
the joint probability mass function P(v_0, v_1, v_2, u, v, x, y, z) is given by
P(v_0, v_1, v_2, u, v, x, y, z) = P(v_0, v_1, v_2 | u, v, y) P(y, z | x) P(x | u, v) P(v | u) P(u),
and the auxiliary random variables V_0, V_1, V_2, V and U take values in finite alphabets.
Proof. 
The coding scheme achieving the inner bound C_s^{f-new} combines Ahlswede and Cai’s scheme for the model of Figure 1 with a “generalized” Wyner-Ziv scheme for lossy source coding with side information [8]; the details of the proof of Theorem 1 are given in Appendix A. ☐
Remark 1.
There are some notes on Theorem 1; see the following.
  • Comparing our new inner bound C_s^{f-new} with Ahlswede and Cai’s inner bound C_s^{f-cai}, in general we do not know which one is larger. In the next section, we consider a binary case of the BC-CM with noiseless feedback and compute both inner bounds for this case. This binary example shows that the maximum achievable R_1 (the transmission rate of the confidential message under the perfect secrecy constraint) in C_s^{f-new} is larger than that in C_s^{f-cai}; however, the enhancement of R_1 comes at the cost of reducing the transmission rate R_0 of the common message.
  • Note here that in C_s^{f-new}, the auxiliary random variable U represents the encoded sequence for the common message and V represents the encoded sequence for both the common and confidential messages. The auxiliary random variable V_0 is an estimate of U available to both the legitimate receiver and the wiretapper, and its index is related to the update information generated from the noiseless feedback. The auxiliary random variable V_1 is the legitimate receiver’s estimate of V, and V_2 is the wiretapper’s estimate of V; the indexes of both V_1 and V_2 are also tied to the update information. The inner bound C_s^{f-new} is obtained by using the feedback both to generate a secret key shared by the legitimate receiver and the transmitter and to generate update information that provides estimates of the transmitted sequences U and V. These estimates help both the legitimate receiver and the wiretapper to improve their own received symbols Y and Z.

3. Binary Example of the BC-CM with Noiseless Feedback

Now we consider a binary case of the model of Figure 1. In this case, the channel input X and the channel outputs Y and Z take values in {0, 1}, and they satisfy
Y = X ⊕ Z_1,  Z = X ⊕ Z_2,    (14)
where ⊕ denotes modulo-2 addition, and Z_1 ~ Bern(p) (p < 0.5) and Z_2 ~ Bern(q) (q < 0.5) are the channel noises of the transmitter-to-legitimate-receiver channel and the transmitter-to-wiretapper channel, respectively; they are independent of each other and of the channel input X.
Without noiseless feedback, letting P(U=0) = α, P(U=1) = 1−α, P(V=0) = β, P(V=1) = 1−β and U ⊕ V = X, using the fact that U is independent of V, and substituting (14) into (2), it is not difficult to calculate the secrecy capacity region C_s^b of the binary BC-CM, and it is given by
C_s^b = { (R_0, R_1) :  0 ≤ R_0 ≤ min{ 1 − h(β∗p), 1 − h(β∗q) },
                        0 ≤ R_1 ≤ h(β∗p) − h(p) − h(β∗q) + h(q) },    (15)
where h(x) = −x log(x) − (1−x) log(1−x) and a∗b = a + b − 2ab. Here, note that the region (15) is achieved when α = 0.5.
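For readers who want to reproduce the numbers in this section, the short sketch below evaluates the R_1 bound of (15) by sweeping β on a grid; the helper names h and conv and the grid size are our own choices.

```python
import numpy as np

def h(x):
    """Binary entropy h(x) in bits."""
    return 0.0 if x <= 0.0 or x >= 1.0 else -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def conv(a, b):
    """Binary convolution a*b = a + b - 2ab."""
    return a + b - 2 * a * b

def max_R1_no_feedback(p, q, grid=2001):
    """Largest R1 allowed by the bound in (15), swept over beta in [0, 0.5]."""
    betas = np.linspace(0.0, 0.5, grid)
    best = max(h(conv(b, p)) - h(p) - h(conv(b, q)) + h(q) for b in betas)
    return max(0.0, best)

print(max_R1_no_feedback(0.05, 0.01))   # wiretapper less noisy than the legitimate receiver: 0
print(max_R1_no_feedback(0.05, 0.10))   # wiretapper noisier: a positive secrecy rate
```

Consistent with the discussion of Figure 2 below, the no-feedback secrecy rate vanishes when the wiretapper’s channel is less noisy (q < p) and is positive when it is noisier (q > p).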
With noiseless feedback, we first compute Ahlswede and Cai’s achievable secrecy rate region for this binary case. Letting P(U=0) = α, P(U=1) = 1−α, P(V=0) = β, P(V=1) = 1−β and U ⊕ V = X, using the fact that U is independent of V, and substituting (14) into (4), it is not difficult to calculate Ahlswede and Cai’s achievable secrecy rate region C_s^{bf*} for this binary case, and it is given by
C_s^{bf*} = { (R_0, R_1) :  0 ≤ R_0 ≤ min{ 1 − h(β∗p), 1 − h(β∗q) },
                            0 ≤ R_1 ≤ min{ h(β∗p) − h(p), [ h(β∗p) − h(p) − h(β∗q) + h(q) ]^+ + h(p) } },    (16)
where [x]^+ = x if x ≥ 0 and [x]^+ = 0 otherwise. Comparing (16) with (15), it is easy to see that the noiseless feedback enhances the secrecy capacity region of the binary BC-CM. Here, the region (16) is achieved when α = 0.5.
Then, it remains to compute our new achievable secrecy rate region for this binary case. Letting V_1 = (U, V), V_2 = U, V_0 = Z_1, P(U=0) = α, P(U=1) = 1−α, P(V=0) = β, P(V=1) = 1−β and U ⊕ V = X, using the fact that U is independent of V, and substituting (14) into C_s^{f-new} of Theorem 1, it is not difficult to show that the achievable secrecy rate region C_s^{bf} of our new feedback scheme is given by
C_s^{bf} = { (R_0, R_1) :  0 ≤ R_0 ≤ min{ 1 − h(β∗p) − h(p), 1 − h(β) − h(q) },
                           0 ≤ R_1 ≤ min{ h(β), h(β) − h(β∗q) + h(p) + h(q) },
                           0 ≤ R_0 + R_1 ≤ min{ 1 − h(β) − h(p) − h(q), 1 − h(β∗q) − h(p) } }.    (17)
The achievability of C_s^{bf} can be explained by the following simple scheme, which uses n blocks, each of length N.
  • First, note that in the following explanation the channel input x^N of the i-th block (1 ≤ i ≤ n) is denoted by x̃_i, and similar conventions apply to u^N, v^N, v_0^N, v_1^N, v_2^N, y^N, z^N, z_1^N and z_2^N. For each block, the transmitted message is composed of a common message, a confidential message, a dummy message and update information.
  • (Encoding): In the i-th block (2 ≤ i ≤ n), after the transmitter receives the feedback channel output ỹ_{i−1}, he generates a secret key from ỹ_{i−1} and uses this key to encrypt the confidential message of the i-th block. In addition, since ỹ_{i−1} = x̃_{i−1} ⊕ z̃_{1,i−1}, the transmitter also knows the legitimate receiver’s channel noise z̃_{1,i−1} of block i−1, and thus he chooses ṽ_{0,i} = ỹ_{i−1} ⊕ x̃_{i−1} = z̃_{1,i−1} as an estimate of ũ_{i−1}, ṽ_{1,i} = (ũ_{i−1}, ṽ_{i−1}) as the legitimate receiver’s estimate of x̃_{i−1}, and ṽ_{2,i} = ũ_{i−1} as the wiretapper’s estimate of x̃_{i−1}. Note that x̃_{i−1} = ũ_{i−1} ⊕ ṽ_{i−1} and that the update information is part of the indexes of ṽ_{0,i}, ṽ_{1,i} and ṽ_{2,i} (a toy sketch of these two uses of the feedback is given after this list).
  • (Decoding at the legitimate receiver): The legitimate receiver does backward decoding, i.e., the decoding starts from the last block. In block n, the legitimate receiver applies Ahlswede and Cai’s decoding scheme [3] to obtain his update information for block n. Then, using the channel output ỹ_n as side information, the legitimate receiver applies Wyner-Ziv’s decoding scheme [8] to obtain ṽ_{0,n} and ṽ_{1,n}. Since ṽ_{0,n} = z̃_{1,n−1}, the legitimate receiver knows his own channel noise of block n−1, and thus he computes ỹ_{n−1} ⊕ z̃_{1,n−1} to obtain x̃_{n−1} and the corresponding transmitted message for block n−1. Repeating the above decoding steps, the legitimate receiver obtains the transmitted messages (both confidential and common) of all blocks, and since he also knows the secret keys, he decrypts the real messages.
  • (Decoding at the wiretapper): The wiretapper also does backward decoding. In block n, the wiretapper receives z̃_n and applies Ahlswede and Cai’s decoding scheme [3] to obtain his update information for block n. Then, using the channel output z̃_n as side information, the wiretapper applies Wyner-Ziv’s decoding scheme [8] to obtain ṽ_{0,n} and ṽ_{2,n}. Since ṽ_{2,n} = ũ_{n−1}, the wiretapper knows the common message for block n−1. Repeating the above decoding steps, the wiretapper finally obtains the common messages of all blocks.
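The toy sketch below illustrates, for a single pair of adjacent blocks, the two ways the scheme uses the feedback: the fed-back output serves as a one-time-pad-style key for the next block’s confidential bits, and it also reveals the legitimate receiver’s noise z̃_1 to the transmitter (the seed of the update information). It is only an illustration of these two ideas, not the actual coding scheme; the block length, the stand-in key mapping and the omission of channel coding are our own simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 8, 0.05                        # toy block length and legitimate-channel noise level

# Block i: the transmitter sends x_i; the legitimate receiver observes y_i and feeds it back.
x_i  = rng.integers(0, 2, N)
z1_i = (rng.random(N) < p).astype(int)    # legitimate receiver's noise
y_i  = x_i ^ z1_i                         # fed back noiselessly

# Use 1: the feedback acts as a secret key for block i+1 (a stand-in for k = g(y_i)).
key = y_i
w_conf = rng.integers(0, 2, N)            # confidential bits of block i+1
x_next = w_conf ^ key                     # encrypted transmission (channel coding omitted)

# Use 2: the transmitter learns the legitimate receiver's noise of block i,
# which seeds the update information (v0 = z1 in the binary example).
z1_learned = y_i ^ x_i
assert np.array_equal(z1_learned, z1_i)

# The legitimate receiver already knows y_i, hence the key; assuming he has decoded x_next
# correctly (which the real scheme guarantees via channel coding), he recovers w_conf:
assert np.array_equal(x_next ^ key, w_conf)
```

In the real scheme the key is k_i = g_i(ỹ_{i−1}) rather than ỹ_{i−1} itself, and the Wyner-Ziv step turns the learned noise into the update information carried by the indexes of ṽ_{0,i}, ṽ_{1,i} and ṽ_{2,i}.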
Figure 2 shows the achievable secrecy rate region C_s^{bf} of our new scheme, Ahlswede and Cai’s achievable secrecy rate region C_s^{bf*} and the secrecy capacity region C_s^b of the binary BC-CM without feedback for p = 0.05 and q = 0.01, i.e., the case where the wiretapper’s channel noise is smaller than the legitimate receiver’s. From Figure 2, it is easy to see that when the wiretapper’s channel noise is smaller than the legitimate receiver’s, the secrecy rate R_1 of the binary BC-CM without feedback is 0, which means that perfect secrecy cannot be achieved, whereas the secrecy rate R_1 becomes positive once noiseless feedback is used. Moreover, our new scheme performs better than Ahlswede and Cai’s in enhancing the secrecy rate R_1; however, this boost of the secrecy rate R_1 comes at the cost of reducing the rate R_0 of the common message.
Figure 3 shows the achievable secrecy rate region C_s^{bf} of our new scheme, Ahlswede and Cai’s achievable secrecy rate region C_s^{bf*} and the secrecy capacity region C_s^b of the binary BC-CM without feedback for p = 0.05 and q = 0.1, i.e., the case where the wiretapper’s channel noise is larger than the legitimate receiver’s. From Figure 3, it is easy to see that noiseless feedback again enhances the secrecy rate of the BC-CM without feedback. However, we should also notice that the enhancement of the secrecy rate R_1 comes at the cost of reducing the rate R_0 of the common message.
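A quick numerical check of the first two regions is easy to script. The sketch below sweeps β and prints the largest R_1 permitted by (15) and by (16) at the two parameter pairs of Figures 2 and 3; it does not attempt to evaluate (17), and the helper names and grid size are our own choices.

```python
import numpy as np

def h(x):
    """Binary entropy in bits."""
    return 0.0 if x <= 0.0 or x >= 1.0 else -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def conv(a, b):
    """Binary convolution a*b = a + b - 2ab."""
    return a + b - 2 * a * b

def r1_no_feedback(b, p, q):       # R1 bound of (15)
    return h(conv(b, p)) - h(p) - h(conv(b, q)) + h(q)

def r1_ahlswede_cai(b, p, q):      # R1 bound of (16)
    diff = h(conv(b, p)) - h(p) - h(conv(b, q)) + h(q)
    return min(h(conv(b, p)) - h(p), max(0.0, diff) + h(p))

def max_r1(bound, p, q, grid=2001):
    """Largest R1 allowed by the given bound, swept over beta in [0, 0.5]."""
    return max(0.0, max(bound(b, p, q) for b in np.linspace(0.0, 0.5, grid)))

for p, q in [(0.05, 0.01), (0.05, 0.10)]:   # the settings of Figures 2 and 3
    print(p, q,
          round(max_r1(r1_no_feedback, p, q), 3),
          round(max_r1(r1_ahlswede_cai, p, q), 3))
```

At (p, q) = (0.05, 0.01) the no-feedback rate is zero while the feedback scheme of [3] already achieves a positive R_1, matching the qualitative behaviour of Figure 2.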

4. Conclusions

In this paper, we propose a new coding scheme for the BC-CM with noiseless feedback. From a binary example, we show that our new feedback scheme performs better than the existing feedback scheme in enhancing the secrecy level of the BC-CM. However, we should notice that this enhancement of the secrecy level is at the cost of reducing the rate of the common message.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grants 61671391, 61301121, 61571373 and the Open Research Fund of the State Key Laboratory of Integrated Services Networks, Xidian University (No. ISN17-13).

Author Contributions

Bin Dai, Xin Li and Zheng Ma did the theoretical work; Xin Li performed the experiments; Bin Dai and Xin Li analyzed the data; Xin Li wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BC-CM: broadcast channel with confidential messages

Appendix A. Proof of Theorem 1

Appendix A.1. Preliminary

For a given probability P(x), an independent and identically distributed (i.i.d.) generated sequence x^N is called ε-typical if
| N_{x^N}(x)/N − P(x) | ≤ ε P(x) for all x,
where N_{x^N}(x)/N is the frequency with which the symbol x appears in the sequence x^N. The set composed of all ε-typical x^N is denoted by T_ε^N(P(x)) and is called the typical set. The following four lemmas about the typical set are extensively used in information theory.
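As a quick sanity check of this definition, the snippet below counts symbol frequencies and tests the condition above for an i.i.d. sample; the distribution, sample size and tolerance are arbitrary choices for illustration.

```python
import numpy as np
from collections import Counter

def is_typical(x_seq, P, eps):
    """Check |N_x(a)/N - P(a)| <= eps * P(a) for every symbol a of the alphabet."""
    N = len(x_seq)
    counts = Counter(x_seq)
    return all(abs(counts.get(a, 0) / N - pa) <= eps * pa for a, pa in P.items())

rng = np.random.default_rng(1)
P = {0: 0.7, 1: 0.3}
x = rng.choice(list(P), size=10_000, p=list(P.values())).tolist()
print(is_typical(x, P, eps=0.05))    # True with high probability for N this large
```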
Lemma A1. (Covering Lemma [9]):
Let X^N satisfy P(X^N ∈ T_ε^N(P(x))) → 1 as N → ∞. Also, let M be an integer larger than 2^{Nr} for some r ≥ 0, and let {Y^N(m)}_{m=1}^M be a set of i.i.d. generated sequences Y^N (according to the probability P(y)) such that X^N and the sequences {Y^N(m)}_{m=1}^M are mutually independent. Then, for any probability P(x,y) with marginal probabilities P(x) and P(y), there exists an ε > 0 such that
lim_{N→∞} P( there is no m ∈ {1, 2, …, M} such that (X^N, Y^N(m)) ∈ T_ε^N(P(x,y)) ) = 0
if r > I(X;Y) + δ(ε), where δ(ε) → 0 as ε → 0.
Lemma A2. (Packing Lemma [9]):
Let X^N be an i.i.d. generated random vector with distribution P(x). Also, let M be an integer smaller than 2^{Nr} for some r ≥ 0, and let {Y^N(m)}_{m=1}^M be a set of i.i.d. generated sequences Y^N according to the probability P(y), where each Y^N(m) in the set is independent of X^N. Then, for any probability P(x,y) with marginal probabilities P(x) and P(y), there exists an ε > 0 such that
lim_{N→∞} P( there exists m ∈ {1, 2, …, M} such that (X^N, Y^N(m)) ∈ T_ε^N(P(x,y)) ) = 0
if r < I(X;Y) − δ(ε), where δ(ε) → 0 as ε → 0.
Lemma A3. (Generalized Packing Lemma [9]):
For some r_1, r_2, r_3 ≥ 0, let M_1, M_2, M_3 be integers satisfying M_1 ≤ 2^{N r_1}, M_2 ≤ 2^{N r_2} and M_3 ≤ 2^{N r_3}, respectively. Also, let {U_i^N(m)}_{m=1}^{M_i} (i = 1, 2, 3) be sets of i.i.d. generated sequences U_i^N (with respect to the distribution P(u_i)) such that U_1^N(m_1), U_2^N(m_2) and U_3^N(m_3) are mutually independent for any m_1, m_2, m_3. Then, for any probability P(u_1, u_2, u_3) with marginal probabilities P(u_1), P(u_2) and P(u_3), there exists an ε > 0 such that
lim_{N→∞} P( there exist m_i ∈ {1, 2, …, M_i} such that (U_1^N(m_1), U_2^N(m_2), U_3^N(m_3)) ∈ T_ε^N(P(u_1, u_2, u_3)) ) = 0
if r_1 + r_2 + r_3 < I(U_1; U_2) + I(U_3; U_1, U_2) − δ(ε), where δ(ε) → 0 as ε → 0.
Lemma A4. (Balanced coloring lemma [3]):
For any ε_1, ε_2, ε_3, δ > 0, sufficiently large N and all i.i.d. generated Y^N according to the distribution P(y), there exists a γ-coloring
c : T_{ε_1}^N(P(y)) → {1, 2, …, γ}
of T_{ε_1}^N(P(y)) such that, for every joint distribution P(u, v, v_2, y, z) with marginal distribution P(u, v, v_2, z) satisfying |T_{P(y|z,u,v,v_2)}^N(z^N, u^N, v^N, v_2^N)| / γ > 2^{N ε_2} and every (z^N, u^N, v^N, v_2^N) ∈ T_{ε_3}^N(P(z, u, v, v_2)),
|c^{−1}(k)| ≤ |T_{P(y|z,u,v,v_2)}^N(z^N, u^N, v^N, v_2^N)| (1 + δ) / γ,    (A1)
for k = 1, 2, …, γ, where c^{−1} is the inverse image of c.
Lemma A4 implies that if y^N, z^N, u^N, v^N and v_2^N are jointly typical, then for given z^N, u^N, v^N and v_2^N the number of sequences y^N ∈ T_{P(y|z,u,v,v_2)}^N(z^N, u^N, v^N, v_2^N) receiving a certain color k (k = 1, 2, …, γ), denoted by |c^{−1}(k)|, is upper bounded by |T_{P(y|z,u,v,v_2)}^N(z^N, u^N, v^N, v_2^N)| (1 + δ) / γ. By using Lemma A1, it is easy to see that the typical set T_{P(y|z,u,v,v_2)}^N(z^N, u^N, v^N, v_2^N) maps into at least
|T_{P(y|z,u,v,v_2)}^N(z^N, u^N, v^N, v_2^N)| / ( |T_{P(y|z,u,v,v_2)}^N(z^N, u^N, v^N, v_2^N)| (1 + δ) / γ ) = γ / (1 + δ)    (A2)
colors. On the other hand, the typical set T_{P(y|z,u,v,v_2)}^N(z^N, u^N, v^N, v_2^N) maps into at most γ colors.

Appendix A.2. Code Construction

Definitions:
  • Transmission takes place over n blocks, and each block has length N. Define the confidential message W_1 by W_1 = (W_{1,1}, …, W_{1,n}), where W_{1,i} (1 ≤ i ≤ n) is for block i and takes values in {1, 2, …, 2^{N R_1}}. Further divide W_{1,i} into W_{1,i} = (W_{1,1,i}, W_{1,2,i}), where W_{1,j,i} (j = 1, 2) takes values in {1, 2, …, 2^{N R_{1,j}}}, and R_{1,1} + R_{1,2} = R_1.
  • Define the common message W_0 by W_0 = (W_{0,1}, …, W_{0,n}), where W_{0,i} (1 ≤ i ≤ n) is for block i and takes values in {1, 2, …, 2^{N R_0}}.
  • Let W be a randomly generated dummy message transmitted over all blocks, denoted by W = (W_1, …, W_n), where W_i (1 ≤ i ≤ n) is for block i and takes values in {1, 2, …, 2^{N R}}.
  • Let W′_0 and W′_1 denote the update information transmitted over all blocks, written as W′_0 = (W′_{0,1}, …, W′_{0,n}) and W′_1 = (W′_{1,1}, …, W′_{1,n}), where W′_{0,i} and W′_{1,i} (1 ≤ i ≤ n) are for block i and take values in {1, 2, …, 2^{N R̃_0}} and {1, 2, …, 2^{N R̃_1}}, respectively. Further divide W′_{0,i} into W′_{0,i} = (W′_{0,0,i}, W′_{0,1,i}, W′_{0,2,i}), where W′_{0,j,i} (j = 0, 1, 2) takes values in {1, 2, …, 2^{N R̃_{0,j}}}, and R̃_{0,0} + R̃_{0,1} + R̃_{0,2} = R̃_0. Moreover, further divide W′_{1,i} into W′_{1,i} = (W′_{1,0,i}, W′_{1,1,i}), where W′_{1,j,i} (j = 0, 1) takes values in {1, 2, …, 2^{N R̃_{1,j}}}, and R̃_{1,0} + R̃_{1,1} = R̃_1.
  • Let X̃_i, Ỹ_i, Z̃_i, Ũ_i, Ṽ_i, Ṽ_{0,i}, Ṽ_{1,i} and Ṽ_{2,i} be the random vectors of block i (1 ≤ i ≤ n). Define X^n = (X̃_1, …, X̃_n), and apply a similar convention to Y^n, Z^n, U^n, V^n, V_0^n, V_1^n and V_2^n. The specific values of the above random vectors are denoted by lower-case letters.
Code construction:
  • In each block i (1 ≤ i ≤ n), randomly produce 2^{N(R_0 + R̃_0)} i.i.d. sequences ũ_i according to the probability P(u), and index them as ũ_i(w_{0,i}, w′_{0,0,i}, w′_{0,1,i}, w′_{0,2,i}), where w_{0,i} ∈ {1, 2, …, 2^{N R_0}}, w′_{0,0,i} ∈ {1, 2, …, 2^{N R̃_{0,0}}}, w′_{0,1,i} ∈ {1, 2, …, 2^{N R̃_{0,1}}} and w′_{0,2,i} ∈ {1, 2, …, 2^{N R̃_{0,2}}}. Here, note that R̃_{0,0} + R̃_{0,1} + R̃_{0,2} = R̃_0.
  • For a given ũ_i(w_{0,i}, w′_{0,0,i}, w′_{0,1,i}, w′_{0,2,i}), randomly produce 2^{N(R_1 + R + R̃_1)} i.i.d. sequences ṽ_i according to the conditional probability P(v|u), and index them as ṽ_i(w_{1,1,i}, w_{1,2,i}, w_i, w′_{1,0,i}, w′_{1,1,i}), where w_{1,1,i} ∈ {1, 2, …, 2^{N R_{1,1}}}, w_{1,2,i} ∈ {1, 2, …, 2^{N R_{1,2}}}, w_i ∈ {1, 2, …, 2^{N R}}, w′_{1,0,i} ∈ {1, 2, …, 2^{N R̃_{1,0}}} and w′_{1,1,i} ∈ {1, 2, …, 2^{N R̃_{1,1}}}. Here, note that R_{1,1} + R_{1,2} = R_1 and R̃_{1,0} + R̃_{1,1} = R̃_1.
  • The sequence x̃_i is produced i.i.d. according to a new discrete memoryless channel (DMC) with transition probability P(x|u,v). The inputs and output of this new DMC are ũ_i, ṽ_i and x̃_i, respectively.
  • In each block i (1 ≤ i ≤ n), generate ṽ_{0,i} in two ways: the first way is to produce 2^{N(R̃_{0,0} + R̃_0)} i.i.d. sequences ṽ_{0,i} according to the probability P(v_0|u,v,y) and index them as ṽ_{0,i}(1; w′_{0,0,i}, w′_{1,0,i}, l_{1,0,i}), where 1 indicates the first way of defining ṽ_{0,i}, w′_{0,0,i} ∈ {1, 2, …, 2^{N R̃_{0,0}}}, w′_{1,0,i} ∈ {1, 2, …, 2^{N R̃_{1,0}}} and l_{1,0,i} ∈ {1, 2, …, 2^{N(R̃_0 − R̃_{1,0})}}; the second way is to produce 2^{N(R̃_{0,0} + R̃_0)} i.i.d. sequences ṽ_{0,i} according to the probability P(v_0|u,v,y) and index them as ṽ_{0,i}(2; w′_{0,0,i}, l_{2,0,i}), where 2 indicates the second way of defining ṽ_{0,i}, w′_{0,0,i} ∈ {1, 2, …, 2^{N R̃_{0,0}}} and l_{2,0,i} ∈ {1, 2, …, 2^{N R̃_0}}.
  • In each block i (1 ≤ i ≤ n), produce 2^{N(R̃_{0,1} + R̃_{1,1} + R̃_1)} i.i.d. sequences ṽ_{1,i} according to the probability P(v_1|u,v,y), and index them as ṽ_{1,i}(w′_{0,1,i}, w′_{1,1,i}, l_{1,i}), where w′_{0,1,i} ∈ {1, 2, …, 2^{N R̃_{0,1}}}, w′_{1,1,i} ∈ {1, 2, …, 2^{N R̃_{1,1}}} and l_{1,i} ∈ {1, 2, …, 2^{N R̃_1}}.
  • In each block i (1 ≤ i ≤ n), produce 2^{N(R̃_{0,2} + R̃_2)} i.i.d. sequences ṽ_{2,i} according to the probability P(v_2|u,v,y), and index them as ṽ_{2,i}(w′_{0,2,i}, l_{2,i}), where w′_{0,2,i} ∈ {1, 2, …, 2^{N R̃_{0,2}}} and l_{2,i} ∈ {1, 2, …, 2^{N R̃_2}}.
Encoding scheme:
  • In block 1, the transmitter chooses ũ_1(w_{0,1}, 1, 1, 1) and ṽ_1(w_{1,1,1}, w_{1,2,1} = 1, w_1, 1, 1) to transmit.
  • In block i (2 ≤ i ≤ n−1), the transmitter receives the feedback ỹ_{i−1}, and he tries to select a pair of sequences (ṽ_{0,i−1}, ṽ_{1,i−1}) such that (ṽ_{0,i−1}(1; w′_{0,0,i−1}, w′_{1,0,i−1}, l_{1,0,i−1}), ṽ_{1,i−1}(w′_{0,1,i−1}, w′_{1,1,i−1}, l_{1,i−1}), ũ_{i−1}, ṽ_{i−1}, ỹ_{i−1}) are jointly typical. If there is more than one such pair (ṽ_{0,i−1}, ṽ_{1,i−1}), he randomly chooses one; if there is no such pair, an error is declared. Based on Lemma A1, it is easy to see that the error probability goes to 0 if
    R̃_{0,0} + R̃_0 ≥ I(V_0; U, V, Y),    (A3)
    R̃_{0,1} + R̃_{1,1} + R̃_1 ≥ I(V_1; U, V, Y, V_0).    (A4)
    Moreover, the transmitter also tries to select a pair of sequences (ṽ_{0,i−1}, ṽ_{2,i−1}) such that (ṽ_{0,i−1}(2; w′_{0,0,i−1}, l_{2,0,i−1}), ṽ_{2,i−1}(w′_{0,2,i−1}, l_{2,i−1}), ũ_{i−1}, ṽ_{i−1}, ỹ_{i−1}) are jointly typical. If there is more than one such pair (ṽ_{0,i−1}, ṽ_{2,i−1}), he randomly chooses one; if there is no such pair, an error is declared. Based on Lemma A1, it is easy to see that the error probability goes to 0 if (A3) and
    R̃_{0,2} + R̃_2 ≥ I(V_2; U, V, Y, V_0)    (A5)
    hold. Once the transmitter has selected such pairs (ṽ_{0,i−1}, ṽ_{1,i−1}) and (ṽ_{0,i−1}, ṽ_{2,i−1}), he chooses ũ_i(w_{0,i}, w′_{0,0,i−1}, w′_{0,1,i−1}, w′_{0,2,i−1}) to transmit.
    Before choosing the transmitted codeword ṽ_i, produce a mapping g_i: ỹ_{i−1} → {1, 2, …, 2^{N R_{1,2}}}. Furthermore, define K_i = g_i(Ỹ_{i−1}) as a random variable uniformly distributed over {1, 2, …, 2^{N R_{1,2}}} and independent of all the random vectors and messages of block i. Here, note that K_i is the secret key known by the transmitter and the legitimate receiver, and k_i = g_i(ỹ_{i−1}) ∈ {1, 2, …, 2^{N R_{1,2}}} is a specific value of K_i. The mapping g_i is revealed to the transmitter, the legitimate receiver and the wiretapper. Once the transmitter finds a pair (ṽ_{0,i−1}, ṽ_{1,i−1}) such that (ṽ_{0,i−1}(1; w′_{0,0,i−1}, w′_{1,0,i−1}, l_{1,0,i−1}), ṽ_{1,i−1}(w′_{0,1,i−1}, w′_{1,1,i−1}, l_{1,i−1}), ũ_{i−1}, ṽ_{i−1}, ỹ_{i−1}) are jointly typical and a pair (ṽ_{0,i−1}, ṽ_{2,i−1}) such that (ṽ_{0,i−1}(2; w′_{0,0,i−1}, l_{2,0,i−1}), ṽ_{2,i−1}(w′_{0,2,i−1}, l_{2,i−1}), ũ_{i−1}, ṽ_{i−1}, ỹ_{i−1}) are jointly typical, he chooses ṽ_i(w_{1,1,i}, w_{1,2,i} ⊕ k_i, w_i, w′_{1,0,i−1}, w′_{1,1,i−1}) to transmit.
  • In block n, the transmitter receives ỹ_{n−1}, and he finds a pair (ṽ_{0,n−1}, ṽ_{1,n−1}) such that (ṽ_{0,n−1}(1; w′_{0,0,n−1}, w′_{1,0,n−1}, l_{1,0,n−1}), ṽ_{1,n−1}(w′_{0,1,n−1}, w′_{1,1,n−1}, l_{1,n−1}), ũ_{n−1}, ṽ_{n−1}, ỹ_{n−1}) are jointly typical. Moreover, he also finds a pair (ṽ_{0,n−1}, ṽ_{2,n−1}) such that (ṽ_{0,n−1}(2; w′_{0,0,n−1}, l_{2,0,n−1}), ṽ_{2,n−1}(w′_{0,2,n−1}, l_{2,n−1}), ũ_{n−1}, ṽ_{n−1}, ỹ_{n−1}) are jointly typical. Then he chooses ũ_n(1, w′_{0,0,n−1}, w′_{0,1,n−1}, w′_{0,2,n−1}) and ṽ_n(1, 1, 1, w′_{1,0,n−1}, w′_{1,1,n−1}) to transmit.
Decoding scheme for the legitimate receiver: The legitimate receiver does backward decoding after the transmission of all n blocks is finished. For block n, he first tries to find a unique ũ_n such that (ũ_n, ỹ_n) are jointly typical. If there is no such ũ_n or there are multiple ones, a decoding error is declared. Using Lemma A2, the error probability goes to 0 if
R̃_{0,0} + R̃_{0,1} + R̃_{0,2} ≤ I(U; Y).    (A6)
Then, he tries to find a unique ṽ_n such that (ũ_n, ṽ_n, ỹ_n) are jointly typical. If there is no such ṽ_n or there are multiple ones, a decoding error is declared. Using Lemma A2, the error probability goes to 0 if
R̃_{1,0} + R̃_{1,1} ≤ I(V; Y | U).    (A7)
When ũ_n and ṽ_n are successfully decoded, the legitimate receiver extracts w′_{0,0,n−1}, w′_{0,1,n−1}, w′_{0,2,n−1}, w′_{1,0,n−1} and w′_{1,1,n−1} from them. Then, using Wyner-Ziv’s decoding scheme [8] for source coding with side information, the legitimate receiver tries to find unique ṽ_{0,n−1} and ṽ_{1,n−1} such that, given w′_{0,0,n−1}, w′_{0,1,n−1}, w′_{0,2,n−1}, w′_{1,0,n−1} and w′_{1,1,n−1}, the sequences (ṽ_{0,n−1}, ṽ_{1,n−1}, ỹ_{n−1}) are jointly typical. If there is no such ṽ_{1,n−1} or there are multiple ones, a decoding error is declared. Based on Lemma A2 and Lemma A3, the error probability goes to 0 if
R̃_1 ≤ I(V_1; V_0, Y),    (A8)
R̃_1 + R̃_0 − R̃_{1,0} ≤ I(V_0; Y) + I(V_1; V_0, Y).    (A9)
For block n−1, after ṽ_{1,n−1} is successfully decoded, the legitimate receiver tries to find a unique ũ_{n−1} such that (ũ_{n−1}, ỹ_{n−1}, ṽ_{1,n−1}) are jointly typical. Based on Lemma A2, the error probability goes to 0 if
R_0 + R̃_0 ≤ I(U; Y, V_1).    (A10)
Then he tries to find a unique ṽ_{n−1} such that (ũ_{n−1}, ṽ_{n−1}, ỹ_{n−1}, ṽ_{1,n−1}) are jointly typical. If there is no such ṽ_{n−1} or there are multiple ones, a decoding error is declared. Using Lemma A2, the error probability goes to 0 if
R_{1,1} + R_{1,2} + R + R̃_{1,0} + R̃_{1,1} ≤ I(V; Y, V_1 | U).    (A11)
When ũ_{n−1} and ṽ_{n−1} are successfully decoded, the legitimate receiver extracts w_{0,n−1}, w_{1,1,n−1}, w_{1,2,n−1} ⊕ k_{n−1}, w_{n−1}, w′_{0,0,n−2}, w′_{0,1,n−2}, w′_{0,2,n−2}, w′_{1,0,n−2} and w′_{1,1,n−2} from them. Since the legitimate receiver knows the key k_{n−1} = g_{n−1}(ỹ_{n−2}), the transmitted messages w_{0,n−1}, w_{1,1,n−1} and w_{1,2,n−1} for block n−1 are obtained. Repeating the above decoding steps, the legitimate receiver obtains all the transmitted messages of all blocks.
Decoding scheme for the wiretapper: The wiretapper also does backward decoding after the transmission of all n blocks is finished. For block n, he first tries to find a unique ũ_n such that (ũ_n, z̃_n) are jointly typical. If there is no such ũ_n or there are multiple ones, a decoding error is declared. Using Lemma A2, the error probability goes to 0 if
R̃_{0,0} + R̃_{0,1} + R̃_{0,2} ≤ I(U; Z).    (A12)
When ũ_n is successfully decoded, the wiretapper extracts w′_{0,0,n−1}, w′_{0,1,n−1} and w′_{0,2,n−1} from it. Then, using Wyner-Ziv’s decoding scheme [8] for source coding with side information, the wiretapper tries to find unique ṽ_{0,n−1} and ṽ_{2,n−1} such that, given w′_{0,0,n−1}, w′_{0,1,n−1} and w′_{0,2,n−1}, the sequences (ṽ_{0,n−1}, ṽ_{2,n−1}, z̃_{n−1}) are jointly typical. If there is no such ṽ_{2,n−1} or there are multiple ones, a decoding error is declared. Based on Lemma A2 and Lemma A3, the error probability goes to 0 if
R̃_2 ≤ I(V_2; V_0, Z),    (A13)
R̃_2 + R̃_0 ≤ I(V_0; Z) + I(V_2; V_0, Z).    (A14)
For block n−1, after ṽ_{2,n−1} is successfully decoded, the wiretapper tries to find a unique ũ_{n−1} such that (ũ_{n−1}, z̃_{n−1}, ṽ_{2,n−1}) are jointly typical. Based on Lemma A2, the error probability goes to 0 if
R_0 + R̃_0 ≤ I(U; Z, V_2).    (A15)
When ũ_{n−1} is successfully decoded, the wiretapper extracts w_{0,n−1}, w′_{0,0,n−2}, w′_{0,1,n−2} and w′_{0,2,n−2} from it. Repeating the above decoding steps, the wiretapper obtains the common messages of all blocks.

Appendix A.3. Equivocation Analysis

For all blocks, the equivocation Δ is bounded by
Δ = (1/(nN)) H(W_0, W_1 | Z^n) ≥ (1/(nN)) H(W_1 | Z^n, W_0) =^{(a)} (1/(nN)) ( H(W_{11} | Z^n, W_0) + H(W_{12} | Z^n, W_0, W_{11}) ),    (A16)
where (a) follows from the definitions W_{11} = (W_{1,1,1}, …, W_{1,1,n}) and W_{12} = (W_{1,2,1}, …, W_{1,2,n}).
The first part H(W_{11} | Z^n, W_0) of (A16) can be lower bounded by
H(W_{11} | Z^n, W_0)
  ≥ H(W_{11} | Z^n, W_0, U^n, V_2^n)
  =^{(b)} H(W_{11} | Z^n, U^n, V_2^n)
  = H(W_{11}, Z^n, U^n, V_2^n) − H(Z^n, U^n, V_2^n)
  = H(W_{11}, Z^n, U^n, V_2^n, V^n) − H(V^n | W_{11}, Z^n, U^n, V_2^n) − H(Z^n, U^n, V_2^n)
  =^{(c)} H(Z^n, V_2^n | U^n, V^n) + H(U^n, V^n) − H(V^n | W_{11}, Z^n, U^n, V_2^n) − H(Z^n, V_2^n | U^n) − H(U^n)
  = H(V^n | U^n) − H(V^n | W_{11}, Z^n, U^n, V_2^n) − I(Z^n, V_2^n; V^n | U^n)
  =^{(d)} (n−1)N R_{1,1} + (n−2)N R_{1,2} + (n−1)N R + (n−1)N (R̃_{1,0} + R̃_{1,1}) − H(V^n | W_{11}, Z^n, U^n, V_2^n) − I(Z^n, V_2^n; V^n | U^n)
  ≥^{(e)} (n−1)N R_{1,1} + (n−2)N R_{1,2} + (n−1)N R + (n−1)N (R̃_{1,0} + R̃_{1,1}) − nN I(V; Z, V_2 | U) − H(V^n | W_{11}, Z^n, U^n, V_2^n)
  ≥^{(f)} (n−1)N R_{1,1} + (n−2)N R_{1,2} + (n−1)N R + (n−1)N (R̃_{1,0} + R̃_{1,1}) − nN I(V; Z, V_2 | U) − nN ε,    (A17)
where (b) follows from H(W_0 | U^n) = 0, (c) follows from H(W_{11} | V^n) = 0, (d) follows from the construction of U^n and V^n, (e) follows from the fact that the channel is memoryless, and (f) follows from the fact that, given w_{11}, u^n, v_2^n and z^n, the wiretapper tries to find a unique v^n such that (v^n, z^n, v_2^n, u^n) are jointly typical; based on Lemma A2, the wiretapper’s decoding error probability tends to 0 if
R_{1,2} + R + R̃_{1,0} + R̃_{1,1} ≤ I(V; Z, V_2 | U),    (A18)
and then, using Fano’s inequality, we have (1/(nN)) H(V^n | W_{11}, Z^n, U^n, V_2^n) ≤ ε, where ε → 0 as n, N → ∞.
Moreover, the second part H(W_{12} | Z^n, W_0, W_{11}) of (A16) can be lower bounded by
H(W_{12} | Z^n, W_0, W_{11})
  ≥ Σ_{i=2}^{n−1} H(W_{1,2,i} | Z^n, W_0, W_{11}, W_{1,2,1}, …, W_{1,2,i−1}, W_{1,2,i} ⊕ K_i)
  =^{(g)} Σ_{i=2}^{n−1} H(W_{1,2,i} | Z̃_{i−1}, W_{1,2,i} ⊕ K_i)
  ≥ Σ_{i=2}^{n−1} H(W_{1,2,i} | Z̃_{i−1}, Ũ_{i−1}, W_{1,2,i} ⊕ K_i)
  = Σ_{i=2}^{n−1} H(K_i | Z̃_{i−1}, Ũ_{i−1}, W_{1,2,i} ⊕ K_i)
  =^{(h)} Σ_{i=2}^{n−1} H(K_i | Z̃_{i−1}, Ũ_{i−1})
  ≥ Σ_{i=2}^{n−1} H(K_i | Z̃_{i−1}, Ũ_{i−1}, Ṽ_{i−1}, Ṽ_{2,i−1}),    (A19)
where (g) follows from the Markov chain W_{1,2,i} → (Z̃_{i−1}, W_{1,2,i} ⊕ K_i) → (W_0, W_{11}, W_{1,2,1}, …, W_{1,2,i−1}, Z̃_1, …, Z̃_{i−2}, Z̃_i, …, Z̃_n), and (h) follows from the Markov chain K_i → (Z̃_{i−1}, Ũ_{i−1}) → W_{1,2,i} ⊕ K_i. It remains to bound H(K_i | Z̃_{i−1}, Ũ_{i−1}, Ṽ_{i−1}, Ṽ_{2,i−1}) in (A19); see the following.
From Lemma A4 and (A2), we know that the typical set T_{P(y|z,u,v,v_2)}^N(z^N, u^N, v^N, v_2^N) maps into at least γ/(1+δ) colors. Choosing γ = |T_{P(y|z,u,v,v_2)}^N(z^N, u^N, v^N, v_2^N)| and noticing that
|T_{P(y|z,u,v,v_2)}^N(z^N, u^N, v^N, v_2^N)| ≥ (1 − ε_1) 2^{N(1−ε_2) H(Y|U,V,V_2,Z)},    (A20)
where ε_1, ε_2 → 0 as N → ∞, we can conclude that
H(K_i | Z̃_{i−1}, Ũ_{i−1}, Ṽ_{i−1}, Ṽ_{2,i−1}) ≥ log( γ / (1+δ) ) ≥ log( (1−ε_1) / (1+δ) ) + N (1−ε_2) H(Y | U, V, V_2, Z).    (A21)
Substituting (A21) into (A19), we have
H(W_{12} | Z^n, W_0, W_{11}) ≥ (n−2) log( (1−ε_1) / (1+δ) ) + (n−2) N (1−ε_2) H(Y | U, V, V_2, Z).    (A22)
Finally, substituting (A17) and (A22) into (A16), we have
Δ ≥ ((n−1)/n) (R_{1,1} + R + R̃_{1,0} + R̃_{1,1}) + ((n−2)/n) R_{1,2} − I(V; Z, V_2 | U) − ε + ((n−2)/(nN)) log( (1−ε_1)/(1+δ) ) + ((n−2)/n) (1−ε_2) H(Y | U, V, V_2, Z).    (A23)
The bound (A23) implies that if
R + R̃_{1,0} + R̃_{1,1} ≥ I(V; Z, V_2 | U) − H(Y | U, V, V_2, Z),    (A24)
we can prove that Δ ≥ R_{1,1} + R_{1,2} − ε by choosing sufficiently large n and N.
The achievable secrecy rate region can now be obtained from (A3), (A4), (A5), (A6), (A7), (A8), (A9), (A10), (A11), (A12), (A13), (A14), (A15), (A18) and (A24). To be specific, first, using R̃_0 = R̃_{0,0} + R̃_{0,1} + R̃_{0,2}, R̃_1 = R̃_{1,0} + R̃_{1,1} and the Markov chain (V_0, V_1, V_2) → (U, V, Y) → (Y, Z), and applying Fourier-Motzkin elimination to eliminate R̃_{0,0}, R̃_{0,1}, R̃_{0,2}, R̃_{1,0}, R̃_{1,1}, R̃_0, R̃_1 and R̃_2 from (A3), (A4), (A5), (A6), (A7), (A8), (A9), (A12), (A13) and (A14), we obtain
R̃_0 ≥ I(U, V, Y; V_0, V_2 | Z),    (A25)
R̃_0 + R̃_1 ≥ I(U, V, Y; V_1 | V_0, Y) + I(U, V, Y; V_2 | V_0, Z) + max{ I(V_0; U, V, Y | Y), I(V_0; U, V, Y | Z) }.    (A26)
Then, using R_1 = R_{1,1} + R_{1,2} and R̃_1 = R̃_{1,0} + R̃_{1,1}, and applying Fourier-Motzkin elimination to eliminate R_{1,1}, R_{1,2}, R̃_{1,0}, R̃_{1,1} and R from (A10), (A11), (A15), (A18), (A24), (A25) and (A26), we obtain the achievable secrecy rate region C_s^{f-new} of Theorem 1. This completes the proof of Theorem 1.

References

  1. Wyner, A.D. The wire-tap channel. Bell Syst. Tech. J. 1975, 54, 1355–1387.
  2. Csiszar, I.; Korner, J. Broadcast channels with confidential messages. IEEE Trans. Inf. Theory 1978, 24, 339–348.
  3. Ahlswede, R.; Cai, N. Transmission, identification and common randomness capacities for wire-tap channels with secure feedback from the decoder. Gen. Theory Inf. Trans. Comb. 2006, 258–275.
  4. Ardestanizadeh, E.; Franceschetti, M.; Javidi, T.; Kim, Y. Wiretap channel with secure rate-limited feedback. IEEE Trans. Inf. Theory 2009, 55, 5353–5361.
  5. Lai, L.; El Gamal, H.; Poor, V. The wiretap channel with feedback: encryption over the channel. IEEE Trans. Inf. Theory 2008, 54, 5059–5067.
  6. Yin, X.; Xue, Z.; Dai, B. Capacity-equivocation regions of the DMBCs with noiseless feedback. Math. Probl. Eng. 2013, 2013, 102069.
  7. Dai, B.; Han Vinck, A.J.; Luo, Y.; Zhuang, Z. Capacity region of non-degraded wiretap channel with noiseless feedback. In Proceedings of the 2012 IEEE International Symposium on Information Theory (ISIT), Cambridge, MA, USA, 1–6 July 2012.
  8. Wyner, A.; Ziv, J. The rate-distortion function for source coding with side information at the decoder. IEEE Trans. Inf. Theory 1976, 22, 1–10.
  9. El Gamal, A.; Kim, Y.H. Information measures and typicality. In Network Information Theory; Cambridge University Press: Cambridge, UK, 2011; pp. 17–37.
Figure 1. Broadcast channel with confidential messages and noiseless feedback.
Figure 2. The comparison of our new scheme with Ahlswede-Cai’s scheme and Csiszar-Korner’s scheme of the BC-CM without feedback for p = 0.05 and q = 0.01.
Figure 3. The comparison of our new scheme with Ahlswede-Cai’s scheme and Csiszar-Korner’s scheme of the BC-CM without feedback for p = 0.05 and q = 0.1.
