Article

Secure Retrospective Interference Alignment

Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ 85721, USA
*
Author to whom correspondence should be addressed.
Entropy 2019, 21(11), 1092; https://doi.org/10.3390/e21111092
Submission received: 21 September 2019 / Revised: 16 October 2019 / Accepted: 24 October 2019 / Published: 7 November 2019
(This article belongs to the Special Issue Wireless Networks: Information Theoretic Perspectives)

Abstract

In this paper, the K-user interference channel with secrecy constraints is considered with delayed channel state information at transmitters (CSIT). We propose a novel secure retrospective interference alignment scheme in which the transmitters carefully mix information symbols with artificial noises to ensure confidentiality. Achieving positive secure degrees of freedom (SDoF) is challenging due to the delayed nature of CSIT and the distributed nature of the transmitters. Our scheme works over two phases: phase one, in which each transmitter sends information symbols mixed with artificial noises and repeats such transmission over multiple rounds; and phase two, in which each transmitter uses the delayed CSIT of the previous phase and sends a function of the net interference and artificial noises (generated in the previous phase), which is simultaneously useful for all receivers. These phases are designed to ensure the decodability of the desired messages while satisfying the secrecy constraints. We present our achievable scheme for three models, namely: (1) the K-user interference channel with confidential messages (IC-CM), for which we show that $\frac{1}{2}(\sqrt{K}-6)$ SDoF is achievable; (2) the K-user interference channel with an external eavesdropper (IC-EE); and (3) the K-user IC with confidential messages and an external eavesdropper (IC-CM-EE). We show that for the K-user IC-EE, $\frac{1}{2}(\sqrt{K}-3)$ SDoF is achievable, and for the K-user IC-CM-EE, $\frac{1}{2}(\sqrt{K}-6)$ SDoF is achievable. To the best of our knowledge, this is the first result on the K-user interference channel with secrecy constraints and delayed CSIT that achieves an SDoF which scales with $\sqrt{K}$, the square root of the number of users.

1. Introduction

Delayed channel state information at transmitters (CSIT) can impact the spectral efficiency of wireless networks, and this problem has received significant recent attention. Maddah Ali and Tse in [1] studied the delayed CSIT model for the K-user multiple-input single-output (MISO) broadcast channel, and showed that the optimal sum degrees of freedom (DoF) is given by $K/(1+\frac{1}{2}+\cdots+\frac{1}{K})$, which is strictly greater than one DoF (with no CSIT) and less than K DoF (with perfect CSIT). For the K-user single-input single-output (SISO) X network, $\frac{K^2}{2K-1}$ is the maximum DoF with perfect CSIT [2]. In [3], Ghasemi et al. devised a transmission scheme for the X channel with delayed CSIT, and showed that for the K-user SISO X channel under delayed CSIT, $\frac{4}{3}-\frac{2}{3(3K-1)}$ DoF are achievable. The problem of delayed CSIT for interference channels has been studied in several works [3,4,5,6,7]. The main drawback of these schemes is that the achievable DoF does not scale with the number of users. In a recent work [8], a novel transmission scheme for the K-user SISO interference channel was presented which achieves a sum DoF that scales as $\frac{\sqrt{K}}{2}$ almost surely under the delayed CSIT model. The result in [8] is particularly interesting, as it shows that the sum DoF for the K-user interference channel does scale with $\sqrt{K}$, even with delayed CSIT.
Another important aspect in wireless networks is ensuring secure communication between transmitters and receivers. Many seminal works in the literature (see the comprehensive surveys [9,10,11]) studied secure capacity regions for multi-user settings such as the wiretap channel and the broadcast and interference channels. Since the exact secure capacity regions for many multi-user networks are not known, secure degrees of freedom (SDoF) for a variety of models have been studied in [12,13,14,15,16,17,18]. More specifically, for the K-user MISO broadcast channel with confidential messages, the authors in [18] showed that the optimal sum SDoF with delayed CSIT is given by $K/(1+\frac{1}{2}+\cdots+\frac{1}{K}+\frac{K-1}{K})$. The achievability scheme is based on a modification of the (insecure) Maddah Ali and Tse scheme in [1], along with a key generation and exploitation method which uses delayed CSIT. The expression of the sum SDoF in [18] is almost the same as in [1], except for a penalty term due to the secrecy constraints. For the K-user SISO interference channel with confidential messages under perfect CSIT, Xie and Ulukus showed in [19] that the optimal sum SDoF is $\frac{K(K-1)}{2K-1}$. In [20], for the K-user interference channel with an external eavesdropper, it has been shown that $\frac{K-1}{2}$ SDoF is optimal for the interference channel with no eavesdropper CSIT. There are also various other works for different CSIT assumptions, such as the MIMO wiretap channel with no eavesdropper CSIT [21] and the broadcast channel with alternating CSIT [22].
In this work, we consider the K-user SISO interference channel with secrecy constraints and delayed CSIT. More specifically, we study three channel models (see Figure 1), namely: (1) K-user interference channel with confidential messages, (2) K-user interference channel with an external eavesdropper, and (3) K-user interference channel with confidential messages and an external eavesdropper. We focus on answering the following fundamental questions regarding these channel models: (a) is positive SDoF achievable for the interference channel with delayed CSIT?, and (b) if yes, then how does the SDoF scale with K, the number of users?
Contributions: We answer the above questions for all three channel models in the affirmative by showing that positive SDoF is indeed achievable for all these models for a large enough number of users K. We show that for the K-user interference channel with confidential messages (IC-CM), $\frac{1}{2}(\sqrt{K}-6)$ SDoF is achievable. Also, we show that for the K-user interference channel with an external eavesdropper (IC-EE), $\frac{1}{2}(\sqrt{K}-3)$ SDoF is achievable, and for the K-user interference channel with confidential messages and an external eavesdropper (IC-CM-EE), $\frac{1}{2}(\sqrt{K}-6)$ SDoF is achievable. In Table 1, we summarize the main results for the K-user IC under various secrecy constraints and different CSIT assumptions (i.e., perfect CSIT, delayed CSIT, and no CSIT).
These results highlight the fact that in the presence of delayed CSIT, there is negligible DoF scaling loss due to the secrecy constraints in the network compared to the no-secrecy case [8]. Our main contribution is a novel secure retrospective interference alignment scheme that is specialized for the interference channel with delayed CSIT. Our transmission scheme is inspired by the work of [8] in terms of the organization of the transmission phases. One of the main differences is that the transmitters mix their information symbols with artificial noises so that the signals at each unintended receiver are completely immersed in the space spanned by the artificial noise. However, this mixing must be done with only delayed CSIT, and it should also allow successful decoding at the respective receiver. Our scheme works over two phases: phase one, in which each transmitter sends information symbols mixed with artificial noises, and repeats such transmission over multiple rounds. Subsequently, in the next phase, each transmitter carefully sends a function of the net interference and artificial noises (generated in the previous phase), which is simultaneously useful to all receivers. The equivocation analysis of the proposed scheme is non-trivial due to the repetition and retransmission strategies employed by the transmitters.
Organization of the paper: The rest of the paper is organized as follows. Section 2 describes the system models. The main results and discussions are presented in Section 3. Section 4 provides the achievable scheme under delayed CSIT and confidential messages. Section 5 and Section 6 discuss two other secrecy constraints: (1) K-user interference channel with an external eavesdropper (IC-EE), and (2) K-user interference channel with confidential messages and an external eavesdropper (IC-CM-EE), respectively. We conclude the paper and discuss the future directions in Section 7. Finally, the detailed proofs are deferred to the Appendices.
Notations: Boldface uppercase letters denote matrices (e.g., $\mathbf{A}$), boldface lowercase letters are used for vectors (e.g., $\mathbf{a}$), scalars are denoted by non-boldface lowercase letters (e.g., x), and sets by capital calligraphic letters (e.g., $\mathcal{X}$). The sets of natural numbers, integers, real numbers, and complex numbers are denoted by $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{R}$, and $\mathbb{C}$, respectively. For a general matrix $\mathbf{A}$ with dimensions $M\times N$, $\mathbf{A}^T$ and $\mathbf{A}^H$ denote the transpose and Hermitian transpose of $\mathbf{A}$, respectively. We denote the partitioned matrix of two matrices $\mathbf{A}_{L\times N}$ and $\mathbf{B}_{L\times M}$ as $(\mathbf{A}:\mathbf{B})$. We denote the identity matrix of order M by $\mathbf{I}_M$. Let $h(\mathbf{x})$ denote the differential entropy of a random vector $\mathbf{x}$, and $I(\mathbf{x};\mathbf{y})$ denote the mutual information between two random vectors $\mathbf{x}$ and $\mathbf{y}$. We denote a complex Gaussian distribution with mean $\mu$ and variance $\sigma^2$ by $\mathcal{CN}(\mu,\sigma^2)$. For rounding operations on a variable x, we use $\lfloor x\rfloor$ as the floor of x and $\lceil x\rceil$ as the ceiling of x.

2. System Model

We consider the K-user interference channel with secrecy constraints and delayed CSIT (shown in Figure 1). The input–output relationship at time slot t is
$$y_k(t) = h_{kk}(t)x_k(t) + \sum_{j=1, j\neq k}^{K} h_{kj}(t)x_j(t) + n_k(t),$$
$$z(t) = \sum_{j=1}^{K} g_j(t)x_j(t) + n_z(t),$$
where $y_k(t)$ is the signal received at receiver k at time t, $h_{kj}(t)\sim\mathcal{CN}(0,1)$ is the channel coefficient at time t between transmitter j and receiver k, and $x_k(t)$ is the transmitted signal from transmitter k at time t with an average power constraint $\mathbb{E}\{|x_k(t)|^2\}\leq P$. The additive noise $n_k(t)\sim\mathcal{CN}(0,1)$ at receiver k is independent and identically distributed (i.i.d.) across users and time. $z(t)$ is the received signal at the eavesdropper at time t, $g_j(t)\sim\mathcal{CN}(0,1)$ is the channel coefficient at time t between transmitter j and the external eavesdropper, and $n_z(t)\sim\mathcal{CN}(0,1)$ is the additive noise at the eavesdropper. The channel coefficients are assumed to be i.i.d. across time and users. We assume perfect CSI at all the receivers. We further assume that the CSIT is delayed, i.e., CSI is available at each transmitter after one time slot without error. Also, we assume that the external eavesdropper's CSI is not available at the transmitters (i.e., no eavesdropper CSIT).
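The channel model above is straightforward to simulate. The following sketch is an illustration under the assumptions stated in this section (i.i.d. $\mathcal{CN}(0,1)$ coefficients, unit-variance noise, average power constraint P); it is not code from the paper, and the function and variable names are ours.

```python
# Minimal numerical sketch of the K-user IC with an external eavesdropper (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def cn(shape, var=1.0):
    """Draw i.i.d. circularly-symmetric complex Gaussian samples CN(0, var)."""
    return np.sqrt(var / 2) * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

def channel_use(x):
    """One time slot: x is the length-K vector of transmit symbols (E|x_k|^2 <= P assumed)."""
    K = len(x)
    H = cn((K, K))        # h_{kj}(t): channel from transmitter j to receiver k
    g = cn(K)             # g_j(t): channel from transmitter j to the eavesdropper
    y = H @ x + cn(K)     # signals at the K legitimate receivers
    z = g @ x + cn(())    # signal at the external eavesdropper
    return y, z, H, g

K, P = 4, 100.0
x = cn(K, var=P)          # example transmit symbols meeting the average power constraint
y, z, H, g = channel_use(x)
# Delayed CSIT: the matrix H becomes available to the transmitters only in the next time slot,
# and the eavesdropper channel g is never available to them.
```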
Let $R_k = \frac{\log_2 |\mathcal{W}_k|}{\tau}$ denote the rate of the message $W_k$ intended for receiver k, where $|\mathcal{W}_k|$ is the cardinality of the kth message set. A $(2^{\tau R_1}, 2^{\tau R_2}, \ldots, 2^{\tau R_K}, \tau)$ code is described by a set of encoding and decoding functions as follows. The encoders at the transmitters are given as $\{\psi_t^{(k)}: \mathcal{W}_k \times \{\mathbf{H}(t')\}_{t'=1}^{t-1} \rightarrow x_k(t)\}_{t=1}^{\tau}$, $k=1,\ldots,K$, where the message $W_k$ is uniformly distributed over the set $\mathcal{W}_k$, and $\mathbf{H}(t)\triangleq\{h_{kj}(t)\}_{k=1,j=1}^{K}$ is the set of all channel gains at time t. The transmitted signal from transmitter k at time slot t is given as $x_k(t) = \psi_t^{(k)}(W_k, \{\mathbf{H}(t')\}_{t'=1}^{t-1})$. The decoding function at receiver k is given by the mapping $\phi^{(k)}: y_k^{(\tau)} \times \{\mathbf{H}(t)\}_{t=1}^{\tau} \rightarrow \mathcal{W}_k$, and the estimate of the message at receiver k is $\hat{W}_k = \phi^{(k)}(\{y_k(t), \mathbf{H}(t)\}_{t=1}^{\tau})$. The rate tuple $(R_1,\ldots,R_K)$ is achievable if there exists a sequence of codes which satisfy the decodability constraints at the receivers, i.e.,
$$\limsup_{\tau\rightarrow\infty}\ \mathrm{Prob}\left(\hat{W}_k \neq W_k\right) \leq \epsilon_\tau, \quad k=1,\ldots,K,$$
and the corresponding secrecy requirement is satisfied. We consider three different secrecy requirements:
  • IC-CM, Figure 1a: all unintended messages are kept secure against each receiver, i.e.,
    $$\limsup_{\tau\rightarrow\infty}\ \frac{1}{\tau} I\left(W_{-k};\, \mathbf{y}_k^{(\tau)} \,\middle|\, W_k, \Omega\right) \leq \epsilon_\tau, \quad k=1,\ldots,K,$$
    where $\epsilon_\tau\rightarrow 0$ as $\tau\rightarrow\infty$, $W_{-k}\triangleq\{W_1,W_2,\ldots,W_K\}\setminus\{W_k\}$, and $\Omega\triangleq\{\mathbf{H}(t),\mathbf{G}(t)\}_{t=1}^{\tau}$ is the set of all channel gains (i.e., of the legitimate receivers and the external eavesdropper) over the channel uses.
  • IC-EE, Figure 1b: all of the messages are kept secure against the external eavesdropper, i.e.,
    $$\limsup_{\tau\rightarrow\infty}\ \frac{1}{\tau} I\left(W_1,W_2,\ldots,W_K;\, \mathbf{z}^{(\tau)} \,\middle|\, \Omega\right) \leq \epsilon_\tau.$$
  • IC-CM-EE, Figure 1c, all of the messages are kept secure against both the K 1 unintended receivers and the external eavesdropper, i.e., we impose both secrecy constraints in Equations (4) and (5).
The supremum of the achievable sum rate, $R_s\triangleq\sum_{k=1}^{K}R_k$, is defined as the secrecy sum capacity $C_s$. The optimal sum secure degrees of freedom (SDoF) is then defined as follows:
$$\mathrm{SDoF} \triangleq \lim_{P\rightarrow\infty}\frac{C_s}{\log P}.$$
SDoF represents the optimal scaling of the secrecy capacity with log ( P ) , where P is the transmitted power, i.e., it is the pre-log factor of the secrecy capacity at high SNR.
In the next Section, we present our main results on the achievable sum SDoF with the three different secrecy constraints and delayed CSIT.

3. Main Results

Theorem 1.
For the K-user IC-CM with delayed CSIT, the following secure sum degrees of freedom is achievable:
$$\mathrm{SDoF}_{\mathrm{IC\text{-}CM}} \geq \mathrm{SDoF}^{\mathrm{ach.}}_{\mathrm{IC\text{-}CM}} = \frac{KR(K-R-2)}{(K-1)\left(R(R+1)+K\right)},$$
where
$$R = \left\lfloor\frac{-K+K\sqrt{1+\frac{(K-1)(K-2)}{K}}}{K-1}\right\rfloor.$$
We next simplify the above expression and present a lower bound on the achievable SDoF. Using this lower bound, we observe that the achievable SDoF scales with $\sqrt{K}$, where K is the number of users.
Corollary 1.
For the K-user IC-CM with delayed CSIT, the achievable SDoF in Equation (7) is lower bounded as
$$\mathrm{SDoF}^{\mathrm{ach.}}_{\mathrm{IC\text{-}CM}} > \frac{1}{2}\left(\sqrt{K}-6\right)^{+},$$
where $(x)^{+}\triangleq\max(x,0)$.
We present the proof of Theorem 1 and Corollary 1 in Section 4.
Theorem 2.
For the K-user IC-EE with delayed CSIT, the following secure sum degrees of freedom is achievable:
$$\mathrm{SDoF}_{\mathrm{IC\text{-}EE}} \geq \mathrm{SDoF}^{\mathrm{ach.}}_{\mathrm{IC\text{-}EE}} = \frac{R(K-R-1)}{R(R+1)+K},$$
where
$$R = \left\lfloor\sqrt{K}-1\right\rfloor.$$
We next simplify the above expression and present a lower bound on $\mathrm{SDoF}^{\mathrm{ach.}}_{\mathrm{IC\text{-}EE}}$.
Corollary 2.
For the K-user IC-EE with delayed CSIT, the achievable SDoF in Theorem 2 is lower bounded as
$$\mathrm{SDoF}^{\mathrm{ach.}}_{\mathrm{IC\text{-}EE}} > \frac{1}{2}\left(\sqrt{K}-3\right)^{+}.$$
We present the proof of Theorem 2 and Corollary 2 in Section 5.
Theorem 3.
For the K-user IC-CM-EE with delayed CSIT, the following secure sum degrees of freedom is achievable:
$$\mathrm{SDoF}_{\mathrm{IC\text{-}CM\text{-}EE}} \geq \mathrm{SDoF}^{\mathrm{ach.}}_{\mathrm{IC\text{-}CM\text{-}EE}} = \frac{KR(K-R-2)}{(K-1)\left(R(R+1)+K\right)},$$
where
$$R = \left\lfloor\frac{-K+K\sqrt{1+\frac{(K-1)(K-2)}{K}}}{K-1}\right\rfloor.$$
In the next Corollary, we simplify the above expression and present a lower bound on $\mathrm{SDoF}^{\mathrm{ach.}}_{\mathrm{IC\text{-}CM\text{-}EE}}$.
Corollary 3.
For the K-user IC-CM-EE with delayed CSIT, the achievable SDoF in Theorem 3 is lower bounded as
$$\mathrm{SDoF}^{\mathrm{ach.}}_{\mathrm{IC\text{-}CM\text{-}EE}} > \frac{1}{2}\left(\sqrt{K}-6\right)^{+}.$$
We present the proof of Theorem 3 and Corollary 3 in Section 6.
Remark 1.
We next compare the secure sum DoF of the previous Theorems to that of [8] (i.e., without secrecy constraints). For the K-user interference channel without secrecy constraints, the achievable sum DoF in [8] is given as:
$$\mathrm{DoF}^{\mathrm{ach.}}_{\mathrm{sum}} = \frac{K}{\lfloor\sqrt{K}\rfloor - 1 + \frac{K}{\lfloor\sqrt{K}\rfloor}}, \quad K\geq 2, \qquad \mathrm{DoF}^{\mathrm{ach.}}_{\mathrm{sum}} \overset{(a)}{>} \frac{1}{2}\left(\sqrt{K}-1\right)^{+},$$
where (a) follows from the fact that $\lfloor x\rfloor > x-1$. Comparing these results, we can conclude that the $\sqrt{K}$ scaling behavior of the sum SDoF is still attainable, and that there is negligible scaling loss in the sum SDoF compared to the no-secrecy case for sufficiently large K (see Figure 2). We also present numerical evaluations of the sum SDoF in Table 2.
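The following short script is ours (not the authors'): it evaluates the achievable SDoF expressions of Theorems 1 and 2 and the DoF expression of [8] quoted in this remark, using the floor-based choices of R from Lemma 1 and Theorem 2, and reproduces the entries of Table 2.

```python
import math

def sdof_ic_cm(K):
    # Theorem 1 with R chosen as in Lemma 1 (also the IC-CM-EE value, Theorem 3).
    R = math.floor((-K + K * math.sqrt(1 + (K - 1) * (K - 2) / K)) / (K - 1))
    return K * R * (K - R - 2) / ((K - 1) * (R * (R + 1) + K))

def sdof_ic_ee(K):
    # Theorem 2 with R = floor(sqrt(K) - 1).
    R = math.floor(math.sqrt(K) - 1)
    return R * (K - R - 1) / (R * (R + 1) + K)

def dof_no_secrecy(K):
    # Sum DoF of [8] (no secrecy), as quoted in Remark 1.
    f = math.floor(math.sqrt(K))
    return K / (f - 1 + K / f)

for K in range(5, 13):
    print(K, round(sdof_ic_cm(K), 4), round(sdof_ic_ee(K), 4), round(dof_no_secrecy(K), 4))
# e.g., K = 5 gives 0.3571, 0.4286, 1.4286, matching the first column of Table 2.
```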

4. Proof of Theorem 1

In this Section, we present the proof of Theorem 1. The transmission scheme consists of $\tau$ transmission blocks, where each block is of duration B. Across blocks, we employ stochastic wiretap coding (similar to the techniques employed in the literature on compound wiretap channels, see [19,23,24]). Within each block, the transmission is divided into two phases, which leverage delayed CSIT. In order to analyze the rate of the proposed scheme, we first take the limit of the number of blocks $\tau\rightarrow\infty$, followed by the limit $B\rightarrow\infty$. For a given block, if we denote the (B-length) input of transmitter i as $\mathbf{x}_i$ and the output of receiver i as $\mathbf{y}_i$, then the secure rate achievable by stochastic wiretap coding is given by:
$$R_i = \frac{I(\mathbf{s}_i;\mathbf{y}_i\mid\Omega) - \max_{j\neq i} I\left(\mathbf{s}_i;\mathbf{y}_j\mid\{\mathbf{s}_m\}_{m\neq i},\Omega\right)}{B}, \quad i=1,2,\ldots,K,$$
where s i is the vector of information symbols sent by transmitter i. Figure 3 gives an overview for these steps: stochastic encoding over blocks, and the two-phase scheme within each block that leverages delayed CSIT.

4.1. Overview of the Achievability Scheme and SDoF Analysis

In this subsection, we present our secure transmission scheme. We consider a transmission block of length $RT + K(n+1)^N = R\left(Rn^N+(n+1)^N\right) + K(n+1)^N$, where R denotes the number of transmission rounds, $N = RK(K-1)$, and n is an integer. The transmission scheme works over two phases. The goal of each transmitter is to securely send $T_1 = Rn^N+(n+1)^N - \left\lceil\frac{RT+(K-1)(n+1)^N}{K-1}\right\rceil$ information symbols to its corresponding receiver. In the first phase, each transmitter sends random linear combinations of the $T_1$ information symbols and the $T_2 = \left\lceil\frac{RT+(K-1)(n+1)^N}{K-1}\right\rceil$ artificial noise symbols in T time slots. Each transmitter repeats such transmission for R rounds, and hence, phase one spans $RT$ time slots.
By the end of phase one, each receiver applies local interference alignment on its received signal to reduce the dimension of the aggregate interference. In the second phase, each transmitter knows the channel coefficients of phase one due to delayed CSIT. Subsequently, each transmitter sends a function of the net interference and artificial noises (generated in the previous phase) which is simultaneously useful to all receivers. More specifically, each transmitter separately sends $(n+1)^N$ linear equations of the past interference to all receivers. Therefore, phase 2 spans $K(n+1)^N$ time slots.
By the end of both phases, each receiver is able to decode its desired T 1 information symbols while satisfying the confidentiality constraints. The main aspect is that the parameters of the scheme (i.e., number of artificial noise symbols, number of repetition rounds, and durations of the phases) must be carefully selected to allow for reliable decoding of legitimate symbols, while satisfying the confidentiality constraints.
Therefore, the transmission scheme spans $RT + K(n+1)^N$ time slots in total, and leads to the following achievable SDoF:
$$\mathrm{SDoF}^{\mathrm{ach.}}_{\mathrm{IC\text{-}CM}} = \frac{KR(K-R-2)}{(K-1)\left(R(R+1)+K\right)}.$$
We calculate the achievable sum SDoF of this scheme in full detail in Section 4.3. Before we present the details of the scheme, we first optimize the achievable SDoF in Equation (20) with respect to the number of rounds R and also simplify the above expression, which leads to the expression in Corollary 1.
Lemma 1.
The optimal value of R which maximizes Equation (20) is given by
$$R = \left\lfloor\frac{-K+K\sqrt{1+\frac{(K-1)(K-2)}{K}}}{K-1}\right\rfloor.$$
Now, in order to simplify the obtained expression in Equation (20), we state the following Corollary.
Corollary 4.
The optimal value of the number of rounds R is lower bounded by
$$R > \sqrt{K} - 5 = R_{\mathrm{lb}}.$$
We present the proof of Lemma 1 and Corollary 4 in Appendix A.
Figure 4 depicts a comparison between the two values of R (i.e., the optimal R and the lower bound $R_{\mathrm{lb}}$). Substituting $R_{\mathrm{lb}}$ in Equation (20) leads to a lower bound on $\mathrm{SDoF}^{\mathrm{ach.}}_{\mathrm{IC\text{-}CM}}$ as follows (since the expression is increasing in R below the optimum and $R_{\mathrm{lb}} < R$ by Corollary 4):
$$\mathrm{SDoF}^{\mathrm{ach.}}_{\mathrm{IC\text{-}CM}} = \frac{KR(K-R-2)}{(K-1)\left(R(R+1)+K\right)} > \frac{R_{\mathrm{lb}}(K-R_{\mathrm{lb}}-2)}{R_{\mathrm{lb}}(R_{\mathrm{lb}}+1)+K} = \frac{(\sqrt{K}-5)(K-\sqrt{K}+3)}{(\sqrt{K}-5)(\sqrt{K}-4)+K}$$
$$= \frac{K\sqrt{K}-6K+8\sqrt{K}-15}{2K-9\sqrt{K}+20} \overset{(a)}{>} \frac{K\sqrt{K}-6K+8\sqrt{K}-15}{2K} = \frac{\sqrt{K}-6}{2} + \frac{8\sqrt{K}-15}{2K} \overset{(b)}{>} \frac{1}{2}\left(\sqrt{K}-6\right)^{+},$$
where in (a), the term $-9\sqrt{K}+20$ in the denominator is negative for $K\geq 5$, so neglecting it gives Equation (26). In step (b), the term $8\sqrt{K}-15$ is positive for $K\geq 4$; hence, omitting it gives Equation (28). To this end, we obtain Equation (28), which shows that the achievable SDoF scales with $\sqrt{K}$, the square root of the number of users.

4.2. Detailed Description of the Achievability Scheme

Figure 5 depicts an overview of the two transmission phases. We now present the transmission scheme in full detail. For our scheme, we collectively denote the L symbols transmitted over L time slots as a super symbol and call this as the L symbol extension of the channel. With the extended channel, the signal vector at the kth receiver can be expressed as
$$\mathbf{y}_k = \sum_{j=1}^{K}\mathbf{H}_{kj}\mathbf{x}_j + \mathbf{n}_k,$$
where $\mathbf{x}_j$ is an $L\times 1$ column vector representing the L symbols transmitted by transmitter j in L time slots, and $\mathbf{H}_{kj}$ is an $L\times L$ diagonal matrix representing the L-symbol extension of the channel as follows:
$$\mathbf{H}_{kj} = \mathrm{diag}\left(h_{kj}(1), h_{kj}(2), \ldots, h_{kj}(L)\right),$$
where h k j ( t ) is the channel coefficient between transmitter j and receiver k at time slot t. Now we proceed to the proposed scheme which works over two phases.

4.2.1. Phase 1: Interference Creation with Information Symbols and Artificial Noises

Recall that the goal of each transmitter is to send $T_1$ information symbols securely to its respective receiver. This phase is comprised of R rounds, where, in each round, every transmitter j sends linear combinations of the $T_1$ information symbols $\mathbf{s}_j\in\mathbb{C}^{T_1\times 1}$, mixed with $T_2$ artificial noises $\mathbf{u}_j\in\mathbb{C}^{T_2\times 1}$, where the elements of $\mathbf{u}_j$ are drawn from a complex Gaussian distribution with average power P. Hence, the signal sent by transmitter j in each round r can be written as
$$\mathbf{x}_j = \mathbf{V}_j\begin{bmatrix}\mathbf{s}_j\\ \mathbf{u}_j\end{bmatrix}, \quad j=1,2,\ldots,K,$$
where $\mathbf{V}_j$, $j=1,2,\ldots,K$, is a random mixing matrix of dimension $T\times T$ whose elements are drawn at transmitter j from a complex Gaussian distribution with zero mean and unit variance. $\mathbf{V}_j$, $j=1,2,\ldots,K$, is known at all terminals (all transmitters and receivers). The received signal at receiver k in round $r\in\{1,2,\ldots,R\}$ is given by
$$\mathbf{y}_k^r = \sum_{j=1}^{K}\mathbf{H}_{kj}^r\mathbf{x}_j + \mathbf{n}_k^r,$$
where $\mathbf{x}_j$ is the $T\times 1$ column vector representing the T-symbol extension of the transmitted symbols from transmitter j, and $\mathbf{n}_k^r$ represents the receiver noise in round r at receiver k. This phase spans $RT$ time slots, where $R\in\mathbb{N}$ is the number of transmission rounds, $T = Rn^N + (n+1)^N$, $n\in\mathbb{N}$, and $N = RK(K-1)$.
⋄ Interference Aggregation at Receivers
At the end of phase 1, each receiver k has the signals $\mathbf{y}_k = \{\mathbf{y}_k^r\}_{r=1}^{R}$ over the R rounds. Each receiver performs a linear post-processing of its received signals in order to align the aggregate interference (generated from symbols and artificial noises) from the $(K-1)$ unintended transmitters. In particular, each receiver multiplies its received signal in the rth round with a matrix $\mathbf{W}$ (of dimension $T\times n^N$) as follows:
$$\tilde{\mathbf{y}}_k^r = \mathbf{W}^H\mathbf{y}_k^r = \mathbf{W}^H\left(\sum_{j=1}^{K}\mathbf{H}_{kj}^r\mathbf{x}_j + \mathbf{n}_k^r\right) = \mathbf{W}^H\mathbf{H}_{kk}^r\mathbf{x}_k + \sum_{j\neq k}\mathbf{W}^H\mathbf{H}_{kj}^r\mathbf{x}_j + \tilde{\mathbf{n}}_k^r,$$
where $\tilde{\mathbf{n}}_k^r = \mathbf{W}^H\mathbf{n}_k^r$.
The goal is to design the matrices W and X such that
$$\mathbf{W}^H\mathbf{H}_{kj}^r \subseteq \mathbf{X}, \quad k=1,2,\ldots,K,\ k\neq j,\ r=1,2,\ldots,R,$$
where $\mathbf{X}\in\mathbb{C}^{(n+1)^N\times T}$. Here the notation $\mathbf{A}\subseteq\mathbf{B}$ means that the set of row vectors of matrix $\mathbf{A}$ is a subset of the row vectors of matrix $\mathbf{B}$. To this end, we choose $\mathbf{W}$ and $\mathbf{X}$ as follows:
$$\mathbf{W} = \left[\ \prod_{(r,m,i)\in\mathcal{S}}\left(\left(\mathbf{H}_{mi}^r\right)^{n_{mi}^r}\right)^{H}\mathbb{1} \ :\ 0\leq n_{mi}^r\leq n-1\ \right],$$
$$\mathbf{X} = \left[\ \prod_{(r,m,i)\in\mathcal{S}}\left(\left(\mathbf{H}_{mi}^r\right)^{n_{mi}^r}\right)^{H}\mathbb{1} \ :\ 0\leq n_{mi}^r\leq n\ \right]^{H},$$
where $\mathbb{1}$ is the all-ones column vector and $\mathcal{S} = \{(r,m,i): r\in\{1,\ldots,R\},\ m\neq i\in\{1,\ldots,K\}\}$. Note that the set $\mathcal{S}$ does not contain the channel matrix $\mathbf{H}_{kk}^r$ that carries the information symbols intended for receiver k. However, multiplying with any channel gain that appears in $\mathbf{W}$ results in aligning this signal within $\mathbf{X}$ asymptotically. It is worth noting that $\mathbf{X}$ defines all the possible interference generated by the transmitters at all receivers. Hence, this choice of $\mathbf{X}$ and $\mathbf{W}$ guarantees that the alignment condition in Equation (36) is satisfied. Therefore, the received signal after post-processing in round r at receiver k can be written as
$$\tilde{\mathbf{y}}_k^r = \mathbf{W}^H\mathbf{H}_{kk}^r\mathbf{x}_k + \sum_{j\neq k}\mathbf{W}^H\mathbf{H}_{kj}^r\mathbf{x}_j + \mathbf{W}^H\mathbf{n}_k^r = \mathbf{W}^H\mathbf{H}_{kk}^r\mathbf{x}_k + \sum_{j\neq k}\boldsymbol{\Pi}_{kj}^r\mathbf{X}\mathbf{x}_j + \mathbf{W}^H\mathbf{n}_k^r,$$
where $\boldsymbol{\Pi}_{kj}^r\in\mathbb{C}^{n^N\times(n+1)^N}$ is a selection and permutation matrix. After the end of phase 1, receiver k thus has $Rn^N$ equations in the T desired symbols (which are composed of the $T_1$ information symbols and the $T_2$ artificial noises generated by transmitter k) plus $(K-1)$ interference terms, each of dimension $(n+1)^N$. Figure 6 gives a detailed structure for the first phase of the transmission scheme.
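The alignment condition in Equation (36) can be checked numerically at toy sizes. The sketch below is purely illustrative (ours): it uses real-valued channel gains so that conjugation can be ignored, and the tiny parameters K = 2, R = 1, n = 2, which are far below the regime needed for positive SDoF but suffice to exhibit the row-inclusion property.

```python
# Toy check that every row of W^H H_{mi}^r (for (r,m,i) in S) is also a row of X.
import itertools
import numpy as np

rng = np.random.default_rng(1)
K, R, n = 2, 1, 2
S = [(r, m, i) for r in range(R) for m in range(K) for i in range(K) if m != i]
N = len(S)                      # = R*K*(K-1)
T = R * n**N + (n + 1)**N       # symbol-extension length

# One diagonal channel per (r, m, i): a length-T vector of its diagonal entries.
h = {s: rng.uniform(0.5, 2.0, T) for s in S}

def monomial_rows(max_exp):
    """Rows: prod over (r,m,i) of h_{mi}^r(t)**e, for all exponent tuples 0..max_exp."""
    rows = []
    for exps in itertools.product(range(max_exp + 1), repeat=N):
        rows.append(np.prod([h[s] ** e for s, e in zip(S, exps)], axis=0))
    return np.array(rows)

W_rows = monomial_rows(n - 1)   # rows of W^H: exponents 0..n-1  (n^N rows)
X_rows = monomial_rows(n)       # rows of X:   exponents 0..n    ((n+1)^N rows)

for s in S:                     # multiplying by the diagonal H_{mi}^r shifts one exponent by +1
    for row in W_rows * h[s]:
        assert any(np.allclose(row, x) for x in X_rows)
print("alignment verified:", W_rows.shape, "->", X_rows.shape)
```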

4.2.2. Phase 2: Re-Transmission of Aggregate Interference with Delayed CSIT

For the second phase, each transmitter k uses $(n+1)^N$ time slots to re-transmit the aggregated interference ($\mathbf{X}\mathbf{x}_k$) generated in the first phase at the receivers, which is sufficient to cancel out the interference term at every receiver $j\neq k$ and to provide additional $(n+1)^N$ equations in the desired symbols to receiver k. This phase therefore spans $K(n+1)^N$ time slots. The transmitted signal from transmitter k is as follows:
$$\mathbf{z}_k = \mathbf{X}\mathbf{x}_k, \quad k=1,2,\ldots,K.$$
⋄ Decoding at Receivers
At the end of phase 2, the interference at receiver k is removed by subtracting the terms $\sum_{j=1, j\neq k}^{K}\boldsymbol{\Pi}_{kj}^r\mathbf{X}\mathbf{x}_j$ from the equalized signal $\tilde{\mathbf{y}}_k^r$, i.e., (ignoring the additive noise $\mathbf{n}_k^r$)
$$\mathbf{W}^H\mathbf{H}_{kk}^r\mathbf{x}_k = \tilde{\mathbf{y}}_k^r - \sum_{j=1, j\neq k}^{K}\boldsymbol{\Pi}_{kj}^r\mathbf{X}\mathbf{x}_j.$$
Canceling the interference terms leaves each receiver k, $k\in\{1,\ldots,K\}$, with $Rn^N$ useful linear equations, in addition to the $(n+1)^N$ useful equations received from transmitter k in phase 2. At the end of phase 2, receiver k will therefore collectively have the following signal:
$$\underbrace{\left[\mathbf{X}^H,\ (\mathbf{W}^H\mathbf{H}_{kk}^1)^H,\ \ldots,\ (\mathbf{W}^H\mathbf{H}_{kk}^R)^H\right]^H}_{\mathbf{B}_k}\ \mathbf{V}_k\begin{bmatrix}\mathbf{s}_k\\ \mathbf{u}_k\end{bmatrix}.$$
Therefore, at the end of phase 2, each receiver has enough linear equations in the desired symbols. In order to ensure decodability, we need to prove that $\mathbf{B}_k\mathbf{V}_k$ is full rank, so that each receiver can decode its desired $T_1$ information symbols. First, we note that $\mathbf{V}_k$ is a full-rank matrix and hence $\mathrm{rank}(\mathbf{B}_k\mathbf{V}_k) = \mathrm{rank}(\mathbf{B}_k)$ [25]. In Appendix B, we show that $\mathbf{B}_k$ is full rank. Figure 7 gives a detailed structure for the second phase of the transmission scheme.
Before we start the achievable secure rate analysis, we first highlight the dimensions of the information symbols $\mathbf{s}_i\in\mathbb{C}^{T_1\times 1}$ and the artificial noises $\mathbf{u}_i\in\mathbb{C}^{T_2\times 1}$, $i=1,2,\ldots,K$.

4.2.3. Choice of T 1 and T 2 to Satisfy the Confidentiality Constraints

Without loss of generality, let us consider receiver 1. After decoding $\mathbf{s}_1$ and $\mathbf{u}_1$, receiver 1 is left with $RT$ equations in $\{\mathbf{s}_i\}_{i=2}^{K}, \{\mathbf{u}_i\}_{i=2}^{K}$ from phase one, and $(K-1)(n+1)^N$ equations in $\{\mathbf{s}_i\}_{i=2}^{K}, \{\mathbf{u}_i\}_{i=2}^{K}$ from phase two. Then, the total number of equations seen at receiver 1 is $RT + (K-1)(n+1)^N$. Hence, in order to keep the unintended information symbols of the $(K-1)$ other transmitters secure at this receiver, we require that the number of these equations be at most the total number of artificial-noise dimensions of the $(K-1)$ transmitters, i.e.,
$$RT + (K-1)(n+1)^N \leq (K-1)T_2.$$
Therefore, we choose $T_2$ as
$$T_2 = \left\lceil\frac{RT + (K-1)(n+1)^N}{K-1}\right\rceil.$$
Note that since $T = T_1 + T_2$, we can get $T_1$ as follows:
$$T_1 = Rn^N + (n+1)^N - \left\lceil\frac{RT + (K-1)(n+1)^N}{K-1}\right\rceil.$$
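As a concrete illustration of this parameter selection, the counting sketch below (ours, not the authors' code) instantiates N, T, T_1, and T_2 for K = 6, a single round (R = 1), and a large alignment exponent n, checks the decodability and secrecy counting conditions, and shows that the per-block SDoF estimate approaches the closed-form value of Equation (20); the specific numbers K = 6, R = 1, n = 10^4 are our own choices.

```python
# Counting sketch for the scheme parameters of Section 4 (illustrative only).
K, R, n = 6, 1, 10**4
N = R * K * (K - 1)                                   # number of alignment channel matrices
T = R * n**N + (n + 1)**N                             # duration of one round of phase one
T2 = -(-(R * T + (K - 1) * (n + 1)**N) // (K - 1))    # ceil(.): artificial-noise symbols
T1 = T - T2                                           # information symbols per transmitter
block = R * T + K * (n + 1)**N                        # total block length (phase 1 + phase 2)

# Decodability: R*n^N aligned equations (phase one) + (n+1)^N equations (phase two) = T unknowns.
assert R * n**N + (n + 1)**N == T
# Secrecy counting condition: equations seen at an unintended receiver do not exceed the
# artificial-noise dimension of the other K-1 transmitters.
assert R * T + (K - 1) * (n + 1)**N <= (K - 1) * T2

print("SDoF estimate:", K * T1 / block)                                        # ~ 0.448
print("limit value  :", K * R * (K - R - 2) / ((K - 1) * (R * (R + 1) + K)))   # 0.45
```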
We next compute the achievable secrecy rates and SDoF for the K-user interference channel with confidential messages and delayed CSIT.

4.3. Secrecy Rate and SDoF Calculation

Using the stochastic encoding described in Appendix XII of [26], for a block length $B = RT+K(n+1)^N$, the following secure rate is achievable:
$$R_i = \frac{I(\mathbf{s}_i;\mathbf{y}_i\mid\Omega) - \max_{j\neq i} I\left(\mathbf{s}_i;\mathbf{y}_j\mid\{\mathbf{s}_m\}_{m\neq i},\Omega\right)}{RT+K(n+1)^N}, \quad i=1,2,\ldots,K,$$
where $I(\mathbf{s}_i;\mathbf{y}_i\mid\Omega)$ is the mutual information between the information symbol vector $\mathbf{s}_i$ and $\mathbf{y}_i$, the received composite signal vector at the intended receiver i, given the knowledge of the channel coefficients, and $I(\mathbf{s}_i;\mathbf{y}_j\mid\{\mathbf{s}_m\}_{m\neq i},\Omega)$ is the mutual information between $\mathbf{s}_i$ and $\mathbf{y}_j$, the received composite signal vector at the unintended receiver j (i.e., the strongest adversary with respect to transmitter i), conditioned on the information symbols of all other transmitters. In terms of differential entropy, we can write
$$\text{Term A:}\quad I(\mathbf{s}_i;\mathbf{y}_i\mid\Omega) = h(\mathbf{y}_i\mid\Omega) - h(\mathbf{y}_i\mid\mathbf{s}_i,\Omega), \quad i=1,2,\ldots,K,$$
$$\text{Term B:}\quad I\left(\mathbf{s}_i;\mathbf{y}_j\mid\{\mathbf{s}_m\}_{m\neq i},\Omega\right) = h\left(\mathbf{y}_j\mid\{\mathbf{s}_m\}_{m\neq i},\Omega\right) - h\left(\mathbf{y}_j\mid\{\mathbf{s}_m\}_{m=1}^{K},\Omega\right), \quad j=1,2,\ldots,K,\ j\neq i.$$
We collectively write the received signal $\mathbf{y}_i$ at receiver i over the $RT+K(n+1)^N$ time slots as follows:
$$\mathbf{y}_i = \mathbf{A}_i\mathbf{V}\mathbf{q} + \mathbf{n}_i, \quad i=1,2,\ldots,K,$$
where,
$$\mathbf{A}_i = \begin{bmatrix}\mathbf{C}_i\\ \mathbf{D}_i\end{bmatrix}, \qquad \mathbf{C}_i = \begin{bmatrix}\mathbf{H}^1_{i1} & \mathbf{H}^1_{i2} & \cdots & \mathbf{H}^1_{iK}\\ \vdots & \vdots & & \vdots\\ \mathbf{H}^R_{i1} & \mathbf{H}^R_{i2} & \cdots & \mathbf{H}^R_{iK}\end{bmatrix}, \qquad \mathbf{D}_i = \mathrm{blkdiag}\left(\tilde{\mathbf{H}}_{i1}\mathbf{X}, \tilde{\mathbf{H}}_{i2}\mathbf{X}, \ldots, \tilde{\mathbf{H}}_{iK}\mathbf{X}\right),$$
where $\mathbf{A}_i$ has dimensions $\left(RT+K(n+1)^N\right)\times KT$. Note that $\mathbf{A}_i$ is partitioned into the two sub-matrices $\mathbf{C}_i$ and $\mathbf{D}_i$. $\mathbf{C}_i$ consists of block matrices, where each block has dimensions $T\times T$ and elements drawn i.i.d. from a continuous distribution; hence, it is full rank almost surely (i.e., $\mathrm{rank}(\mathbf{C}_i)=RT$). $\mathbf{D}_i$ has a block-diagonal structure (each block has dimensions $(n+1)^N\times T$), since the transmission in phase two of the scheme is done in a TDMA fashion. Note that each block is a full-rank matrix (i.e., $\mathrm{rank}(\tilde{\mathbf{H}}_{ij}\mathbf{X}) = \mathrm{rank}(\mathbf{X}) = (n+1)^N$, $j=1,\ldots,K$). The matrix $\mathbf{X}$ is a full-rank matrix as proved in [26]. The matrix $\mathbf{V}$ can be written as follows:
V = blkdiag ( V 1 , V 2 , , V K ) ,
where V is the block diagonal matrix with dimensions of K T × K T . Furthermore, we write
$$\mathbf{q} = \left[\mathbf{s}_1^T\ \mathbf{u}_1^T\ \mathbf{s}_2^T\ \mathbf{u}_2^T\ \cdots\ \mathbf{s}_K^T\ \mathbf{u}_K^T\right]^T$$
as a column vector of length K T , which contains the information symbols and the artificial noises of transmitters 1 , , K .
Before we proceed, we present two Lemmas; we provide the proof of Lemma 2 in Appendix C. For Lemma 3, we follow similar steps as in [17], the proof of this Lemma is provided in [26].
Lemma 2.
Let $\mathbf{A}$ be a matrix with dimension $M\times N$ and $\mathbf{X} = (x_1,\ldots,x_N)^T$ be a zero-mean jointly complex Gaussian random vector with covariance matrix $P\mathbf{I}$. Also, let $\mathbf{N} = (n_1,\ldots,n_M)^T$ be a zero-mean jointly complex Gaussian random vector with covariance matrix $\sigma^2\mathbf{I}$, independent of $\mathbf{X}$; then
$$h(\mathbf{A}\mathbf{X}+\mathbf{N}) = \log(\pi e)^M + \sum_{i=1}^{\mathrm{rank}(\mathbf{A})}\log\left(\lambda_i P + \sigma^2\right),$$
where $\{\lambda_i\}_{i=1}^{\mathrm{rank}(\mathbf{A})}$ are the non-zero eigenvalues of $\mathbf{A}\mathbf{A}^H$ (i.e., the squared singular values of $\mathbf{A}$).
Lemma 3.
Consider two matrices $\mathbf{A}\in\mathbb{C}^{M\times N}$ and $\mathbf{B}\in\mathbb{C}^{N\times M}$, where $M\leq N$. The elements of matrix $\mathbf{B}$ are chosen at random from a continuous distribution, independently of the entries of $\mathbf{A}$. Then,
$$\mathrm{rank}(\mathbf{A}\mathbf{B}) = \mathrm{rank}(\mathbf{A}), \quad \text{almost surely}.$$
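A quick numerical sanity check of Lemma 3 (ours, purely illustrative): right-multiplying a rank-deficient matrix by an independently drawn random matrix preserves its rank, almost surely.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 6, 10                                                     # M <= N, as in the lemma
A = rng.standard_normal((M, 3)) @ rng.standard_normal((3, N))    # a rank-3 M x N matrix
B = rng.standard_normal((N, M))                                  # entries independent of A
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A @ B) == 3
```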
Without loss of generality, let us consider the first transmitter. The received signal at the first receiver after removing the ( K 1 ) interference terms is written as follows:
$$\tilde{\mathbf{y}}_1 = \underbrace{\begin{bmatrix}\mathbf{W}^H & \mathbf{0} & \cdots & \mathbf{0}\\ \mathbf{0} & \mathbf{W}^H & \cdots & \mathbf{0}\\ \vdots & & \ddots & \vdots\\ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{I}\end{bmatrix}}_{\boldsymbol{\Psi}}\left(\underbrace{\begin{bmatrix}\mathbf{H}^1_{11}\\ \mathbf{H}^2_{11}\\ \vdots\\ \mathbf{H}^R_{11}\\ \tilde{\mathbf{H}}_{11}\mathbf{X}\end{bmatrix}\mathbf{V}_1}_{\mathbf{F}_1}\begin{bmatrix}\mathbf{s}_1\\ \mathbf{u}_1\end{bmatrix} + \underbrace{\begin{bmatrix}\mathbf{n}^1_1\\ \mathbf{n}^2_1\\ \vdots\\ \mathbf{n}^R_1\\ \tilde{\mathbf{n}}_1\end{bmatrix}}_{\mathbf{n}_1}\right).$$
⋄ Lower Bounding Term A
We note that $\mathbf{s}_1\rightarrow\mathbf{x}_1\rightarrow\mathbf{y}_1\rightarrow\tilde{\mathbf{y}}_1$ forms a Markov chain; thus
$$I(\mathbf{x}_1;\mathbf{y}_1\mid\Omega) \geq I(\mathbf{s}_1;\mathbf{y}_1\mid\Omega) \geq I(\mathbf{s}_1;\tilde{\mathbf{y}}_1\mid\Omega) = h(\tilde{\mathbf{y}}_1\mid\Omega) - h(\tilde{\mathbf{y}}_1\mid\mathbf{s}_1,\Omega).$$
Using Equation (56), we can write h ( y ˜ 1 | Ω ) as follows:
$$h(\tilde{\mathbf{y}}_1\mid\Omega) = h\left(\boldsymbol{\Psi}\left(\mathbf{F}_1\begin{bmatrix}\mathbf{s}_1\\ \mathbf{u}_1\end{bmatrix}+\mathbf{n}_1\right)\right) = h\left(\mathbf{F}_1\begin{bmatrix}\mathbf{s}_1\\ \mathbf{u}_1\end{bmatrix}+\mathbf{n}_1\right) + \log\left(\det(\boldsymbol{\Psi})\right)$$
$$= \log(\pi e)^{RT+(n+1)^N} + \sum_{i=1}^{\mathrm{rank}(\mathbf{F}_1)}\log(\lambda_i P+\sigma^2) + \log\left(\det(\boldsymbol{\Psi})\right) \overset{(a)}{=} \log(\pi e)^{RT+(n+1)^N} + \sum_{i=1}^{T}\log(\lambda_i P+\sigma^2) + \log\left(\det(\boldsymbol{\Psi})\right),$$
where $\{\lambda_i\}_{i=1}^{\mathrm{rank}(\mathbf{F}_1)}$ are the singular values of $\mathbf{F}_1$. In (a), we note that $\mathbf{B}_1 = \boldsymbol{\Psi}\mathbf{F}_1$ is full rank; using the full-rank decomposition theorem [25], we conclude that $\mathbf{F}_1$ is also full rank, i.e., $\mathrm{rank}(\mathbf{F}_1) = T$.
Now, we write h ( y ˜ 1 | s 1 , Ω ) as follows:
$$h(\tilde{\mathbf{y}}_1\mid\mathbf{s}_1,\Omega) = h\left(\boldsymbol{\Psi}\left(\tilde{\mathbf{F}}_1\mathbf{u}_1+\mathbf{n}_1\right)\right) = h\left(\tilde{\mathbf{F}}_1\mathbf{u}_1+\mathbf{n}_1\right) + \log\left(\det(\boldsymbol{\Psi})\right)$$
$$= \log(\pi e)^{RT+(n+1)^N} + \sum_{i=1}^{\mathrm{rank}(\tilde{\mathbf{F}}_1)}\log(\lambda_i P+\sigma^2) + \log\left(\det(\boldsymbol{\Psi})\right) \leq \log(\pi e)^{RT+(n+1)^N} + \sum_{i=1}^{T_2}\log(\lambda_i P+\sigma^2) + \log\left(\det(\boldsymbol{\Psi})\right),$$
where
$$\tilde{\mathbf{F}}_1 = \begin{bmatrix}\mathbf{H}^1_{11}\\ \mathbf{H}^2_{11}\\ \vdots\\ \mathbf{H}^R_{11}\\ \tilde{\mathbf{H}}_{11}\mathbf{X}\end{bmatrix}\mathbf{V}_{1,\mathbf{u}_1},$$
$\tilde{\mathbf{F}}_1$ has dimensions $\left(RT+(n+1)^N\right)\times T_2$, and $\{\lambda_i\}_{i=1}^{\mathrm{rank}(\tilde{\mathbf{F}}_1)}$ are the singular values of $\tilde{\mathbf{F}}_1$. $\mathbf{V}_{1,\mathbf{u}_1}$ has dimensions $T\times T_2$. Note that we can view the mixing matrix $\mathbf{V}_i$ as being composed of two parts, i.e., $\mathbf{V}_i = (\mathbf{V}_{i,\mathbf{s}_i}:\mathbf{V}_{i,\mathbf{u}_i})$, $i\in\{1,2,\ldots,K\}$, where $\mathbf{V}_{i,\mathbf{s}_i}$ corresponds to the information symbols $\mathbf{s}_i$ and $\mathbf{V}_{i,\mathbf{u}_i}$ corresponds to the artificial noise $\mathbf{u}_i$.
From the substitution of Equations (62) and (66) into Equation (58), we obtain
$$I(\mathbf{x}_1;\mathbf{y}_1\mid\Omega) \geq I(\mathbf{s}_1;\mathbf{y}_1\mid\Omega) \geq I(\mathbf{s}_1;\tilde{\mathbf{y}}_1\mid\Omega) \geq \sum_{i=1}^{T}\log(\lambda_i P+\sigma^2) - \sum_{i=1}^{T_2}\log(\lambda_i P+\sigma^2).$$
Before calculating the second term (Term B), we collectively write the received signal $\mathbf{y}_j$ at receiver j over the $RT+K(n+1)^N$ time slots as follows:
$$\mathbf{y}_j = \mathbf{A}_j\mathbf{V}\mathbf{q} + \mathbf{n}_j,$$
where A j is written as follows:
$$\mathbf{A}_j = \begin{bmatrix}\mathbf{C}_j\\ \mathbf{D}_j\end{bmatrix}, \qquad \mathbf{C}_j = \begin{bmatrix}\mathbf{H}^1_{j1} & \mathbf{H}^1_{j2} & \cdots & \mathbf{H}^1_{jK}\\ \vdots & \vdots & & \vdots\\ \mathbf{H}^R_{j1} & \mathbf{H}^R_{j2} & \cdots & \mathbf{H}^R_{jK}\end{bmatrix}, \qquad \mathbf{D}_j = \mathrm{blkdiag}\left(\tilde{\mathbf{H}}_{j1}\mathbf{X}, \tilde{\mathbf{H}}_{j2}\mathbf{X}, \ldots, \tilde{\mathbf{H}}_{jK}\mathbf{X}\right),$$
where $\mathbf{A}_j$ has dimensions $\left(RT+K(n+1)^N\right)\times KT$, and $\mathbf{V}$ and $\mathbf{q}$ are as defined earlier in this section.
⋄ Upper Bounding Term B
Now, we can compute Term B, i.e., $I(\mathbf{s}_1;\mathbf{y}_j\mid\{\mathbf{s}_m\}_{m\neq 1},\Omega)$, as follows:
$$I\left(\mathbf{s}_1;\mathbf{y}_j\mid\{\mathbf{s}_m\}_{m\neq 1},\Omega\right) = h\left(\mathbf{y}_j\mid\{\mathbf{s}_m\}_{m\neq 1},\Omega\right) - h\left(\mathbf{y}_j\mid\{\mathbf{s}_m\}_{m=1}^{K},\Omega\right) = h\left(\mathbf{A}_j\bar{\mathbf{V}}\bar{\mathbf{q}}+\mathbf{n}_j\right) - h\left(\mathbf{A}_j\tilde{\mathbf{V}}\tilde{\mathbf{q}}+\mathbf{n}_j\right),$$
where V ¯ is a truncated version of V with dimensions K T × ( T 1 + K T 2 ) . Furthermore, we write
$$\bar{\mathbf{q}} = \left[\mathbf{s}_1^T\ \mathbf{u}_1^T\ \mathbf{u}_2^T\ \cdots\ \mathbf{u}_K^T\right]^T$$
as a column vector of length T 1 + K T 2 , which contains T 1 information symbols and K T 2 artificial noises of transmitters. Also, V ˜ is a truncated version of V with dimensions K T × K T 2 . Furthermore, we write
$$\tilde{\mathbf{q}} = \left[\mathbf{u}_1^T\ \mathbf{u}_2^T\ \cdots\ \mathbf{u}_K^T\right]^T$$
as a column vector of length K T 2 , which contains only K T 2 artificial noises of transmitters.
Using Equation (70), we can write $h(\mathbf{y}_j\mid\{\mathbf{s}_m\}_{m\neq 1},\Omega)$ as follows:
$$h\left(\mathbf{y}_j\mid\{\mathbf{s}_m\}_{m\neq 1},\Omega\right) = h\left(\mathbf{A}_j\bar{\mathbf{V}}\bar{\mathbf{q}}+\mathbf{n}_j\right) \overset{(a)}{=} \log(\pi e)^{RT+K(n+1)^N} + \sum_{i=1}^{\mathrm{rank}(\mathbf{A}_j\bar{\mathbf{V}})}\log(\Lambda_i P+\sigma^2)$$
$$\leq \log(\pi e)^{RT+K(n+1)^N} + \sum_{i=1}^{\mathrm{rank}(\mathbf{A}_j\mathbf{V})}\log(\Lambda_i P+\sigma^2) \overset{(b)}{=} \log(\pi e)^{RT+K(n+1)^N} + \sum_{i=1}^{\mathrm{rank}(\mathbf{A}_j)}\log(\Lambda_i P+\sigma^2).$$
In (a), we used Lemma 2; $\{\Lambda_i\}_{i=1}^{\mathrm{rank}(\mathbf{A}_j\mathbf{V})}$ are the singular values of $\mathbf{A}_j\mathbf{V}$. In (b), $\mathbf{V}$ is an invertible matrix with rank $KT$; therefore $\mathrm{rank}(\mathbf{A}_j\mathbf{V}) = \mathrm{rank}(\mathbf{A}_j)$.
Now, we write $h(\mathbf{y}_j\mid\{\mathbf{s}_m\}_{m=1}^{K},\Omega)$ as follows:
$$h\left(\mathbf{y}_j\mid\{\mathbf{s}_m\}_{m=1}^{K},\Omega\right) = \log(\pi e)^{RT+K(n+1)^N} + \sum_{i=1}^{\mathrm{rank}(\mathbf{A}_j\tilde{\mathbf{V}})}\log(\Lambda_i P+\sigma^2) \overset{(a)}{\geq} \log(\pi e)^{RT+K(n+1)^N} + \sum_{i=1}^{\mathrm{rank}(\mathbf{A}_j\tilde{\tilde{\mathbf{V}}})}\log(\Lambda_i P+\sigma^2) \overset{(b)}{=} \log(\pi e)^{RT+K(n+1)^N} + \sum_{i=1}^{\mathrm{rank}(\mathbf{A}_j)}\log(\Lambda_i P+\sigma^2),$$
where $\{\Lambda_i\}_{i=1}^{\mathrm{rank}(\mathbf{A}_j\tilde{\mathbf{V}})}$ are the singular values of $\mathbf{A}_j\tilde{\mathbf{V}}$. In (a), $\tilde{\tilde{\mathbf{V}}}$ is a truncated version of $\tilde{\mathbf{V}}$ with dimensions $KT\times\left(RT+K(n+1)^N\right)$; therefore, $\mathrm{rank}(\mathbf{A}_j\tilde{\mathbf{V}}) \geq \mathrm{rank}(\mathbf{A}_j\tilde{\tilde{\mathbf{V}}})$. In (b), we used Lemma 3, i.e., $\mathrm{rank}(\mathbf{A}_j\tilde{\tilde{\mathbf{V}}}) = \mathrm{rank}(\mathbf{A}_j)$. The multiplication of $\mathbf{A}_j$ and $\tilde{\tilde{\mathbf{V}}}$ can be viewed as $RT+K(n+1)^N$ linear combinations of the $KT$ rows of matrix $\tilde{\tilde{\mathbf{V}}}$, whose elements are generated independently of $\mathbf{A}_j$ from a continuous distribution. In Appendix XI of [26], we show that multiplying $\mathbf{A}_j$ with a non-square random matrix $\tilde{\tilde{\mathbf{V}}}$ does not reduce the rank of matrix $\mathbf{A}_j$, almost surely. Hence, from the above argument, in order to ensure that $\mathrm{rank}(\mathbf{A}_j\tilde{\tilde{\mathbf{V}}}) = \mathrm{rank}(\mathbf{A}_j)$, we must pick $RT+(K-1)(n+1)^N \leq (K-1)T_2$, which gives the reasoning behind the choice of the parameter $T_2$.
From the substitution of Equations (81) and (84) into Equation (75), we obtain
$$I\left(\mathbf{s}_1;\mathbf{y}_j\mid\{\mathbf{s}_m\}_{m\neq 1},\Omega\right) \leq \sum_{i=1}^{\mathrm{rank}(\mathbf{A}_j)}\log(\Lambda_i P+\sigma^2) - \sum_{i=1}^{\mathrm{rank}(\mathbf{A}_j)}\log(\Lambda_i P+\sigma^2),$$
where the first sum runs over the singular values of $\mathbf{A}_j\mathbf{V}$ and the second over those of $\mathbf{A}_j\tilde{\tilde{\mathbf{V}}}$.
Combining Equations (69) and (85), we have
$$R_1 \geq \frac{\sum_{i=1}^{T}\log(\lambda_i P+\sigma^2) - \sum_{i=1}^{T_2}\log(\lambda_i P+\sigma^2) - \left(\sum_{i=1}^{\mathrm{rank}(\mathbf{A}_j)}\log(\Lambda_i P+\sigma^2) - \sum_{i=1}^{\mathrm{rank}(\mathbf{A}_j)}\log(\Lambda_i P+\sigma^2)\right)}{RT+K(n+1)^N}$$
$$\geq \underbrace{\frac{T\log(\lambda_{\min}P+\sigma^2) - T_2\log(\lambda_{\max}P+\sigma^2)}{RT+K(n+1)^N}}_{\text{Term 1}} - \underbrace{\frac{\mathrm{rank}(\mathbf{A}_j)\left(\log(\Lambda_{\max}P+\sigma^2)-\log(\Lambda_{\min}P+\sigma^2)\right)}{RT+K(n+1)^N}}_{\text{Term 2}},$$
where $\lambda_{\min} = \min_i\{\lambda_i\}_{i=1}^{\mathrm{rank}(\mathbf{F}_1)}$, $\lambda_{\max} = \max_i\{\lambda_i\}_{i=1}^{\mathrm{rank}(\tilde{\mathbf{F}}_1)}$, $\Lambda_{\min} = \min_i\{\Lambda_i\}_{i=1}^{\mathrm{rank}(\mathbf{A}_j\tilde{\mathbf{V}})}$, and $\Lambda_{\max} = \max_i\{\Lambda_i\}_{i=1}^{\mathrm{rank}(\mathbf{A}_j\mathbf{V})}$.
We now simplify the two terms as follows:
$$\text{Term 1} = \frac{T\log(\lambda_{\min}P+\sigma^2) - T_2\log(\lambda_{\max}P+\sigma^2)}{RT+K(n+1)^N}$$
$$= \frac{\left(Rn^N+(n+1)^N\right)\log(\lambda_{\min}P+\sigma^2) - \left\lceil\frac{R(Rn^N+(n+1)^N)+(K-1)(n+1)^N}{K-1}\right\rceil\log(\lambda_{\max}P+\sigma^2)}{R\left(Rn^N+(n+1)^N\right)+K(n+1)^N}$$
$$\overset{(a)}{\geq} \frac{\left(Rn^N+(n+1)^N\right)\log(\lambda_{\min}P+\sigma^2) - \left(\frac{R(Rn^N+(n+1)^N)+(K-1)(n+1)^N}{K-1}+1\right)\log(\lambda_{\max}P+\sigma^2)}{R\left(Rn^N+(n+1)^N\right)+K(n+1)^N},$$
where in (a), we used the property that $\lceil x\rceil < x+1$. Also,
$$\text{Term 2} = \frac{\mathrm{rank}(\mathbf{A}_j)\left(\log(\Lambda_{\max}P+\sigma^2)-\log(\Lambda_{\min}P+\sigma^2)\right)}{R\left(Rn^N+(n+1)^N\right)+K(n+1)^N}$$
$$\leq \frac{\left(R\left(Rn^N+(n+1)^N\right)+K(n+1)^N\right)\left(\log(\Lambda_{\max}P+\sigma^2)-\log(\Lambda_{\min}P+\sigma^2)\right)}{R\left(Rn^N+(n+1)^N\right)+K(n+1)^N} = \log(\Lambda_{\max}P+\sigma^2)-\log(\Lambda_{\min}P+\sigma^2).$$
Combining Equations (90) and (94) in Equation (87) and taking the limit $n\rightarrow\infty$, we get the following:
$$\lim_{n\rightarrow\infty} R_1 = \frac{(R+1)\log(\lambda_{\min}P+\sigma^2) - \left(\frac{R(R+1)}{K-1}+1\right)\log(\lambda_{\max}P+\sigma^2)}{R(R+1)+K} - \log(\Lambda_{\max}P+\sigma^2) + \log(\Lambda_{\min}P+\sigma^2).$$
Dividing $R_1$ by $\log(P)$ and letting $P\rightarrow\infty$, we get
$$d_1 = \lim_{P\rightarrow\infty}\frac{R_1}{\log(P)} = \frac{(R+1) - \left(\frac{R(R+1)}{K-1}+1\right)}{R(R+1)+K} = \frac{R(K-R-2)}{(K-1)\left(R(R+1)+K\right)}.$$
Therefore, the achievable secure sum degrees of freedom ($\mathrm{SDoF}^{\mathrm{ach.}}_{\mathrm{IC\text{-}CM}}$) is obtained as
$$\mathrm{SDoF}^{\mathrm{ach.}}_{\mathrm{IC\text{-}CM}} = \frac{KR(K-R-2)}{(K-1)\left(R(R+1)+K\right)}.$$
Hence, this completes the proof of Theorem 1.

5. Proof of Theorem 2

We follow an achievability scheme similar to the one presented in Section 4; the main differences are the number of information symbols and artificial noises used for transmission, and the number of rounds in the first phase of the scheme. The goal of each transmitter is to securely send $T_1 = Rn^N+(n+1)^N - \left\lceil\frac{RT+K(n+1)^N}{K}\right\rceil$ information symbols to its corresponding receiver, while keeping all messages secure against the external eavesdropper.
The total number of equations in $\{\mathbf{s}_i\}_{i=1}^{K}, \{\mathbf{u}_i\}_{i=1}^{K}$ seen at the eavesdropper is $RT+K(n+1)^N$. Hence, in order to keep the information symbols of all K transmitters secure at this receiver, we require that the number of these equations be at most the total number of artificial-noise dimensions of the K transmitters, i.e.,
$$RT + K(n+1)^N \leq KT_2.$$
Therefore, we choose $T_2$ as
$$T_2 = \left\lceil\frac{RT+K(n+1)^N}{K}\right\rceil.$$
Since $T = T_1 + T_2$, we can get $T_1$ as follows:
$$T_1 = Rn^N+(n+1)^N - \left\lceil\frac{RT+K(n+1)^N}{K}\right\rceil, \qquad T = Rn^N+(n+1)^N.$$
To this end, this scheme leads to the following achievable SDoF:
$$\mathrm{SDoF}^{\mathrm{ach.}}_{\mathrm{IC\text{-}EE}} = \frac{R(K-R-1)}{R(R+1)+K}.$$
The achieved SDoF in Equation (101) is a concave function of R; hence, the optimal R is obtained by equating its first derivative with respect to R to zero. Therefore, the optimal R is
$$R = \left\lfloor\sqrt{K}-1\right\rfloor > \sqrt{K}-2.$$
Now we lower bound the obtained SDoF as follows (substituting $R > \sqrt{K}-2$ and using the fact that the expression is increasing in R below the optimum):
$$\mathrm{SDoF}^{\mathrm{ach.}}_{\mathrm{IC\text{-}EE}} \geq \frac{(\sqrt{K}-2)(K-\sqrt{K}+1)}{(\sqrt{K}-2)(\sqrt{K}-1)+K} = \frac{K\sqrt{K}-3K+3\sqrt{K}-2}{2K-3\sqrt{K}+2} \overset{(a)}{>} \frac{K\sqrt{K}-3K+3\sqrt{K}-2}{2K} = \frac{\sqrt{K}-3}{2} + \frac{3\sqrt{K}-2}{2K} \overset{(b)}{>} \frac{1}{2}\left(\sqrt{K}-3\right)^{+},$$
where in (a), the term $-3\sqrt{K}+2$ in the denominator is negative for $K\geq 1$, so neglecting this term gives us Equation (106). In step (b), since the term $3\sqrt{K}-2$ is positive, omitting it gives Equation (108).

Secrecy Rate and SDoF Calculation

For a transmission block of length $B = RT+K(n+1)^N$, the achievable secure rate $R_i$, $i=1,2,\ldots,K$, is defined as
$$R_i = \frac{I(\mathbf{s}_i;\mathbf{y}_i\mid\Omega) - I\left(\mathbf{s}_i;\mathbf{z}\mid\{\mathbf{s}_m\}_{m\neq i},\Omega\right)}{RT+K(n+1)^N}, \quad i=1,2,\ldots,K,$$
where $I(\mathbf{s}_i;\mathbf{y}_i\mid\Omega)$ is the mutual information between the information symbol vector $\mathbf{s}_i$ and $\mathbf{y}_i$, the received composite signal vector at the intended receiver i, given the knowledge of the channel coefficients, and $I(\mathbf{s}_i;\mathbf{z}\mid\{\mathbf{s}_m\}_{m\neq i},\Omega)$ is the mutual information between $\mathbf{s}_i$ and $\mathbf{z}$, the received composite signal vector at the external eavesdropper, conditioned on the information symbols of the other transmitters. Note that $\mathbf{z}$ is collectively written as
$$\mathbf{z} = \mathbf{A}_z\mathbf{V}\mathbf{q} + \mathbf{n}_z,$$
where,
$$\mathbf{A}_z = \begin{bmatrix}\mathbf{C}_z\\ \mathbf{D}_z\end{bmatrix}, \qquad \mathbf{C}_z = \begin{bmatrix}\mathbf{G}^1_{1} & \mathbf{G}^1_{2} & \cdots & \mathbf{G}^1_{K}\\ \vdots & \vdots & & \vdots\\ \mathbf{G}^R_{1} & \mathbf{G}^R_{2} & \cdots & \mathbf{G}^R_{K}\end{bmatrix}, \qquad \mathbf{D}_z = \mathrm{blkdiag}\left(\tilde{\mathbf{G}}_{1}\mathbf{X}, \tilde{\mathbf{G}}_{2}\mathbf{X}, \ldots, \tilde{\mathbf{G}}_{K}\mathbf{X}\right),$$
where $\mathbf{A}_z$ has dimensions $\left(RT+K(n+1)^N\right)\times KT = KT_2\times KT$. Each $\mathbf{G}$ block is a diagonal matrix representing the channel gains between the corresponding transmitter and the external eavesdropper.
The analysis of the achievable secure rate and SDoF follows similar steps as those in Section 4.3. This completes the proof of Theorem 2.

6. Proof of Theorem 3

We follow the same transmission scheme presented in Section 4. The goal of each transmitter is to securely send $T_1 = Rn^N+(n+1)^N - \left\lceil\frac{RT+(K-1)(n+1)^N}{K-1}\right\rceil$ information symbols to its corresponding receiver, while keeping all messages secure against both the external eavesdropper and the unintended receivers.
Two secrecy constraints must be satisfied, i.e.,
$$RT + (K-1)(n+1)^N \leq (K-1)T_2 \quad (\text{confidential messages}),$$
$$RT + K(n+1)^N \leq KT_2 \quad (\text{eavesdropper}).$$
The eavesdropper constraint in Equation (114) can be re-written as
$$RT + (K-1)(n+1)^N + (n+1)^N \leq (K-1)T_2 + T_2.$$
So if we pick T 2 as
$$T_2 = \left\lceil\frac{RT+(K-1)(n+1)^N}{K-1}\right\rceil,$$
we need to check that $T_2 \geq (n+1)^N$. $T_2$ can be written as
$$T_2 = \left\lceil\frac{R\left(Rn^N+(n+1)^N\right)+(K-1)(n+1)^N}{K-1}\right\rceil > \frac{R\left(Rn^N+(n+1)^N\right)+(K-1)(n+1)^N}{K-1} - 1 \overset{(a)}{=} \frac{R^2}{K-1}n^N + \frac{R}{K-1}(n+1)^N + (n+1)^N - 1 > (n+1)^N,$$
where in (a), the first two terms are positive; hence, $T_2$ is strictly greater than $(n+1)^N$. To this end, we conclude that the two secrecy constraints are satisfied. Hence, we achieve the same SDoF as in Theorem 1, i.e.,
$$\mathrm{SDoF}^{\mathrm{ach.}}_{\mathrm{IC\text{-}CM\text{-}EE}} = \frac{KR(K-R-2)}{(K-1)\left(R(R+1)+K\right)} > \frac{1}{2}\left(\sqrt{K}-6\right)^{+}.$$

Secrecy Rate and SDoF Calculation

For a transmission block of length $B = RT+K(n+1)^N$, the achievable secure rate $R_i$, $i=1,2,\ldots,K$, is defined as
$$R_i = \frac{I(\mathbf{s}_i;\mathbf{y}_i\mid\Omega) - \max\left\{\max_{j\neq i} I\left(\mathbf{s}_i;\mathbf{y}_j\mid\{\mathbf{s}_m\}_{m\neq i},\Omega\right),\ I\left(\mathbf{s}_i;\mathbf{z}\mid\{\mathbf{s}_m\}_{m\neq i},\Omega\right)\right\}}{RT+K(n+1)^N}, \quad i=1,2,\ldots,K.$$
The analysis of the achievable secure rate and SDoF follows similar steps as those in Section 4.3. This completes the proof of Theorem 3.

7. Conclusions

In this paper, we studied the K-user interference channel with three secrecy-constrained channel models and delayed CSIT. We showed that for the K-user interference channel with confidential messages, the sum secure degrees of freedom (SDoF) is at least $\frac{1}{2}(\sqrt{K}-6)$, and scales with the square root of the number of users. Also, we showed that for the K-user interference channel with an external eavesdropper, $\frac{1}{2}(\sqrt{K}-3)$ SDoF is achievable. For the K-user interference channel with confidential messages and an external eavesdropper, we showed that $\frac{1}{2}(\sqrt{K}-6)$ SDoF is achievable. To achieve these results, we proposed novel secure retrospective interference alignment schemes which satisfy both secrecy and decodability constraints at the receivers. To the best of our knowledge, this is the first result showing scaling of the SDoF for the interference channel with secrecy constraints and delayed CSIT. An interesting open problem is to investigate the optimality of these schemes by finding upper bounds on the SDoF with delayed CSIT for these channel models.

Author Contributions

Conceptualization, M.S., R.T. and M.L.; Formal analysis, M.S., R.T. and M.L.; Investigation, M.S., R.T. and M.L.; Methodology, M.S., R.T. and M.L.; Validation, M.S., R.T. and M.L.; Visualization, M.S., R.T. and M.L.; Writing—original draft, M.S., R.T. and M.L.; Writing—review and editing, M.S., R.T. and M.L.

Funding

This work of M.S., R.T., and M.L. was supported in part by U.S. NSF through grants CCF-1559758 and CNS-1715947, CNS-1564477, and ONR YIP grant N00014-16-1-2650, and was presented in part at ISIT conference, Colorado, USA, June 2018.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. (Proof of Lemma 1 and Corollary 4)

By taking the first derivative of Equation (20) with respect to the number of rounds R, we get
$$\frac{\partial}{\partial R}\,\mathrm{SDoF}^{\mathrm{ach.}}_{\mathrm{IC\text{-}CM}}(K,R) = \frac{K\left(K^2 - K(R^2+2R+2) + R^2\right)}{(K-1)\left(K+R^2+R\right)^2}.$$
For $R < R^*$, the function $\mathrm{SDoF}^{\mathrm{ach.}}_{\mathrm{IC\text{-}CM}}(K,R)$ strictly increases, and for $R > R^*$ the function strictly decreases, where $R^*$ is given by
$$R^* = \frac{-K+K\sqrt{1+\frac{(K-1)(K-2)}{K}}}{K-1}.$$
Alternatively,
$$\frac{\partial}{\partial R}\,\mathrm{SDoF}^{\mathrm{ach.}}_{\mathrm{IC\text{-}CM}}(K,R) > 0, \quad R < R^*,$$
$$\frac{\partial}{\partial R}\,\mathrm{SDoF}^{\mathrm{ach.}}_{\mathrm{IC\text{-}CM}}(K,R) < 0, \quad R > R^*.$$
Figure A1 shows the behavior of the achievable sum SDoF as a function of the number of rounds R. The optimal value of R can be obtained by equating the first derivative of the sum SDoF to zero as follows:
$$\frac{\partial}{\partial R}\,\mathrm{SDoF}^{\mathrm{ach.}}_{\mathrm{IC\text{-}CM}}(K,R) = \frac{\partial}{\partial R}\left[\frac{KR(K-R-2)}{(K-1)\left(R(R+1)+K\right)}\right] = 0 \;\Longrightarrow\; \frac{\partial}{\partial R}\left[\frac{R(K-R-2)}{R(R+1)+K}\right] = 0.$$
Figure A1. Plot of the achievable sum SDoF as a function of the number of rounds R for different numbers of users K.
After differentiating Equation (A6) and setting the numerator to zero, we get the following:
$$R^2(K-1) + 2KR - K(K-2) = 0.$$
The solution of the previous equation is
$$R \overset{(a)}{=} \left\lfloor\frac{-K+K\sqrt{1+\frac{(K-1)(K-2)}{K}}}{K-1}\right\rfloor \geq \left\lfloor\frac{-K+K\sqrt{1+\frac{(K-1)(K-2)}{K}}}{K}\right\rfloor = \left\lfloor-1+\sqrt{1+\frac{(K-1)(K-2)}{K}}\right\rfloor$$
$$\overset{(b)}{>} \sqrt{\frac{(K-1)(K-2)}{K}} - 2 = \sqrt{\frac{K^2-3K+2}{K}} - 2 > \sqrt{\frac{K^2-3K}{K}} - 2 = \sqrt{K-3} - 2 \overset{(c)}{>} \sqrt{K}-3-2 = \sqrt{K}-5 = R_{\mathrm{lb}},$$
where in (a), since $R\in\mathbb{N}$, we apply the floor rounding operator to the obtained value of R. In (b), we used the property of the floor operator, i.e., $\lfloor x\rfloor > x-1$, and dropped the 1 inside the square root. In (c), the term $\sqrt{K-3}$ is greater than $\sqrt{K}-3$ for all $K\geq 3$.
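Lemma 1 and Corollary 4 can be cross-checked numerically. The following sketch (ours) verifies, for a range of K, that the stationary point of the continuous relaxation of Equation (20) is a maximizer and that its floor exceeds $\sqrt{K}-5$.

```python
import math

def sdof(K, R):
    # Equation (20) viewed as a function of a real-valued R, for fixed K.
    return K * R * (K - R - 2) / ((K - 1) * (R * (R + 1) + K))

for K in range(5, 200):
    R_star = (-K + K * math.sqrt(1 + (K - 1) * (K - 2) / K)) / (K - 1)
    eps = 1e-3
    assert sdof(K, R_star) >= sdof(K, R_star - eps)    # increasing just before R*
    assert sdof(K, R_star) >= sdof(K, R_star + eps)    # decreasing just after R*
    assert math.floor(R_star) > math.sqrt(K) - 5       # Corollary 4
```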

Appendix B. (Proof of Linear Independence in Equation (43))

In this Appendix, we show that by the end of the transmission scheme, each receiver gets $T = Rn^N+(n+1)^N$ linearly independent equations in the desired signals (i.e., the information symbols and the artificial noises). To this end, we need to show that the following matrix
$$\mathbf{B}_k = \left[\mathbf{X}^H, (\mathbf{W}^H\mathbf{H}^1_{kk})^H, \ldots, (\mathbf{W}^H\mathbf{H}^R_{kk})^H\right]^H$$
is full rank. Since $\mathbf{B}_k$ is a square matrix, it is sufficient to show that $\det(\mathbf{B}_k)\neq 0$, $k=1,2,\ldots,K$. Without loss of generality, we consider receiver 1, which has the following matrix:
$$\mathbf{B}_1 = \left[\mathbf{X}^H, (\mathbf{W}^H\mathbf{H}^1_{11})^H, \ldots, (\mathbf{W}^H\mathbf{H}^R_{11})^H\right]^H.$$
Since $\det(\mathbf{B}_1) = 0$ if and only if $\det(\mathbf{B}_1^H) = 0$, we will instead show that $\det(\mathbf{B}_1^H)\neq 0$, where
$$\mathbf{B}_1^H = \left[\mathbf{X}^H, (\mathbf{H}^1_{11})^H\mathbf{W}, \ldots, (\mathbf{H}^R_{11})^H\mathbf{W}\right].$$
Note that $\mathbf{W}$ and $\mathbf{X}$ are functions of the diagonal entries of the channels $\{(\mathbf{H}^r_{kj})^H\}_{k\neq j}$, $k,j=1,\ldots,K$, $r=1,2,\ldots,R$. More specifically, the entries of $(\mathbf{H}^r_{kj})^H$ are $h^r_{kj}(t)$, $t=1,2,\ldots,T$. $\mathbf{B}_1$ depends on $\{(\mathbf{H}^r_{kj})^H\}_{k\neq j}$ plus $(\mathbf{H}^r_{11})^H$, whose elements are $h^r_{11}(t)$, $t=1,\ldots,T$, $r=1,\ldots,R$. For notational convenience, let us denote these channel coefficients by $\mu_{mt}$, $t=1,2,\ldots,T$, $m=1,2,\ldots,N+R$. The elements of $\mathbf{B}_1^H$ are written as a monomial function of the random variables $\mu_{it}$, $i=1,\ldots,N+R$, $t=1,2,\ldots,T$, as follows:
$$\mathbf{B}_1^H(t,p) = \prod_{i=1}^{N+R}\left(\mu_{it}\right)^{n_i(p)},$$
where $n_i(p)\in\mathbb{Z}_{+}$ is the exponent of the random variable $\mu_{it}$ in column p. Note that for two different columns $p_1$ and $p_2$, $\left(n_1(p_1), n_2(p_1), \ldots, n_{N+R}(p_1)\right) \neq \left(n_1(p_2), n_2(p_2), \ldots, n_{N+R}(p_2)\right)$. More specifically, the structure of $\mathbf{X}^H$ is as follows:
$$\mathbf{X}^H = \begin{bmatrix} 1 & \mu_{11} & \mu_{21} & \cdots & \mu_{11}^n\,\mu_{21}^n\cdots\mu_{N1}^n \\ 1 & \mu_{12} & \mu_{22} & \cdots & \mu_{12}^n\,\mu_{22}^n\cdots\mu_{N2}^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & \mu_{1T} & \mu_{2T} & \cdots & \mu_{1T}^n\,\mu_{2T}^n\cdots\mu_{NT}^n \end{bmatrix},$$
and that of $(\mathbf{H}^r_{11})^H\mathbf{W}$ is
$$(\mathbf{H}^r_{11})^H\mathbf{W} = \begin{bmatrix} \kappa^r_1 & \kappa^r_1\mu_{11} & \kappa^r_1\mu_{21} & \cdots & \kappa^r_1\chi_1^{\,n-1} \\ \kappa^r_2 & \kappa^r_2\mu_{12} & \kappa^r_2\mu_{22} & \cdots & \kappa^r_2\chi_2^{\,n-1} \\ \vdots & \vdots & \vdots & & \vdots \\ \kappa^r_T & \kappa^r_T\mu_{1T} & \kappa^r_T\mu_{2T} & \cdots & \kappa^r_T\chi_T^{\,n-1} \end{bmatrix},$$
where $\kappa^r_t = \mu_{N+r,t}$, $t=1,2,\ldots,T$, $r=1,2,\ldots,R$, and $\chi_t^{\,n-1} = \mu_{1t}^{n-1}\,\mu_{2t}^{n-1}\cdots\mu_{Nt}^{n-1}$. The full matrix $\mathbf{B}_1^H$ is written as follows:
$$\mathbf{B}_1^H = \begin{bmatrix} 1 & \cdots & \chi_1^{\,n} & \kappa^1_1 & \cdots & \kappa^1_1\chi_1^{\,n-1} & \cdots & \kappa^R_1 & \cdots & \kappa^R_1\chi_1^{\,n-1} \\ 1 & \cdots & \chi_2^{\,n} & \kappa^1_2 & \cdots & \kappa^1_2\chi_2^{\,n-1} & \cdots & \kappa^R_2 & \cdots & \kappa^R_2\chi_2^{\,n-1} \\ \vdots & & \vdots & \vdots & & \vdots & & \vdots & & \vdots \\ 1 & \cdots & \chi_T^{\,n} & \kappa^1_T & \cdots & \kappa^1_T\chi_T^{\,n-1} & \cdots & \kappa^R_T & \cdots & \kappa^R_T\chi_T^{\,n-1} \end{bmatrix},$$
where $\chi_t^{\,n} = \mu_{1t}^{n}\,\mu_{2t}^{n}\cdots\mu_{Nt}^{n}$.
The determinant of matrix B 1 H can be written as follows:
$$\det(\mathbf{B}_1^H) = a_{1,1}C_{1,1} + a_{1,2}C_{1,2} + \cdots + a_{1,T}C_{1,T},$$
where $C_{1,j}$ is the cofactor obtained after removing the first row and the jth column, and $a_{1,j}$ is the corresponding entry of the first row. We now show that $\mathbf{B}_1^H$ is full rank by contradiction. Assuming that the determinant is zero implies one of the following two events:
  • μ m 1 , m { 1 , 2 , , N + R } takes a value equal to one of the roots of the polynomial equation.
  • All the cofactors of the polynomials are zero.
For the first event, note that none of the cofactors depends on the random variables $\mu_{m1}$, $m\in\{1,2,\ldots,N+R\}$. Since $\mu_{m1}$, $m\in\{1,2,\ldots,N+R\}$, are drawn from a continuous distribution, the probability that these random variables take one of the finitely many values that solve the polynomial equation is zero, almost surely. Therefore, the second event must happen with probability greater than zero, which implies
$$C_{1,p} = 0, \quad \forall p\in\{1,\ldots,T\}.$$
In particular, $C_{1,T} = 0$ with probability greater than zero. $C_{1,T} = 0$ implies that the determinant of the matrix obtained by stripping off the first row and the last column of $\mathbf{B}_1^H$ is equal to zero with non-zero probability. Repeating the process of stripping off one row and one column at a time, we end up with a $1\times 1$ matrix with value one, which contradicts the assumption that $C_{1,T}=0$. It is worth noting that the stripping procedure preserves the structure of the matrix, so the same argument (the cofactors do not depend on the coefficients of the removed row) applies at every step. To conclude, the determinant of $\mathbf{B}_1^H$ is non-zero almost surely, which implies that the desired symbols are decoded successfully at the receiver side with probability one.
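The full-rank argument above can also be checked numerically at toy sizes. The sketch below (ours, illustrative only) builds $\mathbf{B}_1^H$ from the monomial structure described in this appendix for K = 2, R = 1, n = 1 with real-valued channel gains, and confirms that it has full rank.

```python
# Toy full-rank check of B_1^H = [X^H, (H_11^1)^H W] for K = 2, R = 1, n = 1.
import itertools
import numpy as np

rng = np.random.default_rng(4)
n, N, R = 1, 2, 1                        # N = R*K*(K-1) with K = 2
T = R * n**N + (n + 1)**N                # = 5
mu = rng.uniform(0.5, 2.0, (N + R, T))   # rows: the two cross channels and the direct channel

def monomial_cols(max_exp, extra_row=None):
    cols = []
    for exps in itertools.product(range(max_exp + 1), repeat=N):
        col = np.prod([mu[i] ** e for i, e in enumerate(exps)], axis=0)
        if extra_row is not None:
            col = col * mu[extra_row]    # multiply by the direct channel kappa_t^r
        cols.append(col)
    return np.column_stack(cols)

X_H = monomial_cols(n)                      # columns of X^H: exponents 0..n
H11_W = monomial_cols(n - 1, extra_row=N)   # columns of (H_11^1)^H W: exponents 0..n-1
B1_H = np.hstack([X_H, H11_W])              # T x T
print(B1_H.shape, np.linalg.matrix_rank(B1_H))   # (5, 5) 5
```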

Appendix C. (Proof of Lemma 2)

Note that $\mathbf{A}\mathbf{X}+\mathbf{N}$ is a jointly complex Gaussian vector with zero mean and covariance $P\mathbf{A}\mathbf{A}^H + \sigma^2\mathbf{I}$. From [27], $h(\mathbf{A}\mathbf{X}+\mathbf{N})$ is written as
$$h(\mathbf{A}\mathbf{X}+\mathbf{N}) = \log\left((\pi e)^M\det\left(P\mathbf{A}\mathbf{A}^H + \sigma^2\mathbf{I}_M\right)\right).$$
It is worth noting that $\mathbf{A}\mathbf{A}^H$ is positive semi-definite, with eigenvalue decomposition $\mathbf{Q}\mathbf{D}\mathbf{Q}^H$, where $\mathbf{D}$ is a diagonal matrix with r non-zero eigenvalues $\lambda_1,\lambda_2,\ldots,\lambda_r$, and $r = \mathrm{rank}(\mathbf{A}\mathbf{A}^H) = \mathrm{rank}(\mathbf{A})$. Then,
$$h(\mathbf{A}\mathbf{X}+\mathbf{N}) = \log\left((\pi e)^M\det\left(P\mathbf{Q}\mathbf{D}\mathbf{Q}^H + \sigma^2\mathbf{I}_M\right)\right).$$
Next, we use Sylvester's determinant identity, i.e., $\det(\mathbf{I}+\mathbf{A}\mathbf{B}) = \det(\mathbf{I}+\mathbf{B}\mathbf{A})$. Using this identity, we have
$$h(\mathbf{A}\mathbf{X}+\mathbf{N}) = \log\left((\pi e)^M\det\left(P\mathbf{D}\mathbf{Q}^H\mathbf{Q} + \sigma^2\mathbf{I}_M\right)\right).$$
Since $\mathbf{Q}$ is a unitary matrix (i.e., $\mathbf{Q}^H\mathbf{Q} = \mathbf{Q}\mathbf{Q}^H = \mathbf{I}$), we have the following:
$$h(\mathbf{A}\mathbf{X}+\mathbf{N}) = \log\left((\pi e)^M\prod_{i=1}^{r}\left(\lambda_i P + \sigma^2\right)\right) = \log(\pi e)^M + \sum_{i=1}^{r}\log\left(\lambda_i P + \sigma^2\right).$$
This completes the proof of Lemma 2.
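Lemma 2 is easy to verify numerically. The sketch below (ours) compares the covariance-determinant expression with the eigenvalue sum for a random rank-deficient $\mathbf{A}$, using unit-variance noise ($\sigma^2 = 1$) as in the system model and interpreting the $\lambda_i$ as the non-zero eigenvalues of $\mathbf{A}\mathbf{A}^H$, as in the proof above.

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, P, sigma2 = 5, 3, 10.0, 1.0
A = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)

cov = P * (A @ A.conj().T) + sigma2 * np.eye(M)
h_direct = M * np.log(np.pi * np.e) + np.log(np.linalg.det(cov).real)

lam = np.linalg.eigvalsh(A @ A.conj().T)   # eigenvalues of A A^H
lam = lam[lam > 1e-10]                     # keep the rank(A) non-zero ones
h_lemma = M * np.log(np.pi * np.e) + np.sum(np.log(lam * P + sigma2))

assert np.isclose(h_direct, h_lemma)
```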

References

  1. Maddah-Ali, M.A.; Tse, D. Completely stale transmitter channel state information is still very useful. IEEE Trans. Inf. Theory 2012, 58, 4418–4431.
  2. Cadambe, V.R.; Jafar, S.A. Interference alignment and degrees of freedom of the K-user interference channel. IEEE Trans. Inf. Theory 2008, 54, 3425–3441.
  3. Ghasemi, A.; Motahari, A.S.; Khandani, A.K. On the degrees of freedom of X channel with delayed CSIT. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), St. Petersburg, Russia, 31 July–5 August 2011; pp. 767–770.
  4. Maggi, L.; Cottatellucci, L. Retrospective interference alignment for interference channels with delayed feedback. In Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC), Shanghai, China, 1–4 April 2012; pp. 453–458.
  5. Abdoli, M.J.; Ghasemi, A.; Khandani, A.K. On the degrees of freedom of K-user SISO interference and X channels with delayed CSIT. IEEE Trans. Inf. Theory 2013, 59, 6542–6561.
  6. Kang, M.G.; Choi, W. Ergodic interference alignment with delayed feedback. IEEE Signal Process. Lett. 2013, 20, 511–514.
  7. Maleki, H.; Jafar, S.A.; Shamai, S. Retrospective interference alignment over interference networks. IEEE J. Sel. Top. Signal Process. 2012, 6, 228–240.
  8. Castanheira, D.; Silva, A.; Gameiro, A. Retrospective Interference Alignment: Degrees of Freedom Scaling With Distributed Transmitters. IEEE Trans. Inf. Theory 2017, 63, 1721–1730.
  9. Yener, A.; Ulukus, S. Wireless physical-layer security: Lessons learned from information theory. Proc. IEEE 2015, 103, 1814–1825.
  10. Mukherjee, P.; Tandon, R.; Ulukus, S. Physical Layer Security with Delayed, Hybrid and Alternating Channel State Knowledge, Information Theoretic Security and Privacy of Information Sources; Cambridge University Press: Cambridge, UK, 2017.
  11. Liang, Y.; Poor, H.V.; Shamai, S. Information theoretic security. Found. Trends® Commun. Inf. Theory 2009, 5, 355–580.
  12. He, X.; Yener, A. K-user interference channels: Achievable secrecy rate and degrees of freedom. In Proceedings of the IEEE Information Theory Workshop (ITW) on Networking and Information Theory, Volos, Greece, 10–12 June 2009; pp. 336–340.
  13. Koyluoglu, O.O.; El Gamal, H.; Lai, L.; Poor, H.V. Interference alignment for secrecy. IEEE Trans. Inf. Theory 2011, 57, 3323–3332.
  14. Xie, J.; Ulukus, S. Real interference alignment for the K-user Gaussian interference compound wiretap channel. In Proceedings of the 48th Annual Allerton Conference on Communication, Control, and Computing, Allerton, IL, USA, 29 September–1 October 2010; pp. 1252–1257.
  15. Bassily, R.; Ulukus, S. Ergodic secret alignment. IEEE Trans. Inf. Theory 2012, 58, 1594–1611.
  16. Gou, T.; Jafar, S.A. On the secure degrees of freedom of wireless X networks. In Proceedings of the 46th Annual Allerton Conference on Communication, Control, and Computing, Urbana-Champaign, IL, USA, 23–26 September 2008; pp. 826–833.
  17. Nafea, M.; Yener, A. How many antennas does a cooperative jammer need for achieving the degrees of freedom of multiple antenna Gaussian channels in the presence of an eavesdropper? In Proceedings of the 51st Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 2–4 October 2013; pp. 774–780.
  18. Yang, S.; Kobayashi, M. Secure communication in K-user multi-antenna broadcast channel with state feedback. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Hong Kong, China, 14–19 June 2015; pp. 1976–1980.
  19. Xie, J.; Ulukus, S. Secure degrees of freedom of K-user Gaussian interference channels: A unified view. IEEE Trans. Inf. Theory 2015, 61, 2647–2661.
  20. Xie, J.; Ulukus, S. Secure degrees of freedom of one-hop wireless networks. IEEE Trans. Inf. Theory 2014, 60, 3359–3378.
  21. Mukherjee, P.; Ulukus, S. Secrecy in MIMO Networks with no eavesdropper CSIT. IEEE Trans. Commun. 2017, 65, 4382–4391.
  22. Mukherjee, P.; Tandon, R.; Ulukus, S. Secure degrees of freedom region of the two-user MISO broadcast channel with alternating CSIT. IEEE Trans. Inf. Theory 2017, 63, 3823–3853.
  23. Liang, Y.; Kramer, G.; Poor, H.V.; Shamai, S. Compound wiretap channels. EURASIP J. Wirel. Commun. Netw. 2009, 2009, 5.
  24. Liu, R.; Maric, I.; Spasojevic, P.; Yates, R.D. Discrete memoryless interference and broadcast channels with confidential messages: Secrecy rate regions. IEEE Trans. Inf. Theory 2008, 54, 2493–2507.
  25. Horn, R.A.; Johnson, C.R. Matrix Analysis; Cambridge University Press: Cambridge, UK, 2012.
  26. Seif, M.; Tandon, R.; Li, M. Secure Retrospective Interference Alignment. arXiv 2018, arXiv:1801.03494.
  27. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2012.
Figure 1. K-user interference channel (IC) with secrecy and delayed CSIT. The model in (a) has confidential message (CM) constraints, the model in (b) assumes the presence of an external eavesdropper (EE), and the model in (c) has both confidentiality and secrecy constraints (CM and EE).
Figure 2. Comparison of achievable degrees of freedom (DoF) with delayed CSIT: with and without secrecy constraints.
Figure 3. Stochastic encoding over transmission blocks for our proposed scheme.
Figure 4. Comparison between the optimal value of R (number of rounds in phase one of the scheme) and its lower bound.
Figure 5. (a) Block diagram for the transmission scheme and (b) time duration of the phases.
Figure 6. Graphical representation for the first phase of the proposed scheme.
Figure 7. Graphical representation for the second phase of the proposed scheme.
Table 1. Summary of results on the K-user interference channel with different secrecy constraints and channel state information at transmitters (CSIT) models. The highlighted results are from this paper. These results show that the sum secure degrees of freedom (SDoF) scales with $\sqrt{K}$.
Table 2. Numerical comparison between the sum SDoF and the sum DoF in [8].

K                                                      | 5      | 6      | 7      | 8      | 9      | 10     | 11     | 12
SDoF_IC-CM^ach. = SDoF_IC-CM-EE^ach.                   | 0.3571 | 0.4500 | 0.5185 | 0.5714 | 0.6136 | 0.8333 | 0.9059 | 0.9697
SDoF_IC-EE^ach.                                        | 0.4286 | 0.5000 | 0.5556 | 0.6000 | 0.8000 | 0.8750 | 0.9412 | 1.0000
DoF^ach. [8]                                           | 1.4286 | 1.5    | 1.556  | 1.6    | 1.8    | 1.875  | 1.9412 | 2
