
Scalable Network Coding for Heterogeneous Devices over Embedded Fields

1 Department of Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
2 Network Technology Lab, Huawei Technologies Co., Ltd., Shenzhen 518000, China
3 Institute for Network Sciences and Cyberspace, Tsinghua University, Beijing 100084, China
* Author to whom correspondence should be addressed.
Entropy 2022, 24(11), 1510; https://doi.org/10.3390/e24111510
Submission received: 29 September 2022 / Revised: 12 October 2022 / Accepted: 17 October 2022 / Published: 22 October 2022
(This article belongs to the Special Issue Information Theory and Network Coding II)

Abstract: In complex network environments, there always exist heterogeneous devices with different computational powers. In this work, we propose a novel scalable random linear network coding (RLNC) framework based on embedded fields, so as to endow heterogeneous receivers with different decoding capabilities. In this framework, the source linearly combines the original packets over embedded fields based on a precoding matrix and then encodes the precoded packets over GF(2) before transmission to the network. After justifying the arithmetic compatibility over different finite fields in the encoding process, we derive a necessary and sufficient condition for decodability over different fields. Moreover, we theoretically study the construction of an optimal precoding matrix in terms of decodability. The numerical analysis in classical wireless broadcast networks illustrates that the proposed scalable RLNC not only guarantees better decoding compatibility over different fields compared with classical RLNC over a single field, but also outperforms Fulcrum RLNC in terms of decoding performance over GF(2). Moreover, we take the sparsity of the received binary coding vectors into consideration, and demonstrate that for a large enough batch size, this sparsity hardly affects the completion delay performance in a wireless broadcast network.

1. Introduction

In a communication network, linear network coding (LNC) advocates that intermediate nodes linearly combine received messages before transmission, so as to improve various aspects of network performance, such as throughput, reliability, and transmission delay. Random linear network coding (RLNC) provides a distributed and asymptotically optimal approach for linear coding, with coefficients randomly selected from a base field [1]. It shows the potential to improve the performance of unreliable or topologically unknown networks such as D2D networks [2], ad hoc networks [3], and wireless broadcast networks [4,5,6,7].
One of the reasons that hinder large-scale practical application of RLNC is the compatibility issue arising from different computational overheads. In complex network environments, there exist heterogeneous devices with different computational powers. Specifically, sources and certain receivers usually have ample computational power, while a large number of intermediate nodes and other receivers are computationally constrained, such as the data collectors in ad hoc networks or low-cost devices in the Internet of Things paradigm [8]. It turns out that the coding compatibility among heterogeneous devices with different computational powers has to be considered in RLNC design.
This paper proposes a novel framework for scalable RLNC design based on embedded fields. The adjective scalable means that the finite fields chosen in the encoding process are not limited to a single base field but a set of embedded fields which consists of a large finite field and all its subfields. The encoding process at the source consists of two stages. In stage 1, based on a precoding matrix, all original packets are linearly combined over different finite fields to form precoded packets. In stage 2, the final packets to be transmitted are formed by randomly combining the precoded packets over GF(2). The heterogeneous receivers can recover the original packets over different fields under different computational constraints.
It is worthwhile to remark that prior to this work, there have been studies [9,10,11,12,13,14] that took different fields into account in the course of RLNC design. On one hand, the so-called Telescopic codes [9,10,11] and Revolving codes [12] considered different fields with the aim of reducing the decoding complexity. However, they assume that all receivers have the same decoding capability, that is, they all need to support the arithmetic over the largest defined finite field. On the other hand, a flexible RLNC scheme called Fulcrum [13,14] makes use of GF(2) and its extension field GF(2^8) for code design, and it supports receivers that decode over both fields. Actually, Fulcrum can be regarded as a special instance of our proposed framework, while the decoding rule over GF(2) considered therein is weaker than the one proposed in this paper. In addition, there is limited discussion on the construction of an optimal encoding matrix for Fulcrum.
The main contributions of this paper are summarized as follows.
  • We mathematically justify how to make the arithmetic over different finite fields compatible.
  • We derive a necessary and sufficient condition for decodability at a receiver over different finite fields. In particular, the proposed decoding rule over GF(2) is stronger than the one proposed in Fulcrum.
  • We theoretically study the construction of an optimal precoding matrix in terms of the decodability performance.
  • By numerical analysis in classical wireless broadcast networks, we demonstrate that the proposed scalable RLNC not only guarantees a better decoding compatibility over different fields compared with classical RLNC over a single field, but also provides a better decoding performance over GF(2) in terms of smaller average completion delay compared with Fulcrum.
  • In numerical analysis, we also take the sparsity of the received binary coding vector into consideration, and demonstrate that for a large enough batch size, this sparsity does not affect the completion delay performance much in a wireless broadcast network.
This paper is structured as follows. Section 2 reviews the mathematical fundamentals of embedded fields. Section 3 first presents the general principles of the proposed scalable RLNC framework and then formulates the encoding and decoding processes. Section 4 investigates the design of an optimal precoding matrix. Section 5 numerically analyzes the proposed scalable RLNC and compares its performance with classical RLNC over a single finite field as well as with Fulcrum; moreover, it takes sparsity into consideration and illustrates its influence on performance. The conclusion is given in Section 6.

2. Mathematical Fundamentals

In our proposed scalable RLNC framework, different receivers will be able to recover the original packets over different finite fields, depending on their different computational powers. In order to make the arithmetic over different finite fields compatible, we need the concept of embedded fields, which is briefly reviewed in this section. One may refer to [15] for a detailed introduction to finite fields.
Recall that a finite field GF(2^{d_1}) is a subfield of GF(2^{d_2}) if and only if d_1 | d_2. Thus, GF(2^{d_1}), GF(2^{d_2}), ..., GF(2^{d_D}) are said to form embedded fields F if d_1 < d_2 < ... < d_D and d_1 | d_2 | ... | d_D. For arbitrary GF(2^{d_i}) and GF(2^{d_j}) in F with i < j, as GF(2^{d_j}) can be regarded as GF(2^{d_i})^{d_j/d_i}, it can be expressed not only as a d_j-dimensional vector space over GF(2), but also as a (d_j/d_i)-dimensional vector space over GF(2^{d_i}) at the same time.
Example 1.
Assume that d_1 = 1, d_2 = 2, d_3 = 4. The field GF(2^4) can be expressed as a four-dimensional vector space over GF(2) as well as a two-dimensional vector space over GF(2^2). Let α be a root of the irreducible polynomial x^2 + x + 1 over GF(2), so that GF(2^2) = {0, 1, α, α^2}. The polynomial g(x) = x^4 + x + 1 is irreducible over GF(2) but reducible over GF(2^2), and can be factorized as g(x) = (x^2 + x + α)(x^2 + x + α^2). Let β be a root of the irreducible polynomial f(x) = x^2 + x + α over GF(2^2), so that g(β) = β^4 + β + 1 = 0 as well. Then, every element in GF(2^4) can be expressed as a_0 + a_1 β + a_2 β^2 + a_3 β^3 with a_j ∈ {0, 1}. Moreover, α = β^2 + β = β^5, so that GF(2^2) = {0, β^0, β^5, β^10}. Based on this, every element in GF(2^4) can also be uniquely expressed as b_0 + b_1 β with b_0, b_1 ∈ GF(2^2), which is summarized in Figure 1. In Figure 1, the integers 0 to 15 are the decimal representations of the binary 4-tuple (a_3, a_2, a_1, a_0); e.g., 13 refers to 1 + β^2 + β^3, which can be expressed as 1 + αβ.
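The identities in this example can be checked mechanically. The following Python sketch (ours, not part of the paper) implements GF(2^4) as 4-bit integers reduced modulo g(x) = x^4 + x + 1, and verifies that α = β^5 generates the subfield GF(2^2) and that β is indeed a root of f(x) = x^2 + x + α:

```python
# Carry-less arithmetic in GF(2^4) built on g(x) = x^4 + x + 1.
# Elements are 4-bit integers encoding (a3, a2, a1, a0) -> a3*x^3 + a2*x^2 + a1*x + a0.

def gf16_mul(a, b, poly=0b10011):  # poly encodes x^4 + x + 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:            # reduce modulo g(x) when the degree reaches 4
            a ^= poly
        b >>= 1
    return r

def gf16_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf16_mul(r, a)
    return r

beta = 0b0010                      # beta is a root of g(x), i.e. the element "x"
alpha = gf16_pow(beta, 5)          # alpha = beta^5

# alpha = beta^2 + beta, and alpha^2 + alpha + 1 = 0, so {0, 1, alpha, alpha^2}
# is the subfield GF(2^2) sitting inside GF(2^4).
assert alpha == 0b0110
assert gf16_mul(alpha, alpha) ^ alpha ^ 1 == 0

# beta is a root of f(x) = x^2 + x + alpha over GF(2^2), consistent with the
# factorization g(x) = (x^2 + x + alpha)(x^2 + x + alpha^2).
assert gf16_mul(beta, beta) ^ beta ^ alpha == 0
```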

3. Framework Description

3.1. General Principles

In this paper, we focus on the construction of a general scalable RLNC framework over embedded fields, so we attempt to alleviate the influence of specific network models. In the course of the framework description, we merely classify the nodes in a network into three types: a unique source node, intermediate nodes, and receiver nodes. Assume that the source has the highest computational power, so that it can generate coded packets over embedded fields. The intermediate nodes in the network just recode the received data packets over GF(2), so as to fully reduce the overall computational complexity in the network. The heterogeneous receivers have different decoding capabilities. Under its own computational constraint, every receiver can judge whether sufficient coded packets have been received for decoding. More importantly, even though a receiver may not have sufficient computational power to deal with the arithmetic in a larger field over which some received packets are coded, it can still fully utilize these packets in the process of decoding instead of directly throwing them away. For instance, assume that two received packets w_1 and w_2 are respectively equal to p_1 + p_2 + α p_3 and p_2 + α p_3, where p_1, p_2, p_3 are original packets generated by the source node and α is an element not equal to 0 or 1 in the field GF(2^{d_D}). For the receiver under the strongest field constraint GF(2), the original packet p_1 can be recovered by w_1 + w_2 instead of directly throwing w_1, w_2 away. Consequently, the proposed scalable RLNC framework not only ensures the decoding capabilities of heterogeneous network devices but also fully reduces the required number of received packets for decoding.
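To make the last point concrete, here is a small Python sketch (ours, not from the paper; the byte values are arbitrary): the higher-field term α·p_3 is treated as an opaque byte string u, and the GF(2)-constrained receiver cancels it by XOR because it appears identically in both received packets.

```python
# GF(2)-only recovery of p1 from w1 = p1 + p2 + alpha*p3 and w2 = p2 + alpha*p3.
p1 = bytes([0xDE, 0xAD])          # original packets (arbitrary example bytes)
p2 = bytes([0xBE, 0xEF])
u  = bytes([0x12, 0x34])          # stands in for alpha*p3, opaque to this receiver

xor = lambda x, y: bytes(a ^ b for a, b in zip(x, y))

w1 = xor(xor(p1, p2), u)          # w1 = p1 + p2 + alpha*p3
w2 = xor(p2, u)                   # w2 = p2 + alpha*p3

# The receiver never evaluates alpha*p3; the term cancels over GF(2).
assert xor(w1, w2) == p1
```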

3.2. Encoding and Recoding

In every batch, the source s has n original packets p_i, 1 ≤ i ≤ n, each of which is an M-dimensional column vector over GF(2), to be transmitted to receivers. Without loss of generality, assume M is divisible by 2^{2^D}, which can be achieved by padding dummy bits into every packet. With increasing D, the doubly exponentially increasing packet length M may cause the practical issue of an excessive padding overhead. Such an issue can be effectively solved based on the methods proposed in [16,17].
The encoding process at s has two stages. First, based on p_i, 1 ≤ i ≤ n, for each 1 ≤ d ≤ D, extra r_d precoded packets are generated based on coding coefficients selected from GF(2^{2^d}). In this process, every original packet p_i is regarded as a vector of m_d = M/2^d symbols, each of which consists of 2^d bits and represents an element in GF(2^{2^d}). The multiplication of p_i by a coefficient in GF(2^{2^d}) is thus realized by symbol-wise multiplication. Note that when d_1 < d_2, the coefficients in GF(2^{2^{d_1}}) also appear in GF(2^{2^{d_2}}), but the coding arithmetic changes. The mathematical fundamentals in the previous section guarantee the coding compatibility, which will be illustrated in the next example.
Example 2.
Assume M = 4, n = 2, d_1 = 1 and d_2 = 2. Based on two original packets p_1 = [1 0 0 0]^T and p_2 = [1 1 0 1]^T, a precoded packet is to be generated over GF(4) = {0, 1, α, α^2} by the linear combination α p_1 + α^2 p_2. First regard p_1 and p_2 as vectors of 2 symbols over GF(2^2), that is, p_1 = [α 0]^T and p_2 = [α+1 1]^T = [α^2 1]^T. Then,

α p_1 + α^2 p_2 = [α^2 0]^T + [α α^2]^T = [1 α+1]^T = [0 1 1 1]^T.   (1)

According to Figure 1, in GF(2^4), α = β^2 + β = β^5 and α^2 = β^2 + β + 1 = β^10. As every element in GF(2^4) = GF(4^2) can be uniquely expressed as b_0 + b_1 β with b_0, b_1 ∈ GF(4), every four-dimensional vector [a_3 a_2 a_1 a_0]^T over GF(2) corresponds to the following element in GF(16):

[a_3 a_2 a_1 a_0]^T = a_3 β^6 + a_2 β + a_1 β^5 + a_0.

Based on this rule, p_1 = β^6 and p_2 = β^6 + β + 1. Consequently, β^5 p_1 + β^10 p_2 = β + β^10 = β + β^5 + 1, which is [1 α+1]^T over GF(4) and [0 1 1 1]^T over GF(2), the same as (1) obtained by the GF(4) arithmetic.
After stage 1, there are in total N = n + r_1 + r_2 + ... + r_D precoded packets, the first n of which are just the original packets. Let G = [I_n A_1 ⋯ A_D] denote the n × N precoding matrix for the N precoded packets, where I_n refers to the n × n identity matrix and A_d is a coefficient matrix defined over GF(2^{2^d}).
In stage 2, every coded packet c the source finally sends out is a random GF(2)-linear combination of the N precoded packets, that is,
c = [p_1 p_2 ⋯ p_n] G h,
for some randomly generated N-dimensional column vector h over GF(2), which is referred to as the coding vector for packet c. For a systematic scheme, the first n coded packets c_1, ..., c_n transmitted by the source are just the n original packets; that is, the coding vector for c_j is just the N-dimensional unit vector with the j-th position equal to 1. Every coded packet affixes its coding vector to its header. In contrast, the information of the precoding matrix G can either be affixed to the header of every packet or be preset to be known at every receiver.
At an intermediate node, the coded packets it transmits are GF(2)-linear combinations of its received packets. Specifically, if an intermediate node receives coded packets c_1, ..., c_l with respective coding vectors h_1, ..., h_l, then it recodes them to generate a new coded packet c to be transmitted as
c = a_1 c_1 + ... + a_l c_l,
where a_1, ..., a_l are random binary coefficients. The concomitant coding vector for c is a_1 h_1 + ... + a_l h_l.
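Since the recoding coefficients are binary, recoding reduces to XOR of packets and of their coding vectors. A minimal sketch (ours; the function name and packet layout are illustrative, not from the paper):

```python
import random

def recode(packets, vectors):
    """GF(2) recoding: XOR a random nonempty subset of buffered packets.
    packets: list of equal-length byte strings; vectors: list of N-bit lists."""
    coeffs = [random.randint(0, 1) for _ in packets]
    if not any(coeffs):
        coeffs[random.randrange(len(coeffs))] = 1   # avoid the all-zero combination
    c = bytes(len(packets[0]))
    h = [0] * len(vectors[0])
    for a, pkt, vec in zip(coeffs, packets, vectors):
        if a:
            c = bytes(x ^ y for x, y in zip(c, pkt))  # recoded payload
            h = [x ^ y for x, y in zip(h, vec)]       # concomitant coding vector
    return c, h

pkts = [bytes([1, 2]), bytes([3, 4])]
vecs = [[1, 0, 0], [0, 1, 0]]       # systematic coding vectors (N = 3)
c, h = recode(pkts, vecs)
assert h in ([1, 0, 0], [0, 1, 0], [1, 1, 0])
```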
It is worthwhile to note that prior to this work, a flexible RLNC scheme called Fulcrum has been investigated in [13,14]. Fulcrum can be regarded as a special instance of our proposed framework with the setting D = 3 and r_1 = r_2 = 0.

3.3. Decoding

Define a linear map φ: GF(2)^N → GF(2^{2^D})^n by
φ(v) = G v
for every column vector v ∈ GF(2)^N. The notation φ also applies to a set V of vectors: φ(V) = {φ(v) : v ∈ V}.
Moreover, let U_d, 0 ≤ d ≤ D, denote the vector subspace of GF(2)^N spanned by the unit vectors u_1, u_2, ..., u_{∑_{d'=0}^{d} r_{d'}}, where r_0 = n and a unit vector u_j refers to an N-dimensional vector with the only nonzero entry at position j.
For a receiver t, assume GF(2^{2^{d_t}}) is the largest field it can compute over, and m packets have been received. Let H denote the N × m matrix over GF(2) obtained by columnwise juxtaposition of the coding vectors of the m received packets, and ⟨H⟩ the column space (over GF(2)) of H.
In order to recover the original packets under the field constraint GF(2^{2^{d_t}}), we need to make use of coded packets with coding vectors in U_{d_t} ∩ ⟨H⟩ rather than in ⟨H⟩. This is because the last ∑_{d'>d_t} r_{d'} entries in every coding vector correspond to the precoded packets generated by the source over fields larger than GF(2^{2^{d_t}}). We next characterize the following necessary and sufficient condition for decodability at t up to the field constraint GF(2^{2^{d_t}}).
Theorem 1.
Based on the m received packets, the original n source packets can be recovered at t if and only if
dim(φ(U_{d_t} ∩ ⟨H⟩)) = n.   (2)
Proof. 
First assume (2) holds. Then, there must exist n vectors, denoted by v_1, ..., v_n, in U_{d_t} ∩ ⟨H⟩ such that
dim(φ({v_1, ..., v_n})) = n.   (3)
Consequently, there exists an m × n matrix K over GF(2) such that [v_1 ⋯ v_n] = H K, and (3) implies that G H K has full rank n. As the last ∑_{d'>d_t} r_{d'} rows in H K are all zero, the elements in G H K belong to GF(2^{2^{d_t}}), and hence there exists an n × n matrix D over GF(2^{2^{d_t}}) subject to G H K D = I_n; that is, the original packets can be recovered at t.
Next assume that the original n packets can be recovered at t. Then, there exists an m × n matrix D over GF(2^{2^{d_t}}) such that G H D = I_n. Further, D can be written as D_1 D_2, where D_1, D_2 are over GF(2^{2^{d_t}}) and of respective size m × n and n × n. Thus, G H D_1 is a matrix over GF(2^{2^{d_t}}) of full rank n. Recall that none of the elements in the last ∑_{d'>d_t} r_{d'} columns of G is in GF(2^{2^{d_t}}). Thus, every element in G H D_1 belonging to GF(2^{2^{d_t}}) implies that the last ∑_{d'>d_t} r_{d'} rows in H D_1 are all zero. Moreover, as H is defined over GF(2), we can further deduce that D_1 can be written as D_1' D_1'' for an m × n matrix D_1' over GF(2) and an n × n matrix D_1'' over GF(2^{2^{d_t}}), such that the last ∑_{d'>d_t} r_{d'} rows in H D_1' are all zero too; that is, the columns of H D_1' belong to U_{d_t} ∩ ⟨H⟩. In addition, the full rank of G H D_1 implies the full rank of G H D_1'. Equation (2) is thus proved to hold. ☐
Based on the above theorem, we can further characterize the following equivalent condition for decodability at a receiver from the perspective of matrix rank. For 0 ≤ d_t ≤ D, denote by H_{d_t} the (∑_{d'>d_t} r_{d'}) × m submatrix of H obtained by restricting H to its last ∑_{d'>d_t} r_{d'} rows.
Corollary 1.
Based on the m received packets, the original n source packets can be recovered at t if and only if
rank(G (H K_{d_t})) = n,   (4)
where K_{d_t} is an m × (m − rank(H_{d_t})) matrix whose columns constitute a basis for the null space of H_{d_t}, so that H_{d_t} K_{d_t} = 0.
Note that the column space of H K_{d_t} is exactly the subspace U_{d_t} ∩ ⟨H⟩ in (2), and all entries in the last ∑_{d'>d_t} r_{d'} rows of H K_{d_t} are zero, so the computation of (4) only involves arithmetic over GF(2^{2^{d_t}}). Moreover, in order to check (4), it suffices to select rank(H K_{d_t}) linearly independent column vectors in H K_{d_t}, juxtapose them into a matrix H', and check whether rank(G H') = n. With the number m of received packets at t increasing, the matrices K_{d_t} and H' can be established in the following iterative way.
Algorithm 1.
Denote by h_m the N-dimensional coding vector over GF(2) for the m-th received packet at receiver t. Without loss of generality, assume that there is at least one nonzero entry in h_m. Let h_{d_t,m} denote the vector restricted from h_m to its last ∑_{d'>d_t} r_{d'} entries. The next procedure efficiently produces the desired K_{d_t} and H'.
Initialization. Let K_{d_t}, H', B and B_{d_t} be empty matrices. They are to consist of m rows, N rows, N rows and ∑_{d'>d_t} r_{d'} rows, respectively.
Iteration. Consider the case that the m-th packet with coding vector h_m has just been received, and assume receiver t has dealt with the former m − 1 coding vectors h_j, 1 ≤ j < m. Perform either of the following two steps depending on h_{d_t,m}.
  • If h_{d_t,m} is a zero vector, then update K_{d_t} as
    K_{d_t} = [ K_{d_t}  0
                0        1 ],   (5)
    that is, append a zero row at the bottom and then the unit column [0 ⋯ 0 1]^T on the right, and respectively append a zero column vector to B and to B_{d_t} on the right. Further check whether h_m is a GF(2)-linear combination of columns in H'. If so, keep H' unchanged. Otherwise, update H' as [H' h_m]. The iteration for the current value of m completes.
  • If h_{d_t,m} is not a zero vector, check whether it is a GF(2)-linear combination of columns in B_{d_t}. If not, respectively update B, B_{d_t} and K_{d_t} as
    B = [B h_m],   B_{d_t} = [B_{d_t} h_{d_t,m}],   K_{d_t} = [ K_{d_t}
                                                                0 ],   (6)
    that is, append h_m and h_{d_t,m} as new columns and append a zero row to K_{d_t}; the iteration for the current value of m completes. Otherwise, perform the following steps. First compute an (m − 1)-dimensional vector k subject to B_{d_t} k = h_{d_t,m}, and then update K_{d_t} as
    K_{d_t} = [ K_{d_t}  k
                0        1 ].   (7)
    Further compute a new vector v = B k + h_m, and respectively append a zero column vector to B and to B_{d_t} on the right. Check whether v is a GF(2)-linear combination of columns in H'. If so, keep H' unchanged. Otherwise, update H' as [H' v]. The iteration for the current value of m completes.
Note that after the above procedure, the sum of the number of nonzero columns in B_{d_t} and the number of columns in K_{d_t} is m. The nonzero columns of B_{d_t} always form a basis of the column space of H_{d_t} = [h_{d_t,1} ⋯ h_{d_t,m}]. The columns of K_{d_t} always form a basis of the null space of H_{d_t}. The columns of H' always form a basis of the column space of H K_{d_t}, where H = [h_1 ⋯ h_m].
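For reference, the objects maintained by Algorithm 1 can also be computed in one batch by plain GF(2) Gaussian elimination. The sketch below (ours, not the paper's pseudocode) computes a kernel basis of H_{d_t} and the rank of H K_{d_t}, reproducing the dimensions that appear in Example 3 below:

```python
# Matrices are lists of GF(2) rows.

def gf2_rref(rows, ncols):
    """Row-reduce over GF(2); returns (reduced rows, pivot column indices)."""
    rows = [r[:] for r in rows]
    pivots, r = [], 0
    for c in range(ncols):
        for i in range(r, len(rows)):
            if rows[i][c]:
                rows[r], rows[i] = rows[i], rows[r]
                for j in range(len(rows)):
                    if j != r and rows[j][c]:
                        rows[j] = [x ^ y for x, y in zip(rows[j], rows[r])]
                pivots.append(c)
                r += 1
                break
    return rows[:r], pivots

def gf2_nullspace(rows, ncols):
    """Basis of {k in GF(2)^ncols : rows * k = 0}."""
    red, pivots = gf2_rref(rows, ncols)
    basis = []
    for f in (c for c in range(ncols) if c not in pivots):
        k = [0] * ncols
        k[f] = 1
        for row, p in zip(red, pivots):
            k[p] = row[f]
        basis.append(k)
    return basis

# H from Example 3: N = 5, m = 4 received coding vectors as columns; with d_t = 1
# and r_2 = 1, the restricted matrix H_dt is just the last row of H.
H = [[1, 0, 0, 1],
     [0, 1, 0, 1],
     [0, 0, 1, 1],
     [0, 0, 0, 1],
     [1, 1, 1, 1]]
H_dt = [H[-1]]

K = gf2_nullspace(H_dt, 4)        # kernel basis: the columns of K_dt
HK = [[sum(H[i][j] & k[j] for j in range(4)) % 2 for k in K] for i in range(5)]

assert len(K) == 3                              # m - rank(H_dt) = 4 - 1
assert all(v == 0 for v in HK[-1])              # last row of H*K_dt is zero
_, piv = gf2_rref([list(r) for r in zip(*HK)], 5)
assert len(piv) == 3                            # dim of the usable subspace is 3
```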
Example 3.
Assume that D = 2, n = r_0 = 3, and r_1 = r_2 = 1. The 3 × 5 precoding matrix G is designed as

G = [ 1  0  0  α    β
      0  1  0  α^2  β
      0  0  1  1    β ],

where β is a primitive element of GF(2^4) and α = β^5, which can be regarded as a primitive element of GF(2^2) ⊂ GF(2^4).
Assume that at a receiver t, GF(2^2) is the largest field for computation, and 4 packets have been received, with the columnwise juxtaposition of the respective coding vectors prescribed by

H = [ 1  0  0  1
      0  1  0  1
      0  0  1  1
      0  0  0  1
      1  1  1  1 ].

As H_{d_t} = [1 1 1 1] herein, the aforementioned iterative approach can yield the following K_{d_t} and concomitant H':

K_{d_t} = [ 1  1  1          H' = [ 1  1  0
            1  0  0                 1  0  1
            0  1  0                 0  1  1
            0  0  1 ],              0  0  1
                                    0  0  0 ],

where the columns of H' form a basis for the subspace U_{d_t} ∩ ⟨H⟩. Consequently,

G H' = [ 1  1  α
         1  0  1+α^2
         0  1  0 ].

Since 1 + α + α^2 = 0 in GF(2^2), rank(G H') = 2; that is, (4) does not hold. Therefore, the receiver needs to receive more packets before decoding all original packets.
Assume h_5 = [1 0 0 1 1]^T is the coding vector for the 5th received packet. Then, the matrix K_{d_t} is dynamically updated to

K_{d_t} = [ 1  1  1  1
            1  0  0  0
            0  1  0  0
            0  0  1  0
            0  0  0  1 ],

but there is no change in H', because H · [1 0 0 0 1]^T belongs to the column space of H'.
Assume h_6 = [0 0 1 0 0]^T is the coding vector for the 6th received packet. First, dynamically update K_{d_t} to

K_{d_t} = [ 1  1  1  1  0
            1  0  0  0  0
            0  1  0  0  0
            0  0  1  0  0
            0  0  0  1  0
            0  0  0  0  1 ].

Then, as h_6 = [0 0 1 0 0]^T does not belong to the column space of H', update H' as [H' h_6]:

H' = [ 1  1  0  0
       1  0  1  0
       0  1  1  1
       0  0  1  0
       0  0  0  0 ].

Consequently,

G H' = [ 1  1  α      0
         1  0  1+α^2  0
         0  1  0      1 ],

and it has full rank 3, so the receiver can recover the source packets. Actually, in this case, the source packets can be recovered by merely GF(2)-based operations.
In the two special cases d_t = D and d_t = 0, i.e., when receiver t has the highest and the lowest computational power, respectively, (4) degenerates to a more concise form.
Corollary 2.
When d_t = D, (4) is equivalent to
rank(G H) = n.   (8)
When d_t = 0, (4) is equivalent to
rank(H) − rank(H_{d_t}) = n.   (9)
Recall that Fulcrum [13,14] can be regarded as a special RLNC scheme of our framework. One may notice that in Fulcrum, the decoding rule over GF(2) at a receiver is
rank(H) = N,   (10)
which is sufficient but not necessary. In contrast, (9) is both necessary and sufficient. As will be seen in Section 5, there is an observable performance gain when (9) is adopted as the decoding rule instead of (10). Moreover, our proposed scalable RLNC is more flexible than Fulcrum, because receivers with intermediate computational power can fully utilize their decoding capability to decode over intermediate fields (rather than only over GF(2)), so that the number of required coded packets can be reduced.
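The gap between the two rules is already visible in a toy case (our construction, not from the paper): with n = 2 original packets and a single precoded packet (N = 3), a GF(2)-constrained receiver that holds just the two systematic packets is decodable by rule (9), while rule (10) would demand a third packet to reach rank N.

```python
def gf2_rank(rows):
    """Rank of a GF(2) matrix given as a list of rows."""
    rows = [r[:] for r in rows]
    rank = 0
    for c in range(len(rows[0])):
        for i in range(rank, len(rows)):
            if rows[i][c]:
                rows[rank], rows[i] = rows[i], rows[rank]
                for j in range(len(rows)):
                    if j != rank and rows[j][c]:
                        rows[j] = [x ^ y for x, y in zip(rows[j], rows[rank])]
                rank += 1
                break
    return rank

# N = 3 (n = 2 originals plus one precoded packet); the receiver holds the two
# systematic coding vectors as the columns of H.
H = [[1, 0],
     [0, 1],
     [0, 0]]
H_dt = [H[-1]]                             # restriction to the precoded position

assert gf2_rank(H) - gf2_rank(H_dt) == 2   # rule (9): already decodable over GF(2)
assert gf2_rank(H) < 3                     # yet rank(H) < N, so rule (10) fails
```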

3.4. Decoding Complexity Analysis

In this subsection, we briefly analyze the computational complexity of the proposed scalable RLNC scheme at a receiver t with field constraint GF(2^{d_t}), where d_t here denotes the number of bits in a symbol of the constraint field. We assume that after a sufficiently long recoding process over GF(2), the last r = N − n positions in every received binary coding vector h, which correspond to the r precoded packets generated over fields larger than GF(2), are nonzero. According to Corollary 1, when enough coded packets have been received such that the condition
rank(G (H K_{d_t})) = n
is satisfied, receiver t can recover all original packets by linearly combining n coded packets over GF(2^{d_t}). Accordingly, it requires at most n^2 M/d_t multiplications and n(n−1) M/d_t additions over GF(2^{d_t}) in the decoding process. Following the same considerations as in [4,18,19], we assume that it respectively takes d_t and 2 d_t^2 binary operations to realize the addition and the multiplication of two elements in GF(2^{d_t}). Consequently, the total number of binary operations required to recover every M-bit original packet can be characterized as O(M n d_t).
Herein, we do not consider the complexity of computing the inverse matrix of G H K_{d_t}, because in practice the packet length M is much larger than n; this convention has also been adopted in [4,19] for computational complexity analysis.

4. Optimal Construction of Precoding Matrix G

Based on the analysis in the previous section, we are motivated to carefully design the precoding matrix G so that the full rank of H is equivalent to the full rank of G H, which optimizes the decodability performance for fixed parameters n and N. To achieve this goal, we first introduce the following condition on the precoding matrix G, which is stronger than the conventional maximum distance separable (MDS) property.
Definition 1.
An n × N matrix G over GF(2^{2^D}) is said to be MDS under GF(2)-mapping if for any full-rank N × n matrix H over GF(2), rank(G H) = n.
Recall that if G satisfies the conventional MDS property, any n columns of it are linearly independent. Obviously, the conventional MDS property is a prerequisite for the proposed MDS property under GF(2)-mapping. However, Example 3 demonstrates an MDS matrix G that is not MDS under GF(2)-mapping. To the best of our knowledge, except for a brief attempt in [13], there is no prior literature involving the construction of a matrix satisfying the MDS property under GF(2)-mapping. We next characterize an equivalent condition for the MDS property under GF(2)-mapping, so as to facilitate explicit constructions. Given an n × N matrix G, let C denote the set of row vectors generated by G:
C = {m G : m ∈ GF(2^{2^D})^n}.   (11)
For every c ∈ C, let N_c denote its null space in GF(2^{2^D})^N.
Theorem 2.
An n × N matrix G is MDS under GF(2)-mapping if and only if
dim(N_c ∩ GF(2)^N) < n,  ∀ c ∈ C \ {0}.   (12)
Proof. 
We prove the theorem by a contrapositive argument. Assume that there exists a nonzero c ∈ C such that dim(N_c ∩ GF(2)^N) ≥ n, and let m be a row vector over GF(2^{2^D}) satisfying c = m G. Then, we can select n linearly independent column vectors h_1, ..., h_n over GF(2) from N_c. Write H = [h_1 ⋯ h_n]. Thus, m G H = c H = 0, so that G H does not have full rank n, i.e., G is not MDS under GF(2)-mapping.
Assume that G is not MDS under GF(2)-mapping, and let H be a full-rank N × n matrix over GF(2) subject to rank(G H) < n. Then, there exists an n-dimensional nonzero row vector m such that m G H = 0. Write c = m G, so that c H = 0. Since H has full rank n, there are at least n linearly independent vectors (which are the columns of H) belonging to N_c, i.e., dim(N_c ∩ GF(2)^N) ≥ n. ☐
For c ∈ C, let η(c) denote the number of elements of c belonging to GF(2^{2^D}) \ {0, 1}, and define an indicator δ which is set to 1 if c contains an element equal to 1, and to 0 otherwise. The following is a useful corollary of Theorem 2.
Corollary 3.
If an n × N matrix G is MDS under GF(2)-mapping, then the following hold:
η(c) + δ > N − n,  ∀ c ∈ C \ {0};   (13)
C ∩ GF(2)^N = {0}.   (14)
Proof. 
Assume there is a nonzero c ∈ C with η(c) + δ ≤ N − n, i.e., N − η(c) ≥ n + δ. Define a new vector c' by restricting c to its components belonging to GF(2), so that the dimension of c' is N − η(c). Thus, the dimension of the null space of c' in GF(2)^{N−η(c)} is N − η(c) − δ, which is no smaller than n. Correspondingly, dim(N_c ∩ GF(2)^N) ≥ n, a contradiction to the MDS property under GF(2)-mapping for G according to (12).
If there is a nonzero c ∈ C belonging to GF(2)^N, then η(c) = 0, so that (13) cannot hold as N > n, and thus G cannot be MDS under GF(2)-mapping. ☐
Conditions (13) and (14) are insufficient for the MDS property under GF(2)-mapping. The key reason is the possibility that
∑_j α_j ∈ GF(2)  for some  α_j ∈ GF(2^{2^D}) \ {0, 1}.   (15)
For this reason, we should pay extra attention in the matrix design to avoid the involvement of the elements in (15). The special case N = n + 1 is easier to manipulate.
Proposition 1.
When N = n + 1, an n × N matrix G is MDS under GF(2)-mapping if and only if (14) holds.
Proof. 
The necessity has been shown in Corollary 3. To prove sufficiency, assume (14) holds for the set C defined in (11) based on G. Let c be an arbitrary nonzero vector in C. As (14) holds, η(c) > 0. In the case η(c) = 1, there must be at least one element of c equal to 1, because otherwise we could find another vector in C with all elements in GF(2), a contradiction to (14). Thus, dim(N_c ∩ GF(2)^N) < n in this case. Consider the case η(c) ≥ 2. Without loss of generality, write c = [c_1 ⋯ c_{η(c)} 0 ⋯ 0] with c_j ≠ 0. We can assume that the c_j are not all identical, because otherwise we could again find another vector in C with all elements in GF(2), a contradiction to (14). Moreover, for arbitrary two elements a, b ∈ GF(2^{2^D}), a + b = 0 if and only if a = b. Hence, there are at most η(c) − 2 linearly independent vectors in GF(2)^{η(c)} that are in the null space of [c_1 ⋯ c_{η(c)}], which further implies dim(N_c ∩ GF(2)^N) < n. We have proved (12), and thus the considered G is MDS under GF(2)-mapping. ☐
Corollary 4.
When N = n + 1, there exists a systematic n × N matrix G = [I_n a] over GF(2^{2^D}) that is MDS under GF(2)-mapping if and only if n < 2^D.
Proof. 
Assume n < 2^D. Define an n-dimensional column vector a = [α, α^2, ..., α^n]^T, where α is a primitive element of GF(2^{2^D}). In this way, all elements of a are distinct, and no nonzero GF(2)-combination ∑_{1≤j≤n} a_j α^j of them belongs to GF(2). By Proposition 1, [I_n a] is an MDS matrix under GF(2)-mapping. When n ≥ 2^D, let a = [α_1, ..., α_n]^T be an arbitrary n-dimensional vector over GF(2^{2^D}). In order for [I_n a] to be MDS under GF(2)-mapping, according to (14) in Corollary 3, no element α_j can belong to GF(2). If a contains a basis, say {α_1, ..., α_{2^D}}, of GF(2^{2^D}) over GF(2), then 1 can be written as a GF(2)-linear combination of the basis, so that (14) does not hold. If a does not contain such a basis, then there exists an n-dimensional nonzero row vector v over GF(2) subject to v a = 0, so that (14) does not hold either. Thus, it is impossible for [I_n a] to be MDS under GF(2)-mapping. ☐
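The sufficiency direction of Corollary 4 can be brute-forced for a small case. The sketch below (ours, not from the paper) takes D = 2 and n = 3 < 2^D = 4, builds G = [I_3 a] with a = [α, α^2, α^3]^T over GF(2^4), where α is a root of x^4 + x + 1 (which is primitive), and checks rank(G H) = 3 for every full-rank binary 4 × 3 matrix H:

```python
from itertools import product

def gf16_mul(a, b):
    """Multiplication in GF(2^4) modulo x^4 + x + 1 (elements as 4-bit ints)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011
        b >>= 1
    return r

def gf16_inv(a):
    """Inverse via a^14 = a^(-1) in GF(16)*."""
    r = 1
    for _ in range(14):
        r = gf16_mul(r, a)
    return r

def gf16_rank(M):
    """Rank over GF(16) by Gaussian elimination."""
    M = [row[:] for row in M]
    rank = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(rank, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = gf16_inv(M[rank][c])
        M[rank] = [gf16_mul(inv, x) for x in M[rank]]
        for i in range(len(M)):
            if i != rank and M[i][c]:
                f = M[i][c]
                M[i] = [x ^ gf16_mul(f, y) for x, y in zip(M[i], M[rank])]
        rank += 1
    return rank

alpha = 0b0010                    # a primitive element (a root of x^4 + x + 1)
a2 = gf16_mul(alpha, alpha)
a3 = gf16_mul(a2, alpha)
G = [[1, 0, 0, alpha],
     [0, 1, 0, a2],
     [0, 0, 1, a3]]

count_fullrank = 0
for bits in product([0, 1], repeat=12):        # all binary 4x3 matrices H
    H = [list(bits[3 * r: 3 * r + 3]) for r in range(4)]
    if gf16_rank(H) < 3:                       # binary entries: GF(2) rank matches
        continue
    count_fullrank += 1
    GH = [[0] * 3 for _ in range(3)]
    for i in range(3):
        for c in range(3):
            acc = 0
            for j in range(4):                 # H is binary, so G*H is XOR of rows
                if H[j][c]:
                    acc ^= G[i][j]
            GH[i][c] = acc
    assert gf16_rank(GH) == 3                  # MDS under GF(2)-mapping

assert count_fullrank == (16 - 1) * (16 - 2) * (16 - 4)   # 2520 full-rank H
```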
Based on the above corollary, the required field size is exponentially large in N for the construction of an n × N systematic MDS matrix under GF(2)-mapping. This implies that it is infeasible to construct such a practical precoding matrix G for large N. For this reason, an alternative is to randomly generate G, which can yield near-optimal decodability behavior, as illustrated in the next example.
Example 4.
Define the vectors a_1 = [α α^2 α^3 ⋯ α^7]^T and a_2 = [α^2 α^4 α^6 ⋯ α^14]^T over GF(2^8), in which α is a primitive element. It can be checked that both matrices [I_7 a_1] and [I_7 a_2] are MDS under GF(2)-mapping. Although the 7 × 9 matrix G = [I_7 a_1 a_2] is not MDS under GF(2)-mapping, among the 43435 7-dimensional subspaces of GF(2)^9, there are only 127 instances that break the desired MDS property; that is, every basis of each such subspace forms a 9 × 7 matrix H with rank(G H) < 7.

5. Numerical Analysis

In this section, we numerically analyze the performance of applying the proposed systematic scalable RLNC scheme to a wireless broadcast network, a classical model for demonstrating the advantage of RLNC [4,5,6,7]. The number n of original packets in a batch varies from n = 6 to 24. In every timeslot, the source broadcasts one packet to all receivers. The memoryless and independent packet loss probability for every receiver is p_e = 0.2, that is, in every timeslot, every receiver successfully receives a packet with probability 1 − p_e. We consider the scheme with parameters D = 2 and r = 2, where r_1 = r_2 = 1. In the n × N precoding matrix G = [I_n A_1 A_2], the entries of A_1 and A_2 are randomly selected from GF(2^2) and GF(2^4), respectively. In the numerical analysis of scalable RLNC, the single source s has n original packets to broadcast to a total of 30 receivers with different decoding capabilities. Specifically, the 30 receivers fall into 3 groups; the 10 receivers in each group have the same decoding capability and decode based on the decoding rule (4) over GF(2), GF(2^2) and GF(2^4), respectively. In the first n timeslots, the source broadcasts the n original packets, whose coding vectors are (n + r)-dimensional unit vectors, to all receivers. Starting from timeslot n + 1, the source broadcasts coded packets, each generated based on a random N-dimensional column vector h over GF(2), until all receivers can recover the n original packets. For every parameter setting and every considered RLNC scheme, we conduct 1200 independent rounds of simulation, which yield 95% confidence intervals.
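As a rough illustration of this setup, the following simplified sketch simulates only the classical systematic RLNC baseline over GF(2); the precoding stage over GF(2^2) and GF(2^4), the heterogeneous receiver groups, and the confidence-interval machinery are omitted, and all names are ours, not the paper's:

```python
import random

def completion_delay_gf2(n, num_receivers=10, p_e=0.2, seed=1):
    """Extra coded packets broadcast until every receiver reaches rank n,
    for systematic RLNC over GF(2) with i.i.d. packet loss probability p_e."""
    rng = random.Random(seed)
    # each receiver keeps an XOR basis: leading bit -> stored coding vector (bitmask)
    basis = [dict() for _ in range(num_receivers)]

    def insert(b, vec):
        """Try to add vec to basis b; return True iff vec was linearly independent."""
        while vec:
            pivot = 1 << (vec.bit_length() - 1)
            if pivot not in b:
                b[pivot] = vec
                return True
            vec ^= b[pivot]          # cancel the leading bit and continue reducing
        return False

    def broadcast(vec):
        for b in basis:
            if len(b) < n and rng.random() >= p_e:   # packet successfully received
                insert(b, vec)

    for j in range(n):               # systematic phase: n unit coding vectors
        broadcast(1 << j)
    delay = 0                        # coded phase: random nonzero binary vectors
    while min(len(b) for b in basis) < n:
        delay += 1
        broadcast(rng.randrange(1, 1 << n))
    return delay
```

For instance, `completion_delay_gf2(24)` produces one sample of the group completion delay; averaging over many seeds approximates the kind of curves shown in Figure 2.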
Figure 2 depicts the average group completion delay per packet for the 3 groups of receivers, labeled “Scalable-GF(2^x)”, x ∈ {1, 2, 4}, of the considered scalable RLNC scheme. The group completion delay is the number of extra coded packets the source broadcasts until all 10 receivers in the group can recover the n original packets. For comparison, the figure also depicts the average group completion delay per packet, labeled “RLNC-GF(2^x)”, for a group of 10 receivers under three classical systematic RLNC schemes over the different fields GF(2^x), x ∈ {1, 2, 4}. Recall that in the classical systematic RLNC scheme over GF(2^x), the source first broadcasts the n original packets and then randomly coded packets with n-dimensional coding vectors over GF(2^x). One may observe from Figure 2 that for the case of GF(2^4), the average completion delay of scalable RLNC is almost the same as that of classical RLNC. Over the smaller fields, even though scalable RLNC yields a higher average completion delay than classical RLNC, it simultaneously guarantees decoding compatibility at heterogeneous receivers, which classical RLNC schemes cannot provide. For instance, assume that the source adopts classical RLNC over GF(2^2) to generate coded packets. On the one hand, the group of receivers with decoding capability constrained to GF(2) will fail to recover the original packets. On the other hand, the group of receivers with decoding capability over GF(2^4) cannot fully utilize their higher computational power, so the average completion delay cannot be further reduced compared with decoding over GF(2^2). As a result, the performance loss over smaller fields in our proposed scalable RLNC compared with classical RLNC is the cost of decoding compatibility across different fields.
For the considered systematic scalable RLNC scheme, recall that for decoding over GF(2), Equation (9) obtained in Section 3 is a necessary and sufficient rule, while Equation (10), originally adopted in [13,14] for Fulcrum decoding, is sufficient but not necessary. Figure 3 compares the average group completion delay per packet for 10 receivers, as well as the average completion delay per packet at a single receiver, when the receivers adopt the different decoding rules (9) and (10) over GF(2). For the average completion delay at a single receiver, a noticeable performance gain can be observed. In particular, when the number of original packets is less than 10, the average completion delay at a single receiver is reduced by more than 20% by adopting the decoding rule (9) instead of (10). For the average group completion delay, the gain of (9) over (10) becomes less obvious because it is offset by the increasing number of receivers in a group. Compared with Fulcrum, which only supports decoding over the smallest field GF(2) or the largest field GF(2^{2^D}), in addition to the performance gain illustrated in Figure 3, our proposed scalable RLNC is more flexible: receivers with intermediate computational power can fully utilize their decoding capabilities to decode over intermediate fields (rather than only over GF(2)), so the average completion delay can be reduced.
In the remaining part of this section, we analyze the performance of our scalable RLNC scheme by adjusting the sparsity 0 < P_h < 1 of h, which is the probability that each component of h equals one. Specifically, for every packet transmitted by the source, the expected number of precoded packets combined to form it is P_h(n + r). In the previous analysis of this section, P_h was set to 1/2. We next consider a sparser h with P_h < 1/2.
According to the work in [20], given (i − 1) linearly independent (n + r)-dimensional binary vectors with sparsity P_h, the probability that a new randomly generated (n + r)-dimensional binary coding vector h_i with sparsity P_h is linearly independent of them is lower bounded by

1 − (1 − P_h)^{n + r − i}.   (16)
This bound indicates that, except when i is close to (n + r), the lower bound stays very close to 1. Further, at the end of Section 4, we illustrated that a random G brings near-optimal decodability behavior, that is, full rank of H leads to full rank of GH with high probability. As a result, although our proposed scalable RLNC scheme, with its two-stage encoding process, differs from the conventional sparse RLNC described in [20], we are motivated to bring sparsity into our proposed scheme and attempt to strike a balance between completion delay and decoding complexity. The work in [14] has taken sparsity into consideration in its performance analysis of Fulcrum, which is a special instance of our proposed scalable RLNC scheme.
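For concreteness, the bound can be evaluated directly; this small helper (our notation, not code from the paper or [20]) shows how it collapses only as i approaches n + r:

```python
def independence_lower_bound(n_plus_r, i, p_h):
    """Lower bound (16) on the probability that the i-th sparse binary coding
    vector is linearly independent of i - 1 independent predecessors."""
    return 1.0 - (1.0 - p_h) ** (n_plus_r - i)

# e.g. n + r = 68 (n = 64, r1 = r2 = 2) and P_h = 1/4:
bounds = [independence_lower_bound(68, i, 0.25) for i in (1, 34, 67, 68)]
```

For these parameters the bound is essentially 1 for small i, still 0.25 at i = 67, and exactly 0 at i = n + r, matching the later observation that (16) is loose precisely in that final regime.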
In the simulation, besides the consideration of sparsity P_h, we also extend the range of n from [6, 24] to [8, 64] and set r_1 = r_2 = 2. All other parameter settings are the same as those in Figure 2. The 3 solid curves in Figure 4 illustrate the average group completion delay per packet for the 3 groups of 10 receivers under the field constraints GF(2), GF(2^2) and GF(2^4) for scalable RLNC with sparsity P_h = 1/2, and the 3 dotted curves illustrate the corresponding performance with P_h = 1/4. It is interesting to observe that as the batch size n increases, under the same decoding constraint (i.e., two curves in the same color), the completion delay for P_h = 1/4 converges to that for P_h = 1/2. This result indicates that the lower bound in (16) is rather loose when i is close to n + r, and moreover, that for a large enough batch size n, a sparser vector h does not affect the completion delay performance much in a wireless broadcast network.

6. Conclusions

In this work, the proposed scalable RLNC framework based on embedded fields aims to endow heterogeneous receivers with different decoding capabilities in complex network environments. In this framework, we derive a general decodability condition based on the arithmetic compatibility of embedded fields. Moreover, we theoretically study the construction of an optimal precoding matrix G and justify the near-optimal behavior of a randomly generated G.
In the numerical analysis, we demonstrate that the proposed scalable RLNC not only guarantees better decoding compatibility compared with classical RLNC, but also provides better decoding performance over GF(2), in terms of a smaller average completion delay, compared with Fulcrum. In addition, the numerical analysis demonstrates that for a large enough batch size, the sparsity of the vector h does not affect the completion delay performance much. As potential future work, the theoretical insight behind this observation deserves further investigation, so as to facilitate the design of a scalable RLNC scheme with a better tradeoff between decoding complexity and completion delay.
Last, the present scalable RLNC framework assumes block-based coding. It would also be interesting to use the embedded-field structure to generalize the design of sliding-window-based random linear coding schemes such as those studied in [21,22,23].

Author Contributions

R.Z. and Q.S. conceived and designed the mathematical model. H.T. designed the whole coding framework and wrote the paper with the help of Q.S., K.L. and Z.L. All authors were involved in problem formulation, data analysis and editing of this paper. All authors have agreed to the published version of the manuscript.

Funding

This work was partially supported by the National Natural Science Foundation of China under Grants 62101028 and 62271044, by the China Postdoctoral Science Foundation under Grant 2021TQ0031, by Huawei TC20211126644, and by China Telecom 20222910016.

Acknowledgments

This paper was partly presented in [24] at the IEEE/CIC International Conference on Communications in China (ICCC) 2021.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ho, T.; Médard, M.; Koetter, R.; Karger, D.R.; Effros, M.; Shi, J.; Leong, B. A random linear network coding approach to multicast. IEEE Trans. Inf. Theory 2006, 52, 4413–4430. [Google Scholar] [CrossRef] [Green Version]
  2. Huang, J.; Gharavi, H.; Yan, H.; Xing, C.C. Network coding in relay-based device-to-device communications. IEEE Netw. 2017, 31, 102–107. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Asterjadhi, A.; Fasolo, E.; Rossi, M.; Widmer, J.; Zorzi, M. Toward network coding-based protocols for data broadcasting in wireless ad hoc networks. IEEE Trans. Wirel. Commun. 2010, 9, 662–673. [Google Scholar] [CrossRef]
  4. Su, R.; Sun, Q.; Zhang, Z. Delay-complexity trade-off of random linear network coding in wireless broadcast. IEEE Trans. Commun. 2020, 68, 5606–5618. [Google Scholar] [CrossRef]
  5. Eryilmaz, A.; Ozdaglar, A.; Médard, M.; Ahmed, E. On the delay and throughput gains of coding in unreliable networks. IEEE Trans. Inf. Theory 2008, 54, 5511–5524. [Google Scholar] [CrossRef]
  6. Swapna, B.T.; Eryilmaz, A.; Shroff, N.B. Throughput-delay analysis of random linear network coding for wireless broadcasting. IEEE Trans. Inf. Theory 2013, 59, 6328–6341. [Google Scholar] [CrossRef] [Green Version]
  7. Zhu, H.; Ouahada, K. Investigating random linear coding from a pricing perspective. Entropy 2018, 20, 548. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Wunderlich, S.; Cabrera, J.A.; Fitzek, F.H.; Reisslein, M. Network coding in heterogeneous multicore IoT nodes with DAG scheduling of parallel matrix block operations. IEEE Internet Things J. 2017, 4, 917–933. [Google Scholar] [CrossRef]
  9. Heide, J.; Lucani, D.E. Composite extension finite fields for low overhead Network Coding: Telescopic codes. In Proceedings of the 2015 IEEE International Conference on Communications (ICC), London, UK, 8–12 June 2015. [Google Scholar]
  10. Marcano, N.J.H.; Heide, J.; Lucani, D.E.; Fitzek, F.H. On the overhead of telescopic codes in network coded cooperation. In Proceedings of the 2015 IEEE 82nd Vehicular Technology Conference (VTC2015-Fall), Boston, MA, USA, 6–9 September 2015. [Google Scholar]
  11. Heide, J. Composite extension finite fields for distributed storage erasure coding. In Proceedings of the 2016 IEEE International Conference on Communications (ICC), Kuala Lumpur, Malaysia, 22–27 May 2016. [Google Scholar]
  12. Yazdani, V.; Lucani, D. Revolving codes: Overhead and computational complexity analysis. IEEE Commu. Lett. 2021, 25, 374–378. [Google Scholar] [CrossRef]
  13. Lucani, D.E.; Pedersen, M.V.; Ruano, D.; Sørensen, C.W.; Fitzek, F.H.; Heide, J.; Geil, O.; Nguyen, V.; Reisslein, M. Fulcrum: Flexible network coding for heterogeneous devices. IEEE Access 2018, 6, 77890–77910. [Google Scholar] [CrossRef]
  14. Nguyen, V.; Tasdemir, E.; Nguyen, G.T.; Lucani, D.E.; Fitzek, F.H.; Reisslein, M. DSEP Fulcrum: Dynamic sparsity and expansion packets for fulcrum network coding. IEEE Access 2020, 8, 78239–78314. [Google Scholar] [CrossRef]
  15. Lidl, R.; Niederreiter, H. Finite Fields, 3rd ed.; Cambridge University Press: Cambridge, UK, 1997. [Google Scholar]
  16. Schutz, B.; Aschenbruck, N. Packet-preserving network coding schemes for padding overhead reduction. In Proceedings of the 2019 IEEE 44th Conference on Local Computer Networks (LCN), Osnabrueck, Germany, 14–17 October 2019. [Google Scholar]
  17. Taghouti, M.; Lucani, D.E.; Cabrera, J.A.; Reisslein, M.; Pedersen, M.V.; Fitzek, F.H. Reduction of padding overhead for RLNC media distribution with variable size packets. IEEE Trans. Broadcast. 2019, 65, 558–576. [Google Scholar] [CrossRef]
  18. Tang, H.; Sun, Q.T.; Li, Z.; Yang, X.; Long, K. Circular-shift linear network coding. IEEE Trans. Inf. Theory 2019, 65, 65–80. [Google Scholar] [CrossRef] [Green Version]
  19. Hou, H.; Shum, K.W.; Chen, M.; Li, H. BASIC codes: Low-complexity regenerating codes for distributed storage systems. IEEE Trans. Inf. Theory 2016, 62, 3053–3069. [Google Scholar] [CrossRef]
  20. Feizi, S.; Lucani, D.E.; Sørensen, C.W.; Makhdoumi, A.; Médard, M. Tunable Sparse Network Coding for Multicast Networks. In Proceedings of the 2014 IEEE International Symposium on Network Coding (NetCod), Aalborg Oest, Denmark, 27–28 June 2014; pp. 1–6. [Google Scholar]
  21. Karetsi, F.; Papapetrou, E. Lightweight network-coded ARQ: An approach for ultra-reliable low latency communication. Comput. Commun. 2022, 185, 118–129. [Google Scholar] [CrossRef]
  22. Ma, S.; Liu, X.; Yan, Y.; Zhang, B.; Zheng, J. Sliding-window based batch forwarding using intra-flow random linear network coding. In Proceedings of the 2020 International Wireless Communications and Mobile Computing (IWCMC), Limassol, Cyprus, 15–19 June 2020. [Google Scholar]
  23. Tasdemir, E.; Nguyen, V.; Nguyen, G.T.; Fitzek, F.H.; Reisslein, M. FSW: Fulcrum sliding window coding for low-latency communication. IEEE Access 2022, 10, 54276–54290. [Google Scholar] [CrossRef]
  24. Tang, H.; Zheng, R.; Li, Z.; Sun, Q.T. Scalable Network Coding over Embedded Fields. In Proceedings of the 2021 IEEE/CIC International Conference on Communications in China (ICCC), Xiamen, China, 28–30 July 2021; pp. 641–646. [Google Scholar]
Figure 1. Every element a_0 + a_1β + a_2β^2 + a_3β^3, a_j ∈ {0, 1}, in GF(2^4) has a unique expression of the form b_0 + b_1β, where b_0, b_1 ∈ {0, 1, α, α^2} = GF(4) and α^2 + α + 1 = β^2 + β + α = β^4 + β + 1 = 0. The integers 0 to 15 are the decimal expressions of the binary 4-tuples (a_3, a_2, a_1, a_0).
Figure 2. The average group completion delay per packet for the receivers of different systematic RLNC schemes in a wireless broadcast network with r 1 = r 2 = 1 and packet loss probability p e = 0.2 .
Figure 3. The average group completion delay per packet for 10 receivers as well as the average completion delay per packet at a single receiver when the receivers adopt different decoding rules (9) and (10) over GF(2).
Figure 4. The average group completion delay per packet for scalable RLNC with different sparsity P h .
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Tang, H.; Zheng, R.; Li, Z.; Long, K.; Sun, Q. Scalable Network Coding for Heterogeneous Devices over Embedded Fields. Entropy 2022, 24, 1510. https://doi.org/10.3390/e24111510
