
Information Theory and Network Coding II

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (30 June 2024) | Viewed by 13397

Special Issue Editors


Guest Editor
School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, Shenzhen 518172, China
Interests: information theory; network coding; coding theory; network communication; network computation; quantum information

Guest Editor
School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, Shenzhen 518172, China
Interests: information theory; coding for distributed storage; distributed computation

Special Issue Information

Dear Colleagues,

Network coding generalizes the traditional store-and-forward paradigm by allowing information processing at intermediate network nodes, and it enables the use of information theory to guide the design of network information systems. In the past ten years, significant progress has been made in refining network coding technologies toward real-world applications. For example, network-coding-based distributed storage and network caching systems have been adopted in data centers and content distribution systems, and random linear network coding schemes have been applied in network communication applications to improve communication efficiency.
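The gain from coding at intermediate nodes is easiest to see in the classic butterfly network, where the bottleneck node forwards the XOR of two packets instead of choosing one of them. A minimal sketch (the packet values are purely illustrative):

```python
# Classic butterfly-network intuition: the bottleneck node forwards
# p1 XOR p2 instead of picking one packet, so each sink combines the
# coded packet with the packet it already received on its side branch.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"\x0f\xaa"                     # packet on source branch 1 (illustrative)
p2 = b"\xf0\x55"                     # packet on source branch 2 (illustrative)

coded = xor_bytes(p1, p2)            # bottleneck link carries p1 XOR p2

recovered_p2 = xor_bytes(coded, p1)  # sink 1 already holds p1
recovered_p1 = xor_bytes(coded, p2)  # sink 2 already holds p2

assert recovered_p1 == p1 and recovered_p2 == p2
```

With store-and-forward, the bottleneck link could carry only one of the two packets per use; the single coded transmission serves both sinks at once.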

New networking systems such as vehicular networks, industrial networks and underwater networks are emerging, and network applications such as federated learning, virtual reality and the metaverse impose new demands on networking. Traditional network communication infrastructures face challenges in network coverage, communication bandwidth and latency. Network coding promises to play a significant role in future networking systems.

This Special Issue aims to serve as a forum for presenting innovative network coding theories and technologies for various emerging networking systems. In addition to communications, topics on network coding for computation, distributed storage, caching, security and message integrity are also welcome.

Dr. Shenghao Yang
Prof. Dr. Kenneth Shum
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • information theory
  • network coding
  • network communication
  • network computation
  • distributed storage

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)


Research

17 pages, 1320 KiB  
Article
Finite-Blocklength Analysis of Coded Modulation with Retransmission
by Ming Jiang, Yi Wang, Fan Ding and Qiushi Xu
Entropy 2024, 26(10), 863; https://doi.org/10.3390/e26100863 - 14 Oct 2024
Viewed by 454
Abstract
The rapid developments of 5G and B5G networks have posed higher demands on retransmission in certain scenarios. This article reviews classical finite-length coding performance prediction formulas and proposes rate prediction formulas for coded modulation retransmission scenarios. Specifically, we demonstrate that a recently proposed model for correcting these prediction formulas also exhibits high accuracy for coded modulation retransmissions. To enhance the generality of this model, we introduce a range variable P_final to unify the predictions under different SNRs. Finally, based on simulation results, the article puts forth recommendations specific to retransmission with high spectral efficiency.
(This article belongs to the Special Issue Information Theory and Network Coding II)

12 pages, 332 KiB  
Article
On Matrix Representation of Extension Field GF(pL) and Its Application in Vector Linear Network Coding
by Hanqi Tang, Heping Liu, Sheng Jin, Wenli Liu and Qifu Sun
Entropy 2024, 26(10), 822; https://doi.org/10.3390/e26100822 - 26 Sep 2024
Viewed by 466
Abstract
For a finite field GF(p^L) with prime p and L > 1, one of the standard representations is by L×L matrices over GF(p), so that the arithmetic of GF(p^L) can be realized by the arithmetic among these matrices over GF(p). Based on the matrix representation of GF(p^L), a conventional linear network coding (LNC) scheme over GF(p^L) can be transformed into an L-dimensional vector LNC scheme over GF(p). Recently, a few real implementations of coding schemes over GF(2^L), such as the Reed–Solomon (RS) codes in the ISA-L library and the Cauchy-RS codes in the Longhair library, have been built upon the classical result to achieve matrix representation, which focuses more on the structure of every individual matrix but does not shed light on the inherent correlation among the matrices corresponding to different elements. In this paper, we first generalize this classical result from GF(2^L) to GF(p^L) and paraphrase it from the perspective of matrices with different powers to make the inherent correlation among these matrices more transparent. Moreover, motivated by this correlation, we devise a lookup table to pre-store the matrix representation with a smaller size than the one utilized in current implementations. In addition, this correlation implies useful theoretical results which can be adopted to further demonstrate the advantages of binary matrix representation in vector LNC. In the remainder of this paper, we focus on the study of vector LNC and investigate applications of matrix representation related to random and deterministic vector LNC.
(This article belongs to the Special Issue Information Theory and Network Coding II)
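As a small illustration of the matrix representation described in the abstract above (a toy example of the general technique, not the paper's construction), take p = 2 and L = 2: the companion matrix C of the irreducible polynomial x² + x + 1 yields 2×2 matrices over GF(2) whose arithmetic mirrors that of GF(4):

```python
# Matrix representation of GF(2^2): map a0 + a1*x to a0*I + a1*C, where
# C is the companion matrix of x^2 + x + 1; field multiplication then
# corresponds to matrix multiplication over GF(2).

def mat_mul(A, B, p=2):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p
             for j in range(n)] for i in range(n)]

def mat_add(A, B, p=2):
    return [[(a + b) % p for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

I = [[1, 0], [0, 1]]
C = [[0, 1], [1, 1]]          # companion matrix of x^2 + x + 1

def to_matrix(a0, a1):
    """Element a0 + a1*x of GF(4) -> a0*I + a1*C over GF(2)."""
    return mat_add([[(a0 * e) % 2 for e in row] for row in I],
                   [[(a1 * e) % 2 for e in row] for row in C])

def field_mul(a, b):
    """(a0 + a1*x)(b0 + b1*x) mod (x^2 + x + 1) over GF(2)."""
    (a0, a1), (b0, b1) = a, b
    return ((a0*b0 + a1*b1) % 2, (a0*b1 + a1*b0 + a1*b1) % 2)

a, b = (1, 1), (0, 1)         # the elements x + 1 and x
assert to_matrix(*field_mul(a, b)) == mat_mul(to_matrix(*a), to_matrix(*b))
```

The powers I, C, C² run through all nonzero field elements, which is the correlation among matrices of different powers that the paper exploits.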

33 pages, 506 KiB  
Article
Fundamental Limits of Coded Caching in Request-Robust D2D Communication Networks
by Wuqu Wang, Zhe Tao, Nan Liu and Wei Kang
Entropy 2024, 26(3), 250; https://doi.org/10.3390/e26030250 - 12 Mar 2024
Viewed by 1208
Abstract
D2D coded caching, originally introduced by Ji, Caire, and Molisch, significantly improves communication efficiency by applying the multi-cast technology proposed by Maddah-Ali and Niesen to the D2D network. Most prior works on D2D coded caching are based on the assumption that all users will request content at the beginning of the delivery phase. However, in practice, this is often not the case. Motivated by this consideration, this paper formulates a new problem called request-robust D2D coded caching. The considered problem includes K users and a content server with access to N files. Only r users, known as requesters, request a file each at the beginning of the delivery phase. The objective is to minimize the average and worst-case delivery rate, i.e., the average and worst-case number of broadcast bits from all users among all possible demands. For this novel D2D coded caching problem, we propose a scheme based on uncoded cache placement and exploiting common demands and one-shot delivery. We also propose information-theoretic converse results under the assumption of uncoded cache placement. Furthermore, we adapt the scheme proposed by Yapar et al. for uncoded cache placement and one-shot delivery to the request-robust D2D coded caching problem and prove that the performance of the adapted scheme is order optimal within a factor of two under uncoded cache placement and within a factor of four in general. Finally, through numerical evaluations, we show that the proposed scheme outperforms known D2D coded caching schemes applied to the request-robust scenario for most cache size ranges.
(This article belongs to the Special Issue Information Theory and Network Coding II)
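For readers unfamiliar with the Maddah-Ali–Niesen multicast idea this line of work builds on, here is a toy sketch (my own example, not the paper's request-robust scheme; file contents are hypothetical): with N = 2 files, K = 2 users and cache size M = 1 file, each user caches a different half of every file, and a single XOR broadcast serves both requests:

```python
# Coded caching toy example: placement stores complementary halves of
# every file at the two users; delivery sends one XOR that each user
# resolves using the half it cached from the *other* file.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

files = {"A": b"AAaa", "B": b"BBbb"}   # hypothetical file contents
half = 2
cache1 = {f: files[f][:half] for f in files}   # user 1: first halves
cache2 = {f: files[f][half:] for f in files}   # user 2: second halves

# Delivery phase: user 1 requests A, user 2 requests B.
# User 1 lacks A's second half; user 2 lacks B's first half.
broadcast = xor(files["A"][half:], files["B"][:half])

got_A = cache1["A"] + xor(broadcast, cache1["B"])  # user 1 decodes A
got_B = xor(broadcast, cache2["A"]) + cache2["B"]  # user 2 decodes B

assert got_A == files["A"] and got_B == files["B"]
```

Uncoded delivery would need two half-file transmissions here; the coded multicast needs only one, and the paper studies how much of this gain survives when only r of the K users actually issue requests.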

18 pages, 483 KiB  
Article
Efficient Communications in V2V Networks with Two-Way Lanes Based on Random Linear Network Coding
by Yiqian Zhang, Tiantian Zhu and Congduan Li
Entropy 2023, 25(10), 1454; https://doi.org/10.3390/e25101454 - 17 Oct 2023
Viewed by 1294
Abstract
Vehicle-to-vehicle (V2V) communication has gained significant attention in the field of intelligent transportation systems. In this paper, we focus on communication scenarios involving vehicles moving in the same and opposite directions. Specifically, we model a V2V network as a dynamic multi-source single-sink network with two-way lanes. To address rapid changes in network topology, we employ random linear network coding (RLNC), which eliminates the need for knowledge of the network topology. We begin by deriving the lower bound for the generation probability. Through simulations, we analyze the probability distribution and cumulative probability distribution of latency under varying packet loss rates and batch sizes. Our results demonstrate that our RLNC scheme significantly reduces the communication latency, even under challenging channel conditions, when compared to the non-coding case.
(This article belongs to the Special Issue Information Theory and Network Coding II)
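The core RLNC mechanism that makes such schemes topology-independent can be sketched as follows (a toy GF(2) version under my own assumptions, not the paper's V2V implementation): the source keeps sending random combinations, and the receiver decodes by Gaussian elimination as soon as the collected coefficient vectors reach full rank:

```python
import random

# Toy RLNC over GF(2): the source sends random linear combinations of
# K packets; any K linearly independent coded packets suffice to decode,
# regardless of which particular transmissions were lost.

random.seed(1)
K, PKT_LEN = 4, 8
packets = [[random.randint(0, 1) for _ in range(PKT_LEN)] for _ in range(K)]

def encode():
    coeffs = [random.randint(0, 1) for _ in range(K)]
    payload = [sum(c * p[j] for c, p in zip(coeffs, packets)) % 2
               for j in range(PKT_LEN)]
    return coeffs, payload

def decode(received):
    # Gauss-Jordan elimination over GF(2) on augmented rows [coeffs | payload].
    rows = [c + p for c, p in received]
    for col in range(K):
        pivot = next((r for r in range(col, len(rows)) if rows[r][col]), None)
        if pivot is None:
            return None                      # rank deficient: keep listening
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                rows[r] = [(x + y) % 2 for x, y in zip(rows[r], rows[col])]
    return [row[K:] for row in rows[:K]]

received = []
while True:
    received.append(encode())                # one (possibly lossy) channel use
    result = decode(received)
    if result is not None:
        break
assert result == packets
```

Because decoding depends only on the rank of the received coefficient vectors, intermediate nodes may freely recombine packets without any node needing to know the network topology.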

26 pages, 417 KiB  
Article
Multiple Linear-Combination Security Network Coding
by Yang Bai, Xuan Guang and Raymond W. Yeung
Entropy 2023, 25(8), 1135; https://doi.org/10.3390/e25081135 - 28 Jul 2023
Cited by 1 | Viewed by 1212
Abstract
In this paper, we put forward the model of multiple linear-combination security multicast network coding, where the wiretapper desires to obtain some information about a predefined set of multiple linear combinations of the source symbols by eavesdropping any one (but not more than one) channel subset up to a certain size r, referred to as the security level. For this model, the security capacity is defined as the maximum average number of source symbols that can be securely multicast to all sink nodes for one use of the network under the linear-combination security constraint. For any security level and any linear-combination security constraint, we fully characterize the security capacity in terms of the ratio of the rank of the linear-combination security constraint to the number of source symbols. Also, we develop a general construction of linear security network codes. Finally, we investigate the asymptotic behavior of the security capacity for a sequence of linear-combination security models and discuss the asymptotic optimality of our code construction.
(This article belongs to the Special Issue Information Theory and Network Coding II)

28 pages, 487 KiB  
Article
Design and Analysis of Systematic Batched Network Codes
by Licheng Mao, Shenghao Yang, Xuan Huang and Yanyan Dong
Entropy 2023, 25(7), 1055; https://doi.org/10.3390/e25071055 - 13 Jul 2023
Cited by 1 | Viewed by 1216
Abstract
Systematic codes are of important practical interest for communications. Network coding, however, seems to conflict with systematic codes: although the source node can transmit message packets, network coding at the intermediate network nodes may significantly reduce the number of message packets received by the destination node. Is it possible to obtain the benefit of network coding while preserving some properties of the systematic codes? In this paper, we study the systematic design of batched network coding, which is a general network coding framework that includes random linear network coding as a special case. A batched network code has an outer code and an inner code, where the latter is formed by linear network coding. A systematic batched network code must take both the outer code and the inner code into consideration. Based on the outer code of a BATS code, which is a matrix-generalized fountain code, we propose a general systematic outer code construction that achieves a low encoding/decoding computation cost. To further reduce the number of random trials required to search a code with a close-to-optimal coding overhead, a triangular embedding approach is proposed for the construction of the systematic batches. We introduce new inner codes that provide protection for the systematic batches during transmission and show that it is possible to significantly increase the expected number of message packets in a received batch at the destination node, without harm to the expected rank of the batch transfer matrix generated by network coding.
(This article belongs to the Special Issue Information Theory and Network Coding II)
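The "systematic" idea in the abstract can be illustrated with a toy batch (my own sketch of the general principle, not the paper's BATS-based construction): the first K coded packets carry identity coefficient vectors, i.e., they are the message packets themselves, so a receiver that loses nothing decodes at zero cost, while the extra random combinations protect against losses:

```python
import random

# Systematic batch sketch: packets 0..K-1 are the message packets with
# identity coefficient vectors; the remaining packets are random GF(2)
# combinations that add redundancy for lossy links.

random.seed(0)
K, L = 3, 4
msgs = [[random.randint(0, 1) for _ in range(L)] for _ in range(K)]

def systematic_batch(n_extra):
    batch = [([int(i == j) for j in range(K)], msgs[i][:]) for i in range(K)]
    for _ in range(n_extra):                 # coded (parity) packets
        c = [random.randint(0, 1) for _ in range(K)]
        payload = [sum(ci * m[j] for ci, m in zip(c, msgs)) % 2
                   for j in range(L)]
        batch.append((c, payload))
    return batch

batch = systematic_batch(2)
# With no losses, the systematic part *is* the message: no decoding needed.
assert [payload for coeff, payload in batch[:K]] == msgs
```

The tension the paper addresses is that recoding at intermediate nodes mixes these packets, so the identity structure must be protected by the inner code if the destination is to receive message packets directly.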

39 pages, 766 KiB  
Article
BAR: Blockwise Adaptive Recoding for Batched Network Coding
by Hoover H. F. Yin, Shenghao Yang, Qiaoqiao Zhou, Lily M. L. Yung and Ka Hei Ng
Entropy 2023, 25(7), 1054; https://doi.org/10.3390/e25071054 - 13 Jul 2023
Cited by 3 | Viewed by 1216
Abstract
Multi-hop networks have become popular network topologies in various emerging Internet of Things (IoT) applications. Batched network coding (BNC) is a solution to reliable communications in such networks with packet loss. By grouping packets into small batches and restricting recoding to the packets belonging to the same batch, BNC has much smaller computational and storage requirements at intermediate nodes compared with direct application of random linear network coding. In this paper, we discuss a practical recoding scheme called blockwise adaptive recoding (BAR) which learns the latest channel knowledge from short observations so that BAR can adapt to fluctuations in channel conditions. Due to the low computational power of remote IoT devices, we focus on investigating practical concerns such as how to implement efficient BAR algorithms. We also design and investigate feedback schemes for BAR under imperfect feedback systems. Our numerical evaluations show that BAR has significant throughput gain for small batch sizes compared with existing baseline recoding schemes. More importantly, this gain is insensitive to inaccurate channel knowledge. This encouraging result suggests that BAR is suitable to be used in practice as the exact channel model and its parameters could be unknown and subject to changes from time to time.
(This article belongs to the Special Issue Information Theory and Network Coding II)

16 pages, 633 KiB  
Article
Network Coding Approaches for Distributed Computation over Lossy Wireless Networks
by Bin Fan, Bin Tang, Zhihao Qu and Baoliu Ye
Entropy 2023, 25(3), 428; https://doi.org/10.3390/e25030428 - 27 Feb 2023
Cited by 1 | Viewed by 1721
Abstract
In wireless distributed computing systems, worker nodes connect to a master node wirelessly and perform large-scale computational tasks that are parallelized across them. However, the common phenomenon of straggling (i.e., worker nodes often experience unpredictable slowdown during computation and communication) and packet losses due to severe channel fading can significantly increase the latency of computational tasks. In this paper, we consider a heterogeneous, wireless, distributed computing system performing large-scale matrix multiplications which form the core of many machine learning applications. To address the aforementioned challenges, we first propose a random linear network coding (RLNC) approach that leverages the linearity of matrix multiplication, which has many salient properties, including ratelessness, maximum straggler tolerance and near-ideal load balancing. We then theoretically demonstrate that its latency converges to the optimum in probability when the matrix size grows to infinity. To combat the high encoding and decoding overheads of the RLNC approach, we further propose a practical variation based on batched sparse (BATS) code. The effectiveness of our proposed approaches is demonstrated by numerical simulations.
(This article belongs to the Special Issue Information Theory and Network Coding II)

14 pages, 331 KiB  
Article
Scalable Network Coding for Heterogeneous Devices over Embedded Fields
by Hanqi Tang, Ruobin Zheng, Zongpeng Li, Keping Long and Qifu Sun
Entropy 2022, 24(11), 1510; https://doi.org/10.3390/e24111510 - 22 Oct 2022
Viewed by 1235
Abstract
In complex network environments, there always exist heterogeneous devices with different computational powers. In this work, we propose a novel scalable random linear network coding (RLNC) framework based on embedded fields, so as to endow heterogeneous receivers with different decoding capabilities. In this framework, the source linearly combines the original packets over embedded fields based on a precoding matrix and then encodes the precoded packets over GF(2) before transmission to the network. After justifying the arithmetic compatibility over different finite fields in the encoding process, we derive a sufficient and necessary condition for decodability over different fields. Moreover, we theoretically study the construction of an optimal precoding matrix in terms of decodability. The numerical analysis in classical wireless broadcast networks illustrates that the proposed scalable RLNC not only guarantees a better decoding compatibility over different fields compared with classical RLNC over a single field, but also outperforms Fulcrum RLNC in terms of a better decoding performance over GF(2). Moreover, we take the sparsity of the received binary coding vector into consideration, and demonstrate that for a large enough batch size, this sparsity does not affect the completion delay performance much in a wireless broadcast network.
(This article belongs to the Special Issue Information Theory and Network Coding II)

13 pages, 378 KiB  
Article
Three Efficient All-Erasure Decoding Methods for Blaum–Roth Codes
by Weijie Zhou and Hanxu Hou
Entropy 2022, 24(10), 1499; https://doi.org/10.3390/e24101499 - 20 Oct 2022
Cited by 1 | Viewed by 1573
Abstract
Blaum–Roth codes are binary maximum distance separable (MDS) array codes over the binary quotient ring F_2[x]/(M_p(x)), where M_p(x) = 1 + x + ... + x^(p-1) and p is a prime number. Two existing all-erasure decoding methods for Blaum–Roth codes are the syndrome-based decoding method and the interpolation-based decoding method. In this paper, we propose a modified syndrome-based decoding method and a modified interpolation-based decoding method that have lower decoding complexity than the syndrome-based decoding method and the interpolation-based decoding method, respectively. Moreover, we present a fast decoding method for Blaum–Roth codes based on the LU decomposition of the Vandermonde matrix that has a lower decoding complexity than the two modified decoding methods for most of the parameters.
(This article belongs to the Special Issue Information Theory and Network Coding II)