Article

Design and Analysis of Joint Source-Channel Code System with Fixed-Length Code

School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 101408, China
*
Author to whom correspondence should be addressed.
Information 2022, 13(6), 281; https://doi.org/10.3390/info13060281
Submission received: 9 April 2022 / Revised: 27 May 2022 / Accepted: 27 May 2022 / Published: 31 May 2022
(This article belongs to the Special Issue Advances in Wireless Communications Systems)

Abstract

As the demand for multimedia and data services increases, efficient communication systems are being investigated to meet high data rate requirements. Joint source-channel coding (JSCC) schemes have been proposed to improve overall system performance. However, existing JSCC systems may suffer a symbol error rate (SER) performance loss when the residual source redundancy is not fully exploited. This paper presents a novel, low-complexity JSCC system, which consists of a fixed-length source block code and an irregular convolutional channel code. A simple approach is proposed to design source codes that minimize the SER of source detection and guarantee the convergence of iterative source-channel decoding (ISCD). To improve the waterfall performance of ISCD, the channel code is optimized by using the extrinsic information transfer (EXIT) chart and the concept of irregular codes. The channel code is constituted by recursive non-systematic convolutional (RNSC) subcodes, whose weights are optimized so that the EXIT curves of the channel decoder and the source decoder are well matched; a near-capacity performance is therefore achieved. Simulation results show that the proposed system achieves gains of more than 1 dB and 0.3 dB over the separate source-channel code system and the other optimized JSCC systems, respectively. Additionally, the performance of the proposed system is within 1 dB of the Shannon capacity limit.

1. Introduction

With the escalation in the number of cellular mobile phones and the popularity of wireless networks, high data rate transmission is required. However, the wireless channel is restricted by transmission power and limited available bandwidth, so challenges still exist in designing reliable and efficient communication systems. In classical communication systems, the source encoder and the channel encoder are designed separately. The source encoder removes redundancy from the source to improve the transmission rate, and the channel encoder re-inserts controlled redundancy to resist the channel noise. According to Shannon’s source and channel separation theorem, the system can achieve a near-capacity performance if the source and channel coding are both optimal [1]. However, due to the severe delay and computational complexity restrictions in practical applications, the assumptions of the theorem do not hold, and the separate source and channel code (SSCC) system usually suffers a loss of optimality. This motivates the study of joint source-channel code (JSCC) systems to narrow the gap to the global optimum.
JSCC systems can exploit the residual redundancy in source-coded parameters to realize a better transmission performance. In [2], the authors proposed a soft-bit source decoder (SBSD), which uses the a priori knowledge of source statistics to calculate the reliability of source-coded bits. Then, the SBSD was applied in an iterative source-channel decoding (ISCD) structure, improving the error-correction capability of the channel decoder [3]. Moreover, the source encoder and the bit-interleaved coded modulation were serially concatenated [4] to further improve the bandwidth and power efficiency of the system by exchanging extrinsic information between the source decoder, the channel decoder, and the demodulator.
These previous studies on ISCD systems usually used near-entropy codes for source compression. Because very limited redundancy is left in the source-coded bits after compression encoding, the iteration gain of ISCD is limited [5]. It has been found that applying source codes with deliberately introduced redundancy, as an alternative to classical entropy codes, can achieve a significant performance improvement. Several redundant source codes have been proposed, including variable-length codes (VLCs) [6,7,8,9] and fixed-length codes (FLCs). VLCs were designed with an increased free distance to improve the error-correction performance [6]. Recently, several efficient algorithms were proposed to construct near-optimal VLCs that have the minimum average codeword length with low search complexity [7,8,9]. However, VLCs are sensitive to error propagation and suffer from a high decoding complexity, while FLCs have the advantages of inherent synchronization and low complexity because the symbol positions in the coded bitstream are fixed. Thus, FLCs have been investigated in many studies.
The non-redundant index assignment for quantized source symbols proposed in [10] was the first joint source-channel code scheme based on FLCs. However, this JSCC scheme has a limited error-correction capability, since the index contains no redundant bits. To solve this problem, the authors in [11] utilized a linear block code (LBC) to realize a redundant index assignment for source symbols. The introduced redundancy of the source code was exploited by the SBSD at the receiving side, and the convergence performance of ISCD was improved. Additionally, the generator matrices of the LBCs were further optimized to maximize the minimum Hamming distance of the codewords [12]. This optimization of the source codes reduced the error floor of ISCD. Similarly, the authors in [13] proposed simple algorithms to generate a class of short block codes that have diverse minimum Hamming distance (d_min) values. In multimedia communication, these block codes are used to deliberately impose additional redundancy on the source-coded video bitstream [14]. In [15], the effects of the Hamming distance were investigated for an FLC-based JSCC system, and an improvement in the SER performance was observed as the minimum Hamming distance increased. The authors in [16] compared the performance of three different types of JSCC schemes using FLCs and convolutional codes, namely, nonconvergent serial-concatenated coding, self-concatenated coding, and convergent serial-concatenated coding. In [17], the authors proposed a three-stage, serially concatenated JSCC scheme constituted by a source block code, a unity-rate convolutional code, and a space-time code; the bit error rate (BER) floor is averted due to the employment of a recursive unity-rate code. In addition, owing to the outstanding error-correction performance of low-density parity-check (LDPC) codes and polar codes, JSCC schemes in which double low-density parity-check (D-LDPC) codes [18] and double polar codes [19] are used as the source code and channel code for binary independent and identically distributed sources have been proposed. Source statistics, which greatly affect the performance of a JSCC system, were subsequently raised in [20] and discussed in depth in [21]. In noisy communication with nonuniform sources, the source codeword assignment greatly affects how well the JSCC decoder exploits the residual source redundancy, and the symbol error rate gap between optimized and bad codeword assignments can be significant [22].
Moreover, the extrinsic information transfer (EXIT) chart can effectively analyze the convergence performance of JSCC. With the aid of an EXIT chart, near-capacity performance can be obtained by optimizing all components of the system. One optimization method is irregular codes, such as irregular convolutional codes (IrCCs) [23,24,25], irregular VLCs (IrVLCs) [26,27], irregular redundant index assignments [11], and irregular unity-rate codes (IrURCs) [28]. These irregular codes were employed in the JSCC system, making the EXIT curves of the source decoder and channel decoder accurately matched.
In this paper, we propose a novel joint source-channel code system based on fixed-length source code and irregular channel code in order to both improve symbol error rate performance and reduce calculation complexity. The main novelties and contributions are summarized as follows.
First, we derive a closed-form expression for the source symbol error rate (SER) of a joint source-channel code system based on a soft-bit source decoder.
Second, we propose a source coding scheme based on fixed-length source codes. By optimizing the fixed-length codeword assignment with the aid of a binary switch algorithm (BSA), the proposed scheme achieves nearly the same decoding complexity as the linear block code in [13], while its symbol error rate and convergence performance are much better than those of the traditional schemes.
Third, based on EXIT chart analysis, we design the joint source-channel code system by introducing a low-complexity, irregular recursive nonsystematic convolutional (RNSC) code. The subcodes are chosen from unity-rate RNSCs with small constraint lengths, so that the overall decoding complexity of the channel decoder is reduced.
Last, via simulation results, we show that the proposed joint source-channel code system realizes a near-capacity iterative decoding performance with a relatively low calculation complexity as compared to benchmark schemes.
The rest of this paper is organized as follows: Section 2 introduces the model of our JSCC system. Section 3 discusses the design guidelines of the source block codes and provides the algorithms for generating the fixed-length codewords. Section 4 presents the optimization of the irregular channel code based on EXIT chart analysis. Then, Section 5 discusses the simulation results. Finally, Section 6 concludes the study.

2. Joint Source-Channel Code System

The system model of our joint source-channel code (JSCC) scheme is illustrated in Figure 1.
We consider the case where a nonuniform symbol distribution is inherent in the source signal. The memoryless source generates a frame of symbols, u = (u_1, u_2, …, u_K), from a discrete alphabet, S = {S_1, S_2, …, S_{N_S}}. The source symbols are encoded by the source encoder, which is a fixed-length block code. The source encoder encodes each symbol u_k into a binary codeword b_k and concatenates the codewords into a bit sequence b. The block code preserves the nonuniform probabilities of the source symbols and introduces some artificial redundancy. These nonuniform probabilities are transmitted as side information and are reliably protected against transmission errors by a low-rate error-correcting code. Thus, the probabilities are perfectly available at the receiver and can later be exploited by the soft source decoder as additional a priori knowledge, leading to enhanced error protection.
After source encoding, the bit sequence is permuted by a random bit-interleaver, and the interleaved bit sequence d is then encoded by a channel encoder. According to [29], a necessary condition for a serially concatenated system to be capacity-achieving is that the inner component is a recursive convolutional code with rate R_C ≥ 1. We employ an irregular recursive nonsystematic convolutional (RNSC) code with unity rate as the channel encoder. The structure of the irregular encoder is depicted in Figure 2. The input sequence d is partitioned into subsequences d_1, d_2, …, d_J according to the weight vector α = (α_1, α_2, …, α_J). Every subsequence is individually encoded by one of the RNSC sub-encoders, and all the sub-encoder outputs are then multiplexed into a single bit sequence v. The rate of the channel coding can be increased by randomly puncturing some bits of v.
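To make the partitioning step concrete, the following minimal Python sketch (not from the paper; the function name and interface are illustrative) splits an interleaved bit sequence into J subsequences whose lengths are proportional to the weight vector α, as performed inside the irregular encoder of Figure 2.

```python
import numpy as np

def partition_by_weights(d, alpha):
    """Split bit sequence d into len(alpha) subsequences with lengths
    proportional to the weights alpha (fractions that sum to 1)."""
    alpha = np.asarray(alpha, dtype=float)
    lengths = np.floor(alpha * len(d)).astype(int)
    lengths[-1] = len(d) - lengths[:-1].sum()   # absorb rounding in the last subcode
    bounds = np.concatenate(([0], np.cumsum(lengths)))
    return [d[bounds[j]:bounds[j + 1]] for j in range(len(alpha))]

# Example: weights alpha = (0.36, 0.64) for two RNSC sub-encoders.
d = np.random.randint(0, 2, 1000)
d1, d2 = partition_by_weights(d, [0.36, 0.64])
```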
Finally, the encoded sequence v is segmented into m-bit labels v_k. The mapper maps each label v_k to a symbol from the M-ary constellation set χ = {χ_1, χ_2, …, χ_M} according to Gray mapping. After modulation, the modulated symbols x_k, k = 1, …, N, are transmitted over a noisy memoryless channel. At the receiver, the received symbols y_k = x_k + n_k, k = 1, …, N, are obtained, where n_k denotes independent and identically distributed complex Gaussian noise with zero mean and variance σ² = N_0.
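As a hedged illustration of the mapping and channel stage (not the authors' code; the exact Gray labelling and the helper names are assumptions of this sketch), the following Python fragment maps 4-bit labels onto a unit-energy Gray-labelled 16QAM constellation and adds complex Gaussian noise with variance σ² = N₀.

```python
import numpy as np

def gray_16qam():
    """Unit-average-energy 16QAM with per-axis Gray labelling (one plausible
    Gray mapping; the paper does not specify the exact labelling)."""
    levels = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(10.0)
    gray = [0b00, 0b01, 0b11, 0b10]                  # Gray order of 2-bit sub-labels
    const = np.zeros(16, dtype=complex)
    for i, gi in enumerate(gray):                    # in-phase bits
        for q, gq in enumerate(gray):                # quadrature bits
            const[(gi << 2) | gq] = levels[i] + 1j * levels[q]
    return const

def transmit(v, es_n0_db, rng=np.random.default_rng(0)):
    """Map coded bits v (length multiple of 4) to 16QAM symbols and add AWGN."""
    const = gray_16qam()
    labels = v.reshape(-1, 4) @ (1 << np.arange(3, -1, -1))
    x = const[labels]
    n0 = 10.0 ** (-es_n0_db / 10.0)                  # Es = 1, so N0 = Es/SNR
    noise = np.sqrt(n0 / 2.0) * (rng.standard_normal(x.size) + 1j * rng.standard_normal(x.size))
    return x + noise
```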
The receiver processes the received symbols, y , to produce an estimate of the source symbols, u ^ , through iterative decoding between the channel decoder and the soft-bit source decoder (SBSD). In this study, the soft channel decoder and the SBSD are implemented in the logarithmic domain, and the reliability information exchanged between them is represented by the log-likelihood ratios (LLRs).
The received symbols y are demodulated to a posteriori LLRs L_M(v) prior to being evaluated in the iterative decoding process. The irregular channel decoder demultiplexes the posterior LLRs into J subsequences according to the partitioning performed inside the irregular encoder. Each of these subsequences is individually decoded by applying the BCJR algorithm [30] to the trellis of the corresponding RNSC code, and the resulting extrinsic LLRs are multiplexed into a single LLR sequence L_CD^e(d).
After channel decoding, the extrinsic LLRs L_CD^e(d) are deinterleaved to obtain the a priori LLRs L_SD^a(b) of the source-coded bits b, which are then input to the SBSD. The SBSD calculates the a posteriori probabilities of each bit of the codewords, exploiting both the a priori LLRs of the coded bits and the occurrence probabilities of the source symbols. The extrinsic output L_SD^e(b) is calculated as
$$L_{SD}^{e}(b_{k,l}) = \ln\!\left[\frac{P\big(b_{k,l}=1 \mid L_{SD}^{a}(\mathbf{b})\big)}{P\big(b_{k,l}=0 \mid L_{SD}^{a}(\mathbf{b})\big)}\right] - L_{SD}^{a}(b_{k,l}) = \ln\frac{\displaystyle\sum_{\mathbf{c}^{(i)} \in C_l^1} p_i \exp\!\Big(\textstyle\sum_{n=1, n\neq l}^{L} c_n^{(i)}\, L_{SD}^{a}(b_{k,n})\Big)}{\displaystyle\sum_{\mathbf{c}^{(i)} \in C_l^0} p_i \exp\!\Big(\textstyle\sum_{n=1, n\neq l}^{L} c_n^{(i)}\, L_{SD}^{a}(b_{k,n})\Big)} \tag{1}$$
where b_{k,n} is the n-th bit of codeword b_k, c_n^{(i)} is the n-th bit of codeword c^{(i)}, and p_i is the a priori knowledge of codeword c^{(i)}, which is determined by the occurrence probability of the corresponding source symbol S_i. C_l^q is the subset of the codebook containing the codewords whose l-th bit equals q, q ∈ {0, 1}. The resulting extrinsic LLRs L_SD^e(b) are interleaved and fed back to the channel decoder.
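A minimal Python sketch of Equation (1) is given below, assuming a small codebook stored as a binary matrix in which every bit position takes both values across the codewords; the function names are illustrative and this is not the authors' implementation. The per-codeword metric log p_i + Σ_n c_n^{(i)} L_SD^a(b_{k,n}) is computed once, and the considered bit is excluded when forming the extrinsic LLR.

```python
import numpy as np
from scipy.special import logsumexp

def sbsd_extrinsic(La, codebook, p):
    """Extrinsic LLRs of Equation (1) for one received codeword.

    La       : a priori LLRs L_SD^a(b_k) of the L coded bits
    codebook : (N_S x L) binary matrix, row i is codeword c^(i)
    p        : occurrence probabilities p_i of the source symbols
    """
    metrics = np.log(p) + codebook @ La          # log p_i + sum_n c_n^(i) * La_n
    L = codebook.shape[1]
    Le = np.zeros(L)
    for l in range(L):
        m = metrics - codebook[:, l] * La[l]     # exclude the considered bit (extrinsic)
        Le[l] = logsumexp(m[codebook[:, l] == 1]) - logsumexp(m[codebook[:, l] == 0])
    return Le
```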
This process is repeated until no further gain in extrinsic information is achieved or an appropriate stopping criterion is satisfied. After the iterative process is finished, the final LLRs L_SD^a(b) are used to perform a maximum a posteriori (MAP) estimation of the source symbols, as follows:
$$\tilde{u}_k = S_{i^{*}}, \qquad i^{*} = \arg\max_{i} P(u_k = S_i \mid \mathbf{y}) \tag{2}$$
where the maximum posterior probability is calculated as
$$\arg\max_{i} P(u_k = S_i \mid \mathbf{y}) = \arg\max_{i} P\big(u_k = S_i \mid L_{SD}^{a}(\mathbf{b}_k)\big) = \arg\max_{i}\; p_i \exp\!\Big(\sum_{n=1}^{L} c_n^{(i)}\, L_{SD}^{a}(b_{k,n})\Big) \tag{3}$$
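The final decision of Equations (2) and (3) reduces to an arg-max over the same log-domain codeword metrics used above; a small sketch (illustrative, consistent with the notation of the previous sketch) follows.

```python
import numpy as np

def map_symbol_decision(La, codebook, p):
    """MAP source-symbol estimate of Equations (2)-(3): return the index i* that
    maximizes p_i * exp(sum_n c_n^(i) * La_n) given the final LLRs La."""
    metrics = np.log(p) + codebook @ La
    return int(np.argmax(metrics))
```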

3. Source Block Code Design

Denote the codebook of the source alphabet S = {S_1, S_2, …, S_{N_S}} as C = {c^{(1)}, c^{(2)}, …, c^{(N_S)}}, where c^{(i)} = (c_1^{(i)}, …, c_L^{(i)}) is the codeword assigned to symbol S_i. All the codewords have the same length L, and the symbols in the alphabet have occurrence probabilities p_i, i = 1, 2, …, N_S.

3.1. Iterative Convergence and Error-Correction Criterion

For the JSCC system in Figure 1, iterative decoding is employed to make the source and channel decoders assist each other and gain maximum benefit from the extrinsic information L_CD^e(d) and L_SD^e(b). Near-entropy source codes with limited redundancy are not suitable for this system, since the iterative gain is limited. Hence, to improve the achievable ISCD performance gain, we artificially introduce redundancy into the source-coded bitstream using redundant fixed-length codes. In fact, iterative decoding is capable of attaining an infinitesimally low decoded bit error rate (BER) if the EXIT curves of the inner and outer decoder components intersect at the (1, 1) point [29]. The relevant design criterion for the fixed-length block codes is that the codewords should have a minimum Hamming distance of d_{H,min} ≥ 2. According to the SBSD in Equation (1), for codewords having a minimum Hamming distance of d_{H,min} ≥ 2, when the a priori information L_SD^a(b) is perfect, the considered bit b_{k,n} is uniquely determined, and perfect extrinsic information L_SD^e(b) can be obtained. Thus, the EXIT curve of the SBSD can reach the (1, 1) point.
Apart from the convergence behavior of iterative decoding, the design of the source code also needs to consider the end-to-end source distortion. The analog source is quantized and discretized before it is input to our source encoder; thus, in this paper, we focus on a discrete source. Our goal is to reconstruct the discrete source symbols as accurately as possible, i.e., to minimize the symbol error rate (SER) [31],
$$D \triangleq \frac{1}{K}\sum_{k=1}^{K} \mathrm{E}\big\{ I\{\tilde{u}_k \neq u_k\} \big\} \tag{4}$$

where E{·} denotes expectation and I{·} is the indicator function, i.e., I{a} equals 1 if a is true and 0 otherwise. For the memoryless source, the SER can be determined as

$$D = P(\tilde{u}_k \neq u_k) = \sum_{i=1}^{N_S} p_i \sum_{j=1, j\neq i}^{N_S} \Pr(\tilde{u}_k = S_j \mid u_k = S_i) \tag{5}$$

where p_i is the occurrence probability of the source symbol S_i and Pr(ũ_k = S_j | u_k = S_i) is the pairwise error probability (PEP) of erroneously detecting S_j instead of S_i. According to the estimate of the source symbol in (3), the symbol decision is determined by the input LLRs L_SD^a(b_{k,n}) of the coded bits. Because the LLRs are permuted by a long random bit-interleaver during decoding, their statistics can be assumed to be independent and identically distributed [32]. Hence, L_SD^a(b) can be characterized by the crossover probability p_c of the coded bits. Then, the sum of PEPs can be formulated as

$$\Pr(\tilde{u}_k \neq S_i \mid u_k = S_i) = \sum_{j=1, j\neq i}^{N_S} \Pr\big(\mathbf{c}^{(j)} \mid \mathbf{c}^{(i)}\big) = \sum_{\mathbf{c} \in C \setminus \{\mathbf{c}^{(i)}\}} p\big(\mathbf{c} \mid \mathbf{c}^{(i)}\big) = \sum_{\mathbf{c} \in C \setminus \{\mathbf{c}^{(i)}\}} (1-p_c)^{\,L - d_H(\mathbf{c}^{(i)}, \mathbf{c})}\, p_c^{\,d_H(\mathbf{c}^{(i)}, \mathbf{c})} \tag{6}$$

where d_H(c^{(i)}, c) is the Hamming distance between the codewords c^{(i)} and c. Substituting Equation (6) into Equation (5), the SER can be computed as follows:

$$D = \sum_{i=1}^{N_S} p_i \sum_{\mathbf{c} \in C \setminus \{\mathbf{c}^{(i)}\}} (1-p_c)^{\,L - d_H(\mathbf{c}^{(i)}, \mathbf{c})}\, p_c^{\,d_H(\mathbf{c}^{(i)}, \mathbf{c})} \tag{7}$$

It is difficult to calculate the value of the crossover probability p_c theoretically. However, since this paper focuses on the estimation of source symbols after well-converged iterative decoding, the error analysis is carried out under the assumption of nearly perfect LLRs, and it is therefore reasonable to set a small value for the crossover probability p_c.
In addition, to avoid ambiguous symbol decisions, the mapping from codewords back to the source symbols must be unique, which means that different source symbols must be mapped to different codewords, i.e., c^{(i)} ≠ c^{(j)} if i ≠ j. Hence, the design of the source codewords C = {c^{(1)}, c^{(2)}, …, c^{(N_S)}} can be formulated as the following optimization problem:

$$\begin{aligned} &\text{Minimize } && D \\ &\text{Subject to } && d_{H,\min} \geq 2, \\ & && \mathbf{c}^{(i)} \neq \mathbf{c}^{(j)} \ \text{ if } i \neq j. \end{aligned} \tag{8}$$

The cost function D can be further simplified by eliminating the terms that do not depend on the codeword assignment:
$$D = \sum_{i=1}^{N_S} p_i \sum_{\mathbf{c} \in C \setminus \{\mathbf{c}^{(i)}\}} \left(\frac{p_c}{1-p_c}\right)^{d_H(\mathbf{c}^{(i)}, \mathbf{c})} \tag{9}$$
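The simplified cost of Equation (9) only needs the pairwise Hamming distances of the assigned codewords; a small Python sketch (illustrative function name, not from the paper) is:

```python
import numpy as np

def assignment_cost(codebook, p, p_c):
    """Cost function of Equation (9): sum_i p_i * sum_{c != c^(i)} (p_c/(1-p_c))^d_H."""
    d = np.sum(codebook[:, None, :] != codebook[None, :, :], axis=2)   # pairwise d_H
    w = (p_c / (1.0 - p_c)) ** d
    np.fill_diagonal(w, 0.0)                                           # exclude c = c^(i)
    return float(p @ w.sum(axis=1))
```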
This paper proposes a simple solution to this optimization problem based on the binary switch algorithm (BSA) [33]. The BSA is the best-known method for searching bit mappings, and it attempts to minimize the cost function by switching pairs of bit labels. Here, a set of binary codewords with fixed length L is generated; then, N_S codewords are chosen from the set and assigned to the source symbols so as to minimize the cost function in Equation (9) while guaranteeing that the codewords have a minimum Hamming distance of d_{H,min} ≥ 2. The procedure for generating the source codebook is formulated as follows (Algorithm 1):
Algorithm 1 Binary Switch Algorithm (BSA)
Step 1: Determine the code length L and the crossover probability p_c.
Step 2: Generate the codeword set C = {c^{(1)}, c^{(2)}, …, c^{(2^L)}}, where codeword c^{(i)} = (c_1^{(i)}, …, c_L^{(i)}) is the natural binary representation of index i.
Step 3: Extend the number of source symbols from N_S to 2^L by adding dummy symbols {S_{N_S+1}, S_{N_S+2}, …, S_{2^L}} with probabilities p_k = 0, N_S < k ≤ 2^L. Then, assign each symbol S_i to the codeword c^{(i)} with the same index and calculate the cost function D according to (9).
Step 4: Rearrange the codewords to minimize the cost function using the BSA:
for p = 1; p ≤ 2^L; p++
  for q = 1; q ≤ 2^L, q ≠ p; q++
    calculate the cost function D̃ according to (9) that would result from switching c^{(p)} and c^{(q)};
    if D̃ < D then
      switch c^{(p)} and c^{(q)}; D = D̃;
    end if
  end for
end for
Step 5: The first N_S rearranged codewords form the set C′. Check the minimum Hamming distance of each codeword in C′. If all the codewords have a minimum Hamming distance d_{H,min} ≥ 2, output the set C′ as the source code. If some codewords have a minimum Hamming distance d_{H,min} = 1, remove them from C′; then, search the codewords of C in order and add those with a minimum Hamming distance d_{H,min} ≥ 2 to C′ until it contains N_S codewords.
The source code can be obtained from the steps described above. It should be noted that the codeword length should be chosen to satisfy L ≥ log₂(N_S) + 1. For a block code with fixed length L, there are at most 2^{L−1} codewords satisfying the minimum distance d_{H,min} = 2. Therefore, to ensure that the codeword assignment is unambiguous, the number of available codewords must satisfy 2^{L−1} ≥ N_S, i.e., L ≥ log₂(N_S) + 1.
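A compact Python sketch of Algorithm 1 is given below; it reuses assignment_cost from the earlier sketch, performs a single BSA sweep as in Step 4, and replaces the repair of Step 5 by a greedy selection over the rearranged codewords (a simplification of this sketch, not the paper's exact procedure).

```python
import numpy as np

def bsa_source_code(probs, L, p_c=1e-4):
    """Generate N_S fixed-length codewords following Algorithm 1 (sketch)."""
    N_S, M = len(probs), 2 ** L
    # Steps 2-3: natural binary codewords for all 2^L indices, zero-padded probabilities.
    book = np.array([[(i >> (L - 1 - n)) & 1 for n in range(L)] for i in range(M)])
    p = np.concatenate([probs, np.zeros(M - N_S)])
    cost = assignment_cost(book, p, p_c)           # cost of Equation (9), defined above
    # Step 4: single sweep over all pairs; keep a switch only if it lowers the cost.
    for a in range(M):
        for b in range(M):
            if b == a:
                continue
            book[[a, b]] = book[[b, a]]            # tentative switch
            new_cost = assignment_cost(book, p, p_c)
            if new_cost < cost:
                cost = new_cost
            else:
                book[[a, b]] = book[[b, a]]        # undo the switch
    # Step 5 (simplified): greedily keep codewords at Hamming distance >= 2
    # from all codewords already selected, until N_S codewords are chosen.
    chosen = []
    for c in book:
        if all(np.sum(c != r) >= 2 for r in chosen):
            chosen.append(c)
        if len(chosen) == N_S:
            break
    return np.array(chosen)
```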

3.2. Design Example

As an example, a Gaussian-distributed source [26] is used. The discrete source symbols obey the occurrence probabilities resulting from 16-ary Lloyd-Max quantization of an independent Gaussian-distributed source. The source entropy is H(S) = 3.76 bits/symbol. The crossover probability is set to p_c = 10⁻⁴. For codeword lengths L = 5, 6, 7, 8, the source codes are generated by applying Algorithm 1. The resulting codewords are detailed in Table 1, along with their corresponding minimum Hamming distances d_{H,min}. For the proposed block codes, the inverse EXIT curves of the SBSD are illustrated in Figure 3, together with the EXIT curve of natural bit mapping [10].
The natural bit mapping approach maps the source symbols to the binary representations of their indices. The codewords have a length of L = 4 and contain no redundant bits. Even though this source code retains the redundancy due to the nonuniform distribution of the source symbols, there is very limited redundancy in the source codewords. As can be seen from Figure 3, the SBSD only provides a small amount of extrinsic information, even if perfect a priori information, i.e., I_SD^a = 1, is provided. The early intersection with the EXIT curve of the channel decoder leads to poor convergence of iterative decoding. Thus, source codes with low redundancy cannot benefit from iterative decoding.
By increasing the codeword redundancy, i.e., decreasing the source code rate, the SBSD can benefit more from the a priori information provided by the channel decoder. This can be seen for our proposed codes with lengths L = 5, 6, 7, 8. It is also shown that if the minimum Hamming distance of the codewords satisfies d_{H,min} ≥ 2, the EXIT curve of the SBSD reaches the (1, 1) point. In this case, it is possible to design an appropriate channel encoder that ensures reliable extrinsic information can be obtained by iterative decoding. On the other hand, the coding efficiency of FLCs may be reduced due to the high residual redundancy. In our joint system design, to keep the overall rate of the system constant, the rate loss caused by the source coding redundancy is compensated by a channel code with code rate R_C > 1.
The convergence of the iterative decoder relies on the amount of redundancy at both sides of the interleaver. If a source decoder has neither redundancy nor channel measures, the iterative decoding is almost useless and does not perform much better than separate decoding.

4. Irregular Channel Code for Iterative Source Channel Decoding

The EXIT chart is an efficient tool for designing iterative decoding systems [34], and it provides a fast approach for predicting the performance of the iterative decoder accurately. In this section, the channel encoder is optimized based on the EXIT chart analysis. In particular, the EXIT chart of the proposed system is first analyzed concerning the design guidelines. Then, an appropriate irregular channel code is designed to make the iterative decoding have a good convergence.

4.1. EXIT Chart Analysis for Iterative Source Channel Decoding

Generally, the EXIT chart of a serially concatenated system has two EXIT curves, i.e., the EXIT curve of the inner decoder and the inverse EXIT curve of the outer decoder. The EXIT curve exploits the mutual information (MI) between the data bits at the transmitter and the LLRs at the receiver to describe the input–output relation of each constituent decoder [32].
For our JSCC system, illustrated in Figure 1, the irregular channel decoder and the SBSD serve as the inner decoder and the outer decoder, respectively. The EXIT curve of the channel decoder, Γ_CD, and the inverse EXIT curve of the SBSD, Γ_SD, can be expressed as

$$I_{CD}^{e} = \Gamma_{CD}\big(I_{CD}^{a} \mid L_{M}(\mathbf{v})\big) \tag{10}$$

and

$$I_{SD}^{e} = \Gamma_{SD}\big(I_{SD}^{a} \mid P_{S}\big) \tag{11}$$
where I_CD^a is the mutual information between the bits d and the LLRs L_CD^a(d), I_CD^e is the mutual information between the bits d and the LLRs L_CD^e(d), I_SD^a is the mutual information between the bits b and the LLRs L_SD^a(b), and I_SD^e is the mutual information between the bits b and the LLRs L_SD^e(b). The EXIT curve of the channel decoder depends on the posterior LLRs of the demodulator, L_M(v), and the EXIT curve of the SBSD is also affected by the source statistics, P_S. These EXIT curves can be obtained through Monte Carlo simulation.
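For reference, the mutual information measurement behind such Monte Carlo EXIT curves can be sketched as follows (a common time-averaging estimator that assumes equiprobable bits and a Gaussian model for the synthetic a priori LLRs; this is an illustration, not the authors' code).

```python
import numpy as np

rng = np.random.default_rng(0)

def mutual_information(bits, llrs):
    """Estimate I(B; L) from matched samples of bits (0/1) and their LLRs,
    assuming equiprobable bits: I = 1 - E[log2(1 + exp(-(2b-1)L))]."""
    s = (2.0 * bits - 1.0) * llrs
    return 1.0 - np.mean(np.logaddexp(0.0, -s)) / np.log(2.0)

def gaussian_a_priori(bits, sigma):
    """Synthetic a priori LLRs, L ~ N((2b-1)*sigma^2/2, sigma^2); sweeping sigma
    from 0 upward sweeps the a priori mutual information I_a from 0 toward 1."""
    return (2.0 * bits - 1.0) * sigma ** 2 / 2.0 + sigma * rng.standard_normal(bits.size)

# One point of an EXIT curve: generate a priori LLRs and measure I_a at the decoder
# input; I_e would be measured the same way on the decoder's extrinsic output.
bits = rng.integers(0, 2, 200_000).astype(float)
La = gaussian_a_priori(bits, sigma=2.0)
print("I_a ≈", mutual_information(bits, La))
```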
The design of the encoders is based on the following property of the EXIT chart: the system can achieve an infinitesimally low BER through iterative decoding only if there is an open tunnel between the EXIT curves [29]. Thus, the encoders should be designed so that their EXIT curves satisfy Γ_CD(I) > Γ_SD^{-1}(I) for I ∈ [0, 1) at a low SNR, allowing the system to achieve near-capacity performance. An effective way to design well-matched encoders is to apply the concept of irregular codes [23].
Irregular source block codes are not suitable for our scheme. In irregular block codes, high-rate codes have to be used together with low-rate codes, and high-rate block codes naturally have a lower minimum Hamming distance and, therefore, a lower symbol error-correction capability. To preserve the error-correction capability, we retain a regular source block code in our scheme. Instead, well-matched EXIT curves are attained by designing an irregular inner channel code for the given source code.

4.2. Irregular Channel Code Design

Irregular convolutional codes have been successfully employed as inner codes in the context of iterative source-channel decoding in [10]. Here, we use recursive nonsystematic convolutional (RNSC) codes with rate 1 as the subcodes. Considering the computational complexity of the RNSC decoder, we limit the maximum constraint length to D ≤ 3, which gives 2^{2D−1} different RNSC codes. In addition, the RNSC subcodes should have EXIT curves with diverse shapes, so that the EXIT curve of the irregular channel code can match the EXIT curve of the source code more accurately. We therefore select the five RNSC codes with the most dissimilar EXIT curves, as shown in Figure 4. Their octal generator polynomials are (3, 2)_8, (5, 7)_8, (6, 7)_8, (7, 4)_8, and (7, 6)_8.
Denote by α = (α_1, α_2, …, α_J) the weight vector of the different RNSC subcodes. The overall EXIT characteristic of the channel code is the weighted superposition of the EXIT characteristics of the individual RNSC subcodes, i.e.,

$$\Gamma_{CD}(I) = \sum_{j=1}^{J} \alpha_j\, \Gamma_{CD,j}(I) \tag{12}$$

The EXIT curves of the encoders should then satisfy Σ_{j=1}^{J} α_j Γ_{CD,j}(I) > Γ_SD^{-1}(I) for I ∈ [0, 1) at a low SNR. Considering P sample points of the EXIT curves, the curve-fitting problem can be formulated as the following optimization problem:
$$\boldsymbol{\alpha}_{\mathrm{opt}} = \arg\min_{\boldsymbol{\alpha}} \big\lVert \mathbf{C}_{CD}\,\boldsymbol{\alpha} - \mathbf{c}_{SD,\mathrm{inv}} \big\rVert_2 \tag{13}$$

subject to

$$e_i > 0, \quad i = 1, \ldots, P \tag{14}$$

$$\sum_{j=1}^{J} \alpha_j = 1 \tag{15}$$

$$0 \leq \alpha_j \leq 1, \quad \forall j \in \{1, \ldots, J\} \tag{16}$$

The matrix C_CD of dimension P × J contains P sample points of each RNSC decoder EXIT curve. The vector c_SD,inv contains P sample points of the inverse EXIT curve of the SBSD. The width of the decoding tunnel is e = C_CD α − c_SD,inv. Constraint (14) ensures an open decoding tunnel, while constraints (15) and (16) guarantee the validity of the weights. As all subcodes have rate 1, no additional rate constraint needs to be considered.
The optimization problem can be solved by the steepest descent algorithm described in [23]. As an example, the optimum weights for the source code with length L = 6 are summarized in Table 2, and the EXIT curves of the irregular and regular RNSC decoders and of the SBSD are illustrated in Figure 5. An open tunnel between the inverse EXIT curve of the SBSD and the EXIT curve of the irregular channel decoder is obtained at an SNR of E_s/N_0 = 7.8 dB, whereas for the regular channel code an open tunnel is not obtained until 8.2 dB. The waterfall performance of ISCD is thus improved by employing the irregular channel code.
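As an illustration of the curve-fitting step of Equations (13)-(16), the sketch below uses SciPy's SLSQP solver in place of the steepest-descent procedure of [23]; the small margin eps used to enforce a strictly open tunnel is an assumption of this sketch.

```python
import numpy as np
from scipy.optimize import minimize

def fit_irregular_weights(C_cd, c_sd_inv, eps=1e-3):
    """Fit subcode weights: minimize ||C_cd @ alpha - c_sd_inv||^2 subject to an
    open tunnel (Eq. 14), weights summing to 1 (Eq. 15), and 0 <= alpha_j <= 1 (Eq. 16).

    C_cd     : (P x J) matrix of sampled EXIT curves of the J RNSC subcodes
    c_sd_inv : (P,) sampled inverse EXIT curve of the SBSD
    """
    P, J = C_cd.shape
    objective = lambda a: float(np.sum((C_cd @ a - c_sd_inv) ** 2))
    constraints = [
        {"type": "eq",   "fun": lambda a: np.sum(a) - 1.0},               # Eq. (15)
        {"type": "ineq", "fun": lambda a: C_cd @ a - c_sd_inv - eps},      # Eq. (14)
    ]
    res = minimize(objective, np.full(J, 1.0 / J), method="SLSQP",
                   bounds=[(0.0, 1.0)] * J, constraints=constraints)      # Eq. (16)
    return res.x, res.success
```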

5. Results and Discussion

5.1. Systems Parameters

In this section, to verify the performance of the proposed JSCC system in terms of both symbol error rate and decoding complexity, we choose four benchmark schemes for comparison: (1) the separate source-channel code (SSCC) scheme, (2) the VLC-IrRNSC scheme, (3) the LBC-RSC scheme, and (4) the LBC-IrRNSC scheme. Simulations are performed with the Gaussian-distributed source of [26]. The discrete source symbols obey the occurrence probabilities resulting from 16-ary Lloyd-Max quantization of an independent Gaussian-distributed source, and the source entropy is H(S) = 3.76 bits/symbol. Table 3 lists the constituent encoders of the different systems. For all systems, 16QAM with Gray mapping is used as the modulation scheme, and a random interleaver is used. The number of source symbols per frame is set to 5000. An additive white Gaussian noise (AWGN) channel is assumed for transmission. For a fair comparison, all systems are designed to have the same overall code rate of about R = 0.66, i.e., 2.64 bits/channel-use for 16QAM; the Shannon limit for a spectral efficiency of 2.64 bits/channel-use is about E_b/N_0 = 3.4 dB. To satisfy this requirement on the overall code rate, the channel code rate and source code rate are adjusted accordingly in each system.
The system consisting of a Huffman code and a Turbo code is taken as the benchmark for the separate source-channel code (SSCC) system. The Turbo code protects the Huffman-coded bits from channel errors; it consists of two recursive systematic convolutional (RSC) codes with generator polynomial (7, 5)_8, and its output bits are punctured to obtain a code rate of 2/3. At the receiver, noniterative source-channel decoding is used.
Iterative source-channel decoding is used in the JSCC systems, i.e., the VLC-IrRNSC scheme, the LBC-RSC scheme, the LBC-IrRNSC scheme, and our scheme. The VLC-IrRNSC scheme employs a VLC as the source code and an irregular RNSC code as the channel code. In this scheme, the VLC [6] is constructed for a given free distance of d_f = 3, and the obtained codewords have an average length of L̄ = 5.92. The output bits of the VLC encoder are passed through an interleaver and forwarded to the irregular RNSC encoder with rate 33/32. The irregular code is composed of the RNSC subcodes (3, 2)_8 and (6, 7)_8 with optimum weights of 0.86 and 0.14. In the linear block code (LBC)-based JSCC schemes [13], the LBCs are generated from a linear parity-check code. The LBC-RSC scheme employs an LBC with codeword length L = 5 as the source code and an 8/9-rate RSC code as the channel code; the LBC adds a single parity bit to the 4-bit quantized source symbol, and the optimized RSC code with generator polynomial (5, 7)_8 introduces additional redundancy for error correction. The LBC-IrRNSC scheme employs an LBC with codeword length L = 6 as the source code and an irregular RNSC code as the channel code; the LBCs are generated from the rate-2/3 linear parity-check code in [15]. The subcodes of its irregular encoder are (3, 2)_8 and (7, 6)_8, and their optimum weights are 0.08 and 0.92. The output bits of the irregular code are randomly punctured, increasing the channel code rate to 22/21. In the proposed scheme, the optimized block code with length L = 6 is generated by Algorithm 1, and the parameters of the irregular RNSC code are listed in Table 4.

5.2. Convergence and SER Performance

We use EXIT charts to analyze the convergence performance of the schemes in Table 3. Figure 6a-c illustrate the well-matched EXIT curves of the outer SBSD and the irregular channel decoder. The decoding trajectories are also obtained by recording the mutual information at the input and output of the decoders during the Monte Carlo simulation. It can be seen that, at a bit SNR E_b/N_0 of 4.2 dB, an open EXIT tunnel is obtained and the decoding trajectory reaches the (1, 1) point in the proposed scheme, which indicates that the convergence threshold of iterative decoding is 4.2 dB. Similarly, open EXIT tunnels are obtained at 4.6 dB for both the LBC-IrRNSC scheme and the VLC-IrRNSC scheme. With the aid of the EXIT chart, the convergence threshold of each scheme is obtained; these thresholds are listed in Table 5 for comparison. It can be concluded that the SSCC scheme has the worst convergence performance, and our scheme with L = 6 achieves the best convergence performance among the compared JSCC schemes.
The SER performance of all the schemes is illustrated in Figure 7. It can be seen that the proposed scheme achieves the best performance among the compared schemes. The proposed scheme reaches the waterfall region at about E_b/N_0 = 4.2 dB, which is within 1 dB of the Shannon limit. The simulation results also agree with the EXIT chart predictions.
Compared to the SSCC scheme, the proposed scheme obtains about a 1 dB gain at the SER of 10−4. The parameters of source and channel encoders are jointly optimized in our scheme. Therefore, the residual redundancy in the source-coded bits is efficiently exploited by ISCD, resulting in SNR gains. In comparison with the VLC-IrRNSC scheme, our scheme has a 0.5 dB gain. The performance improvement can be attributed to the inherent synchronization of the fixed-length codes and the well-matched IrRNSC code. The VLC codes suffer from error propagation in source symbol decisions when some errors remain after iterative decoding, while our fixed-length block code avoids the problem. Additionally, the fixed-length codes also have a low decoding complexity.
Compared with the LBC-RSC scheme and the LBC-IrRNSC scheme, our scheme obtains gains of 0.8 dB and 0.3 dB, respectively, at an SER of 10⁻⁴. This improvement is obtained by optimizing the error-correction properties of the fixed-length codewords and the irregular channel code without decreasing the transmission efficiency.
For the quantized source symbols, the reconstruction SNR (RSNR) is also illustrated in Figure 8. It can be seen that, in the region of E_b/N_0 ≥ 4 dB, our proposed scheme with L = 6 also has the best RSNR performance among the compared schemes, which is consistent with its superior SER performance.

5.3. Decoding Complexity Analysis

We measured the decoding time of each scheme on a PC with an Intel Core i5 3.1 GHz CPU and 4 GB of RAM. The Monte Carlo simulations were run in MATLAB 2018, and all schemes were executed on a single core. The simulation decoding times under different SNRs are listed in Table 6.
It can be seen that, for the JSCC schemes with a source block code, namely, the LBC-RSC scheme, the LBC-IrRNSC scheme, and our scheme, the decoding time varies from 8.7 s to 19.5 s. The decoding time of the VLC-IrRNSC scheme varies from 122.5 s to 217.9 s over the different SNRs, while the SSCC scheme consumes more than 20 s per frame. This indicates that the proposed scheme has a low decoding complexity.
The decoding complexity mainly depends on the computational complexity of the constituent decoders and on the number of decoding iterations. In our scheme, the decoder of the source block code is realized by Equation (1), and its decoding complexity depends on the size of the source alphabet and the codeword length, while the channel decoder employs the BCJR algorithm. In our simulations, the size of the source alphabet is 16 and the codeword length satisfies L ≤ 6. Compared to the channel convolutional decoders, which apply the BCJR algorithm to a long code trellis, the source block decoder has a lower computational complexity.
The VLC decoder is modeled by a bit-level trellis whose state space is the set of internal nodes of the VLC binary tree, and the size of the trellis can be considered a measure of the computational decoding complexity [35]. Because the maximum constraint length of the channel code is limited to D ≤ 3, the average number of states of the channel code satisfies |T_C| ≤ 4. The VLC with free distance d_f = 3 has |T_S| = 48 states, which is at least 10 times |T_C|. Hence, the decoding complexity of the VLC decoder is much higher than that of the channel decoder and of the source block code, which is why the VLC-IrRNSC scheme takes the longest decoding time.

6. Conclusions

This paper presents a novel, low-complexity joint source-channel code system consisting of a fixed-length source block code and an irregular channel code. Block codes with a minimum Hamming distance of d_{H,min} ≥ 2 are designed to guarantee the convergence of ISCD and to reduce the SER of the source symbol decision. In addition, a well-matched irregular channel code is employed to further improve the waterfall performance of ISCD. The irregular channel code is constituted by unity-rate RNSC codes with small constraint lengths and, therefore, has a low decoding complexity. EXIT charts are used to analyze the convergence behavior of ISCD. The proposed system achieves a near-capacity performance by exploiting both the statistical redundancy of the source and the artificially introduced redundancy of the source code. As the simulation results for the Gaussian-distributed source show, the proposed system outperforms the SSCC system by more than 1 dB. Compared to the JSCC system based on LBCs, the optimized block code has better error-correction properties. Furthermore, in comparison with the JSCC system based on VLCs, the proposed scheme achieves a gain of about 0.5 dB and also reduces the decoding complexity. Our future work will extend the JSCC system to other nonstandard coding channels and investigate its employment in multimedia applications.

Author Contributions

Conceptualization, methodology, and writing: all authors; software and data curation: H.B.; supervision and project administration: C.Z. and S.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers No. 61571416 and No. 61271282, and the Award Foundation of the Chinese Academy of Sciences, grant number No. 2017-6-17.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Shannon, C.E.; Weaver, W. The Mathematical Theory of Communication; University of Illinois Press: Champaign, IL, USA, 1949.
2. Bauer, R.; Hagenauer, J. On variable length codes for iterative source/channel decoding. In Proceedings of the DCC 2001, Data Compression Conference, Snowbird, UT, USA, 27–29 March 2001; pp. 273–282.
3. Hagenauer, J.; Bauer, R. The turbo principle in joint source channel decoding of variable length codes. In Proceedings of the 2001 IEEE Information Theory Workshop (Cat. No.01EX494), Cairns, QLD, Australia, 2–7 September 2001; pp. 33–35.
4. Clevorn, T.; Brauers, J.; Adrat, M.; Vary, P. Turbo decodulation: Iterative combined demodulation and source-channel decoding. IEEE Commun. Lett. 2005, 9, 820–822.
5. Jaspar, X.; Vandendorpe, L. Design and performance analysis of joint source-channel turbo schemes with variable length codes. IEEE Int. Conf. Commun. 2005, 1, 526–530.
6. Wu, T.Y.; Chen, P.N. On the Design of Variable-Length Error-Correcting Codes. IEEE Trans. Commun. 2013, 61, 3553–3565.
7. Huang, C.; Wu, T.; Chen, P.; Alajaji, F.; Han, Y.S. An Efficient Tree Search Algorithm for the Free Distance of Variable-Length Error-Correcting Codes. IEEE Commun. Lett. 2018, 22, 474–477.
8. Chen, Y.; Wu, F.; Li, C.; Varshney, P.K. An Efficient Construction Strategy for Near-Optimal Variable-Length Error-Correcting Codes. IEEE Commun. Lett. 2019, 23, 398–401.
9. Chen, Y.-M.; Wu, F.-T.; Li, C.-P.; Varshney, P.K. On the Design of Near-Optimal Variable-Length Error-Correcting Codes for Large Source Alphabets. IEEE Trans. Commun. 2020, 68, 7896–7910.
10. Adrat, M.; Vary, P. Iterative Source-Channel Decoding: Improved System Design Using EXIT Charts. EURASIP J. Adv. Signal Process. 2005, 2005, 178541.
11. Schmalen, L.; Adrat, M.; Clevorn, T.; Vary, P. EXIT Chart Based System Design for Iterative Source-Channel Decoding with Fixed-Length Codes. IEEE Trans. Commun. 2011, 59, 2406–2413.
12. Schmalen, L.; Vary, P. Iterative Source–Channel Decoding with Reduced Error Floors. IEEE J. Sel. Top. Signal Process. 2011, 5, 1577–1587.
13. Khalil, A.; Minallah, N.; Awan, M.A.; Khan, H.U.; Khan, A.S.; Rehman, A.U. On the Performance of Wireless Video Communication Using Iterative Joint Source Channel Decoding and Transmitter Diversity Gain Technique. Wirel. Commun. Mob. Comput. 2020, 2020, 88.
14. Minallah, N.; Ullah, K.; Ullah, K.; Khan, I.U.; Khattak, K.S. Efficient Wireless Video Communication using Sophisticated Channel Coding and Transmitter Diversity Gain Technique. 2020. Available online: https://assets.researchsquare.com/files/rs-35714/v1_covered.pdf?c=1631836661 (accessed on 10 July 2021).
15. Khan, H.U.; Minallah, N.; Masood, A.; Khalil, A.; Frnda, J.; Nedoma, J. Performance Analysis of Sphere Packed Aided Differential Space-Time Spreading with Iterative Source-Channel Detection. Sensors 2021, 21, 5461.
16. Minallah, N.; Butt, M.F.U.; Khan, I.U.; Ahmed, I.; Khattak, K.S.; Qiao, G.; Liu, S. Analysis of Near-Capacity Iterative Decoding Schemes for Wireless Communication Using EXIT Charts. IEEE Access 2020, 8, 124424–124436.
17. Minallah, N.; Ahmed, I.; Frnda, J.; Khattak, K.S. Averting BER Floor with Iterative Source and Channel Decoding for Layered Steered Space-Time Codes. Sensors 2021, 21, 6502.
18. Bocharova, I.E.; Fabregas, A.G.i.; Kudryashov, B.D.; Martinez, A.; Campo, A.T.; Vazquez-Vilar, G. Low-complexity fixed-to-fixed joint source-channel coding. In Proceedings of the 8th International Symposium on Turbo Codes and Iterative Information Processing (ISTC), Bremen, Germany, 18–22 August 2014; pp. 132–136.
19. Dong, Y.; Niu, K.; Dai, J.; Wang, S.; Yuan, Y. Joint source and channel coding using double polar codes. IEEE Commun. Lett. 2021, 25, 2810–2814.
20. He, J.; Wang, L.; Chen, P. A joint source and channel coding scheme base on simple protograph structured codes. In Proceedings of the 2012 International Symposium on Communications and Information Technologies (ISCIT), Gold Coast, QLD, Australia, 2–5 October 2012; pp. 65–69.
21. Chen, C.; Wang, L.; Xiong, Z. Matching criterion between source statistics and source coding rate. IEEE Commun. Lett. 2015, 19, 1504–1507.
22. Wu, C.; Chung, W. Iterative source-channel decoding design using distortion based index assignment and joint redundant information. In Proceedings of the SiPS 2013, Taipei, Taiwan, 16–18 October 2013; pp. 95–99.
23. Tuchler, M.; Hagenauer, J. EXIT charts of irregular codes. In Proceedings of the Conference on Information Sciences and Systems, Princeton University, Princeton, NJ, USA, 20–22 March 2002.
24. Minallah, N.; Ullah, K.; Frnda, J.; Cengiz, K.; Javed, M.A. Transmitter Diversity Gain Technique Aided Irregular Channel Coding for Mobile Video Transmission. Entropy 2021, 23, 235.
25. Schmalen, L.; Vary, P. Error resilient turbo compression of source codec parameters using inner irregular codes. In Proceedings of the 2010 International ITG Conference on Source and Channel Coding (SCC), Siegen, Germany, 18–21 January 2010; pp. 1–6.
26. Maunder, R.G.; Hanzo, L. Near-capacity irregular variable length coding and irregular unity rate coding. IEEE Trans. Wirel. Commun. 2009, 8, 5500–5507.
27. Hanzo, L. Near-Capacity Variable-Length Coding: Regular and EXIT-Chart-Aided Irregular Designs; John Wiley & Sons: Hoboken, NJ, USA, 2010; Volume 20.
28. Maunder, R.G.; Zhang, W.; Wang, T.; Hanzo, L. A Unary Error Correction Code for the Near-Capacity Joint Source and Channel Coding of Symbol Values from an Infinite Set. IEEE Trans. Commun. 2013, 61, 1977–1987.
29. Ashikhmin, A.; Kramer, G.; Brink, S.T. Extrinsic information transfer functions: Model and erasure channel properties. IEEE Trans. Inf. Theory 2004, 50, 2657–2673.
30. Bahl, L.; Cocke, J.; Jelinek, F.; Raviv, J. Optimal decoding of linear codes for minimizing symbol error rate (corresp.). IEEE Trans. Inf. Theory 1974, 20, 284–287.
31. Jaspar, X.; Guillemot, C.; Vandendorpe, L. Joint Source–Channel Turbo Techniques for Discrete-Valued Sources: From Theory to Practice. Proc. IEEE 2007, 95, 1345–1361.
32. El-Hajjar, M.; Hanzo, L. EXIT Charts for System Design and Analysis. IEEE Commun. Surv. Tutor. 2014, 16, 127–153.
33. Schreckenbach, F.; Gortz, N.; Hagenauer, J.; Bauch, G. Optimization of symbol mappings for bit-interleaved coded Modulation with iterative decoding. IEEE Commun. Lett. 2003, 7, 593–595.
34. Nguyen, H.V.; Xu, C.; Ng, S.X.; Hanzo, L. Near-Capacity Wireless System Design Principles. IEEE Commun. Surv. Tutor. 2015, 17, 1806–1833.
35. Jaspar, X.; Vandendorpe, L. Joint source-channel codes based on irregular turbo codes and variable length codes. IEEE Trans. Commun. 2008, 56, 1824–1835.
Figure 1. The system model of the proposed JSCC scheme.
Figure 2. Irregular channel encoder.
Figure 3. The EXIT curves of different source codes.
Figure 4. EXIT curves of different RNSC codes, AWGN channel, SNR = 8 dB, 16QAM Gray mapping.
Figure 5. EXIT curve of the optimized channel code for the source code with L = 6.
Figure 6. The EXIT chart of the different JSCC schemes using irregular RNSC codes. (a) The EXIT chart of the VLC-IrRNSC scheme. (b) The EXIT chart of the LBC-IrRNSC scheme with L = 6. (c) The EXIT chart of the proposed scheme with L = 6.
Figure 7. The SER performance of different systems over the AWGN channel.
Figure 8. The RSNR performance of different systems over the AWGN channel.
Table 1. Parameters of optimized source codewords with different code lengths.
| Length | Code Rate | Codewords in Decimal | Minimum Hamming Distance |
| L = 5 | 0.75 | 24, 3, 6, 3, 18, 29, 15, 10, 9, 30, 5, 20, 27, 12, 0, 17 | 2 |
| L = 6 | 0.63 | 6, 23, 51, 10, 30, 17, 43, 56, 39, 12, 3, 36, 29, 0, 45, 54 | 2 |
| L = 7 | 0.54 | 41, 21, 2, 79, 86, 12, 125, 65, 100, 115, 88, 62, 27, 106, 39, 48 | 3 |
| L = 8 | 0.47 | 120, 102, 85, 75, 51, 45, 30, 0, 135, 153, 170, 180, 204, 210, 225, 255 | 4 |
Table 2. Parameters of the channel code matched with the L = 6 source code.
| Source Code | Channel Code | SNR for an Open Tunnel |
| L = 6 | Regular RNSC code (7, 4) | 8.2 dB |
| L = 6 | Irregular RNSC code: subcodes (3, 2), (7, 6); optimum weights [0.36, 0.64] | 7.8 dB |
Table 3. Parameters of different source channel coding schemes.
| Scheme | Source Code | Source Code Rate | Channel Code | Channel Code Rate | Overall Rate |
| SSCC | Huffman | 0.99 | Rate-3/5 Turbo code | 2/3 | 0.66 |
| VLC-IrRNSC | Variable-length code, d_f = 3 | 0.64 | Irregular RNSC code | 33/32 | 0.66 |
| LBC-RSC | Linear block code, L = 5 | 0.75 | RSC code | 8/9 | 0.66 |
| LBC-IrRNSC | Linear block code, L = 6 | 0.63 | Irregular RNSC code | 22/21 | 0.66 |
| Proposed, L = 6 | Optimized block code, L = 6 | 0.63 | Irregular RNSC code | 22/21 | 0.66 |
Table 4. Parameters of the irregular RNSC code matched with different source codes.
| Source Code | Irregular RNSC Channel Code | Bit SNR for an Open Tunnel |
| Variable-length code, d_f = 3 | Subcodes (3, 2), (6, 7); optimum weights [0.86, 0.14] | 4.6 dB |
| Linear block code, L = 6 | Subcodes (3, 2), (7, 6); optimum weights [0.08, 0.92] | 4.6 dB |
| Proposed, L = 6 | Subcodes (3, 2), (7, 6); optimum weights [0.36, 0.64] | 4.2 dB |
Table 5. Convergence performance of different source channel coding schemes.
| Scheme | Convergence Threshold |
| SSCC | 5.2 dB |
| LBC-RSC | 4.8 dB |
| VLC-IrRNSC | 4.6 dB |
| LBC-IrRNSC | 4.6 dB |
| Proposed, L = 6 | 4.2 dB |
Table 6. Simulation decoding time of different source channel coding schemes.
| Scheme | Decoding Time at 4 dB (s) | at 4.5 dB (s) | at 5 dB (s) |
| SSCC | 24.8 | 26.8 | 22.3 |
| VLC-IrRNSC | 161.4 | 217.9 | 122.5 |
| LBC-RSC, L = 5 | 11.6 | 19.5 | 12.5 |
| LBC-IrRNSC, L = 6 | 12.4 | 16.1 | 10.5 |
| Proposed, L = 6 | 13.3 | 11.4 | 8.7 |
