Article

Decoding Quadratic Residue Codes Using Deep Neural Networks

by Ming Wang, Yong Li, Rui Liu, Huihui Wu, Youqiang Hu and Francis C. M. Lau

1 College of Computer Science, Chongqing University, Chongqing 400044, China
2 Department of Electrical and Computer Engineering, McGill University, Montreal, QC H3A 0G4, Canada
3 College of Automation, Chongqing University, Chongqing 400044, China
4 Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, Hong Kong, China
* Authors to whom correspondence should be addressed.
Electronics 2022, 11(17), 2717; https://doi.org/10.3390/electronics11172717
Submission received: 5 August 2022 / Revised: 25 August 2022 / Accepted: 26 August 2022 / Published: 30 August 2022
(This article belongs to the Special Issue Multirate and Multicarrier Communication)

Abstract

In this paper, a low-complexity decoder based on a neural network is proposed to decode binary quadratic residue (QR) codes. The proposed decoder builds on the neural min-sum algorithm and the modified random redundant decoder (mRRD) algorithm. The new method has the same asymptotic time complexity as the min-sum algorithm, which is much lower than that of the difference of syndromes (DS) algorithm. Simulation results show that the proposed algorithm achieves a gain of more than 0.4 dB compared to the DS algorithm. Furthermore, a simplified approach based on trapping sets is applied to reduce the complexity of the mRRD; this simplification incurs only a slight loss in error performance while reducing the implementation complexity.

1. Introduction

Binary channel codes are of significant importance in physical-layer digital communications, and the minimum Hamming distance has a direct effect on their error correction performance. As a class of cyclic Bose–Chaudhuri–Hocquenghem (BCH) codes with rates no less than 1/2, the binary quadratic residue (QR) codes [1] generally have large minimum Hamming distances, and many of them are known to have the best error performance among binary codes of similar length and rate. Practically, the (24,12,8) extended QR code has been used in imaging systems for space exploration [2] and high-frequency radio systems [3].
To the best of our knowledge, algebraic algorithms are the only approaches for decoding QR codes. To date, the most efficient general algorithm has been the difference of syndromes (DS) algorithm [4], a look-up table decoding method. The complexity of the DS algorithm grows rapidly with the code length; it can therefore hardly be implemented on devices with limited hardware resources, especially for QR codes of long block lengths. For example, to decode the (127,64,19) QR code, the DS algorithm may need more than 4 × 10^7 finite field additions and 2 × 10^6 real additions. Recently, Duan et al. [5] developed an improved version of the DS algorithm, the IDS algorithm, which further speeds up the decoding of QR codes of lengths below 100. However, the IDS algorithm is still too complex to decode many longer QR codes; consequently, the error-correcting performance of most QR codes of lengths beyond 150 remains unknown. On the other hand, with the rapid development of graphics processing units, deep learning (DL)-based encoding/decoding schemes have been extensively investigated (see [6,7,8,9,10,11,12,13,14,15] and references therein). They provide alternative solutions to channel decoding and also motivate our work on developing neural decoders for QR codes, because the complexity of neural decoders has a lower order of growth than that of the DS algorithm.
In general, DL-aided neural decoders can be designed using either data-driven or model-driven approaches. The main disadvantages of purely data-driven decoders/autoencoders lie in their high training complexity and weak scalability to long block lengths, as such decoders depend only on random training data. Additionally, the performance advantage of neural channel codes stems from their continuous codeword distributions, as opposed to binary codewords [6,11]; they are therefore not compatible with current digital communication systems. As an alternative, model-driven neural decoders are built from widely deployed decoding algorithms and can avoid the aforementioned drawbacks. One of the most commonly used decoding schemes is the belief propagation (BP) algorithm, which performs decoding on a sparse factor graph. The BP algorithm is known to be optimal for Tanner graphs without cycles; in practice, it achieves satisfactory performance for decoding LDPC codes whose graphs contain only a limited number of short cycles. Inspired by this fact, researchers have built various BP-driven neural decoders [7,8,10,14,15,16,17,18]. In [16], Gruber et al. observed that a neural network could generalize to codewords not seen during training. In [17], Nachmani et al. proposed a neural network structure obtained by unrolling the iterative decoding structure of the BP algorithm. By doing so, the weights associated with the edges of the original Tanner graph can be optimized during training, which enhances the decoding performance. In [18], Lugosch et al. further reduced the complexity by generalizing the min-sum algorithm to neural networks. Notably, RNN-based and graph-neural-network-based BP-driven neural decoders have been investigated in [10] and [8], respectively. In addition to these BP-driven neural decoders, a neural decoder driven by the conventional max-log-maximum a posteriori (MAP) algorithm was designed to decode turbo codes in [12].
It is worth pointing out that the performance of BP decoders deteriorates severely for high-density parity-check (HDPC) codes such as BCH codes and QR codes. This is because the corresponding graphs of these codes always contain many short cycles. In [10], BP-driven neural decoders were utilized to decode linear codes such as BCH codes. Moreover, the authors of [10] also applied the proposed BP-driven neural decoder within the modified random redundant decoder (mRRD) [19], and the resulting neural mRRD method could approach the maximum likelihood (ML) performance. However, it suffers from extensive storage and computational complexity owing to the large permutation group and number of iterations. For instance, the size of the permutation group of the (23,12,7) QR code amounts to around 10^7, which greatly limits practical circuit implementations.
In [15], a deep-learning-aided node-classified redundant decoding algorithm was presented for decoding BCH codes. The algorithm first partitioned the variable nodes into different categories using the k-median method. Then, a list of bit-position permutations was produced, and these permutations were sorted according to their reliability metrics. Finally, the BP-driven neural decoder conducted decoding based on the sorted permutations. It was shown that certain permutations indeed performed better for some specific codewords, but a substantial number of computations was still required for evaluating and sorting the permutations. More recently, a BP-driven neural decoder imposing a cyclically invariant structure was proposed for decoding BCH codes [7].
In order to address the inefficient design of BP-driven neural decoders for binary QR codes, this paper proposes a simplified neural decoder based on the min-sum algorithm and the mRRD algorithm. The proposed neural decoder utilizes only cyclic permutations, selected according to the distribution of small trapping sets, instead of using all possible permutations in the permutation group, and is hence more practical for hardware implementations. We further point out that most of the min-sum iterations can be processed in parallel. To summarize, the main contributions of this paper include the following:
  • We illustrate that, for some QR codes, the performance of the BP/min-sum algorithm is governed by small trapping sets rather than by four-cycles.
  • We modify the original mRRD using a neural network decoder, replacing the random permutations with a cyclic shift selected based on the distribution of small trapping sets.
  • We obtain the error performance of QR codes of lengths beyond 200 using the proposed Cyclic-mRRD algorithm. The performance of these QR codes is hard to obtain with traditional methods such as the DS algorithm, because their minimum distances are unknown and the algebraic decoding complexity is prohibitive.
The remainder of this paper is structured as follows. Section 2 provides preliminaries on QR codes, the mRRD algorithm and trapping sets. Section 3 introduces the neural BP-based algorithms. Section 4 describes the proposed Cyclic-mRRD together with a trapping set analysis. Section 5 gives the simulation results and a complexity analysis of the proposed algorithms. Finally, Section 6 concludes the paper.

2. Preliminaries

2.1. Quadratic Residue Code

An (n, k) QR code is a QR code of length n and dimension k. The length of a QR code is a prime n satisfying n = 8l ± 1, where l is a positive integer. The set Q_n denotes the quadratic residues modulo n, i.e.,

Q_n = { i | i ≡ j² mod n, for j ∈ {1, 2, …, n − 1} }.  (1)
Let m be the smallest positive integer such that n divides 2^m − 1, let α be a primitive element of GF(2^m), and let β be a primitive nth root of unity in GF(2^m). Then β = α^((2^m − 1)/n), and the generator polynomial of a QR code of length n is given by

g(x) = ∏_{i ∈ Q_n} (x − β^i).  (2)
The codeword polynomial u(x) is obtained from the message polynomial m(x) and the generator polynomial g(x) as u(x) = m(x) · g(x).
There are 11 QR codes with a code length below 100. They are the (7,4), (17,9), (23,12), (31,16), (41,21), (47,24), (71,36), (73,37), (79,40), (89,45) and (97,49) QR codes.
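As a concrete illustration, the following Python sketch (our own helper, not from the paper) computes Q_n from (1) and recovers the dimension k = n − deg g(x) = n − |Q_n| of each short QR code.

    # A minimal sketch of computing the quadratic-residue set Q_n from (1);
    # the helper name quadratic_residues is ours, not from the paper.
    def quadratic_residues(n):
        """Return Q_n = { j^2 mod n : j = 1, ..., n-1 }."""
        return sorted({(j * j) % n for j in range(1, n)})

    # A prime length n = 8l +/- 1 gives a QR code of dimension k = n - |Q_n|.
    for n in (7, 17, 23, 31, 41, 47, 71, 73, 79, 89, 97):
        Q = quadratic_residues(n)
        print(f"({n},{n - len(Q)}) QR code, deg g(x) = {len(Q)}")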

2.2. Modified RRD Algorithm

A permutation reorders the bits of a codeword. For example, the permutation σ mapping positions (0, 1, 2) to (2, 1, 0) sends any codeword u = (u_0, u_1, u_2) to σ(u) = (u_2, u_1, u_0). The permutation group Per(C) of a codebook C is the group of permutations that send every codeword in C back into the codebook; that is, applying any σ ∈ Per(C) to any codeword of C yields a codeword of C, i.e.,

Per(C) = { σ | σ(u) ∈ C, ∀ u ∈ C }.  (3)
It is obvious that all cyclic shifts belong to Per(C) for any cyclic code. An algorithm to compute a set of group generators for the whole of Per(C) was proposed in [20].
The mRRD is a modified version of the RRD [21]; compared with the RRD, it has better error performance and lower complexity. An mRRD decoder is denoted mRRD(W, L, I), meaning that it contains W parallel pipelines, each pipeline consists of L BP decoders, and every BP decoder conducts I BP iterations, as shown in Figure 1. The mRRD algorithm uses permutations from the permutation group of the code to permute the input log-likelihood-ratio (LLR) vector before each BP decoder. At the end, the following least metric selector (LMS) is applied to the candidate codeword set Û and the received sequence r, and the best codeword is chosen:

û = argmin_{û ∈ Û} Σ_{i=1}^{n} | r_i − (−1)^{û_i} |².  (4)
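For concreteness, a minimal numpy sketch of the LMS step follows; the function name lms and the explicit BPSK mapping (0 → +1, 1 → −1) are our assumptions about conventions the paper leaves implicit.

    import numpy as np

    def lms(candidates, r):
        """Return the candidate codeword whose BPSK image is closest to r, as in (4)."""
        best, best_metric = None, np.inf
        for u_hat in candidates:           # u_hat: binary vector of length n
            x = (-1.0) ** u_hat            # BPSK modulation: 0 -> +1, 1 -> -1
            metric = np.sum((r - x) ** 2)  # squared Euclidean distance to r
            if metric < best_metric:
                best, best_metric = u_hat, metric
        return best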

2.3. Trapping Set

The term “trapping set” was introduced by Richardson [22] in 2003. A Tanner graph G can be denoted by G = (V ∪ C, E), where V, C and E represent the sets of variable nodes, check nodes and edges connecting the variable and check nodes, respectively. Let S be a subset of V, let Γ(S) be the set of check nodes connected to the variable nodes in S, and let T(S) be the subgraph containing S, Γ(S) and the edges connecting S and Γ(S). Γ(S) can be further divided into two sets: Γ_o(S), the check nodes of odd degree in T(S), and Γ_e(S), those of even degree. Denoting a = |S| and b = |Γ_o(S)|, T(S) is called an (a, b) trapping set. See Figure 2 for an example of a (3,1) trapping set.
Generally speaking, trapping sets with smaller a and b are considered to be more harmful. Hence, they are called dominant trapping sets. The existence of dominant trapping sets can affect the performance of the BP algorithms dramatically.
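The (a, b) parameters of a candidate set S can be computed directly from the parity check matrix; the sketch below (our own naming, not the paper's code) counts the odd-degree check nodes in the induced subgraph T(S).

    import numpy as np

    def trapping_set_type(H, S):
        """H: binary parity check matrix (checks x variables); S: variable node indices.
        Returns (a, b) with a = |S| and b = number of odd-degree checks in T(S)."""
        degrees = H[:, list(S)].sum(axis=1)  # edges from each check node into S
        return len(S), int(np.sum(degrees % 2 == 1))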

3. Neural Belief Propagation Network

The classic BP algorithm is a message-passing algorithm performed on the Tanner graph of a code. Let r be the received vector, V_{i→j} be the message passed from variable node v_i to check node c_j, and C_{j→i} be the message passed from c_j to v_i. Assume the codeword u ∈ {0, 1}^n is sent through a binary-input additive white Gaussian noise (BI-AWGN) channel. Then we have a BPSK-modulated vector y ∈ {−1, +1}^n with y_i = (−1)^{u_i}, a noise vector n ∈ R^n with n_i ~ N(0, σ²), and a received vector r = y + n. For the log-domain BP algorithm, the LLR value of v_i is

γ_i = ln [ Pr(y_i = +1 | r_i) / Pr(y_i = −1 | r_i) ] = 2 r_i / σ².  (5)
The messages from variable nodes to check nodes are given by

V_{i→j} = γ_i + Σ_{c_{j'} ∈ N(v_i) \ c_j} C_{j'→i},  (6)

where N(x) \ y denotes the set of x's neighbors excluding y. In the first iteration there are no incoming messages from the check nodes, so the values C_{j→i} are set to 0. Furthermore, the messages from check nodes to variable nodes are given by

C_{j→i} = 2 tanh⁻¹ ( ∏_{v_{i'} ∈ N(c_j) \ v_i} tanh( V_{i'→j} / 2 ) ).  (7)
In the neural network, each layer represents a half BP iteration (check nodes to variable nodes, or variable nodes to check nodes), so two layers (an odd layer and an even layer) are needed to perform a single BP iteration. Denoting the weights associated with the two layer types by w^{o2e} and w^{e2o}, (6) and (7) are replaced, respectively, by

V_{i→j} = γ_i + Σ_{c_{j'} ∈ N(v_i) \ c_j} w^{e2o}_{j'i} · C_{j'→i},  (8)

C_{j→i} = 2 tanh⁻¹ ( ∏_{v_{i'} ∈ N(c_j) \ v_i} w^{o2e}_{i'j} · tanh( V_{i'→j} / 2 ) ).  (9)
An example of the constructed neural network structure for the (7,4) QR code is illustrated in Figure 3.
By replacing the BP decoders with min-sum decoders, we obtain neural min-sum decoders. Here we used a modified min-sum algorithm called the threshold attenuated min-sum (TAMS) algorithm [23], whose neural network version is given by

C_{j→i} = α_c · m_{j→i} · s_{j→i}  if m_{j→i} < τ,  and  C_{j→i} = m_{j→i} · s_{j→i}  otherwise,  (10)

where m_{j→i} = min_{v_{i'} ∈ N(c_j) \ v_i} | w^{o2e}_{i'j} · V_{i'→j} | and s_{j→i} = ∏_{v_{i'} ∈ N(c_j) \ v_i} sign( w^{o2e}_{i'j} · V_{i'→j} ), and where α_c and τ are an attenuation coefficient and a threshold, respectively. The values of α_c and τ were set empirically to 0.7 and 1.5. Moreover, a sigmoid function was applied to convert the LLRs to probabilities after the final iteration.
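A numpy sketch of the check-node update (10) for a single check node is given below; α_c = 0.7 and τ = 1.5 follow the text, while the function and variable names are our own.

    import numpy as np

    def neural_tams_check_update(V, w, alpha_c=0.7, tau=1.5):
        """V: incoming messages V[i'->j] from the neighbors of check node c_j;
        w: trained o2e weights on those edges. Returns C[j->i] for each neighbor."""
        wV = w * V
        mags, signs = np.abs(wV), np.sign(wV)
        C = np.empty_like(V)
        for i in range(len(V)):
            others = np.delete(np.arange(len(V)), i)  # N(c_j) \ v_i
            m = mags[others].min()                    # extrinsic minimum magnitude
            s = signs[others].prod()                  # extrinsic sign product
            C[i] = (alpha_c * m if m < tau else m) * s
        return C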
Given a parity check matrix, the network structure is determined. We applied the algorithm in [21] to reduce the number of short cycles in the parity check matrix and used the resulting cycle-reduced matrix to construct the neural network. Different weights were used for each of the I BP/min-sum iterations, giving the parameter space Ω = { w^{o2e,1}, w^{e2o,1}, …, w^{o2e,I}, w^{e2o,I} }. When all weights are set to 1, the neural BP/min-sum reduces to the original BP/min-sum algorithm.
To measure the difference between the network output and the true codeword, we used a multiloss binary cross-entropy (BCE). Given I iterations and code length n, the neural network yields outputs O = (O^1, O^2, …, O^I), where O^j = (O^j_1, O^j_2, …, O^j_n) for 1 ≤ j ≤ I. The multiloss BCE between the true codeword u and the outputs O accumulates the loss after every BP/min-sum iteration:

L(O, u) = −(1 / (I · n)) Σ_{j=1}^{I} Σ_{i=1}^{n} [ u_i ln(O^j_i) + (1 − u_i) ln(1 − O^j_i) ].  (11)
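In PyTorch, the multiloss BCE (11) can be written compactly; the sketch below assumes the decoder exposes its per-iteration sigmoid outputs as an I × n tensor, which is our assumption about the interface.

    import torch

    def multiloss_bce(outputs, u):
        """outputs: (I, n) tensor of per-iteration probabilities in (0, 1);
        u: length-n binary codeword. Implements the loss in (11)."""
        I, n = outputs.shape
        u = u.float()  # broadcast the codeword over the I iterations
        loss = -(u * torch.log(outputs) + (1 - u) * torch.log(1 - outputs))
        return loss.sum() / (I * n)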

4. Cyclic mRRD

Trapping sets of small size can greatly influence the performance of the BP or min-sum algorithms, as shown in Figure 4. The trapping sets were found using the algorithm proposed in [24]. Figure 4 and Figure 5 show that the distribution of errors is more similar to the distribution of trapping sets than to that of 4-cycles. This means that, at high SNR, trapping sets can greatly affect the BP/min-sum performance of QR codes just as they do for LDPC codes, while the effect of short cycles becomes less important. Referring to Figure 1, the mRRD chooses and applies a random permutation from Per(C) before each BP/min-sum decoder, which is equivalent to using a different parity check matrix for each BP/min-sum decoder. By permuting LLRs to other positions, the mRRD prevents nodes from repeatedly receiving messages from the same group of neighboring nodes in trapping sets or cycles, so a falsely converged variable node may be corrected by the other nodes in the next BP decoder.
As analyzed above, Per(C) contains not only the cyclic shifts but also many other permutations for a cyclic code. The orders of the permutation groups of different QR codes are listed in Table 1. The order of the permutation group of the (23,12) QR code is significantly larger than the others. This is because the permutation group of the (24,12) extended QR code contains two distinct groups of projective special linear (PSL) type, while the permutation groups of most other extended codes contain only one [25]. Furthermore, the permutation group of the original code is composed of those permutations in the permutation group of the extended code that fix the overall check bit.
Each permutation can be stored as a one-dimensional array using Cauchy's two-line notation with the first line omitted; however, storing all these permutations still requires considerable memory. Alternatively, we can store only the group generators, which adds runtime overhead for computing compositions of permutations. Furthermore, implementing a random permutation in circuits is complex when LLRs can be shuffled to arbitrary positions: either multiple swaps between LLR values are needed to perform a single random permutation, or a more complex interconnect design is required to shuffle the LLRs in one step.
Cyclic shifts are easier to implement in hardware than random permutations. By shifting LLRs to their neighboring positions in a loop, a shift of any length can be implemented easily and performed rapidly; using the same cyclic shift every time reduces the implementation complexity further. Based on the above analysis, we first compute the trapping set distribution D_T = [t_1, t_2, …, t_n], where t_i is the number of trapping sets associated with node v_i. We then select a cyclic shift based on this distribution; that is, the bits associated with large numbers of trapping sets are moved to positions with fewer trapping sets. This incurs zero overhead during the decoding process. The criterion for selecting good shifts from all cyclic shift permutations is given as Algorithm 1. After running the algorithm, we select the cyclic shift with the highest score. If the algorithm returns multiple shifts with the same score, we choose the shift σ of least length (i.e., closest to 0 or n). For example, suppose the algorithm returns 40 and 23 for the (47,24) QR code, implying that cyclic right shifts of 40 bits and 23 bits are both optimal. Shifting right by 40 bits can be implemented as shifting left by 7 bits, which is shorter than 23 bits, so the final result is a cyclic right shift of 40 bits.
Algorithm 1 Criterion to find the best shift
     Input: Distribution of trapping sets: D_T
     Output: The best shift: σ
    for i ← 1 to n do
        x ← D_T − (D_T >> i) //Measure the change of the number of trapping sets at each bit under a right shift by i.
        Score[i] ← Σ_{k : x_k > 0} x_k
    end for
    S ← { i | Score[i] == max(Score) }
    if |S| > 1 then
        σ ← argmax_{σ ∈ S} | σ − n/2 | //Tie-break: prefer the shift closest to 0 or n.
    else
        σ ← S[1]
    end if
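Under our reading of the pseudocode (in particular, that the shift in the line x ← D_T − (D_T >> i) uses the loop index i), Algorithm 1 can be rendered in numpy as follows; the function name best_cyclic_shift is ours.

    import numpy as np

    def best_cyclic_shift(D_T):
        """D_T: per-bit trapping set counts. Returns the best right shift sigma."""
        n = len(D_T)
        D = np.asarray(D_T, dtype=float)
        scores = np.empty(n)
        for i in range(1, n + 1):
            x = D - np.roll(D, i)           # change of trapping set count per bit
            scores[i - 1] = x[x > 0].sum()  # Score[i]
        S = np.flatnonzero(scores == scores.max()) + 1
        # Tie-break: prefer the shift closest to 0 or n, i.e., largest |sigma - n/2|.
        return int(S[np.argmax(np.abs(S - n / 2))])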
The proposed method replaces the first random permutation with a random cyclic shift for each pipeline, giving every pipeline a different starting point. After that, all the other random permutations in the original mRRD are replaced by a fixed right-shift permutation. Since all permutations are replaced by cyclic shifts, we call the new modified mRRD the Cyclic-mRRD; it is given in detail as Algorithm 2.
Algorithm 2 Cyclic-mRRD
     Input: Cyclic right shift: σ, parity check matrix: H,
                 code length: n, LLR vector: γ,
                 mRRD size: (W, L, I)
     Output: Decoding result: û
     S ← ∅, F ← ∅
    for i ← 1 to W do
        Σ ← RandomInt() mod n
        w ← γ >> Σ
        for j ← 1 to L do
            w ← I iterations of BP/min-sum(w)
            û ← HardDecision(w)
            if û · H^T = 0 then
                û ← û << Σ //Rearrange bits to the original order
                S ← S ∪ {û} //Append û to the success set
                break
            else
                w ← w >> σ
                Σ ← (Σ + σ) mod n //Record the total number of shifted bits
            end if
        end for
        Σ ← (Σ − σ) mod n
        û ← û << Σ //Rearrange bits to the original order
        F ← F ∪ {û} //Append û to the default set
    end for
    if |S| = 0 then
        S ← F //No valid codewords; select from the default set
    end if
    û ← LMS(S)
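The control flow of Algorithm 2 is sketched below in numpy. Here min_sum_decode stands in for I iterations of the neural BP/min-sum decoder and lms_select for the least metric selector of (4); both names are placeholders, and we read the post-loop fallback update as applying only to pipelines in which no attempt produced a valid codeword.

    import numpy as np

    def cyclic_mrrd(gamma, H, sigma, W, L, min_sum_decode, lms_select, r, rng):
        """gamma: LLR vector; H: parity check matrix; sigma: fixed cyclic right shift."""
        n = len(gamma)
        success, fallback = [], []
        for _ in range(W):                     # W independent pipelines
            shift = int(rng.integers(n))       # random starting shift
            w = np.roll(gamma, shift)
            for _ in range(L):                 # up to L decoding attempts
                w = min_sum_decode(w)          # I BP/min-sum iterations
                u_hat = (w < 0).astype(int)    # hard decision: negative LLR -> 1
                if not np.any((H @ u_hat) % 2):             # syndrome check
                    success.append(np.roll(u_hat, -shift))  # restore bit order
                    break
                w = np.roll(w, sigma)          # fixed cyclic right shift
                shift = (shift + sigma) % n
            else:                              # no attempt succeeded
                shift = (shift - sigma) % n
                fallback.append(np.roll(u_hat, -shift))
        return lms_select(success if success else fallback, r)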

5. Simulation Results

The simulation and training were carried out over BI-AWGN channels. All LLRs outside [−20, +20] were clipped to −20/+20. The training set contained 10^6 vectors, which were all-zero codewords corrupted by random noise at E_b/N_0 = 3–5 dB, and the batch size was 200, whereas the BER and FER curves were obtained using randomly generated codewords and noise. We used the RMSProp optimizer with the learning rate set to 0.01. Other parameters were set to their defaults as per PyTorch's implementation.
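A training-loop sketch matching this setup is shown below; decoder and loader are placeholders for the unrolled neural min-sum network and a data source of noisy all-zero codewords, and multiloss_bce is the loss of (11).

    import torch

    def train(decoder, loader, multiloss_bce):
        """decoder: unrolled neural BP/min-sum network; loader yields (LLR, codeword)
        batches of size 200, generated at Eb/N0 between 3 and 5 dB."""
        optimizer = torch.optim.RMSprop(decoder.parameters(), lr=0.01)
        for llr, u in loader:
            llr = llr.clamp(-20.0, 20.0)           # clip LLRs to [-20, +20]
            loss = multiloss_bce(decoder(llr), u)  # accumulate loss over iterations
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()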

5.1. Neural BP/Min-Sum

We first demonstrate the neural network versions of the BP and TAMS algorithms. As illustrated in Figure 6, the neural network versions of the BP and TAMS algorithms both outperform their original counterparts. The neural BP achieves a gain of 1.1 dB over BP at a BER of 2 × 10⁻⁴. Neural TAMS has the best performance among the four decoders, with a gain of about 0.6 dB at a BER of 4 × 10⁻⁵ compared with the original TAMS algorithm. In addition, it has a relatively low complexity compared to the BP algorithm and neural BP. Therefore, we used neural TAMS as the default decoder in the following experiments.
Figure 7 shows the error distributions of the TAMS algorithm and neural TAMS. The errors were counted until more than 5000 bit errors had occurred. Although a neural network can mitigate the effect of trapping sets, the mitigation is limited, especially for nodes involved in many trapping sets, so the error occurrences of these nodes become more dominant in the high-SNR regime.

5.2. Neural mRRD/Neural Cyclic-mRRD

Figure 8, Figure 9, Figure 10 and Figure 11 compare the error performance of neural mRRD(5,10,5) and neural Cyclic-mRRD(5,10,5). The results were calculated after 100 frame errors had occurred and more than 1000 frames had been processed. The cyclic shifts were selected using Algorithm 1 with trapping sets satisfying a + b < 7 and a < 4.
Referring to Figure 8, for the (47,24) QR code, the neural mRRD achieved a gain of 0.5 dB at BER = 4 × 10⁻⁶ compared with the DS algorithm. The neural Cyclic-mRRD had a slight performance loss of about 0.08 dB at BER = 10⁻⁶ compared to the neural mRRD, which randomly chooses permutations from the whole group. The cyclic right shift selected by Algorithm 1 was 22 bits. We also observed that the FER of the neural Cyclic-mRRD was lower than that of the DS algorithm at all SNRs. However, at very low SNR, each incorrect mRRD/Cyclic-mRRD frame introduced more bit errors than the DS algorithm.
Figure 9 shows the FER/BER performance of the neural mRRD and neural Cyclic-mRRD on the (71,36) QR code. The neural mRRD obtained a gain of about 1.0 dB at BER = 2 × 10⁻⁵ compared to the DS algorithm. The selected cyclic shift for the neural Cyclic-mRRD was a right shift of 36 bits, and the performance loss compared to the neural mRRD was about 0.3 dB at BER = 10⁻⁶.
Figure 10 shows the FER/BER performance of the neural mRRD and neural Cyclic-mRRD on the (73,37) QR code. The neural mRRD obtained a gain of about 0.5 dB at BER = 3 × 10⁻⁶ compared to the DS algorithm. The selected cyclic shift for the neural Cyclic-mRRD was a right shift of 31 bits. The performance loss compared to the neural mRRD was about 0.1 dB at BER = 10⁻⁶.
Figure 11 shows the FER/BER performance of the neural mRRD and neural Cyclic-mRRD on the (97,49) QR code. The neural mRRD obtained a gain of about 0.4 dB at BER = 2 × 10⁻⁶ compared to the DS algorithm. The selected cyclic shift for the neural Cyclic-mRRD was a right shift of 31 bits. The performance loss compared to the neural mRRD was about 0.04 dB at BER = 10⁻⁶.

5.3. Longer QR Codes

Note that we used a different set of hyperparameters to train the neural networks for QR codes with code lengths above 200. The training set contained 2 × 10^6 vectors corrupted by noise with SNR varying from 1 to 8 dB. The learning rate was set to 0.001, and the other hyperparameters remained the same. Furthermore, we used an ordinary BCE loss function instead of the multiloss BCE. We could not obtain the performance of the DS algorithm because the minimum distances of long QR codes are still unknown, and the complexity of the DS algorithm is very high for long QR codes. Neural TAMS and TAMS applied five iterations, and the Cyclic-mRRD size was (5,10,5). Figure 12 shows the BER/FER performance of the neural mRRD and neural Cyclic-mRRD on the (241,121) QR code. The cyclic shift for the neural Cyclic-mRRD was a right shift of 191 bits, selected using trapping sets with a ≤ 4 and a + b < 8.
Figure 13 shows the BER/FER performance of the neural mRRD and neural Cyclic-mRRD on the (263,132) QR code. The cyclic shift was a right shift of 232 bits, selected using trapping sets with a ≤ 5 and a + b < 8.

5.4. Complexity Analysis

The proposed Cyclic-mRRD method employs multiple BP/min-sum decoders and requires more multiplications per iteration; however, most of these operations can run in parallel. Let e be the number of edges in the Tanner graph. Each neural BP/min-sum iteration needs on average 2e more multiplications than an original BP/min-sum iteration, but the latency remains nearly the same because different nodes can process incoming messages simultaneously.
The Cyclic-mRRD(W, L, I) needs W × L × I BP/min-sum iterations in the worst case and W × I iterations in the best case, because only I iterations are performed per pipeline in the best case. Since there are no data dependencies among pipelines, the W pipelines can decode the differently shifted inputs at the same time; the latency then reduces to L × I iterations in the worst case and I iterations in the best case. For our Cyclic-mRRD(5,10,5), the parallel latency was 50 iterations in the worst case and 5 iterations in the best case.
It is hard to analyze the average complexity of the above decoders analytically. Thus, the average time complexity at different SNRs was obtained by computing the average number of iterations needed to converge over 10,000 random frames, as shown in Table 2. In the table, “serial” means the sum of the numbers of min-sum iterations over all pipelines, which represents the time complexity when all pipelines run serially, and “parallel” denotes the largest number of iterations among all pipelines; both were averaged over the number of decoded codewords. The average complexity of the neural Cyclic-mRRD nearly reaches the best case at E_b/N_0 ≥ 6 dB (about 25 serial and 5 parallel iterations) and the worst case at E_b/N_0 = 0 dB (about 220 serial and 48 parallel iterations).

6. Conclusions

In this paper, we proposed several neural-network-based methods to decode binary QR codes. We observed that the performance of the BP/min-sum algorithm is mainly affected by a small number of trapping sets, even though the parity check matrices of QR codes contain a huge number of four-cycles. Based on this observation, the proposed Cyclic-mRRD moves nodes out of trapping sets by using cyclic shifts. This simplification achieves performance similar to the mRRD while reducing the implementation complexity and memory usage. Both the mRRD and Cyclic-mRRD outperform the DS algorithm in BER/FER performance with lower asymptotic time complexity.
Finally, we obtained the BER/FER performance of two longer QR codes using the proposed Cyclic-mRRD; the error performance of these codes could not previously be determined, as their minimum distances are unknown.
Our future work includes approaching the ML performance for longer QR codes at the same or lower complexity, as well as possible improvements to the neural network concerning training, data quantization and trapping sets.

Author Contributions

Conceptualization, M.W.; methodology, M.W. and Y.L.; software, M.W. and Y.L.; investigation, M.W.; data curation, M.W.; writing—original draft preparation, M.W.; writing—review and editing, Y.L., H.W., F.C.M.L., R.L. and Y.H.; visualization, M.W.; supervision, Y.L.; funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the China Natural Science Foundation (NSF) under Grant 61771081 and the Fundamental Research Funds for the Central Universities (China) under Grant cstc2019jcyjmsxmX0110.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Prange, E. Some Cyclic Error-Correcting Codes with Simple Decoding Algorithms; Technical Report TN-58-156; Air Force Cambridge Research Center: Cambridge, MA, USA, 1958.
  2. Wicker, S.B. Error Control Systems for Digital Communication and Storage; Prentice-Hall, Inc.: Hoboken, NJ, USA, 1994.
  3. Honary, B.; Hunt, B.; Maundrell, M. Improving automatic link establishment through a new soft decision trellis decoder for the (24,12) Golay code. In Proceedings of the Sixth International Conference on HF Radio Systems and Techniques, York, UK, 4–7 July 1994; pp. 182–185.
  4. Li, Y.; Duan, Y.; Chang, H.C.; Liu, H.; Truong, T.K. Using the Difference of Syndromes to Decode Quadratic Residue Codes. IEEE Trans. Inf. Theory 2018, 64, 5179–5190.
  5. Duan, Y.; Li, Y. An Improved Decoding Algorithm to Decode Quadratic Residue Codes Based on the Difference of Syndromes. IEEE Trans. Inf. Theory 2020, 66, 5995–6000.
  6. Zhang, Y.; Wu, H.; Coates, M. On the Design of Channel Coding Autoencoders With Arbitrary Rates for ISI Channels. IEEE Wirel. Commun. Lett. 2022, 11, 426–430.
  7. Chen, X.; Ye, M. Cyclically Equivariant Neural Decoders for Cyclic Codes. In Proceedings of the 38th International Conference on Machine Learning, ICML, Virtual Event, 18–24 July 2021; Volume 139, pp. 1771–1780.
  8. Satorras, V.G.; Welling, M. Neural Enhanced Belief Propagation on Factor Graphs. In Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, AISTATS, Virtual Event, 13–15 April 2021; Volume 130, pp. 685–693.
  9. Jiang, Y.; Kim, H.; Asnani, H.; Kannan, S.; Oh, S.; Viswanath, P. LEARN Codes: Inventing Low-Latency Codes via Recurrent Neural Networks. IEEE J. Sel. Areas Inf. Theory 2020, 1, 207–216.
  10. Nachmani, E.; Marciano, E.; Lugosch, L.; Gross, W.J.; Burshtein, D.; Be'ery, Y. Deep Learning Methods for Improved Decoding of Linear Codes. IEEE J. Sel. Top. Signal Process. 2018, 12, 119–131.
  11. Jiang, Y.; Kim, H.; Asnani, H.; Kannan, S.; Oh, S.; Viswanath, P. Turbo Autoencoder: Deep learning based channel codes for point-to-point communication channels. In Proceedings of the Advances in Neural Information Processing Systems 32 (NeurIPS 2019), Vancouver, BC, Canada, 8–14 December 2019; pp. 2754–2764.
  12. He, Y.; Zhang, J.; Jin, S.; Wen, C.K.; Li, G.Y. Model-Driven DNN Decoder for Turbo Codes: Design, Simulation, and Experimental Results. IEEE Trans. Commun. 2020, 68, 6127–6140.
  13. Kim, H.; Jiang, Y.; Rana, R.; Kannan, S.; Oh, S.; Viswanath, P. Communication Algorithms via Deep Learning. In Proceedings of the 6th International Conference on Learning Representations, ICLR, Vancouver, BC, Canada, 30 April–3 May 2018.
  14. Zhang, W.; Zou, S.; Liu, Y. Iterative Soft Decoding of Reed-Solomon Codes Based on Deep Learning. IEEE Commun. Lett. 2020, 24, 1991–1994.
  15. Liu, B.; Xie, Y.; Yuan, J. A Deep Learning Assisted Node-Classified Redundant Decoding Algorithm for BCH Codes. IEEE Trans. Commun. 2020, 68, 5338–5349.
  16. Gruber, T.; Cammerer, S.; Hoydis, J.; ten Brink, S. On deep learning-based channel decoding. In Proceedings of the 51st Annual Conference on Information Sciences and Systems (CISS), Baltimore, MD, USA, 22–24 March 2017; pp. 1–6.
  17. Nachmani, E.; Be'ery, Y.; Burshtein, D. Learning to decode linear codes using deep learning. In Proceedings of the 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 27–30 September 2016; pp. 341–346.
  18. Lugosch, L.; Gross, W.J. Neural offset min-sum decoding. In Proceedings of the International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 1361–1365.
  19. Dimnik, I.; Be'ery, Y. Improved random redundant iterative HDPC decoding. IEEE Trans. Commun. 2009, 57, 1982–1985.
  20. Leon, J. Computing automorphism groups of error-correcting codes. IEEE Trans. Inf. Theory 1982, 28, 496–511.
  21. Halford, T.R.; Chugg, K.M. Random Redundant Soft-In Soft-Out Decoding of Linear Block Codes. In Proceedings of the IEEE International Symposium on Information Theory, Seattle, WA, USA, 9–14 July 2006; pp. 2230–2234.
  22. Richardson, T. Error floors of LDPC codes. In Proceedings of the Annual Allerton Conference on Communication Control and Computing, Monticello, IL, USA, 1–3 October 2003; Volume 41, pp. 1426–1435.
  23. Hatami, H.; Mitchell, D.G.M.; Costello, D.J.; Fuja, T.E. A Threshold-Based Min-Sum Algorithm to Lower the Error Floors of Quantized LDPC Decoders. IEEE Trans. Commun. 2020, 68, 2005–2015.
  24. Karimi, M.; Banihashemi, A.H. Efficient Algorithm for Finding Dominant Trapping Sets of LDPC Codes. IEEE Trans. Inf. Theory 2012, 58, 6942–6958.
  25. Rasala, R. Split codes and the Mathieu groups. J. Algebra 1976, 42, 422–471.
Figure 1. An mRRD ( W , L , I ) structure. “ γ ” represents a log-likelihood-ratio vector. A circle with “ σ ” denotes a random permutation, and a square with “BP” stands for I BP iterations. “LMS” denotes the least metric selector.
Figure 2. A (3,1) trapping set.
Figure 3. The neural network structure of 2 BP/min-sum iterations for the (7,4) QR code according to Equations (8) and (9). Green circles represent odd-layer neurons and red circles even-layer neurons. Circles with black rectangles are neurons with LLR values as input. The purple dashed lines represent w^{o2e} weights and the blue solid lines represent w^{e2o} weights.
Figure 4. Trapping set and error distributions of the (47,24) QR code. More than 5000 errors were counted after 5 TAMS iterations by simulation at E_b/N_0 = 8 dB. The trapping set distribution was calculated by counting the occurrences of nodes in small trapping sets (a + b < 7 and a < 4).
Figure 5. Distribution of 4-cycles and errors of the (47,24) QR code.
Figure 6. Performance comparison between the BP and TAMS algorithms and their neural network versions for the (47,24) QR code. All decoders applied 5 BP/min-sum iterations.
Figure 7. Error distribution comparison between TAMS and Neural TAMS for the (47,24) QR code. All decoders applied 5 min-sum iterations.
Figure 8. BER/FER performance of different decoders for the (47,24) QR code.
Figure 9. BER/FER performance of different decoders for the (71,36) QR code.
Figure 10. BER/FER performance of different decoders for the (73,37) QR code.
Figure 11. BER/FER performance of different decoders for the (97,49) QR code.
Figure 12. BER/FER performance of different decoders for the (241,121) QR code.
Figure 13. BER/FER performance of different decoders for the (263,132) QR code.
Table 1. The order of the permutation group Per(C) for each QR code.

Code      (23,12)     (31,16)  (47,24)  (71,36)
|Per(C)|  10,200,960  465      1081     2485

Code      (73,37)  (79,40)  (89,45)  (97,49)
|Per(C)|  2628     3081     3916     4656
Table 2. Average complexity on the (47,24) QR code.

E_b/N_0 (dB)           0      1      2      3     4     5     6     7
Iterations (serial)    219.9  191.3  144.8  89.9  49.8  30.9  25.7  25.0
Iterations (parallel)  48.4   45.3   38.3   27.1  15.5  8.1   5.5   5.0
