Article

Data Transmission Error Detection and Correction with Cyclic Redundancy Check and Polar Code Integration with Successive Cancellation Decoding Algorithm

1 School of Cyberspace Security, Hainan University, Haikou 570228, China
2 Key Laboratory of Internet Information Retrieval of Hainan Province, Haikou 570228, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(3), 1124; https://doi.org/10.3390/app15031124
Submission received: 17 December 2024 / Revised: 18 January 2025 / Accepted: 21 January 2025 / Published: 23 January 2025

Abstract

In the context of the Internet of Things and 5G communication, ensuring the accuracy and reliability of data transmission is critical, especially as the future 6G communication system places greater emphasis on highly reliable transmission. However, existing 5G error correction schemes are insufficient for the error correction capability required in the 6G environment. To address this issue, this paper proposes an enhanced error correction scheme based on a combination of cyclic redundancy check (CRC) codes and Polar Codes, optimized through a Successive Cancellation Decoding algorithm. The objective is to enhance the error detection and correction capabilities during data transmission. By leveraging the synergistic application of CRC and Polar Codes, the proposed scheme introduces an enhanced sequence interleaving technique and error pattern analysis, which significantly improve the decoding algorithm’s performance and accuracy. Experimental results demonstrate that, compared with the traditional Successive Cancellation Decoding algorithm, the proposed scheme effectively reduces the block error rate (BLER), thereby enhancing the reliability of data transmission.

1. Introduction

In the rapidly evolving landscape of information technology, from smart homes to industrial automation, and from health and medical applications to intelligent transportation systems, the extensive application of Internet of Things (IoT) devices is becoming increasingly critical. These devices depend on efficient and reliable data transmission technologies to ensure the accuracy and timeliness of their operations [1]. Particularly with the support of modern communication technologies such as 6G, the efficiency and accuracy of data transmission are directly linked to the overall performance and security of the system [2,3].
The CRC is a prevalent error detection method that employs a specific polynomial to generate check bits, thereby enabling the detection of errors during data transmission. Polar Codes, a novel approach to channel coding, have been incorporated into the 5G standard, primarily because of their favorable signal-to-noise ratio (SNR) performance and their straightforward structure. Moreover, enhanced Polar Codes are capable of fulfilling the demands for high-capacity and high-reliability transmission in the context of 6G [4,5,6,7].
The core of the 3G error correction method is to reduce the transmission error rate through forward error correction (FEC) technology, thereby optimizing network throughput [8]. The 4G error correction method uses turbo codes and LDPC codes, which approach the Shannon limit, to efficiently achieve a lower bit error rate (BER) and a higher data transmission rate while keeping decoding complexity in check [9]. The 5G error correction method uses efficient LDPC codes and Polar Codes, achieving high error correction capability and flexible adaptation to code rate and code length requirements through soft-decision techniques [10]. The 6G error correction method emphasizes the development of advanced channel coding technology that adapts to high-frequency communication, low latency, high reliability, and multi-objective optimization requirements, including LDPC codes, Polar Codes, turbo codes, and their variants, with various methods used to optimize performance [11]. From 1G to 6G, channel coding has evolved from analog transmission to digitalization, the introduction of error correction coding, and the realization of ultra-high data rates and low-latency communication. In 6G, more efficient coding technology will be used to meet the future requirements of extremely high reliability, low power consumption, and massive connectivity [12].
Polar Codes, a recent development in channel coding, have attracted considerable attention due to their theoretical potential to approach channel capacity and their relatively low encoding and decoding complexity [13]. They have been selected as the coding method for the 5G enhanced Mobile Broadband (eMBB) use case and the Physical Broadcast Channel (PBCH) [14,15]. Among the various decoding strategies for Polar Codes, Successive Cancellation Decoding (SC) [13], Successive Cancellation List Decoding (SCL) [16], and Belief Propagation Decoding (BP) [17] are the most widely employed, each offering a different trade-off between performance and complexity, highlighting the versatility of Polar Codes in various communication scenarios. SC decoding is characterized by its low complexity but is outperformed by SCL decoding, which improves decoding performance by maintaining multiple potential paths at a higher complexity. BP decoding, which employs iterative propagation, can provide better performance at lower SNRs but may suffer from convergence issues and stability problems.
Niu et al. [18] reviewed the basic concepts and decoding algorithms of Polar Codes, demonstrating their potential to outperform turbo and LDPC codes; however, using Polar Codes alone yields decoding performance and channel applicability that fall short of the requirements of future 6G communication systems. Zhang et al. [19] proposed the ES-SCL-Flip decoding algorithm, which enhances the error correction performance of the conventional SCL decoder while minimizing decoding complexity, making it an effective solution for improving Polar Code decoding, particularly in 5G applications. Wang et al. [20] introduced the LS-SCLF decoding algorithm, which improves the error correction performance of Polar Codes with a slight increase in decoding complexity by employing a new flip-bit metric, a perturbation parameter, and a layered search strategy, achieving a good balance between performance and complexity.
Li et al. [21] proposed the Pre-configured Error Pattern Ordered Statistical Decoding (PEPOSD) algorithm, which optimizes the decoding order by pre-generating error patterns and combining them with the CRC test, thus improving the performance of short CRC–Polar Codes while significantly reducing decoding complexity and showing a better balance of performance and complexity than traditional methods. Timokhin et al. [22] proposed an improved SC-Creeper decoding algorithm, which combines threshold control with a cost function to improve the efficiency and performance of Polar Code decoding, especially for large code lengths; it outperforms the traditional Fano decoding algorithm and approaches the CA-SCL-8 decoder, demonstrating a better balance of computational efficiency and performance. Table 1 compares these schemes.
In this paper, we introduce a novel error correction coding scheme, which is an enhanced version of the error correction scheme for CRC codes and Polar Codes. This scheme is optimized through Successive Cancellation Decoding algorithms, with the aim of enhancing the error detection and correction capabilities during data transmission. Our main contributions are as follows:
1. We propose a novel error correction scheme that improves the data error correction performance.
2. We introduce a new sequence interleaving technique to enhance the decoding reliability of Polar Codes.
3. The proposed scheme efficiently identifies and corrects errors without requiring additional retransmission by employing dynamic error pattern analysis.
4. Experimental results demonstrate the effectiveness of the proposed scheme in reducing the BER and improving decoding efficiency.
This research not only enhances the reliability of data transmission but also offers substantial technological support for 6G and other high-speed communication technologies, with the potential for widespread application across various critical domains.

2. Methods

2.1. CRC

The CRC is a widely employed technique for detecting errors in digital data [23]. Particularly valuable in data transmission and storage contexts, CRC helps ensure the integrity of data by detecting accidental changes to raw data arising from noise or other disturbances.
CRC operates through polynomial division, a mathematical concept borrowed from algebra, but applied here in a binary framework. Each block of data to be transmitted is taken as a large number, often visualized as a sequence of bits. This number is then divided by a predetermined, non-zero binary polynomial, known as the CRC polynomial. The key mathematical operation involved is the modulo-2 division, which is analogous to the long division one learns in arithmetic, but adjusted for binary calculations. Here, subtraction and addition are equivalent, simplifying the arithmetic operations to XOR operations.
The essence of CRC’s error-checking capability lies in its generation and use of a ‘checksum’—a small block of data derived from the modulo-2 division. Before transmission, the checksum is appended to the data. Upon reception, the combined block of data and checksum is once again divided by the same CRC polynomial. If the remainder of this division is zero, the data are considered unchanged; otherwise, an error is flagged.
Mathematically, if D is the original data represented as a binary number, and P is the CRC polynomial, then the CRC process can be described by the following steps:
1. Multiply D by 2^n, where n is the degree of P. This step effectively appends n zero bits to the end of D.
2. Divide the modified D by P and find the remainder R, a binary number whose degree is less than that of P.
3. Append R to the original data D, forming the transmitted message T. On the receiver’s end, T is divided by P. A non-zero remainder indicates that an error has occurred during transmission, prompting corrective measures or retransmission requests.
This method’s reliability and simplicity have made CRC a fundamental tool in digital communications, widely applied in various networking protocols and storage systems to ensure data integrity.
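To make the three steps concrete, the following sketch implements CRC generation and verification by modulo-2 (XOR) long division in Python. It is a minimal illustration; the 8-bit polynomial used in the example is an arbitrary assumption, not the polynomial adopted in this paper.

```python
def mod2_div(bits, poly_bits):
    """Modulo-2 (XOR) long division; returns the remainder (degree(P) bits)."""
    buf = list(bits)
    n = len(poly_bits) - 1                      # degree of the CRC polynomial P
    for i in range(len(buf) - n):
        if buf[i] == 1:                         # subtract (XOR) P aligned at position i
            for j, p in enumerate(poly_bits):
                buf[i + j] ^= p
    return buf[-n:]

def crc_encode(data_bits, poly_bits):
    """Steps 1-3: append n zeros, divide by P, append the remainder R to the data."""
    n = len(poly_bits) - 1
    remainder = mod2_div(list(data_bits) + [0] * n, poly_bits)
    return list(data_bits) + remainder

def crc_check(message_bits, poly_bits):
    """Receiver side: divide T by P; an all-zero remainder means no error was detected."""
    return all(b == 0 for b in mod2_div(message_bits, poly_bits))

# Hypothetical example with the 8-bit polynomial x^8 + x^2 + x + 1 (not necessarily the paper's choice)
poly = [1, 0, 0, 0, 0, 0, 1, 1, 1]
t = crc_encode([1, 0, 1, 1, 0, 0, 1, 0], poly)
assert crc_check(t, poly)
t[3] ^= 1                                       # inject a single-bit transmission error
assert not crc_check(t, poly)
```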

2.2. Polar Code

Polar Codes are known for their low-complexity encoding and decoding algorithms that achieve the channel capacity of the Binary-Input Discrete Memoryless Channel (B-DMC). Polar Codes are based on the concept of channel polarization, which ensures that, as the code length increases, the channels are polarized into two different sets: one set of fully noisy channels and another set of noise-free channels [22].

2.2.1. Encoding with Polar Codes

The encoding process of Polar Codes is founded on a specific construction of a generator matrix which exploits the recursive nature of the code construction. The key operation in Polar Code encoding is the application of a transformation matrix, typically denoted as G, to the input vector u of length N. The matrix G is defined recursively by
G = F^{\otimes n}
where F is the basic transformation matrix given by
F = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}
and n is such that N = 2^n. Here, F^{\otimes n} denotes the n-th Kronecker power of F, representing the recursive construction of the code.
The vector u contains both data and frozen bits, with frozen bits set at known positions (often zero) and known to both the encoder and decoder, reducing the transmission’s redundancy. The encoded vector x is computed as
x = u · G
This operation linearly transforms the input vector u using the matrix G, resulting in the coded vector x ready for transmission over the channel.
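As a minimal illustration of this encoding (assuming the natural bit order, i.e., omitting the bit-reversal permutation that some constructions include in G), the sketch below builds F^{\otimes n} with repeated Kronecker products and computes x = u·G over GF(2); the example vector u is arbitrary.

```python
import numpy as np

F = np.array([[1, 0],
              [1, 1]], dtype=int)

def generator_matrix(n):
    """G = F^{(x)n}: the n-th Kronecker power of F (bit-reversal permutation omitted)."""
    G = np.array([[1]], dtype=int)
    for _ in range(n):
        G = np.kron(G, F)
    return G

def polar_encode(u):
    """x = u . G over GF(2); u has length N = 2^n and mixes data bits with
    frozen bits fixed to zero at positions known to encoder and decoder."""
    n = int(np.log2(len(u)))
    return (np.asarray(u, dtype=int) @ generator_matrix(n)) % 2

# Example: N = 8, frozen positions set to 0, data bits elsewhere (illustrative only)
u = np.array([0, 0, 0, 1, 0, 1, 1, 0])
x = polar_encode(u)
```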

2.2.2. Decoding with Polar Codes

Efficient decoding of Polar Codes is performed using the Successive Cancellation (SC) algorithm [18], which exploits the recursive structure of G. The SC decoder operates by estimating the transmitted bits sequentially, taking advantage of the previously decoded bits to refine the estimates of the following bits. For each bit, the decoder calculates the likelihood of it being a ‘0’ or ‘1’, based on the received values and the decisions made for the preceding bits.
The performance of the SC decoder can be significantly improved with techniques such as Successive Cancellation List (SCL) decoding, which maintains multiple decoding paths simultaneously, or utilizing soft decision decoding to enhance the reliability of each estimated bit.
These decoding techniques ensure that Polar Codes not only achieve high efficiency but also maintain robustness in environments with high noise levels, making them highly suitable for applications requiring reliable data transmission such as in 6G communications.

2.2.3. Successive Cancellation (SC) Algorithm

Effective decoding of Polar Codes often employs the Successive Cancellation (SC) algorithm, which leverages the recursive nature of the transformation matrix G. The SC algorithm is designed to estimate the transmitted bits sequentially, using a series of probability calculations and decisions based on both the received data and previously decoded bits.
The Successive Cancellation algorithm operates by considering the likelihood ratio (LR) of each bit. The LR for a bit u_i is defined as the ratio of the probability of u_i being 0 to that of u_i being 1, given the received vector y and the estimates of the previous bits \hat{u}_1^{i-1}. This can be expressed mathematically as
LR_i = \frac{P(u_i = 0 \mid y, \hat{u}_1^{i-1})}{P(u_i = 1 \mid y, \hat{u}_1^{i-1})}
The bit \hat{u}_i is then decoded based on this likelihood ratio:
\hat{u}_i = \begin{cases} 0, & \text{if } LR_i \geq 1 \\ 1, & \text{if } LR_i < 1 \end{cases}
Recursive Calculation of LRs:
The recursive structure of the matrix G allows the LRs to be computed efficiently. Specifically, for a code of length N = 2^n, the decoding operation can be split recursively into two halves. The LRs for the first half of the bits u_1^{N/2} and the second half u_{N/2+1}^{N} are computed separately, using the previously decoded bits, as follows. For the first half,
LR_{1:N/2}\left(y_1^{N/2}, \hat{u}_1^{i-1}\right) = f\left(y, \hat{u}_{N/2+1}^{N}\right)
For the second half,
LR_{N/2+1:N}\left(y_{N/2+1}^{N}, \hat{u}_1^{N/2}\right) = g\left(y, \hat{u}_1^{i-1}, \hat{u}_{N/2+1}^{i-1}\right)
The functions f and g are defined based on the polar transform and take into account both the received vector y and the estimated bits so far. The SC algorithm systematically decodes each bit and uses the estimate to refine subsequent likelihood ratio calculations, moving from the least reliable to the most reliable bits according to the polarizing properties of the code construction.
This recursive and sequential approach allows the SC decoder to efficiently utilize both channel outputs and prior decisions, making it highly effective for the decoding of Polar Codes, especially in scenarios where computational simplicity and performance are critical.
This detailed explanation provides a clear view of how the SC algorithm functions, highlighting its mathematical basis and recursive nature, which are crucial for understanding its application in decoding Polar Codes.
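A compact sketch of this recursion is given below. It works in the log-likelihood ratio (LLR) domain with the common min-sum approximation of the f function, assumes the F^{\otimes n} encoder of Section 2.2.1 with frozen bits fixed to zero, and is intended only to illustrate the f/g recursion, not to reproduce the optimized decoder used in the experiments.

```python
import numpy as np

def f_llr(a, b):
    # "f" (check-node) update, min-sum approximation in the LLR domain
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g_llr(a, b, x_left):
    # "g" (variable-node) update; x_left are the re-encoded bits of the first half
    return b + (1 - 2 * x_left) * a

def sc_decode(llr, frozen):
    """Recursive SC decoding. llr: channel LLRs (positive favours 0);
    frozen: boolean mask, True where the bit is frozen (assumed 0).
    Returns (u_hat, x_hat): estimated input bits and re-encoded codeword bits."""
    if len(llr) == 1:
        u = 0 if frozen[0] else int(llr[0] < 0)
        return np.array([u]), np.array([u])
    half = len(llr) // 2
    # first half: f update, then decode recursively
    u_l, x_l = sc_decode(f_llr(llr[:half], llr[half:]), frozen[:half])
    # second half: g update uses the partial sums (re-encoded bits) of the first half
    u_r, x_r = sc_decode(g_llr(llr[:half], llr[half:], x_l), frozen[half:])
    return np.concatenate([u_l, u_r]), np.concatenate([(x_l + x_r) % 2, x_r])

# Example: BPSK over AWGN, LLR = 2*y/sigma^2 (illustrative parameters, no frozen bits)
sigma = 0.8
x = np.array([0, 1, 1, 0, 1, 0, 0, 1])                 # a transmitted codeword
y = (1 - 2 * x) + sigma * np.random.randn(len(x))      # map 0 -> +1, 1 -> -1, add noise
u_hat, _ = sc_decode(2 * y / sigma**2, np.zeros(len(x), dtype=bool))
```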

3. Proposed Scheme

Assume that Device_A is the data sender and Device_B is the data receiver. The Device_A process is shown in Table 2, and the Device_B process is shown in Table 3. The functions crc_generator, crc_interleave, polar_encoding, and polar_sc_decoding are standard algorithms described in the preliminaries; the main contribution of this paper is the error_detecting, error_correcting, and crc_deinterleave algorithms.

3.1. CRC Interleaving Algorithm

The CRC sequence produced in the CRC encoding stage is interleaved to improve the reliability of information transmission, i.e., important bits are transmitted through highly reliable channels. The sequence interleaving scheme is introduced below.

3.1.1. Related Definitions and Concepts

(a) Parameters and functions
The following explains the meaning of subsequent parameters or functions:
1. CRC: cyclic redundancy code.
2. k: the length of each message packet (data block), called the information bits.
3. n: the length of the codeword.
4. Linear (n, k) code: a linear code with codeword length n, k information bits, and n − k check bits.
5. g(x): generator polynomial of the CRC.
6. r: the degree of the CRC generator polynomial g(x).
7. m: the order (period) of the polynomial g(x), i.e., the smallest positive integer such that g(x) divides x^m + 1.
8. h(x): parity check polynomial of the CRC code.
9. h_i: the coefficient of the i-th term of h(x).
10. H: parity check matrix of the code.
11. h_{i,j}: the element in row i and column j of the matrix H.
12. H_c: the matrix obtained after the column transformation.
13. H_c^j: column j of the matrix H_c.
14. f_H(H_c^j): the column index in H corresponding to the j-th column of H_c.
15. b_i: the i-th bit of information.
16. H_{n−k,n}: H is a matrix with n − k rows and n columns.
(b) Data block construction
The form of the data block before the CRC interleaving process is as follows. The input data of CRC encoding are binary information sequences, which are divided into fixed-length message groups (data blocks); the polynomial calculations are performed over the binary field GF(2). After the redundant check bits are obtained for a message group, the message group has the form b_{n−1}, b_{n−2}, b_{n−3}, …, b_{n−k}, b_{n−k−1}, b_{n−k−2}, …, b_0, where the first k bits are the information bits and the last n − k bits are the check bits.
(c) Generator and parity check polynomial selection
The selection principle of the CRC generator polynomial g(x) is as follows: choose g(x) so that the row vectors of the resulting CRC parity check matrix have as small a Hamming weight as possible, so that as few bits as possible are needed to determine whether an error has occurred in the CRC check bits. Accordingly, the period m of g(x) is taken as the smallest positive integer for which g(x) divides x^m + 1.
The parity check polynomial h(x) of the CRC(n, k) code is determined as follows:
h(x) = \frac{x^m + 1}{g(x)}
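For concreteness, h(x) can be obtained by long division of x^m + 1 by g(x) over GF(2), as in the following sketch; the choice g(x) = x^4 + x + 1 with period m = 15 is only an illustrative assumption.

```python
def gf2_poly_divmod(num, den):
    """Long division of binary polynomials. Coefficient lists are indexed by
    degree (constant term first); subtraction over GF(2) is XOR."""
    num = list(num)
    deg_den = len(den) - 1
    quot = [0] * max(len(num) - deg_den, 1)
    for s in range(len(num) - deg_den - 1, -1, -1):
        if num[s + deg_den] == 1:           # leading term still present: subtract x^s * den
            quot[s] = 1
            for j, d in enumerate(den):
                num[s + j] ^= d
    return quot, num[:deg_den]

# Illustrative assumption: g(x) = x^4 + x + 1, whose period is m = 15
g = [1, 1, 0, 0, 1]                         # 1 + x + x^4
x_m_plus_1 = [1] + [0] * 14 + [1]           # x^15 + 1
h, rem = gf2_poly_divmod(x_m_plus_1, g)
assert all(r == 0 for r in rem)             # g(x) divides x^m + 1 exactly
# h now holds the coefficients of the parity check polynomial h(x), of degree 11
```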
(d) Determine the parity check matrix H
Hypothesis 1.
Suppose that h(x) = a + a_0 x + a_1 x^2 + ⋯ + a_{p−1} x^p.
Then the parity check matrix H has n − k rows, as shown in Equation (9), and n columns. The last column is composed of the coefficients of h(x), and the other columns are formed by cyclically shifting the last column to the left. Specifically, row z (0 ≤ z < r) is formed from the coefficients of h(x) circularly shifted to the left by r − z + 1 positions.
H = \begin{pmatrix}
a_{p-(n-k)-1} & a_{p-(n-k)-2} & a_{p-(n-k)-3} & \cdots & a_{p-(n-k)+2} & a_{p-(n-k)+1} & a_{p-(n-k)} \\
a_{p-(n-k)-2} & a_{p-(n-k)-3} & a_{p-(n-k)-4} & \cdots & a_{p-(n-k)+1} & a_{p-(n-k)} & a_{p-(n-k)-1} \\
a_{p-(n-k)-3} & a_{p-(n-k)-4} & a_{p-(n-k)-5} & \cdots & a_{p-(n-k)} & a_{p-(n-k)-1} & a_{p-(n-k)-2} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
a_{p-n} & a_{p-n-1} & a_{p-n-2} & \cdots & a_{p-3} & a_{p-2} & a_{p-1}
\end{pmatrix}
where the subscripts are taken cyclically, so that each row is the one-position cyclic left shift of the row above.

3.1.2. Interleaving Process

The interleaving process is a systematic procedure designed to rearrange the columns of a binary matrix to optimize the sequence for error detection and correction. The scheme flow chart is shown in Figure 1. This process involves the following steps:
Step 1: Row-wise Traversal and Column Rearrangement
Initialization: Begin with the first row of the matrix. Traverse each element from left to right.
Condition Check: For each element in the row, the following occurs:
  • If the element is 1, move the corresponding column to the leftmost available position of the matrix, ensuring that the new leftmost column does not already contain a 1.
  • Shift all the other columns (if any) one position to the right to accommodate this movement.
  • If the element is 0, leave the column in its current position.
Example:
  • Suppose the third element in the row is 1. Move the third column to the leftmost position that does not already contain a 1 (e.g., the second column). Shift the remaining columns (fourth, fifth, etc.) to the right.
  • If the fifth element in the row is also 1, move the fifth column to the next available leftmost position (e.g., the third column), and shift the remaining columns accordingly.
  • Repeat this operation until all elements in the row have been processed.
Step 2: Subsequent Row Adjustment
Traverse and Rearrange: After completing Step 1 for the first row, move to the second row. Traverse the columns whose element value is 1 in this row.
Dual Condition Check:
  • If the element of a column in the current row (e.g., the second row) and the element of the same column in the adjacent row are both 1, move the column to the leftmost position at which the upper and lower rows of the matrix do not simultaneously contain a 1.
  • If a column already satisfies this condition, leave it unchanged.
Example:
  • Suppose the first column has 1 in both the first and second rows. Since no leftmost column exists for this condition, leave the column in place.
  • If the second column has a 1 in the first row and 0 in the second row, do not move the column.
  • Continue processing the second row in this manner until all columns with a 1 are traversed.
Repeat for Remaining Rows: For each subsequent row, repeat Step 2 using the same traversal and rearrangement logic. The starting position of each traversal is the leftmost non-zero position of the previous row.
Step 3: Interleaved Sequence Generation
Column Numbering: After completing the interleaving process, assign a label to each column based on its position in the original matrix. Sequence Formation: Arrange the column numbers in the order they appear in the newly interleaved matrix. The resulting sequence is the interleaved sequence number array:
f_H(H_c^0), f_H(H_c^1), …, f_H(H_c^{k−1}).
Step 4: Position Mark Array Generation
Two-Dimensional Array: Create a position mark array, denoted as positionInfo, which has two rows. The first row indicates the position of each column in the original check matrix H. The second row indicates the position of the last 1 in each row of the interleaved matrix. Purpose: the positions in the second row are used for error detection and correction.
Example: If the last 1 in the first row appears in column 6, the last 1 in the second row appears in column 10, and the last 1 in the third row appears in column 11, the second row of positionInfo would be [6, 10, 11].
The interleaving process concludes with the formation of the following interleaved sequence:
H_c = \left( H_c^0, H_c^1, \ldots, H_c^{k-1} \right)
and the associated position mark array, which are integral to the error detection and correction mechanism.
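One way to read Step 1 is as a stable reordering that moves every column with a 1 in the first row to the front while preserving relative order. The sketch below implements only that reading for a single row and also returns the original column indices, i.e., the sequence f_H(H_c^j); it is an illustrative interpretation rather than the full interleaver (Steps 2 to 4 are omitted), and the small matrix H used in the example is hypothetical.

```python
import numpy as np

def interleave_first_row(H):
    """Illustrative reading of Step 1: columns whose first-row element is 1 are
    moved to the leftmost positions (keeping their relative order), and columns
    with 0 stay behind them. Returns the reordered matrix H_c and, for each
    column of H_c, its original column index in H (the sequence f_H(H_c^j))."""
    H = np.asarray(H)
    ones = [j for j in range(H.shape[1]) if H[0, j] == 1]
    zeros = [j for j in range(H.shape[1]) if H[0, j] == 0]
    order = ones + zeros
    return H[:, order], order

# Small hypothetical check matrix, used only to show the call
H = np.array([[0, 1, 0, 1, 1],
              [1, 0, 1, 1, 0]])
H_c, f_H = interleave_first_row(H)   # columns 1, 3, 4 move to the front
```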

3.2. Error-Detecting and -Correcting Scheme

3.2.1. Error Detecting

As Figure 2 shows, after the column-shifting process, some columns contain a 1 at the end of the 1-sequence of the i-th row and, at the same time, at the start of the 1-sequence of the (i+1)-th row. We denote the sequence formed by these columns as the Repeated Bits Area (RBA). The remaining 1-columns form the Unrepeated Bits Area (UBA), which helps to describe where errors occur.
The basic idea behind the proposed error-detecting and -correcting method is to manipulate bit-flipping based on the capacity, or reliability, of the different polarized channels. The proposed scheme is mainly intended to correct one-bit and double-bit errors. Here, the terms one-bit error and double-bit error mean that one or two bit errors occurred within the span of two consecutive rows, rather than anywhere in the whole data block.
The error-correcting process includes two steps:
Error Symptom Classification. The error-correcting process is carried out only if errors occur; consequently, for two consecutive rows, the first row has always failed its parity check equation. More specifically, for a one-bit error: if it occurs in the RBA, the parity check equations of both consecutive rows are non-zero; if it occurs in the UBA, the parity check of the first of the two consecutive rows fails while the second passes. For double-bit errors: if both errors occur in the RBA, the parity check equations of both consecutive rows are non-zero, which is the same symptom as a one-bit error in the RBA; if both errors occur in the UBA of the first row, the symptom is also the same as a one-bit error in the RBA; moreover, if one error occurs in the RBA and one in the UBA of the same row, or the errors occur in different rows, both parity checks fail. Table 4 summarizes all these conditions.
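The classification above can be restated compactly as a mapping from the two parity check outcomes to candidate error patterns. The sketch below is a paraphrase of this section and of the correction steps in Section 3.2.2; the exact pattern sets for a given code are those enumerated in Tables 4 and 5.

```python
def classify_error_symptom(check_row_i_failed, check_row_i1_failed):
    """Map the parity check results of rows i and i+1 to candidate error
    patterns (a paraphrase of the prose, not an exhaustive rule)."""
    if check_row_i_failed and not check_row_i1_failed:
        # Only the first check fails: the two patterns tried in Step 2.
        return ["single-bit error in the UBA of row i",
                "double-bit error within the UBA of row i"]
    if check_row_i_failed and check_row_i1_failed:
        # Both checks fail: the five patterns tried in Step 3.
        return ["single-bit error in the RBA",
                "double-bit error in the RBA",
                "double-bit error in the UBA of row i",
                "double-bit error: one bit in the RBA, one in the UBA",
                "double-bit error spanning rows i and i+1"]
    return []  # no symptom detected: no correction is attempted
```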

3.2.2. Error Correcting

Since the error-correcting process occurs after an error symptom is detected, analyzing the error pattern underlying various symptoms should be the first step of error correction.
According to Table 4, if the first parity check fails while the second passes, there are two possible error patterns; if both parity checks fail, there are five possible error patterns.
The basic idea of error correcting is as follows: based on the error symptom, find the most probable error pattern and flip the corresponding bits. To decide which one or two bits to flip, the Error Occur Probability (EOP) of these error patterns is calculated and sorted, so that bit-flipping can be applied in order of decreasing EOP. In other words, the bit or combination of bits with the lowest reliability, i.e., transmitted through the channels with the lowest capacity, is flipped first, and the parity check is performed again; if the check passes, the error is corrected, otherwise the correction fails.
(a) Bit Reliable Probability
Suppose there are two consecutive rows as Figure 3 displays:
The Bit Reliable Probability (BRP) is used to express the probability that a certain bit causes an error; its value equals 1 minus the capacity of the polarized channel that carries the bit:
P_{br}(i, j) = 1 − C_{i,j}
where P_{br}(i, j) and C_{i,j} denote the BRP and the polarized channel capacity of the bit located in the i-th row and j-th column of the shifted matrix. Then, according to the column-shifting process described in Section 3.1.2, the BRP relationship of the 1s in Figure 3 is
P_{br}(i+1, 9) > P_{br}(i+1, 10) > P_{br}(i, 4) > P_{br}(i, 5) > P_{br}(i, 6) > P_{br}(i, 7) > P_{br}(i, 8)
For an error pattern containing only one bit, the EOP equals the BRP of that bit. For other error patterns, the EOP equals the product of the BRPs of the bits involved. Suppose an error is detected in the two consecutive rows of Figure 3; Table 5 then lists the EOP of the distinct error patterns corresponding to the symptoms of Table 4.
(b) Error-Correcting Principle
Since trying to manipulate the bit-flipping by assuming all the possible error patterns is resource-consuming, only part of the error patterns should be tested. Consequently, there are two suggested ways to decide which part. One is setting a certain number of error patterns that would be tested under certain error symptoms; the other is setting an EOP number under which the error patterns would be tested.
Suppose the split channel capacities of the columns in Figure 3 are
C_{i,4} = 0.42, C_{i,5} = 0.48, C_{i,6} = 0.52, C_{i,7} = 0.60, C_{i,8} = 0.64, C_{i,9} = 0.35, C_{i,10} = 0.38
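With these illustrative capacities, the BRPs and the EOPs of candidate error patterns can be computed and ranked before any bit is flipped, as in the following sketch; the pattern list shown is only a subset of Table 5.

```python
# Split channel capacities of the columns in Figure 3 (illustrative values from the text);
# columns 4-8 carry the 1s of row i, columns 9-10 those of row i+1.
capacity = {4: 0.42, 5: 0.48, 6: 0.52, 7: 0.60, 8: 0.64, 9: 0.35, 10: 0.38}

# Bit Reliable Probability: P_br = 1 - C for each column
brp = {col: 1.0 - c for col, c in capacity.items()}

def eop(pattern):
    """Error Occur Probability of a pattern: the product of the BRPs of its bits."""
    p = 1.0
    for col in pattern:
        p *= brp[col]
    return p

# A few candidate error patterns (single bits and pairs, cf. Table 5), ranked by EOP
patterns = [(4,), (5,), (6,), (4, 5), (4, 6), (5, 6)]
ranked = sorted(patterns, key=eop, reverse=True)
# Bit flipping is attempted starting from the highest-EOP pattern, i.e., the least
# reliable bit or bit combination.
```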
(c) Error correction process:
The error correction process includes the following steps:
Step 1: Determine the error mode. After an error is detected at the first error detection and correction mark position, continue decoding to the next marker position. If the check result at that marker position is 0 (the first error mode), perform Step 2; if it is not 0 (the second error mode), perform Step 3.
Step 2: The first error mode: the check at the first mark position failed, while the check at the next mark position passed.
(a) Single-bit error: the error is assumed to lie in the non-repeat region of the row. Step a1-1: Select the bit with the lowest channel reliability from the non-repeat area, flip it, and check the row again. If the check result is 0, the error correction is complete; otherwise, continue. Step a1-2: Select the bit with the second-lowest channel reliability from the non-repeat area of the row, flip it, and check the row again. If the check result is 0, the error correction is complete (in practice, instead of stopping at the second-lowest-reliability bit, the low-reliability bits can be flipped and tested one by one); otherwise, execute (b).
(b) Double-bit error: two errors are assumed to occur in the non-repeat region of row i.
Step b1-1: Form combinations of the bits in the non-repeat region of row i, select the bit combination with the lowest channel reliability (the product of the per-bit channel reliabilities), flip the two positions, and check the row again. If the check result is 0, the error correction is complete; otherwise, continue.
Step b1-2: Select the bit combination with the second-lowest reliability, flip the two positions, and check the line for errors again. If the test result is 0, the error correction is complete; otherwise, the error correction fails and the error correction ends.
(In the application process, the bit combination with the second-lowest reliability may not be selected according to the device performance and requirements, and the lowest-reliability bit combination can be selected for flip verification.)
Step 3: The second error mode: the check at the first mark position failed, and the check at the next mark position also failed. (This indicates that, with high probability, the same error affects both checks.)
(a) Single-bit error: the error is assumed to occur in the repetition area shared by the two adjacent rows.
Step a2-1: Select the lowest-channel-reliability bit from the row repetition area, flip the position of the bit, and check the line for errors again. If the test result is 0, the error correction is complete; otherwise, the error correction continues.
Step a2-2: Select the bit with the second-lowest channel reliability from the row repetition zone, flip the bit, and check the error of the line again. If the test result is 0, the error correction is complete; otherwise, continue the error correction.
(In the application process, the bit combination with the second-lowest reliability may not be selected according to the device performance and requirements, and the lowest-reliability bit combination can be selected for flip verification.)
(b) Two-bit error: both errors are assumed to occur across the two adjacent rows, or one in the repeat region and one in the non-repeat region.
Step b2-1: Form bit pairs within the row repetition area, pairs with one bit from the repetition area and one bit from the non-repetition area, and pairs spanning the non-repetition areas of the two rows; calculate the channel reliability of every combination (the product of the per-bit channel reliabilities); select the combination with the lowest channel reliability, flip both positions, and recheck the row. If the check result is 0, the error correction is complete; otherwise, continue.
Step b2-2: Select the bit combination with the second-lowest reliability, flip the two positions, and check the line for errors again. If the test result is 0, the error correction is complete; otherwise, the error correction is ended.
(In the application process, the second-lowest-reliability bit combination may not be selected according to the device performance and requirements, and the lowest-reliability bit combination can be selected for flip verification.)
The error correction rule here is to try the two error patterns with the highest error probability. Alternatively, a threshold can be set before the error probability calculation: among all possible error combinations, only those whose error probability exceeds the threshold are tried.
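The flip-and-recheck logic shared by Steps 2 and 3 can be summarized as in the sketch below. It is schematic: parity_check and the candidate pattern list stand in for the row checks of Section 3.1 and the EOP-ranked patterns above, and the cut-off (a fixed number of attempts or an EOP threshold) follows the rule just stated.

```python
def try_correct(bits, candidate_patterns, parity_check, max_attempts=2, eop_threshold=None):
    """Attempt error correction by bit flipping.
    bits: the received bit sequence (list of 0/1) covering the rows being checked;
    candidate_patterns: (positions, eop) pairs already sorted by decreasing EOP;
    parity_check: callable returning True when the relevant row checks pass.
    Flips at most max_attempts patterns (or those above eop_threshold) and
    keeps the first flip that makes the checks pass."""
    tried = 0
    for positions, eop in candidate_patterns:
        if eop_threshold is not None and eop < eop_threshold:
            break                              # remaining patterns are too unlikely
        if tried >= max_attempts:
            break                              # correction failed within the budget
        trial = list(bits)
        for pos in positions:
            trial[pos] ^= 1                    # flip the suspected bit(s)
        if parity_check(trial):
            return trial, True                 # checks pass: error corrected
        tried += 1
    return bits, False                         # leave the data unchanged; correction failed
```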

4. Experiment

An experiment was performed based on the open-source Polar encoding and decoding code published by Yu Yongrun on GitHub (https://github.com/YuYongRun/PolarCodeDecodersInMatlab, accessed on 1 May 2023), which we chose because it is deeply optimized and accelerated and performs best among the publicly available reimplementations; an interleaving function and an error detection and correction module were added to the Successive Cancellation Decoding algorithm. The experimental environment is Matlab2022a on the Windows 11 platform, and the computer configuration is as follows: CPU: 13th Gen Intel(R) Core(TM) i5-13400 2.50 GHz; RAM: 32.0 GB. The experimental samples are randomly generated information bit sequences transmitted over Gaussian white noise, used to measure the BLER at each signal-to-noise ratio. The experimental results are shown in Figure 4: the green line represents the BLER of the Successive Cancellation Decoding algorithm without the error detection and correction module, with an information length of 16 and a CRC length of 8, across different SNRs; the red line represents the scheme after the introduction of the error correction mechanism. In addition, we conducted a complexity experiment based on the Polar Code, with results shown in Figure 5, and complexity experiments on the error detection and error correction parts of the scheme, with results shown in Figure 6 and Figure 7. The complexity experiments show that our solution adds little complexity on top of the Polar Code and can better meet practical communication efficiency requirements.
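The BLER measurement described above follows the usual Monte Carlo pattern. The sketch below shows that pattern in Python for reference only; the actual experiments used the cited MATLAB code, and the encode and decode callables as well as the noise normalization here are placeholders and assumptions.

```python
import numpy as np

def bler_at_snr(snr_db, n_blocks, k, encode, decode):
    """Monte Carlo BLER estimate: random information bits, BPSK over AWGN,
    one block error whenever the decoded block differs from the transmitted one.
    encode/decode are placeholders for the CRC + Polar chain under test;
    the noise variance assumes Es/N0 = snr with unit symbol energy."""
    snr = 10 ** (snr_db / 10)
    sigma = np.sqrt(1 / (2 * snr))
    errors = 0
    for _ in range(n_blocks):
        info = np.random.randint(0, 2, k)                    # random information bits
        x = encode(info)
        y = (1 - 2 * x) + sigma * np.random.randn(len(x))    # AWGN channel
        llr = 2 * y / sigma ** 2
        if not np.array_equal(decode(llr), info):
            errors += 1
    return errors / n_blocks
```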
Under different SNR levels, the new Successive Cancellation Decoding algorithm (SC+)—represented by the red line—is compared with the existing algorithm (SC)—represented by the green line—in terms of BLER. Throughout the chart, it is evident that the SC+ scheme consistently exhibits a lower BLER at each SNR point compared to the SC scheme, highlighting its performance advantage.
Specifically, in the low-SNR region (5 to 7 dB), the new SC+ scheme demonstrates better performance than the existing SC scheme, maintaining a lower error rate even in the presence of weaker signals. For instance, at an SNR of 5 dB, the BLER of SC+ is approximately 0.456, whereas that of SC is 0.424. Although the BLER for both schemes decreases as the SNR increases, the performance improvement of SC+ is more significant at all SNR points.
In the high-SNR region (9 dB and above), the gap between the SC+ and SC schemes narrows. At an SNR of 9.5 dB, the BLER of SC+ drops to about 0.025, while the BLER of SC is about 0.019, underscoring the clear superiority of SC+ in high-SNR environments.
In summary, the newly proposed SC+ scheme significantly reduces the error rate across the entire range of SNRs compared to the existing SC scheme, particularly in high-SNR environments, and the proposed algorithm also has a complexity advantage. This performance enhancement suggests that the SC+ scheme has significant potential value in improving the reliability of communications.

5. Conclusions and Future Work

In order to improve the reliability and accuracy of data transmission, especially in environments that require high signal integrity such as 5G and 6G communications, we propose a novel error correction scheme based on CRC codes and Polar Codes that uses a new sequence interleaving technique and dynamic error pattern analysis. This approach enables more efficient identification and correction of errors without requiring data retransmission. The proposed method significantly reduces the BER compared to conventional decoding techniques. These improvements not only increase the reliability of communication but also reduce delay and computational burden, making the scheme more suitable for modern communication systems that require high efficiency and low latency. Experimental results demonstrate that the proposed scheme outperforms existing error correction methods, particularly in high-signal-to-noise-ratio environments.

Future Work

Despite the demonstrated advantages, the proposed error correction scheme has several limitations that require further exploration. First, the effectiveness of the sequence interleaving technique may vary with different types of communication channels and error distributions. Future work could focus on adaptive interleaving strategies that dynamically adjust to the channel conditions. Second, while the dynamic error pattern analysis enhances error correction performance, it introduces additional computational complexity. Research into optimization techniques, such as hardware acceleration or efficient algorithms, could help mitigate this issue. Additionally, the current scheme is primarily evaluated in scenarios with high SNRs; extending the analysis to low-SNR environments would provide a more comprehensive understanding of its performance. Finally, integrating the proposed scheme with emerging technologies, such as quantum communication and machine learning-based decoding, offers promising directions to further enhance error correction capabilities and meet the demands of next-generation communication systems.

Author Contributions

Methodology, F.A., J.Y. and Z.Y.; validation, F.A. and J.Y.; writing—original draft preparation, Z.Y.; writing—review and editing, F.A. and J.Y.; supervision, J.Y.; project administration, F.A. and J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (No. 62162020), the Haikou Science and Technology Special Fund (2420016000142), and the Science Project of Hainan University (KYQD(ZR)20021).

Data Availability Statement

The data involved in this research are randomly generated binary bit sequences. Since the study does not depend on any particular dataset, no data are disclosed here.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SNR    signal-to-noise ratio
CRC    cyclic redundancy check
BLER    block error rate
IoT    Internet of Things
PBCH    Physical Broadcast Channel
SC    Successive Cancellation Decoding
SCL    Successive Cancellation List Decoding
BP    Belief Propagation Decoding
SCC-SCL    Segmented CRC Error Correction-Assisted SCL Decoding
BER    bit error rate
PCC    Parity Check Concatenated
LDPC    Low-Density Parity Check
B-DMC    Binary-Input Discrete Memoryless Channel
LRs    likelihood ratios
UBA    Unrepeated Bits Area
RBA    Repeated Bits Area
EOP    Error Occur Probability
BRP    Bit Reliable Probability
SC+    new Successive Cancellation Decoding algorithm

References

  1. Ye, J.; Yang, Z. An ECC with Error Detection and against Side Channel Attacks for Resource Constrained Devices. J. King Saud Univ. Comput. Inf. Sci. 2024, 36, 102019. [Google Scholar] [CrossRef]
  2. Sodhro, A.H.; Pirbhulal, S.; Luo, Z.; Muhammad, K.; Zahid, N.Z. Toward 6G Architecture for Energy-Efficient Communication in IoT-Enabled Smart Automation Systems. IEEE Internet Things J. 2021, 8, 5141–5148. [Google Scholar] [CrossRef]
  3. Lai, B.; Wen, J.; Kang, J.; Du, H.; Nie, J.; Yi, C.; Kim, D.I.; Xie, S. Resource-Efficient Generative Mobile Edge Networks in 6G Era: Fundamentals, Framework and Case Study. IEEE Wirel. Commun. 2024, 31, 66–74. [Google Scholar] [CrossRef]
  4. Gautam, A.; Thakur, P.; Singh, G. Unlocking Efficiency and Reliability: Polar Codes in 6G Networks. In Proceedings of the 8th International Conference on Computing, Control and Industrial Engineering (CCIE2024), Wuhan, China, 27–28 April 2024. [Google Scholar]
  5. Miao, S.; Kestel, C.; Johannsen, L.; Geiselhart, M.; Schmalen, L.; Balatsoukas-Stimming, A.; Liva, G.; Wehn, N.; Brink, S.T. Trends in Channel Coding for 6G. Proc. IEEE 2024, 112, 653–675. [Google Scholar] [CrossRef]
  6. Dong, Y.; Dai, J.; Niu, K.; Wang, S.; Yuan, Y. Joint Source-Channel Coding for 6G Communications. China Commun. 2022, 19, 101–115. [Google Scholar] [CrossRef]
  7. Geiselhart, M.; Krieg, F.; Clausius, J.; Tandler, D.; ten Brink, S. 6G: A Welcome Chance to Unify Channel Coding? IEEE Bits Inf. Theory Mag. 2023, 3, 67–80. [Google Scholar] [CrossRef]
  8. Isong, E.B.; Eze, D.U.F. Optimizing a 3G Cellular Wireless Network Using Forward Error Correction. Int. J. Sci. Eng. Res. 2013, 4, 1083–1087. [Google Scholar]
  9. Shah, P.M.; Vyavahare, P.D.; Jain, A. Modern Error Correcting Codes for 4G and beyond: Turbo Codes and LDPC Codes. In Proceedings of the 2015 Radio and Antenna Days of the Indian Ocean (RADIO), Belle Mare, Mauritius, 21–24 September 2015; pp. 1–2. [Google Scholar] [CrossRef]
  10. Wehn, N.; Sahin, O.; Herrmann, M. Forward-Error-Correction for Beyond-5G Ultra-High Throughput Communications. In Proceedings of the 2021 11th International Symposium on Topics in Coding (ISTC), Montreal, QC, Canada, 30 August–3 September 2021; pp. 1–5. [Google Scholar] [CrossRef]
  11. Gautam, A.; Thakur, P.; Singh, G. Advanced Channel Coding Schemes for B5G/6G Networks: State-of-the-art Analysis, Research Challenges and Future Directions. Int. J. Commun. Syst. 2024, 37, e5855. [Google Scholar] [CrossRef]
  12. Rowshan, M.; Qiu, M.; Xie, Y.; Gu, X.; Yuan, J. Channel Coding Toward 6G: Technical Overview and Outlook. IEEE Open J. Commun. Soc. 2024, 5, 2585–2685. [Google Scholar] [CrossRef]
  13. Tal, I.; Vardy, A. List Decoding of Polar Codes. IEEE Trans. Inf. Theory 2015, 61, 2213–2226. [Google Scholar] [CrossRef]
  14. Pamuk, A. An FPGA Implementation Architecture for Decoding of Polar Codes. In Proceedings of the 2011 8th International Symposium on Wireless Communication Systems, Aachen, Germany, 6–9 November 2011; pp. 437–441. [Google Scholar] [CrossRef]
  15. Niu, K.; Wu, B.L.; Dai, J.J.; Yu, S.; Yuan, Y.F. Polar Coded Modulation for 6G System. J. Beijing Univ. Posts Telecommun. 2022, 45, 1. [Google Scholar]
  16. Sharma, A.; Salim, M. Polar Code Appropriateness for Ultra-Reliable and Low-Latency Use Cases of 5G Systems. Int. J. Networked Distrib. Comput. 2019, 7, 93. [Google Scholar] [CrossRef]
  17. Balatsoukas-Stimming, A.; Giard, P.; Burg, A. Comparison of Polar Decoders with Existing Low-Density Parity-Check and Turbo Decoders. In Proceedings of the 2017 IEEE Wireless Communications and Networking Conference Workshops (WCNCW), San Francisco, CA, USA, 19–22 March 2017; pp. 1–6. [Google Scholar] [CrossRef]
  18. Niu, K.; Chen, K.; Lin, J.; Zhang, Q. Polar Codes: Primary Concepts and Practical Decoding Algorithms. IEEE Commun. Mag. 2014, 52, 192–203. [Google Scholar] [CrossRef]
  19. Zhang, J.; Qiu, H.; Zhan, L.; Chen, L. An Error Segment Bit-Flip Algorithm for Successive Cancellation List Decoding of Polar Codes. Mathematics 2024, 12, 2072. [Google Scholar] [CrossRef]
  20. Wang, D.; Yin, J.; Xu, Y.; Yang, X.; Xu, Q.; Hua, G. An Improved Bit-Flipping Algorithm of Successive Cancellation List Decoding for Polar Codes. Mathematics 2023, 11, 4462. [Google Scholar] [CrossRef]
  21. Li, X.; Niu, K.; Han, Y.; Dai, J.; Tan, Z.; Guo, Z. Pre-Configured Error Pattern Ordered Statistics Decoding for CRC-Polar Codes. Entropy 2023, 25, 1405. [Google Scholar] [CrossRef] [PubMed]
  22. Timokhin, I.; Ivanov, F. Sequential Polar Decoding with Cost Metric Threshold. Appl. Sci. 2024, 14, 1847. [Google Scholar] [CrossRef]
  23. Tsimbalo, E.; Fafoutis, X.; Piechocki, R.J. CRC Error Correction in IoT Applications. IEEE Trans. Ind. Inform. 2017, 13, 361–369. [Google Scholar] [CrossRef]
Figure 1. Scheme flow chart.
Figure 2. The definition of UBA and RBA.
Figure 3. Two consecutive rows in shifted matrix.
Figure 4. Comparison of BLER under different SNRs.
Figure 5. Polar Code complexity experiment.
Figure 6. Error detection complexity experiment.
Figure 7. Error correction complexity experiment.
Table 1. Scheme comparison.
Research Work | Long Code Length Adaptability | High Reliability | Balancing Complexity and Performance
Niu et al. [18] ×××
Zhang et al. [19] ××
Wang et al. [20] ×
Li et al. [21] ×
Timokhin et al. [22] ×
Our Scheme
Table 2. Device_A process.
INPUT: info_bits, g(x).
1. info_bits_crc = crc_generator(info_bits, g(x));
2. info_bits_interleaved = crc_interleave(info_bits_crc);
3. info_encoded = polar_encoding(info_bits_interleaved).
OUTPUT: info_encoded.
Table 3. Device_B process.
INPUT: info_encoded, g(x).
1. info_partial_decoded = polar_sc_decoding(info_encoded);
2. error_syndrome = error_detecting(info_partial_decoded, g(x));
3. info_corrected = error_correcting(error_syndrome, info_partial_decoded);
4. info_bits = crc_deinterleave(info_corrected).
OUTPUT: info_bits.
Table 4. Error symptom classification.
Error location: Row-1 (UBA, RBA) and Row-2 (UBA, RBA); Parity check result: Row-1, Row-2.
Single Error ×
 ××
Double Errors ×
 ××
 ××
 ××
 ××
Table 5. Error Occur Probability of error patterns based on error symptoms.
Error Symptom (Parity Check 1, Parity Check 2) | Error Occur Probability of Possible Error Patterns
× (first check fails): P_br(i,4), P_br(i,5), P_br(i,6), P_br(i,4)·P_br(i,5), P_br(i,4)·P_br(i,6), P_br(i,5)·P_br(i,6).
×× (both checks fail): P_br(i,7), P_br(i,8), P_br(i,7)·P_br(i,8), P_br(i,4)·P_br(i,7), P_br(i,4)·P_br(i,8), P_br(i,5)·P_br(i,7), P_br(i,5)·P_br(i,8), P_br(i,6)·P_br(i,7), P_br(i,6)·P_br(i,8), P_br(i,7)·P_br(i+1,9), P_br(i,7)·P_br(i+1,10), P_br(i,8)·P_br(i+1,9), P_br(i,8)·P_br(i+1,10), P_br(i,4)·P_br(i+1,9), P_br(i,4)·P_br(i+1,10), P_br(i,5)·P_br(i+1,9), P_br(i,5)·P_br(i+1,10).