Communication

High Speed Decoding for High-Rate and Short-Length Reed–Muller Code Using Auto-Decoder

Hyun Woo Cho and Young Joon Song *
Department of Electronic Engineering, Kumoh National Institute of Technology, Gumi 39177, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(18), 9225; https://doi.org/10.3390/app12189225
Submission received: 2 August 2022 / Revised: 9 September 2022 / Accepted: 13 September 2022 / Published: 14 September 2022
(This article belongs to the Topic Machine Learning in Communication Systems and Networks)

Abstract

In this paper, we show that applying a machine learning technique called the auto-decoder (AD) to the decoding of high-rate, short-length Reed–Muller (RM) codes achieves maximum likelihood decoding (MLD) performance with a faster decoding speed than fast Hadamard transform (FHT) decoding in additive white Gaussian noise (AWGN) channels. The decoding is approximately 1.8 times and 125 times faster than FHT decoding for R(1,4) and R(2,4), respectively. Unlike the conventional auto-encoder (AE), the AD has more nodes in its hidden layers than in its input layer. Two ADs are combined in parallel, merged, and then cascaded to one fully connected layer to improve the bit error rate (BER) performance of the code.

1. Introduction

Machine learning (ML) techniques are widely used in many fields, such as image recognition, natural language processing, and autonomous driving [1,2,3]. The auto-encoder (AE), an unsupervised ML technique, is known for its capacity to extract the important features of data while reducing unwanted noise, and is therefore useful for generating new images with key features [4]. AEs perform tasks such as dimensionality reduction, image denoising, image generation, and anomaly detection, and are used in fields such as medical care, autonomous driving, and image recognition [5,6,7,8]. In addition, various studies have applied machine learning to communication systems, covering channel coding, massive multi-input multi-output, multiple access, resource allocation, and network security [9,10]. In this study, we modify the AE into a new model called the auto-decoder (AD), which is suited to reducing the noise that corrupts the transmitted information signal in channel coding. The proposed AD is used to decode high-rate, short-length Reed–Muller (RM) codes, which are used in many communication systems, such as long-term evolution (LTE) and fifth-generation (5G) cellular systems [11,12], where the minimum latency is 5 ms; sixth-generation (6G) wireless systems are required to reduce this delay further [13,14]. Since we consider high-rate RM codes of short length, such as R(2,4) with code rate 0.6875, we use the fast Hadamard transform (FHT) decoding method [15] for performance comparison instead of the recursive decoding of [16,17], which is useful for low-rate RM codes. Because the RM code has an extremely simple structure and can be decoded with maximum likelihood decoding (MLD) performance using the FHT, it is especially useful for control channels in wireless communication systems. We first illustrate the key differences between the conventional AE and the proposed AD, and then show how to construct the RM decoder from it. After training the AD model, we found that the proposed method achieves performance similar to the conventional FHT method with a faster decoding speed. For improved performance, we also present a parallel auto-decoder (PAD) that combines multiple ADs in parallel.

2. RM Decoder Based on AD

This section explains the RM decoder based on the AD and compares its performance with that of FHT decoding. Table 1 summarizes the notation used in this paper.

2.1. Auto-Decoder

The AD, which plays a central role in the decoding of the RM code, is a modification of the conventional AE. Figure 1 shows the basic structures of the conventional AE and the proposed AD, highlighting the difference in the number of nodes in their hidden layers. The number of nodes in the hidden layer of the AE in Figure 1a is smaller than that of the input layer, whereas the number of nodes in the hidden layer of the AD in Figure 1b is larger than that of the input layer. Figure 2 shows the typical structure of an AD composed of three hidden layers for the decoder; the number of nodes in each layer is listed in Table 2. The number of nodes in the input layer equals the codeword length n; the first hidden layer has 2n nodes, the second 4n, the third 2n, and the output layer again has n nodes. In general, a multilayer perceptron performs much better than a single-layer perceptron. From this perspective, the hidden-layer structure of the AD appears more suitable for decoding short-length codes, such as RM codes, than that of the AE, in which the number of nodes shrinks as the hidden layers get deeper; the resulting bottleneck in the middle layers can degrade decoding performance.
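The layer widths in Table 2 map directly onto a small fully connected network. The sketch below builds the AD of Figure 2 for a codeword of length n, using ELU activations in the hidden layers and a tanh output as stated for the CADs in Section 3; the framework (tf.keras) and all identifier names are our own illustrative assumptions, not part of the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_auto_decoder(n: int) -> tf.keras.Model:
    """Auto-decoder of Figure 2 / Table 2: layer widths n -> 2n -> 4n -> 2n -> n.
    ELU hidden activations and a tanh output follow Section 3; the function
    name and the use of tf.keras are illustrative choices, not the authors'."""
    x_in = layers.Input(shape=(n,), name="received_word")
    h = layers.Dense(2 * n, activation="elu")(x_in)
    h = layers.Dense(4 * n, activation="elu")(h)
    h = layers.Dense(2 * n, activation="elu")(h)
    x_out = layers.Dense(n, activation="tanh", name="denoised_word")(h)
    return models.Model(x_in, x_out, name="auto_decoder")

# Example: the AD for the length-16 RM codes considered in this paper.
ad = build_auto_decoder(16)
ad.summary()
```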

2.2. RM Decoding Model

The RM code of length $n = 2^m$ bits, with minimum Hamming distance $2^{m-r}$, used to encode $k = \sum_{i=0}^{r} \binom{m}{i}$ message bits, is denoted R(r,m) [15,18]. To illustrate the proposed decoding model, we consider two RM codes of length 16, R(1,4) and R(2,4), because model training for longer codes consumes more time. Figure 3 shows the proposed RM decoder structure, in which the AD of Figure 2 is followed by one fully connected (FC) layer with $N = 2^k$ nodes; thus, $N = 2^5$ for R(1,4) and $N = 2^{11}$ for R(2,4). Because $N = 2^k$ equals the number of all possible transmitted messages, we use the FC layer as the output layer. Moreover, a softmax activation function normalizes the output of the model to a probability distribution over all possible transmitted messages. Figure 4 shows how the transmitted message bits of the (16,11) code R(2,4) are estimated from the output layer: the index of an FC output node is the decimal value of a candidate message, and the transmitted message bits are estimated by converting the index of the maximum output node value into $k = 11$ binary bits.
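As a worked check of the code parameters, R(1,4) has $k = 1 + 4 = 5$ message bits and R(2,4) has $k = 1 + 4 + 6 = 11$, so the FC output layers have $2^5 = 32$ and $2^{11} = 2048$ nodes, respectively. The following minimal sketch (NumPy, with hypothetical names) illustrates the index-to-bits conversion of Figure 4, assuming the bit ordering $j = \sum_l m_l 2^l$ defined in Section 2.3.

```python
import numpy as np

def estimate_message(fc_output: np.ndarray, k: int) -> np.ndarray:
    """Convert the index of the largest FC output (softmax probability)
    into k message bits, LSB first, i.e. j = sum_l m_l * 2**l."""
    j = int(np.argmax(fc_output))
    return np.array([(j >> l) & 1 for l in range(k)], dtype=np.uint8)

# Example for R(2,4): N = 2**11 output nodes, k = 11 estimated message bits.
probs = np.random.default_rng(0).random(2 ** 11)  # stand-in for a softmax output
m_hat = estimate_message(probs, k=11)
```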

2.3. Hyperparameters

Table 3 lists the hyperparameters used for training the RM decoder. Let $\mathbf{y} \in \{0,1\}^N = (y_0, y_1, \ldots, y_j, \ldots, y_{N-1}) = (0, 0, \ldots, 1, \ldots, 0)$ be the one-hot encoded vector for the message $\mathbf{m} = (m_0, m_1, \ldots, m_{k-1})$, where $j = \sum_{l=0}^{k-1} m_l 2^l$ is the decimal value of $\mathbf{m}$; all elements of $\mathbf{y}$ are zero except for the one at the j-th position. Let $z_i$ be the i-th output of the FC layer. The cross-entropy

$$ L(\mathbf{y}, \mathbf{z}) = -\sum_{i=0}^{N-1} \left[ y_i \log z_i + (1 - y_i) \log (1 - z_i) \right] $$

is used as the loss function, and the Adam optimizer is used for model training. The size of the training data set was $2^k \times 10^5$, the number of epochs was $10^2$, and the batch size was set to $10^4$. The normalized validation error (NVE) of [19],

$$ \mathrm{NVE}(\rho_t) = \frac{1}{S} \sum_{s=1}^{S} \frac{\mathrm{BER}_{\mathrm{AD}}(\rho_t, \rho_{v,s})}{\mathrm{BER}_{\mathrm{FHT}}(\rho_{v,s})}, $$

is used to select an appropriate signal-to-noise ratio (SNR) for model training, where $\rho_t$ and $\rho_{v,s}$ are the SNRs of the training set and the s-th validation set, respectively, and S is the number of validation sets. $\mathrm{BER}_{\mathrm{AD}}(\rho_t, \rho_{v,s})$ is the bit error rate of the RM decoder with AD trained at $\rho_t$ and evaluated at $\rho_{v,s}$. Table 4 shows the NVE values for training SNRs from 0 dB to 7 dB in 1 dB steps; the lowest value, NVE = 0.945, occurs at $\rho_t = 1$ dB. Thus, the training SNR is set to 1 dB.
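To make the training-SNR selection concrete, the following sketch (NumPy, with placeholder data rather than values from the paper) evaluates the NVE defined above for one candidate training SNR; the candidate with the smallest NVE is chosen.

```python
import numpy as np

def nve(ber_ad: np.ndarray, ber_fht: np.ndarray) -> float:
    """NVE as defined above: average, over the S validation SNRs, of the
    AD decoder's BER (trained at a fixed rho_t) divided by the FHT BER."""
    return float(np.mean(ber_ad / ber_fht))

# Hypothetical BERs at S = 3 validation SNRs for one candidate training SNR.
ber_ad_rho_t = np.array([1.2e-2, 2.3e-3, 3.1e-4])
ber_fht      = np.array([1.1e-2, 2.4e-3, 3.3e-4])
print(round(nve(ber_ad_rho_t, ber_fht), 3))  # values close to 1 track MLD performance
```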

2.4. Performance Evaluation

To evaluate the performance of the proposed decoder using the AD, we consider the two RM codes R(1,4) and R(2,4), whose structure is simple and whose message length is short. Figure 5 compares the BER of RM decoding using the AD with that of FHT decoding. The BER at each SNR was calculated once 500 bit errors had accumulated or $10^5$ codewords had been generated, whichever occurred first. The graph shows that the two methods have almost identical BER performance. Table 5 lists the decoding times for R(1,4) and R(2,4) using FHT decoding and the AD. A computer with an Intel i9-7920 central processing unit (CPU), an NVIDIA Titan Xp graphics processing unit (GPU), and 64 GB of random access memory (RAM) was used for the evaluation. Because the proposed method would gain a further advantage over FHT decoding if a GPU were used, we measured the decoding times on the CPU alone for a fair comparison. The proposed method decoded approximately 1.8 times faster than FHT decoding for R(1,4) and 125 times faster for R(2,4). This is because (16,11) R(2,4) decoding with the FHT requires $2^6$ FHT operations to account for the six masking bits, whereas (16,5) R(1,4) decoding requires only one FHT operation [15]. The main difference between the AD-based decoders for R(1,4) and R(2,4) is the number of nodes in the FC layer, which increases the number of parameters.
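For reference, the stopping rule used for the BER curves (500 bit errors or $10^5$ codewords, whichever occurs first) can be sketched as a Monte-Carlo loop. The encoder/decoder callables, the BPSK mapping, and the interpretation of the SNR as Eb/N0 below are assumptions made for illustration only.

```python
import numpy as np

def simulate_ber(encode, decode, k, n, snr_db, max_errors=500, max_words=10**5):
    """Monte-Carlo BER with the paper's stopping rule. `encode` maps k message
    bits to an n-bit codeword; `decode` maps n noisy channel values back to
    k bit estimates (e.g. the AD-based or FHT-based decoder)."""
    rate = k / n
    sigma = np.sqrt(1.0 / (2.0 * rate * 10 ** (snr_db / 10.0)))  # assumes SNR = Eb/N0
    rng = np.random.default_rng()
    errors = bits = 0
    for _ in range(max_words):
        m = rng.integers(0, 2, size=k)
        x = 1.0 - 2.0 * encode(m)                 # BPSK: bit 0 -> +1, bit 1 -> -1
        y = x + sigma * rng.standard_normal(n)    # AWGN channel
        errors += int(np.sum(decode(y) != m))
        bits += k
        if errors >= max_errors:
            break
    return errors / bits
```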

3. PAD

To improve the BER performance of the RM decoder beyond a single AD, we present a PAD composed of multiple ADs. An AD within a PAD is called a constituent auto-decoder (CAD). To illustrate the structure of a PAD, Figure 6 shows a PAD with five CADs, where the outputs of all CADs are summed element-wise at a merge layer with $n = 2^m$ nodes. Each CAD has the same structure except for the number of nodes in its first, second, and third hidden layers, as listed in Table 6; the first and third hidden layers have the same number of nodes, so each CAD is symmetric about its second (middle) hidden layer. As in the case of the AD, the activation functions for the hidden and output layers of the CADs are the exponential linear unit (ELU) and tanh, respectively [20,21]. The PAD is followed by the FC layer, and the hyperparameters for the decoder model using the PAD are the same as those for the AD. Figure 7 shows the BER performance for R(1,4) and R(2,4) using the conventional FHT decoder and the proposed decoder model, denoted PAD-i, where i CADs are used. The computer specifications and BER calculation conditions in Figure 7 are the same as those used in Figure 5 for the AD. Figure 7a shows the BER for R(1,4) from 0 to 6 dB in 1 dB steps, and Figure 7b zooms in on the range 1.5 dB to 2.5 dB to better discriminate between cases that otherwise show no significant BER difference; PAD-2 performs best. Figure 7c shows the BER for R(2,4) from 0 to 6 dB in 1 dB steps, and Figure 7d zooms in on 1.5 dB to 2.5 dB for better observation; PAD-3 performs best, but there is no significant improvement as the number of CADs in the PAD increases. Table 7 compares the decoding times and the numbers of parameters for R(1,4) and R(2,4) using FHT and PADs: the more CADs in the PAD, the larger the number of parameters, the higher the complexity, and the longer the decoding time. For R(1,4), PAD-5 has roughly 20 times as many parameters as PAD-1, but its decoding time is only about 1.3 times longer, because the CADs are computed in parallel.
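To make the parallel structure concrete, the sketch below (same illustrative tf.keras setup as in Section 2.1) builds a PAD-i decoder whose j-th CAD uses the hidden widths j·n, j²·n, j·n from Table 6, sums the CAD outputs at the merge layer, and appends the FC softmax layer of Figure 3. The parameter counts in Table 7 suggest that the paper's PAD-1 coincides with the single-AD decoder of Figure 3, so this sketch is most representative for i ≥ 2.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_pad_decoder(n: int, k: int, num_cads: int) -> tf.keras.Model:
    """PAD-i decoder: parallel CADs (widths from Table 6) merged by addition,
    followed by the FC softmax layer with N = 2**k nodes."""
    x_in = layers.Input(shape=(n,), name="received_word")
    cad_outputs = []
    for j in range(1, num_cads + 1):
        h = layers.Dense(j * n, activation="elu")(x_in)
        h = layers.Dense(j * j * n, activation="elu")(h)
        h = layers.Dense(j * n, activation="elu")(h)
        cad_outputs.append(layers.Dense(n, activation="tanh")(h))
    merged = cad_outputs[0] if num_cads == 1 else layers.Add()(cad_outputs)
    probs = layers.Dense(2 ** k, activation="softmax", name="message_probs")(merged)
    model = models.Model(x_in, probs, name=f"PAD_{num_cads}")
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    return model

# Example: PAD-3 decoder for the (16,11) code R(2,4).
pad3 = build_pad_decoder(n=16, k=11, num_cads=3)
```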

4. Conclusions

In this paper, we proposed R(1,4) and R(2,4) decoders based on an AD that achieve MLD performance with a shorter decoding time than the FHT decoder. The decoding time of the RM decoder using the AD is 1.8 times faster than FHT decoding for R(1,4) and 125 times faster for R(2,4); the gap is large for R(2,4) because its FHT decoding must enumerate the masking bits. We presented the PAD, built from multiple CADs, to improve the BER performance of the RM decoder using a single AD, and found that PAD-2 and PAD-3 performed best for R(1,4) and R(2,4), respectively; however, the performance difference is not significant. The proposed fast decoding method with MLD performance can be useful in mobile communication systems, such as 5G and 6G, which require low latency and a low BER. Since the AD shows better performance than the AE when the input size is small, it can also be useful for noise removal in signal processing with relatively small data sizes, not only in communication fields but also in fields using various sensors. Having confirmed the decoding speed and BER performance of the proposed AD- and PAD-based models for high-rate, short-length RM codes, we will extend these results to other error-correction coding schemes.

Author Contributions

Software, H.W.C.; Writing—review & editing, Y.J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1F1A1061907).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems, NIPS'12, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105.
2. Jiang, K.; Lu, X. Natural language processing and its applications in machine translation: A diachronic review. In Proceedings of the 2020 IEEE 3rd International Conference of Safe Production and Informatization (IICSPI), Chongqing, China, 28–30 November 2020; pp. 210–214.
3. Al-Qizwini, M.; Barjasteh, I.; Al-Qassab, H.; Radha, H. Deep learning algorithm for autonomous driving using GoogLeNet. In Proceedings of the IEEE Intelligent Vehicle Symposium, Los Angeles, CA, USA, 11–14 June 2017; pp. 89–96.
4. Vincent, P.; Larochelle, H.; Bengio, Y.; Manzagol, P. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 5–9 July 2008; pp. 1096–1103.
5. Wang, Y.; Yao, H.; Zhao, S. Auto-encoder based dimensionality reduction. Neurocomputing 2016, 184, 232–242.
6. Gondara, L. Medical Image Denoising Using Convolutional Denoising Autoencoders. In Proceedings of the 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW), Barcelona, Spain, 12–15 December 2016; pp. 241–246.
7. Kingma, D.P.; Welling, M. Auto-Encoding Variational Bayes. arXiv 2014, arXiv:1312.6114.
8. An, J.; Cho, S. Variational Autoencoder Based Anomaly Detection Using Reconstruction Probability; Technical Report; SNU Data Mining Center: Seoul, Korea, 2015.
9. Ly, A.; Yao, Y.-D. A Review of Deep Learning in 5G Research: Channel Coding, Massive MIMO, Multiple Access, Resource Allocation, and Network Security. IEEE Open J. Commun. Soc. 2021, 2, 396–408.
10. Huang, X.-L.; Ma, X.; Hu, F. Machine learning and intelligent communications. Mob. Netw. Appl. 2018, 23, 68–70.
11. 3GPP TS36.212: Multiplexing and Channel Coding, V13.11.0 (2021-03). Available online: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=2426 (accessed on 12 September 2022).
12. 3GPP TS38.212: Multiplexing and Channel Coding, V15.10.0 (2020-09). Available online: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3214 (accessed on 12 September 2022).
13. Saad, W.; Bennis, M.; Chen, M. A vision of 6G wireless systems: Applications, trends, technologies, and open research problems. IEEE Netw. 2020, 34, 134–142.
14. Abbas, R.; Huang, T.; Shahab, B.; Shirvanimoghaddam, M.; Li, Y.; Vucetic, B. Grant-free non-orthogonal multiple access: A key enabler for 6G-IoT. arXiv 2020, arXiv:2003.10257.
15. Kang, I.; Lee, H.; Han, S.; Park, C.; Soh, J.; Song, Y. Reconstruction method for Reed–Muller codes using fast Hadamard transform. In Proceedings of the 13th International Conference on Advanced Communication Technology (ICACT2011), Gangwon, Korea, 13–16 February 2011; pp. 793–796.
16. Dumer, I. Recursive decoding and its performance for low-rate Reed–Muller codes. IEEE Trans. Inf. Theory 2004, 50, 811–823.
17. Dumer, I.; Shabunov, K. Soft-decision decoding of Reed–Muller codes: Recursive lists. IEEE Trans. Inf. Theory 2006, 52, 1260–1266.
18. Lin, S.; Costello, D.J. Error Control Coding: Fundamentals and Applications; Prentice Hall, Inc.: Upper Saddle River, NJ, USA, 1983.
19. Gruber, T.; Cammerer, S.; Hoydis, J.; ten Brink, S. On deep learning-based channel decoding. In Proceedings of the IEEE 51st Annual Conference on Information Sciences and Systems (CISS), Baltimore, MD, USA, 22–24 March 2017; pp. 1–6.
20. Clevert, D.-A.; Unterthiner, T.; Hochreiter, S. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv 2015, arXiv:1511.07289.
21. Nwankpa, C.; Ijomah, W.; Gachagan, A.; Marshall, S. Activation functions: Comparison of trends in practice and research for deep learning. arXiv 2018, arXiv:1811.03378.
Figure 1. Structure of (a) auto-encoder and (b) auto-decoder.
Figure 2. Structure of auto-decoder with 3 hidden layers.
Figure 3. Structure of RM decoder based on AD.
Figure 4. Message estimation of R(2,4) from the output of the FC layer.
Figure 5. BER of RM decoding with FHT and AD.
Figure 6. Structure of PAD with 5 CADs.
Figure 7. BER of RM decoding with FHT and PADs.
Table 1. Notation used in this paper.

Symbol | Description
n | length of codeword (bits)
r, m | parameters of RM code (0 ≤ r ≤ m)
k | length of message (bits)
N | number of nodes in FC layer
y | one-hot encoded vector
m | message vector
z | output of FC layer
S | number of validation sets
ρ_t, ρ_{v,s} | SNRs for the training set and the s-th validation set
Table 2. Number of nodes for layers in AD.

Layer | Number of Nodes
input | n
1st hidden | 2n
2nd hidden | 4n
3rd hidden | 2n
output | n
Table 3. Hyperparameters for model training.

loss function | cross-entropy
optimizer | Adam
training data set | 2^k × 10^5
epochs | 10^2
batch size | 10^4
Table 4. NVE for different training SNRs.

Training SNR ρ_t (dB) | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7
NVE | 0.972 | 0.945 | 0.982 | 1.138 | 0.981 | 1.320 | 1.529 | 2.489
Table 5. Decoding time of RM code using FHT and AD.

RM Code | Method | Time (ms)
R(1,4) | FHT | 0.6012
R(1,4) | AD | 0.3327
R(2,4) | FHT | 46.625
R(2,4) | AD | 0.3704
Table 6. The number of nodes in PAD with 5 CADs.

Layer | 1st CAD | 2nd CAD | 3rd CAD | 4th CAD | 5th CAD
1st hidden | n | 2n | 3n | 4n | 5n
2nd hidden | n | 4n | 9n | 16n | 25n
3rd hidden | n | 2n | 3n | 4n | 5n
output | n | n | n | n | n
Table 7. RM decoding time using FHT and PADs.

Method | Time (ms), R(1,4) | Time (ms), R(2,4) | Parameters, R(1,4) | Parameters, R(2,4)
FHT | 0.6012 | 46.625 | - | -
PAD-1 | 0.3327 | 0.3704 | 5,808 | 40,080
PAD-2 | 0.3472 | 0.3785 | 6,896 | 41,168
PAD-3 | 0.3838 | 0.4037 | 22,512 | 56,784
PAD-4 | 0.4075 | 0.4367 | 57,728 | 92,000
PAD-5 | 0.4520 | 0.4854 | 124,864 | 159,136
PAD-6 | 0.4972 | 0.5241 | 239,312 | 273,584
PAD-7 | 0.5508 | 0.6225 | 419,536 | 388,032
PAD-8 | 0.6198 | 0.6352 | 687,072 | 502,480
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
