Neural-Network-Assisted Polar Code Decoding Schemes
Abstract
1. Introduction
2. Preliminaries
2.1. SC Decoding Algorithm
2.2. Decoding of Traditional Special Nodes
2.2.1. Fast Decoding of a Rep Node
2.2.2. Fast Decoding of an SPC Node
2.3. Traditional Fast SC Decoding
3. Neural Network-Assisted Polar Code Decoding Scheme
3.1. System Model
3.2. Neural Network Model and NNN
3.3. Neural Network-Assisted (NNA) Decoding
3.4. Last Subcode Neural Network-Assisted Decoding (LSNNAD) Scheme
3.5. Key-Bit-Based Subcode Neural Network-Assisted Decoding (KSNNAD) Scheme
4. Numerical Simulation
4.1. Performance with Different NS
4.2. Performance with Different Code Rates
4.3. Performance with Different Code Lengths
4.4. Performance Comparison under AWGN Channel
4.5. Performance Comparison under Rayleigh Channel
4.6. Decoding Complexity
4.6.1. Number of Arithmetic Operations
4.6.2. Decoding Steps
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Helleseth, T.; Kløve, T. Algebraic coding theory. In Coding Theory; John Wiley & Sons: San Francisco, CA, USA, 2007; pp. 13–96.
- Gallager, R. Low-density parity-check codes. IRE Trans. Inf. Theory 1962, 8, 21–28.
- MacKay, D.J.C. Good error-correcting codes based on very sparse matrices. IEEE Trans. Inf. Theory 1999, 45, 399–431.
- Berrou, C.; Glavieux, A.; Thitimajshima, P. Near Shannon limit error-correcting coding and decoding: Turbo codes. In Proceedings of the IEEE International Conference on Communications, Geneva, Switzerland, 23–26 May 1993; pp. 1064–1070.
- Berrou, C.; Glavieux, A. Near optimum error-correcting coding and decoding: Turbo codes. IEEE Trans. Commun. 1996, 44, 1261–1271.
- Bose, R.C.; Ray-Chaudhuri, D.K. On a class of error correcting binary group codes. Inf. Control 1960, 3, 68–79.
- Hocquenghem, A. Codes correcteurs d’erreurs. Chiffres 1959, 2, 147–156.
- Reed, I.S.; Solomon, G. Polynomial codes over certain finite fields. J. Soc. Ind. Appl. Math. 1960, 8, 300–304.
- Viterbi, A.J. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Trans. Inf. Theory 1967, 13, 260–269.
- Arikan, E. Channel polarization: A method for constructing capacity-achieving codes. In Proceedings of the IEEE International Symposium on Information Theory, Toronto, ON, Canada, 6–11 July 2008; pp. 1173–1177.
- Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
- Liu, Y.J. Channel Codes: Classical and Modern, 1st ed.; Electronic Industry Press: Beijing, China, 2022; pp. 1–576.
- Jin, S.; Liu, X. A memory efficient belief propagation decoder for polar codes. China Commun. 2015, 12, 34–41.
- Arikan, E. Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels. IEEE Trans. Inf. Theory 2009, 55, 3051–3073.
- Choi, S.; Yoo, H. Area-efficient early-termination technique for belief-propagation polar decoders. Electronics 2019, 8, 1001.
- Abbas, S.M.; Fan, Y.; Chen, J. Low complexity belief propagation polar code decoder. In Proceedings of the IEEE Workshop on Signal Processing Systems, Hangzhou, China, 14–16 October 2015; pp. 1–6.
- Alamdar-Yazdi, A.; Kschischang, F.R. A simplified successive-cancellation decoder for polar codes. IEEE Commun. Lett. 2011, 15, 1378–1380.
- Sarkis, G.; Giard, P.; Vardy, A.; Thibeault, C.; Gross, W.J. Fast polar decoders: Algorithm and implementation. IEEE J. Sel. Areas Commun. 2014, 32, 946–957.
- Ardakani, M.H.; Hanif, M.; Ardakani, M.; Tellambura, C. Fast successive-cancellation-based decoders of polar codes. IEEE Trans. Commun. 2019, 67, 2360–2363.
- Chen, K.; Niu, K. Stack decoding of polar codes. Electron. Lett. 2012, 48, 695–696.
- Kruse, R.; Mostaghim, S.; Borgelt, C.; Steinbrecher, M. Multi-layer perceptrons. In Computational Intelligence; Springer: Berlin, Germany, 2022; pp. 53–124.
- Strigl, D.; Kofler, K.; Podlipnig, S. Performance and scalability of GPU-based convolutional neural networks. In Proceedings of the 18th Euromicro Conference on Parallel, Distributed and Network-Based Processing, Pisa, Italy, 17–19 February 2010; pp. 317–324.
- Ben, N.M.; Chtourou, M. On the training of recurrent neural networks. In Proceedings of the 8th International Multi-Conference on Systems, Signals & Devices, Sousse, Tunisia, 22–25 March 2011; pp. 1–5.
- Cao, Z.; Zhu, H.; Zhao, Y.; Li, D. Learning to denoise and decode: A novel residual neural network decoder for polar codes. In Proceedings of the IEEE 92nd Vehicular Technology Conference, Victoria, BC, Canada, 18 November–16 December 2020; pp. 1–6.
- Tal, I.; Vardy, A. List decoding of polar codes. IEEE Trans. Inf. Theory 2015, 61, 2213–2226.
- Xu, W.H.; Wu, Z.Z.; Ueng, Y.L. Improved polar decoder based on deep learning. In Proceedings of the IEEE International Workshop on Signal Processing Systems, Lorient, France, 3–5 October 2017; pp. 1–6.
- Hashemi, S.; Doan, N.; Tonnellier, T.; Gross, W.J. Deep-learning-aided successive-cancellation decoding of polar codes. In Proceedings of the 53rd Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 3–6 November 2019; pp. 532–536.
- Gruber, T.; Cammerer, S.; Hoydis, J.; Ten Brink, S. On deep learning-based channel decoding. In Proceedings of the 51st Annual Conference on Information Sciences and Systems, Baltimore, MD, USA, 22–24 March 2017.
- Cammerer, S.; Gruber, T.; Hoydis, J.; Ten Brink, S. Deep learning-based decoding of polar codes via partitioning. In Proceedings of the IEEE Global Communications Conference, Singapore, 4–8 December 2017.
- Teng, C.F.; Wu, A.Y. Unsupervised learning for neural network-based polar decoder via syndrome loss. arXiv 2019.
- Gao, J.; Zhang, D.X.; Dai, J.C. ResNet-like belief-propagation decoding for polar codes. IEEE Wirel. Commun. Lett. 2021, 10, 934–937.
- Xu, W.H.; Tan, X.S.; Be’ery, Y.; Ueng, Y.L.; Huang, Y.M.; You, X.H.; Zhang, C. Deep learning-aided belief propagation decoder for polar codes. arXiv 2019.
- Doan, N.; Hashemi, S.A.; Gross, W.J. Neural successive cancellation decoding of polar codes. In Proceedings of the IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications, Kalamata, Greece, 25–28 June 2018; pp. 1–5.
- Xu, X. Study on Decoding Algorithms of Polar Codes Based on Deep Learning; Beijing Jiaotong University: Beijing, China, 2019.
- Arikan, E. Systematic polar coding. IEEE Commun. Lett. 2011, 15, 860–862.
- Kingma, D.P.; Ba, J.L. Adam: A Method for Stochastic Optimization. Available online: http://arxiv.org/abs/1412.6980 (accessed on 30 January 2017).
- Li, S.B.; Deng, Y.Q.; Gao, X. Generalized segmented bit-flipping scheme for successive cancellation decoding of polar codes with cyclic redundancy check. IEEE Access 2019, 7, 83424–83436.
- Zhang, Z.Y.; Qin, K.J.; Zhang, L.; Chen, G.T. Progressive bit-flipping decoding of polar codes: A critical-set based tree search approach. IEEE Access 2018, 6, 57738–57750.
- Trifonov, P. Recursive trellis decoding techniques of polar codes. In Proceedings of the IEEE International Symposium on Information Theory, Los Angeles, CA, USA, 21–26 June 2020; pp. 407–412.
Layer | Output Dimensions | Convolutional Filters | Activation Function |
---|---|---|---|
Input | N | / | / |
Mapper (Lambda) | N | / | / |
Noise (Lambda) | N | / | / |
NN Layer 1 | 16KS | 128 | ReLU |
NN Layer 2 | 8KS | 64 | ReLU |
NN Layer 3 | 4KS | 32 | ReLU |
Output | KS | / | Sigmoid |
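The layer table above can be read as a small feed-forward decoder trained on noisy modulated codewords. Below is a minimal Keras-style sketch of such a model, assuming dense hidden layers whose widths follow the 16KS/8KS/4KS pattern from the table (128/64/32 for KS = 8), a BPSK mapper and an AWGN noise layer standing in for the Lambda rows, Adam as the optimizer (as cited in the references), and binary cross-entropy as the loss. The layer types, noise level, and loss are illustrative assumptions, not the authors' verified configuration.

```python
# Hypothetical sketch of an NNN-style decoder model suggested by the layer table.
# Hidden widths follow the 16*K_S / 8*K_S / 4*K_S pattern; with K_S = 8 they equal
# the listed 128 / 64 / 32. Mapper and Noise layers emulate on-the-fly BPSK + AWGN
# training data generation. All hyperparameters here are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

N_S = 16     # subcode length seen by the neural network node (N in the table)
K_S = 8      # information bits recovered by the node (KS in the table)
SIGMA = 0.5  # assumed training noise standard deviation

codeword = layers.Input(shape=(N_S,), name="input")                      # Input: N
bpsk = layers.Lambda(lambda x: 1.0 - 2.0 * x, name="mapper")(codeword)   # Mapper (Lambda): N
noisy = layers.GaussianNoise(SIGMA, name="noise")(bpsk)                  # Noise (Lambda): N
h = layers.Dense(16 * K_S, activation="relu", name="nn_layer_1")(noisy)  # 16KS
h = layers.Dense(8 * K_S, activation="relu", name="nn_layer_2")(h)       # 8KS
h = layers.Dense(4 * K_S, activation="relu", name="nn_layer_3")(h)       # 4KS
info_est = layers.Dense(K_S, activation="sigmoid", name="output")(h)     # Output: KS

model = models.Model(codeword, info_est)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```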
Decoding Scheme | NS | R1 | R0 | Rep | SPC | Type-I | Type-II | Type-III | Type-IV | Type-V | NNN
---|---|---|---|---|---|---|---|---|---|---|---
FSCFD [19] | 8 | 1 | 0 | 6 | 8 | 0 | 2 | 1 | 1 | 9 | 0
FSCFD [19] | 16 | 1 | 0 | 4 | 4 | 0 | 0 | 0 | 0 | 1 | 0
FSCFD [19] | 32 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0
FSCFD [19] | 64 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0
FSCFD [19] | 128 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
FSCD | 8 | 3 | 3 | 5 | 5 | 0 | 0 | 0 | 0 | 0 | 0
FSCD | 16 | 4 | 4 | 9 | 7 | 0 | 0 | 0 | 0 | 0 | 0
FSCD | 32 | 1 | 2 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
FSCD | 64 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
FSCD | 128 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
LSNNAD (NS = 16) | 8 | 3 | 3 | 5 | 5 | 0 | 0 | 0 | 0 | 0 | 0
LSNNAD (NS = 16) | 16 | 5 | 4 | 9 | 7 | 0 | 0 | 0 | 0 | 0 | 1
LSNNAD (NS = 16) | 32 | 2 | 2 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
LSNNAD (NS = 16) | 64 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
LSNNAD (NS = 16) | 128 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
KSNNAD (NS = 16) | 8 | 3 | 3 | 5 | 5 | 0 | 0 | 0 | 0 | 0 | 0
KSNNAD (NS = 16) | 16 | 4 | 5 | 9 | 7 | 0 | 0 | 0 | 0 | 0 | 1
KSNNAD (NS = 16) | 32 | 1 | 3 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
KSNNAD (NS = 16) | 64 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
KSNNAD (NS = 16) | 128 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
Decoding Step | Arithmetic Operation | FSCFD [19] | FSCD | LSNNAD | KSNNAD
---|---|---|---|---|---
Subcode recognition | Comparison | 5984 | 6128 | 6264 | 6315
R1 | Addition | 9NS − 7 | 2NS | 2NS | 2NS
R1 | Multiplication | NS − 1 | 4NS | 4NS | 4NS
Rep | Addition | 8NS − 6 | NS | NS | NS
Rep | Multiplication | NS − 1 | 3NS | 3NS | 3NS
SPC | Addition | 11NS − 7 | NS | NS | NS
SPC | Multiplication | 2NS − 1 | 0 | 0 | 0
Type-I | Addition | 8NS − 7 | 0 | 0 | 0
Type-I | Multiplication | NS − 1 | 0 | 0 | 0
Type-II | Addition | 10NS − 5 | 0 | 0 | 0
Type-II | Multiplication | NS | 0 | 0 | 0
Type-III | Addition | 10NS − 3 | 0 | 0 | 0
Type-III | Multiplication | NS + 1 | 0 | 0 | 0
Type-IV | Addition | 10NS − 3 | 0 | 0 | 0
Type-IV | Multiplication | NS + 1 | 0 | 0 | 0
Type-V | Addition | 41NS − 7 | 0 | 0 | 0
Type-V | Multiplication | 69NS − 1 | 0 | 0 | 0
NNN | Addition | 0 | 0 | 164 | 164
NNN | Multiplication | 0 | 0 | 164 | 164
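The per-node counts above can be turned into an overall complexity estimate by evaluating each formula at the relevant subcode length NS and weighting it by how many nodes of that type a given scheme uses. The snippet below is a small illustrative sketch of that bookkeeping, assuming only the formulas in the table; the example node tally at the end is hypothetical and is not a result from the paper.

```python
# Minimal sketch (not from the paper) of accumulating the per-node operation
# counts in the table above into total addition/multiplication costs.

# (additions, multiplications) per special node of length n, FSCFD [19] column
FSCFD_OPS = {
    "R1":       lambda n: (9 * n - 7, n - 1),
    "Rep":      lambda n: (8 * n - 6, n - 1),
    "SPC":      lambda n: (11 * n - 7, 2 * n - 1),
    "Type-I":   lambda n: (8 * n - 7, n - 1),
    "Type-II":  lambda n: (10 * n - 5, n),
    "Type-III": lambda n: (10 * n - 3, n + 1),
    "Type-IV":  lambda n: (10 * n - 3, n + 1),
    "Type-V":   lambda n: (41 * n - 7, 69 * n - 1),
}

# FSCD / LSNNAD / KSNNAD columns; the NNN entry is the fixed 164/164 cost
FAST_OPS = {
    "R1":  lambda n: (2 * n, 4 * n),
    "Rep": lambda n: (n, 3 * n),
    "SPC": lambda n: (n, 0),
    "NNN": lambda n: (164, 164),
}

def total_ops(node_list, ops_table):
    """Sum (additions, multiplications) over a list of (node_type, n_s) pairs."""
    adds = mults = 0
    for node_type, n_s in node_list:
        a, m = ops_table[node_type](n_s)
        adds += a
        mults += m
    return adds, mults

# Hypothetical node tally, for illustration only.
example_nodes = [("R1", 8), ("Rep", 16), ("SPC", 16), ("NNN", 16)]
print(total_ops(example_nodes, FAST_OPS))
```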
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).