Deep Sparse Learning for Automatic Modulation Classification Using Recurrent Neural Networks
Abstract
1. Introduction
- (1) A novel one-shot neural network pruning algorithm based on weight magnitude and gradient momentum is proposed to produce sparse RNNs for AMC without compromising model performance. In particular, we show that it is crucial to retain non-recurrent connections while pruning RNNs (a small illustration follows this list).
- (2) Beyond the sequential AMC problem, the efficiency of the proposed method is also validated with feedforward neural networks on non-sequential datasets, namely MNIST and CIFAR10.
- (3) The experimental results reveal that the proposed pruning method can serve as a regularization technique: the resulting sparse models outperform their dense counterparts even at high sparsity levels.
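To make the recurrent/non-recurrent split in contribution (1) concrete, the sketch below builds a small Keras LSTM stack (TensorFlow is cited in the reference list) and separates the two weight groups by name. The layer width, input shape (128 I/Q samples per signal), and 11 output classes match RadioML 2016.10a, but the architecture is an assumption for illustration, not the paper's exact model.

```python
import tensorflow as tf

# Illustrative 2-layer LSTM classifier in the spirit of the RadioML models
# of Section 3 (sizes are assumptions, not the paper's exact configuration).
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(128, return_sequences=True, input_shape=(128, 2)),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(11, activation="softmax"),
])

# Keras names hidden-to-hidden weights "recurrent_kernel" and input-to-hidden
# weights "kernel", so the two groups can be separated by name. Contribution
# (1) says the non-recurrent group must be retained while the RNN is pruned.
recurrent = [w for w in model.trainable_weights
             if "recurrent_kernel" in w.name]
non_recurrent = [w for w in model.trainable_weights
                 if "kernel" in w.name and "recurrent_kernel" not in w.name]
print([w.name for w in recurrent])
```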
2. Methods
2.1. Notation
2.2. Recurrent Neural Networks
2.3. Pruning Method
Algorithm 1 The proposed pruning method
Require: Training set
Require: Network F with parameters
Require: Pruning interval K, the hyper-parameter for calculating momentum, the two hyper-parameters for pruning, and the pruning rate p
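Since only the Require lines of Algorithm 1 are reproduced above, the following is a minimal sketch of the schedule they imply: accumulate a gradient momentum during training, then periodically remove the surviving weights with the lowest combined magnitude-and-momentum score. The class name, the `beta` and `lam` weights, the additive scoring rule, and the `exempt` set are hypothetical stand-ins, not the paper's exact formulation.

```python
import numpy as np

class MomentumMagnitudePruner:
    """Sketch of a magnitude + gradient-momentum pruning schedule.

    Hypothetical names (beta, lam, exempt) stand in for the symbols in
    Algorithm 1; only the overall flow follows the paper's description.
    """

    def __init__(self, params, prune_rate, beta=0.9, lam=1.0,
                 warmup_epochs=0, exempt=()):
        self.params = params                      # dict: name -> np.ndarray
        self.masks = {k: np.ones_like(v) for k, v in params.items()}
        self.momentum = {k: np.zeros_like(v) for k, v in params.items()}
        self.prune_rate = prune_rate              # fraction removed per event
        self.beta, self.lam = beta, lam
        self.warmup_epochs = warmup_epochs
        self.exempt = set(exempt)                 # e.g. non-recurrent weights

    def accumulate(self, grads):
        # Exponential moving average of gradients ("gradient momentum").
        for k, g in grads.items():
            self.momentum[k] = self.beta * self.momentum[k] + (1.0 - self.beta) * g

    def prune(self, epoch):
        # Prune once per epoch after a warm-up period (cf. Table 1 below).
        if epoch < self.warmup_epochs:
            return
        for k, w in self.params.items():
            if k in self.exempt:                  # retain these connections
                continue
            score = np.abs(w) + self.lam * np.abs(self.momentum[k])
            score[self.masks[k] == 0] = np.inf    # pruned weights stay pruned
            n_prune = int(self.prune_rate * self.masks[k].sum())
            if n_prune == 0:
                continue
            drop = np.argsort(score, axis=None)[:n_prune]
            self.masks[k].reshape(-1)[drop] = 0.0  # kill lowest-scoring survivors
            w *= self.masks[k]                     # zero the pruned weights
```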
3. Experimental Results and Discussions
3.1. Experimental Configuration
3.1.1. MNIST and CIFAR10 Datasets
3.1.2. RadioML 2016.10a Dataset
3.2. Results on Standard MNIST and CIFAR10 Datasets
3.3. Results on RadioML 2016.10a Dataset
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Dobre, O.A.; Abdi, A.; Bar-Ness, Y.; Su, W. Survey of automatic modulation classification techniques: Classical approaches and new trends. IET Commun. 2007, 1, 137–156.
2. Huang, S.; Yao, Y.; Wei, Z.; Feng, Z.; Zhang, P. Automatic Modulation Classification of Overlapped Sources Using Multiple Cumulants. IEEE Trans. Veh. Technol. 2017, 66, 6089–6101.
3. Dobre, O.A.; Hameed, F. Likelihood-based algorithms for linear digital modulation classification in fading channels. In Proceedings of the Canadian Conference on Electrical and Computer Engineering, Ottawa, ON, Canada, 7–10 May 2006; pp. 1347–1350.
4. Chavali, V.G.; Da Silva, C.R. Classification of digital amplitude-phase modulated signals in time-correlated non-Gaussian channels. IEEE Trans. Commun. 2013, 61, 2408–2419.
5. Swami, A.; Sadler, B.M. Hierarchical digital modulation classification using cumulants. IEEE Trans. Commun. 2000, 48, 416–429.
6. Yuan, J.; Zhao-yang, Z.; Pei-liang, Q. Modulation classification of communication signals. In Proceedings of the MILCOM 2004 IEEE Military Communications Conference, Monterey, CA, USA, 31 October–3 November 2004.
7. Lopatka, J.; Pedzisz, M. Automatic modulation classification using statistical moments and a fuzzy classifier. In Proceedings of the WCC 2000-ICSP 2000, 5th International Conference on Signal Processing, 16th World Computer Congress, Beijing, China, 21–25 August 2000; pp. 1500–1506.
8. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
9. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019), Minneapolis, MN, USA, 2–7 June 2019; Volume 1, pp. 4171–4186.
10. Lipton, Z.C.; Berkowitz, J.; Elkan, C. A Critical Review of Recurrent Neural Networks for Sequence Learning. arXiv 2015, arXiv:1506.00019.
11. Zhang, D.; Ding, W.; Zhang, B.; Xie, C.; Li, H.; Liu, C.; Han, J. Automatic modulation classification based on deep learning for unmanned aerial vehicles. Sensors 2018, 18, 924.
12. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780.
13. Rajendran, S.; Meert, W.; Giustiniano, D.; Lenders, V.; Pollin, S. Deep Learning Models for Wireless Signal Classification with Distributed Low-Cost Spectrum Sensors. IEEE Trans. Cogn. Commun. Netw. 2018, 4, 433–445.
14. Zang, K.; Ma, Z. Automatic Modulation Classification Based on Hierarchical Recurrent Neural Networks with Grouped Auxiliary Memory. IEEE Access 2020, 8, 213052–213061.
15. Liao, K.; Zhao, Y.; Gu, J.; Zhang, Y.; Zhong, Y. Sequential Convolutional Recurrent Neural Networks for Fast Automatic Modulation Classification. IEEE Access 2021, 9, 27182–27188.
16. Denton, E.; Zaremba, W.; Bruna, J.; LeCun, Y.; Fergus, R. Exploiting linear structure within convolutional networks for efficient evaluation. Adv. Neural Inf. Process. Syst. 2014, 2, 1269–1277.
17. Ba, L.J.; Caruana, R. Do deep nets really need to be deep? Adv. Neural Inf. Process. Syst. 2014, 3, 2654–2662.
18. Luo, Y.; Hong, P.; Su, R.; Xue, K. Resource Allocation for Energy Harvesting-Powered D2D Communication Underlaying Cellular Networks. IEEE Trans. Veh. Technol. 2017, 66, 10486–10498.
19. Liu, M.; Song, T.; Gui, G. Deep cognitive perspective: Resource allocation for NOMA-based heterogeneous IoT with imperfect SIC. IEEE Internet Things J. 2019, 6, 2885–2894.
20. Wang, Y.; Yang, J.; Liu, M.; Gui, G. LightAMC: Lightweight Automatic Modulation Classification via Deep Learning and Compressive Sensing. IEEE Trans. Veh. Technol. 2020, 69, 3491–3495.
21. Carreira-Perpiñán, M.A.; Idelbayev, Y. "Learning-Compression" Algorithms for Neural Net Pruning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 8532–8541.
22. Lee, N.; Ajanthan, T.; Torr, P.H. SNIP: Single-shot network pruning based on connection sensitivity. arXiv 2018, arXiv:1810.02340.
23. Han, S.; Pool, J.; Tran, J.; Dally, W.J. Learning both weights and connections for efficient neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; pp. 1135–1143.
24. He, Y.; Zhang, X.; Sun, J. Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 1389–1397.
25. Li, H.; Samet, H.; Kadav, A.; Durdanovic, I.; Graf, H.P. Pruning filters for efficient convnets. In Proceedings of the 5th International Conference on Learning Representations (ICLR 2017), Toulon, France, 24–26 April 2017; pp. 1–13.
26. Frankle, J.; Carbin, M. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In Proceedings of the 7th International Conference on Learning Representations (ICLR 2019), New Orleans, LA, USA, 6–9 May 2019; pp. 1–42.
27. Louizos, C.; Welling, M.; Kingma, D.P. Learning sparse neural networks through L0 regularization. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018), Vancouver, BC, Canada, 30 April–3 May 2018; pp. 1–13.
28. Dettmers, T.; Zettlemoyer, L. Sparse Networks from Scratch: Faster Training without Losing Performance. arXiv 2019, arXiv:1907.04840.
29. Elman, J.L. Finding structure in time. Cogn. Sci. 1990, 14, 179–211.
30. Werbos, P.J. Backpropagation Through Time: What It Does and How to Do It. Proc. IEEE 1990, 78, 1550–1560.
31. Stanley, R.E.; Taraza, D. Bearing characteristic parameters to estimate the optimum counterweight mass of a 6-cylinder in-line engine. Am. Soc. Mech. Eng. Intern. Combust. Engine Div. (Publ.) ICE 2001, 36, 123–135.
32. Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. arXiv 2014, arXiv:1412.3555.
33. Reed, R. Pruning Algorithms-A Survey. IEEE Trans. Neural Netw. 1993, 4, 740–747.
34. Dai, X.; Yin, H.; Jha, N.K. NeST: A Neural Network Synthesis Tool Based on a Grow-and-Prune Paradigm. IEEE Trans. Comput. 2019, 68, 1487–1497.
35. Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. J. Mach. Learn. Res. 2010, 9, 249–256.
36. Kingma, D.P.; Ba, J.L. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA, 7–9 May 2015; pp. 1–15.
37. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. arXiv 2016, arXiv:1603.04467.
38. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2323.
39. O’Shea, T.J.; Corgan, J.; Clancy, T.C. Convolutional radio modulation recognition networks. Commun. Comput. Inf. Sci. 2016, 629, 213–226.
40. Ma, H.; Xu, G.; Meng, H.; Wang, M.; Yang, S.; Wu, R.; Wang, W. Cross model deep learning scheme for automatic modulation classification. IEEE Access 2020, 8, 78923–78931.
41. Tekbiyik, K.; Ekti, A.R.; Gorcin, A.; Kurt, G.K.; Kececi, C. Robust and Fast Automatic Modulation Classification with CNN under Multipath Fading Channels. In Proceedings of the IEEE Vehicular Technology Conference, Antwerp, Belgium, 25–28 May 2020.
42. Mossad, O.S.; Elnainay, M.; Torki, M. Deep convolutional neural network with multi-task learning scheme for modulations recognition. In Proceedings of the 2019 15th International Wireless Communications and Mobile Computing Conference (IWCMC 2019), Tangier, Morocco, 24–28 June 2019; pp. 1644–1649.
43. O’Shea, T.; Hoydis, J. An Introduction to Deep Learning for the Physical Layer. IEEE Trans. Cogn. Commun. Netw. 2017, 3, 563–575.
| Hyper-parameter | Lenet-300-100 (MNIST) | Conv2 (CIFAR10) | RNNs (RadioML) |
|---|---|---|---|
| Batch size | 100 | 64 | 400 |
| Warm-up epochs | 1 | 5 | 10 |
| Prune frequency | Once per epoch | Once per epoch | Once per epoch |
| Prune rate p | 0.005 | 0.002 | 0.002 |
| Momentum hyper-parameter | 0.2 | 0.3 | 0.3 |
| First pruning hyper-parameter | 0.4 | 0.3 | 0.3 |
| Second pruning hyper-parameter | 2 | 2 | 2 |
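Under the same assumptions as before, a toy run wiring the clearly labeled RadioML settings from this table into the sketch from Section 2.3 would look as follows. The weight shapes and random gradients are placeholders; only the warm-up length, prune rate, and once-per-epoch schedule come from the table.

```python
import numpy as np

# Toy check of the RadioML settings: nothing is pruned during the 10
# warm-up epochs, then p = 0.002 of the surviving recurrent weights are
# removed once per epoch; non-recurrent weights are exempted.
params = {"recurrent_kernel": np.random.randn(128, 512),
          "kernel": np.random.randn(2, 512)}
pruner = MomentumMagnitudePruner(params, prune_rate=0.002,
                                 warmup_epochs=10,
                                 exempt={"kernel"})
for epoch in range(20):
    pruner.accumulate({k: 1e-3 * np.random.randn(*v.shape)
                       for k, v in params.items()})
    pruner.prune(epoch)     # once per epoch, per the table

print("recurrent weights remaining:",
      pruner.masks["recurrent_kernel"].mean())   # < 1.0 after warm-up
```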
| Method | Lenet-300-100 on MNIST, Accuracy (%) | Conv2 on CIFAR10, Accuracy (%) |
|---|---|---|
| Unpruned baseline | 98.16 ± 0.06 | 65.09 ± 0.045 |
| Magnitude-based [23] | 98.40 | 68.31 |
| Lottery [26] | 98.52 | 69.33 |
| Proposed method | 98.62 ± 0.067 | 71.29 ± 0.059 |
| Model | AccAS (original) | AccAS (pruned) | AccASH (original) | AccASH (pruned) |
|---|---|---|---|---|
| 2-layer GAM-HRNN | 62.47 [14] | 62.87 ± 0.077 | 92.2 [14] | 92.45 ± 0.083 |
| 2-layer LSTM | 60.8 ± 0.073 | 61.12 ± 0.11 | 90 [13] | 90.81 ± 0.09 |
| 2-layer GRU | 61.68 ± 0.087 | 62.06 ± 0.073 | 91.13 ± 0.069 | 91.59 ± 0.063 |
| Cross model [40] | 62.41 | - | - | - |
| Multipath CNN [41] | - | - | 90.7 | - |
| Multitask CNN [42] | 59.43 | - | - | - |
| SCRNN [15] | - | - | 92.1 | - |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).