Radar Target Detection Algorithm Using Convolutional Neural Network to Process Graphically Expressed Range Time Series Signals
Abstract
1. Introduction
2. Graphical Expression Signal
2.1. Signal Definition and Matched Filtering
2.2. Two-Dimensional Echo Sequence Diagram of Multiple Cycles
3. Design of Radar Target Detection Network
3.1. Structure and Parameters of Convolutional Neural Network
1. Input layer: the input layer receives the original input data or images and preprocesses them.
2. Convolutional layer: the convolutional layer is the core of a convolutional neural network. Convolution kernels of a given size slide over the data or image to extract features, and different kernels compute different features [24]. As an example, assume the input image is a 5 × 5 matrix taken from the simulated radar echo image at 12 dB SNR shown in Figure 4a, with the corresponding amplitude values shown in Figure 4b, and the convolution kernel is the 3 × 3 matrix shown in Figure 4c (a short convolution sketch follows this list).
3. Pooling layer: the pooling layer is a “down-sampling” operation that filters out the minor features in the data and retains the important feature information; max pooling or average pooling is usually used.
4. Fully connected layer: after the convolutional and pooling layers, one or two fully connected layers are generally added before the output layer. Each node in a fully connected layer is connected to every node in the previous layer, so the fully connected layers integrate the previously extracted features to complete the prediction or classification task.
5. Output layer: the output layer generates the final result.
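To make the convolution step concrete, the following is a minimal NumPy sketch of a stride-1, no-padding (“valid”) 2D convolution of a 5 × 5 input with a 3 × 3 kernel. The numerical values are illustrative placeholders, not the amplitudes of Figure 4.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Stride-1, no-padding 2D convolution (cross-correlation, as used in CNNs)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Placeholder 5 x 5 input and 3 x 3 kernel (values are illustrative only).
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)

print(conv2d_valid(image, kernel))   # 3 x 3 feature map
```

A 5 × 5 input convolved with a 3 × 3 kernel at stride 1 without padding therefore produces a 3 × 3 feature map.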
3.2. Target Detection Network Design
4. Experiments and Discussions
4.1. Processing of Image Blocks
4.2. Validation Experiments of the Detection Method
4.2.1. Experiment I: Validation of the Detection Method
4.2.2. Experiment II: Influence of Image Block Size on Detection
4.2.3. Experiment III: Detection of Moving Scattering Point
4.2.4. Experiment IV: Detection of Two Scattering Points
4.3. Experiment V: Detection of Multi-Scatterer Target
4.3.1. Electromagnetic Simulation Experiment I
4.3.2. Electromagnetic Simulation Experiment II
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Richards, M.A. Fundamentals of Radar Signal Processing, 2nd ed.; McGraw-Hill Education: New York, NY, USA, 2014. [Google Scholar]
- Zhang, F.; Xu, L. Test method of radar to moving targets with high speed under low SNR condition. Shipboard Electron. Countermeas. 2022, 45, 57–61. [Google Scholar]
- Wu, Y.; Xia, Z. Fast maneuvering multiple targets’ detection at low SNR level based on Keystone transform. Comput. Digit. Eng. 2016, 44, 625–629. [Google Scholar]
- Su, J.; Xing, M.; Wang, G.; Bao, Z. High-speed multi-target detection with narrowband radar. IET Radar Sonar Navig. 2010, 4, 595–603. [Google Scholar] [CrossRef]
- Amrouche, N.; Khenchaf, A.; Berkani, D. Detection and Tracking Targets Under Low SNR. In Proceedings of the IEEE International Conference on Industrial Technology, Toronto, ON, Canada, 22–25 March 2017; IEEE: Piscataway, NJ, USA, 2017. [Google Scholar]
- Davey, S.J.; Rutten, M.G.; Cheung, B. Using Phase to Improve Track-Before-Detect. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 832–849. [Google Scholar] [CrossRef]
- Wang, S.; Zhang, Y. Improved dynamic programming algorithm for low signal-to-noise ratio moving target detection. Syst. Eng. Electron. 2016, 38, 2244–2251. [Google Scholar]
- Pulford, G.W.; La Scala, B.F. Multihypothesis Viterbi Data Association: Algorithm Development and Assessment. IEEE Trans. Aerosp. Electron. Syst. 2010, 46, 583–609. [Google Scholar] [CrossRef]
- Zhang, H.; Wen, X.; Zhang, T. The detection of multiple targets with low SNR based on greedy algorithm. Chin. J. Comput. 2008, 31, 142–150. [Google Scholar] [CrossRef]
- Ball, J.E. Low signal-to-noise ratio radar target detection using Linear Support Vector Machines (L-SVM). In Proceedings of the Radar Conference, Cincinnati, OH, USA, 19–23 May 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1291–1294. [Google Scholar]
- Niu, W.; Zheng, W.; Yang, Z.; Wu, Y.; Vagvolgyi, B.; Liu, B. Moving Point Target Detection Based on Higher Order Statistics in Very Low SNR. IEEE Geosci. Remote Sens. Lett. 2018, 15, 217–221. [Google Scholar] [CrossRef]
- Guo, R.; Wu, M. A method of target range and velocity measurement based on wideband radar returns. Mod. Radar 2009, 31, 47–50. [Google Scholar]
- Bao, Z.; Xing, M.; Wang, T. Radar Imaging Technology; Publishing House of Electronics Industry: Beijing, China, 2005; pp. 27–30. [Google Scholar]
- Wehner, D.R. High Resolution Radar; Artech House Publisher: Boston, MA, USA, 1995; pp. 33–42. [Google Scholar]
- Dai, Y.; Liu, D.; Hu, Q.; Chen, C.; Wang, Y. Phase compensation accumulation method based for radar echo splitting. Mod. Def. Technol. 2022, 50, 84–89. [Google Scholar]
- Orlenko, V.M.; Shirman, Y.D. Non-coherent integration losses of wideband target detection. In Proceedings of the First European Radar Conference, Amsterdam, The Netherlands, 11–15 October 2004; IEEE: Piscataway, NJ, USA, 2004. [Google Scholar]
- Dai, F.; Liu, H.; Wu, S. Detection performance comparison for wideband and narrowband radar in noise. J. Electron. Inf. Technol. 2010, 32, 1837–1842. [Google Scholar] [CrossRef]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
- Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
- Hinton, G.; Salakhutdinov, R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef] [PubMed]
- Bengio, Y. Learning deep architectures for AI. Found. Trends Mach. Learn. 2009, 2, 1–127. [Google Scholar] [CrossRef]
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar]
- Papageorgiou, C.P.; Oren, M.; Poggio, T. A General Framework for Object Detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Santa Barbara, CA, USA, 23–25 June 1998; IEEE: Piscataway, NJ, USA, 1998; pp. 555–562. [Google Scholar]
- Li, H.; Lin, Z.; Shen, X.; Brandt, J.; Hua, G. A convolutional neural network cascade for face detection. In Proceedings of the Computer Vision & Pattern Recognition, Boston, MA, USA, 7–12 June 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 5325–5334. [Google Scholar]
- Zeng, X.; Ouyang, W.; Wang, M.; Wang, X. Deep Learning of Scene-Specific Classifier for Pedestrian Detection; Springer International Publishing: Cham, Switzerland, 2014; pp. 472–487. [Google Scholar]
- Li, X.; Ye, M.; Fu, M.; Xu, P.; Li, T. Domain adaption of vehicle detector based on convolutional neural networks. Int. J. Control. Autom. Syst. 2015, 13, 1020–1031. [Google Scholar] [CrossRef]
- Fu, M.; Deng, M.; Zhang, D. Survey on deep neural network image target detection algorithms. Comput. Syst. Appl. 2022, 31, 35–45. [Google Scholar]
- Vaillant, R.; Monrocq, C.; LeCun, Y. Original approach for the localisation of objects in images. IEE Proc.-Vis. Image Signal Process. 1994, 141, 245–250. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G. ImageNet Classification with Deep Convolutional Neural Networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
- Szegedy, C.; Toshev, A.; Erhan, D. Deep neural networks for object detection. Adv. Neural Inf. Process. Syst. 2013, 26, 2553–2561. [Google Scholar]
- Erhan, D.; Szegedy, C.; Toshev, A.; Anguelov, D. Scalable Object Detection Using Deep Neural Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 24–27 June 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 2155–2162. [Google Scholar]
- Szegedy, C.; Reed, S.; Erhan, D.; Anguelov, D.; Ioffe, S. Scalable, high-quality object detection. arXiv 2014, arXiv:1412.1441. [Google Scholar]
- Girshick, R.; Iandola, F.; Darrell, T.; Malik, J. Deformable part models are convolutional neural networks. In Proceedings of the Computer Vision & Pattern Recognition, Boston, MA, USA, 7–12 June 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 437–446. [Google Scholar]
- Li, X.; Ye, M.; Liu, D.; Zhang, F.; Tang, S. Memory-based object detection in surveillance scenes. In Proceedings of the IEEE International Conference on Multimedia & Expo, Seattle, WA, USA, 11–15 July 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–6. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. In Proceedings of the Computer Vision & Pattern Recognition, Boston, MA, USA, 7–12 June 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1–9. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 770–778. [Google Scholar]
- Skolnik, M.I. Radar Handbook, 2nd ed.; McGraw-Hill Book Co., Inc.: New York, NY, USA, 1990. [Google Scholar]
Layer Name | Parameter Settings |
---|---|
Input | |
Conv2D | filters = 6, kernel_size = 5 × 5 |
BN layer | batch_size = 32 |
Activation layer | activation = ‘sigmoid’ |
MaxPool2D | pool_size = 2 × 2, strides = 2 |
Conv2D | filters = 16, kernel_size = 5 × 5 |
BN layer | batch_size = 32 |
Activation layer | activation = ‘sigmoid’ |
MaxPool2D | pool_size = 2 × 2, strides = 2 |
Dense(120) | units = 120, activation = ‘sigmoid’ |
Dense(84) | units = 84, activation = ‘sigmoid’ |
Softmax | units = 2, activation = ‘softmax’ |
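For reference only, the table above corresponds to a LeNet-style layer stack; a minimal Keras sketch of it is given below. The input block size (32 × 32 × 1 here), the Flatten layer before the dense layers, and the optimizer/loss choices are assumptions not given in the table, and batch_size = 32 is treated as a training-time parameter rather than a property of the BN layers.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_detection_net(input_shape=(32, 32, 1)):
    """LeNet-style stack following the layer table; the input shape is an assumption."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(filters=6, kernel_size=(5, 5)),
        layers.BatchNormalization(),
        layers.Activation('sigmoid'),
        layers.MaxPooling2D(pool_size=(2, 2), strides=2),
        layers.Conv2D(filters=16, kernel_size=(5, 5)),
        layers.BatchNormalization(),
        layers.Activation('sigmoid'),
        layers.MaxPooling2D(pool_size=(2, 2), strides=2),
        layers.Flatten(),                       # not listed in the table, but needed before Dense
        layers.Dense(120, activation='sigmoid'),
        layers.Dense(84, activation='sigmoid'),
        layers.Dense(2, activation='softmax'),  # two classes: target present / target absent
    ])

model = build_detection_net()
# The optimizer and loss below are placeholders; batch_size = 32 would be passed to model.fit().
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
```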
Parameters | Symbols | Value |
---|---|---|---|
Carrier frequency | | 10 GHz |
Bandwidth | | 1 GHz |
Sub-pulse width | | |
Sampling rate | | 2 GHz |
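As a minimal sketch of the matched filtering described in Section 2.1 using these parameters, the NumPy snippet below generates a baseband linear-FM sub-pulse and matched-filters a noisy, delayed echo. The sub-pulse width (1 µs) and the echo delay are assumed placeholder values, the linear-FM waveform is itself an assumption, and the 10 GHz carrier is not needed at baseband.

```python
import numpy as np

B = 1e9        # bandwidth: 1 GHz (from the table)
fs = 2e9       # sampling rate: 2 GHz (from the table)
Tp = 1e-6      # sub-pulse width: assumed placeholder, not listed above

t = np.arange(int(Tp * fs)) / fs
k = B / Tp                                    # chirp rate
tx = np.exp(1j * np.pi * k * t**2)            # baseband linear-FM sub-pulse (assumed waveform)

# Simulated echo: the pulse delayed by an assumed 4000 samples plus complex Gaussian noise.
delay = 4000
rx = np.zeros(len(tx) + delay + 1000, dtype=complex)
rx[delay:delay + len(tx)] += tx
rng = np.random.default_rng(0)
rx += 0.5 * (rng.standard_normal(len(rx)) + 1j * rng.standard_normal(len(rx)))

# Matched filter: convolve with the conjugated, time-reversed replica of the pulse.
y = np.convolve(rx, np.conj(tx[::-1]), mode='full')
est_delay = int(np.argmax(np.abs(y))) - (len(tx) - 1)
print('estimated delay in samples:', est_delay)   # expected: 4000
```

Repeating the matched filtering over successive pulse repetition periods produces the range time series that Section 2.2 arranges into a two-dimensional echo sequence diagram.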
Serial Number | Signal Power/dB | Noise Power/dB | SNR/dB |
---|---|---|---|
1 | 25.3 | 12.7 | 12.6 |
2 | 27.3 | 12.7 | 14.6 |
3 | 25.7 | 12.7 | 13.0 |
4 | 26.2 | 12.7 | 13.5 |
5 | 23.6 | 12.7 | 10.9 |
6 | 24.5 | 12.7 | 11.8 |
7 | 28.6 | 12.8 | 15.8 |
8 | 25.6 | 12.7 | 12.9 |
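The SNR column is simply the difference between the signal power and noise power in dB, which the short check below reproduces (values copied from the table).

```python
import numpy as np

# SNR/dB = signal power/dB - noise power/dB (values from the table above).
signal_db = np.array([25.3, 27.3, 25.7, 26.2, 23.6, 24.5, 28.6, 25.6])
noise_db = np.array([12.7, 12.7, 12.7, 12.7, 12.7, 12.7, 12.8, 12.7])
print(np.round(signal_db - noise_db, 1))   # [12.6 14.6 13.  13.5 10.9 11.8 15.8 12.9]
```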
Serial Number | Coordinate Position/cm | Serial Number | Coordinate Position/cm | Serial Number | Coordinate Position/cm |
---|---|---|---|---|---|
1 | [0, 0] | 10 | [116.7, 92.5] | 19 | [92.5, 116.7] |
2 | [26.2, 7.3] | 11 | [141.1, 118.7] | 20 | [66.3, 92.4] |
3 | [55.5, 27.0] | 12 | [162.8, 117.7] | 21 | [39.1, 127.4] |
4 | [95.5, 9.8] | 13 | [177.4, 133.5] | 22 | [16.1, 156.8] |
5 | [135.5, −7.4] | 14 | [157.7, 148.6] | 23 | [1.5, 159.3] |
6 | [159.3, 1.5] | 15 | [148.6, 157.7] | 24 | [−7.4, 135.5] |
7 | [156.8, 16.1] | 16 | [133.5, 177.4] | 25 | [9.8, 95.5] |
8 | [127.4, 39.1] | 17 | [117.7, 162.8] | 26 | [27.0, 55.5] |
9 | [92.4, 66.3] | 18 | [118.7, 141.1] | 27 | [7.3, 26.2] |