An Adaptive Focal Loss Function Based on Transfer Learning for Few-Shot Radar Signal Intra-Pulse Modulation Classification
Abstract
1. Introduction
2. Methods
2.1. Related Work
2.1.1. Convolutional Neural Network
2.1.2. Radar Signal Intra-Pulse Modulation Classification
2.1.3. Transfer Learning
2.1.4. The Focal Loss Function
2.2. The Proposed Methods
2.2.1. Transfer Learning-Based Convolutional Neural Network
2.2.2. Adaptive Focal Loss Function (AFL)
3. Experiments and Results
3.1. Dataset and Parameter Settings
3.2. Experiments on 1D-TLAFLCNN
3.2.1. Experiments on the Source Domain Network
3.2.2. Experiments on the Target Domain Network
- Initialize the corresponding convolutional layers of the target domain network with the weights learned from the first four convolutional layers of the source domain network, and freeze these weights;
- Randomly initialize the parameters of the fully connected layers using a Gaussian distribution;
- Train the classification layers using the target domain dataset;
- Fine-tune the entire network by unfreezing all convolutional layers and retraining with a low learning rate (0.0001), so that the pre-trained features adapt incrementally to the new data.
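The target network trained by the steps above uses the paper's adaptive focal loss (AFL). The adaptive γ schedule itself is not reproduced in this excerpt, but AFL builds on the standard focal loss of Lin et al.; a minimal NumPy sketch with the common defaults α = 0.25, γ = 2 (assumed defaults, not values from the paper):

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Standard (fixed-gamma) binary focal loss of Lin et al.

    p: predicted probability of the positive class; y: 0/1 label.
    The paper's AFL adapts gamma during training; a fixed gamma is used here.
    """
    p_t = np.where(y == 1, p, 1.0 - p)            # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)  # class-balancing weight
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# The modulating factor (1 - p_t)^gamma down-weights easy examples:
easy = float(focal_loss(np.array([0.9]), np.array([1]))[0])  # well-classified
hard = float(focal_loss(np.array([0.1]), np.array([1]))[0])  # misclassified
```

Here the easy example (p_t = 0.9) contributes orders of magnitude less loss than the hard one (p_t = 0.1), which is the mechanism that lets training focus on hard, few-shot classes.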
3.3. Comparisons with Other Baseline Methods
4. Discussion
4.1. Comparison of AFL with FL under Different Values of the Focusing Parameter
4.2. Effect of Different Noise Environments on Experimental Results
4.3. Effect of Different Sample Sizes on Experimental Results
4.4. Improvement in Model Training Time
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Wang, S. Research on recognition algorithm for intra-pulse modulation of radar signals. In Proceedings of the 2018 IEEE 3rd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, China, 12–14 October 2018; pp. 1092–1096.
- Jin, Q.; Wang, H.Y.; Ma, F.F. An overview of radar emitter classification and identification methods. Telecommun. Eng. 2019, 59, 360–368.
- Ma, X.R.; Liu, D.; Shan, Y.L. Intra-pulse modulation recognition using short-time Ramanujan Fourier transform spectrogram. EURASIP J. Adv. Signal Process. 2017, 1, 42.
- Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117.
- Huang, Z.L.; Datcu, M.; Pan, Z.X.; Lei, B. Deep SAR-Net: Learning objects from signals. ISPRS J. Photogramm. Remote Sens. 2020, 161, 179–193.
- Wang, X.B.; Huang, G.M.; Zhou, Z.W.; Tian, W.; Yao, J.L.; Gao, J. Radar emitter recognition based on the energy cumulant of short time Fourier transform and reinforced deep belief network. Sensors 2018, 18, 3103.
- Qu, Z.Y.; Wang, W.Y.; Hou, C.B.; Hou, C.F. Radar signal intra-pulse modulation recognition based on convolutional denoising autoencoder and deep convolutional neural network. IEEE Access 2019, 7, 112339–112347.
- Gao, L.P.; Zhang, X.L.; Gao, J.P.; You, S.X. Fusion image based radar signal feature extraction and modulation recognition. IEEE Access 2019, 7, 13135–13148.
- Liu, Z.; Shi, Y.; Zeng, Y.; Gong, Y. Radar emitter signal detection with convolutional neural network. In Proceedings of the 2019 IEEE 11th International Conference on Advanced Infocomm Technology (ICAIT), Jinan, China, 18–20 October 2019; pp. 48–51.
- Sun, J.; Xu, G.L.; Ren, W.J.; Yan, Z.Y. Radar emitter classification based on unidimensional convolutional neural network. IET Radar Sonar Navig. 2018, 12, 862–867.
- Li, X.Q.; Liu, Z.M.; Huang, Z.T.; Liu, W.S. Radar emitter classification with attention-based multi-RNNs. IEEE Commun. Lett. 2020, 24, 2000–2004.
- Wu, B.; Yuan, S.B.; Li, P.; Jing, Z.H.; Huang, S.; Zhao, Y.D. Radar emitter signal recognition based on one-dimensional convolutional neural network with attention mechanism. Sensors 2020, 20, 6350.
- Li, F.Z.; Liu, Y.; Wu, P.X.; Dong, F.; Cai, Q.; Wang, Z. A survey on recent advances in meta-learning. Chin. J. Comput. 2021, 44, 422–446.
- Li, Y.; Ding, Z.; Zhang, C.; Wang, Y.; Chen, J. SAR ship detection based on ResNet and transfer learning. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 1188–1191.
- Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359.
- Dai, W.; Yang, Q.; Xue, G.; Yu, Y. Boosting for transfer learning. In Proceedings of the Twenty-Fourth International Conference on Machine Learning (ICML 2007), Corvallis, OR, USA, 20–24 June 2007.
- Oquab, M.; Bottou, L.; Laptev, I.; Sivic, J. Learning and transferring mid-level image representations using convolutional neural networks. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1717–1724.
- Huang, Z.L.; Pan, Z.X.; Lei, B. Transfer learning with deep convolutional neural network for SAR target classification with limited labeled data. Remote Sens. 2017, 9, 907.
- Shang, R.H.; Wang, J.M.; Jiao, L.C.; Stolkin, R.; Hou, B.; Li, Y.Y. SAR targets classification based on deep memory convolution neural networks and transfer parameters. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2834–2846.
- Zhang, W.; Zhu, Y.F.; Fu, Q. Deep transfer learning based on generative adversarial networks for SAR target recognition with label limitation. In Proceedings of the 2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), Chongqing, China, 11–13 December 2019; pp. 1–5.
- Huang, Z.L.; Dumitru, C.O.; Pan, Z.X.; Lei, B.; Datcu, M. Classification of large-scale high-resolution SAR images with deep transfer learning. IEEE Geosci. Remote Sens. Lett. 2021, 18, 107–111.
- Rostami, M.; Kolouri, S.; Eaton, E.; Kim, K. Deep transfer learning for few-shot SAR image classification. Remote Sens. 2019, 11, 1374.
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 318–327.
- Cun, Y.L.; Boser, B.; Denker, J.; Henderson, D.; Howard, R.; Habbard, W.; Jackel, L. Handwritten digit recognition with a back-propagation network. Adv. Neural Inf. Process. Syst. 1990, 2, 396–404.
- Cun, Y.L.; Kavukcuoglu, K.; Farabet, C. Convolutional networks and applications in vision. In Proceedings of the 2010 IEEE International Symposium on Circuits and Systems (ISCAS), Paris, France, 30 May–2 June 2010; pp. 253–256.
- Krizhevsky, A.; Sutskever, I.; Hinton, G. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1–9.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- Szegedy, C.; Liu, W.; Jia, Y.Q.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9.
- He, K.M.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Wei, S.J.; Qu, Q.Z.; Su, H.; Wang, M.; Shi, J.; Hao, X.J. Intra-pulse modulation radar signal recognition based on CLDN network. IET Radar Sonar Navig. 2020, 14, 803–810.
- Pan, Y.W.; Yang, S.H.; Peng, H.; Li, T.Y.; Wang, W.Y. Specific emitter identification based on deep residual networks. IEEE Access 2019, 7, 54425–54434.
- Shao, L.; Zhu, F.; Li, X.L. Transfer learning for visual categorization: A survey. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 1019–1034.
- Varga, D. No-reference video quality assessment using multi-pooled, saliency weighted deep features and decision fusion. Sensors 2022, 22, 2209.
- Varga, D. Multi-pooled inception features for no-reference image quality assessment. Appl. Sci. 2020, 10, 2186.
- Li, C.; Zhang, S.H.; Qin, Y.; Estupinan, E. A systematic review of deep transfer learning for machinery fault diagnosis. Neurocomputing 2020, 407, 121–135.
- Wang, Q.; Du, P.F.; Yang, J.Y.; Wang, G.H.; Lei, J.J.; Hou, C.P. Transferred deep learning based waveform recognition for cognitive passive radar. Signal Process. 2019, 155, 259–267.
- Shrivastava, A.; Gupta, A.; Girshick, R. Training region-based object detectors with online hard example mining. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 761–769.
- Tian, X.W.; Wu, D.; Wang, R.; Cao, X.C. Focal text: An accurate text detection with focal loss. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 2984–2988.
- Chen, M.Q.; Fang, L.; Liu, H.F. FR-NET: Focal loss constrained deep residual networks for segmentation of cardiac MRI. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 764–767.
- Su, H.; Wei, S.J.; Wang, M.K.; Zhou, L.M.; Shi, J.; Zhang, X.L. Ship detection based on RetinaNet-Plus for high-resolution SAR imagery. In Proceedings of the 2019 6th Asia-Pacific Conference on Synthetic Aperture Radar (APSAR), Xiamen, China, 26–29 November 2019; pp. 1–5.
- Nagi, J.; Ducatelle, F.; Caro, G.A.D.; Cireşan, D.; Meier, U.; Giusti, A.; Nagi, F.; Schmidhuber, J.; Gambardella, L.M. Max-pooling convolutional neural networks for vision-based hand gesture recognition. In Proceedings of the 2011 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), Kuala Lumpur, Malaysia, 16–18 November 2011; pp. 342–347.
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
- Glorot, X.; Bordes, A.; Bengio, Y. Deep sparse rectifier neural networks. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS), Fort Lauderdale, FL, USA, 11–13 April 2011; pp. 315–323.
- Xu, B.; Wang, N.Y.; Chen, T.Q.; Li, M. Empirical evaluation of rectified activations in convolutional network. arXiv 2015, arXiv:1505.00853.
- Xu, J.; Li, Z.S.; Du, B.W.; Zhang, M.M.; Liu, J. Reluplex made more practical: Leaky ReLU. In Proceedings of the 2020 IEEE Symposium on Computers and Communications (ISCC), Rennes, France, 7–10 July 2020; pp. 1–7.
- Yu, W.; Yang, K.Y.; Yao, H.X.; Sun, X.S.; Xu, P.F. Exploiting the complementary strengths of multi-layer CNN features for image retrieval. Neurocomputing 2017, 237, 235–241.
- Li, B.; Liu, Y.; Wang, X. Gradient harmonized single-stage detector. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 8577–8584.
- Figueroa, R.L.; Zeng-Treitler, Q.; Kandula, S.; Ngo, L.H. Predicting sample size required for classification performance. BMC Med. Inform. Decis. Mak. 2012, 12, 8.
Network | Layer | Kernel Size | Channels | Max Pooling
---|---|---|---|---
Source and target networks | Conv1 | 1 × 5 | 32 | 1 × 2
 | Conv2 | 1 × 5 | 64 | 1 × 2
 | Conv3 | 1 × 5 | 128 | 1 × 4
 | Conv4 | 1 × 3 | 256 | 1 × 4
Source network only | Conv5 | 1 × 3 | 256 | 1 × 4
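If the convolutions use "same" padding so that only the max-pooling stages shrink the temporal axis, the table's pooling factors (2, 2, 4, 4, 4) fix the source network's output length. A quick sketch of that arithmetic, with a hypothetical input length of 1024 samples (the paper's actual input length is not given in this excerpt):

```python
# Pooling factors of the five source-network stages, from the table above.
pool_factors = [2, 2, 4, 4, 4]

length = 1024  # hypothetical input length; an assumption, not a value from the paper
for p in pool_factors:
    length //= p  # each max-pooling stage divides the temporal length

# Combined downsampling is 2 * 2 * 4 * 4 * 4 = 256, so 1024 samples reduce to
# 4 time steps, each carrying the 256 channels produced by Conv5.
```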
Domain | Type | Carrier Frequency | Parameters
---|---|---|---
Source | SCF | 50~500 MHz | None
Source | LFM | 100~400 MHz | Bandwidth: 20~150 MHz
Source | SFM | 100~400 MHz | Bandwidth: 20~100 MHz
Target | BPSK | 100~300 MHz | 5-, 7-, 11-, 13-bit Barker codes
Target | BFSK | 100~400 MHz (×2) | 5-, 7-, 11-, 13-bit Barker codes
Target | QFSK | 100~300 MHz (×4) | 16-bit Frank code
Target | FRANK | 100~400 MHz | Phase number: 4–6
Target | EQFM | 100~300 MHz | Bandwidth: 5~100 MHz
Target | DLFM | 100~300 MHz | Bandwidth: 10~150 MHz
Target | MLFM | 100~300 MHz (×2) | Bandwidth: 30~100 MHz; Segment: 20–80%
Target | LFM-BPSK | 100~300 MHz | Bandwidth: 5~150 MHz; 5-, 7-, 11-, 13-bit Barker codes
Target | BPSK-BFSK | 100~400 MHz (×2) | 5-, 7-, 11-, 13-bit Barker codes

(×2)/(×4): the signal type has two/four carrier-frequency entries, each drawn from the listed range.
Domain | Signal Number | Types
---|---|---
Source | 5000 per signal | SCF, LFM, SFM
Target | Each type increases from 50 to 140 in steps of 10, giving 10 training sets of different sample sizes | BPSK, BFSK, QFSK, FRANK, EQFM, DLFM, MLFM, LFM–BPSK, BPSK–BFSK
SNR/dB | −5 | −4 | −3 | −2 | −1 | 0 | 1 | 2 | 3 | 4 | 5 |
---|---|---|---|---|---|---|---|---|---|---|---|
SCF | 0.9897 | 0.9923 | 0.9980 | 0.9990 | 0.9969 | 0.9980 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
LFM | 0.9840 | 0.9884 | 0.9917 | 0.9915 | 0.9990 | 1.0000 | 0.9990 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
SFM | 0.9812 | 0.9842 | 0.9863 | 0.9888 | 0.9990 | 0.9969 | 0.9970 | 0.9983 | 1.0000 | 1.0000 | 1.0000
Average | 0.9850 | 0.9883 | 0.9920 | 0.9931 | 0.9983 | 0.9983 | 0.9987 | 0.9994 | 1.0000 | 1.0000 | 1.0000 |
Size \ SNR | −5 dB | −4 dB | −3 dB | −2 dB | −1 dB | 0 dB | 1 dB | 2 dB | 3 dB | 4 dB | 5 dB | Noiseless
---|---|---|---|---|---|---|---|---|---|---|---|---
50 | 0.8353 | 0.8444 | 0.8781 | 0.8996 | 0.9031 | 0.9116 | 0.9148 | 0.9227 | 0.9347 | 0.9353 | 0.9365 | 0.9646
60 | 0.8400 | 0.8549 | 0.8841 | 0.9059 | 0.9208 | 0.9219 | 0.9171 | 0.9246 | 0.9348 | 0.9372 | 0.9440 | 0.9735
70 | 0.8448 | 0.8616 | 0.8966 | 0.9172 | 0.9264 | 0.9287 | 0.9377 | 0.9391 | 0.9441 | 0.9506 | 0.9511 | 0.9746
80 | 0.8463 | 0.8637 | 0.9067 | 0.9211 | 0.9273 | 0.9296 | 0.9433 | 0.9444 | 0.9454 | 0.9516 | 0.9512 | 0.9754
90 | 0.8480 | 0.8661 | 0.9168 | 0.9238 | 0.9381 | 0.9333 | 0.9452 | 0.9461 | 0.9456 | 0.9529 | 0.9539 | 0.9778
100 | 0.8496 | 0.8777 | 0.9201 | 0.9265 | 0.9416 | 0.9405 | 0.9487 | 0.9464 | 0.9468 | 0.9549 | 0.9588 | 0.9798
110 | 0.8557 | 0.8804 | 0.9216 | 0.9326 | 0.9423 | 0.9429 | 0.9511 | 0.9466 | 0.9480 | 0.9593 | 0.9594 | 0.9811
120 | 0.8581 | 0.8919 | 0.9266 | 0.9397 | 0.9427 | 0.9461 | 0.9525 | 0.9557 | 0.9589 | 0.9639 | 0.9648 | 0.9845
130 | 0.8701 | 0.9164 | 0.9278 | 0.9397 | 0.9464 | 0.9524 | 0.9562 | 0.9566 | 0.9642 | 0.9691 | 0.9767 | 0.9863
140 | 0.8777 | 0.9206 | 0.9341 | 0.9411 | 0.9514 | 0.9649 | 0.9674 | 0.9709 | 0.9698 | 0.9786 | 0.9854 | 0.9889
Method | Focal Loss | Transfer Learning | Adaptive Focal Loss
---|---|---|---
1D-CNN | no | no | no
1D-FLCNN | yes | no | no
1D-TLCNN | no | yes | no
1D-TLFLCNN | yes | yes | no
1D-TLAFLCNN | no | yes | yes
Method | Number of Epochs | Total Time (RTX 3070)
---|---|---
1D-TLAFLCNN | 90 | 258 s
1D-TLFLCNN | 100 | 284 s
1D-TLCNN | 120 | 355 s
1D-FLCNN | 140 | 416 s
1D-CNN | 160 | 483 s
Method \ Sample Size | 50 | 60 | 70 | 80 | 90 | 100 | 110 | 120 | 130 | 140 | AA
---|---|---|---|---|---|---|---|---|---|---|---
1D-TLAFLCNN | 0.9116 | 0.9219 | 0.9287 | 0.9296 | 0.9333 | 0.9405 | 0.9429 | 0.9461 | 0.9524 | 0.9649 | 0.9372
1D-TLFLCNN | 0.9007 | 0.9177 | 0.9185 | 0.9278 | 0.9261 | 0.9378 | 0.9407 | 0.9370 | 0.9384 | 0.9587 | 0.9303
1D-TLCNN | 0.8938 | 0.9111 | 0.9223 | 0.9278 | 0.9317 | 0.9389 | 0.9395 | 0.9407 | 0.9454 | 0.9619 | 0.9313
1D-FLCNN | 0.9027 | 0.8963 | 0.9114 | 0.9222 | 0.9212 | 0.9288 | 0.9153 | 0.9315 | 0.9352 | 0.9524 | 0.9217
1D-CNN | 0.8074 | 0.8189 | 0.8351 | 0.8413 | 0.8456 | 0.8473 | 0.8504 | 0.8596 | 0.8704 | 0.8644 | 0.8441
CNN-Qu | 0.8286 | 0.8293 | 0.8389 | 0.8373 | 0.8411 | 0.8423 | 0.8439 | 0.8501 | 0.8577 | 0.8633 | 0.8433
CNN-Wu | 0.8751 | 0.8765 | 0.8801 | 0.8834 | 0.8862 | 0.8897 | 0.8935 | 0.8991 | 0.9033 | 0.9055 | 0.8892
CNN-Wei | 0.8177 | 0.8209 | 0.8237 | 0.8277 | 0.8307 | 0.8348 | 0.8455 | 0.8397 | 0.8508 | 0.8559 | 0.8347
Method \ SNR | −5 dB | −4 dB | −3 dB | −2 dB | −1 dB | 0 dB | 1 dB | 2 dB | 3 dB | 4 dB | 5 dB | Noiseless
---|---|---|---|---|---|---|---|---|---|---|---|---
1D-TLAFLCNN | 0.8525 | 0.8777 | 0.9112 | 0.9247 | 0.9340 | 0.9372 | 0.9434 | 0.9453 | 0.9492 | 0.9553 | 0.9581 | 0.9786
1D-TLFLCNN | 0.8486 | 0.8728 | 0.9063 | 0.9218 | 0.9306 | 0.9323 | 0.9365 | 0.9401 | 0.9444 | 0.9499 | 0.9557 | 0.9702
1D-TLCNN | 0.8426 | 0.8619 | 0.8843 | 0.9049 | 0.9129 | 0.9235 | 0.9277 | 0.9302 | 0.9326 | 0.9405 | 0.9406 | 0.9605
1D-FLCNN | 0.8393 | 0.8652 | 0.8945 | 0.9039 | 0.9133 | 0.9176 | 0.9296 | 0.9344 | 0.9356 | 0.9403 | 0.9427 | 0.9642
1D-CNN | 0.8117 | 0.8267 | 0.8524 | 0.8781 | 0.8947 | 0.9126 | 0.9205 | 0.9250 | 0.9268 | 0.9316 | 0.9338 | 0.9502
CNN-Qu | 0.7436 | 0.7633 | 0.8161 | 0.8391 | 0.8498 | 0.8516 | 0.8636 | 0.8693 | 0.8702 | 0.8725 | 0.8746 | 0.9011
CNN-Wu | 0.7624 | 0.7913 | 0.8218 | 0.8308 | 0.8474 | 0.8510 | 0.8588 | 0.8697 | 0.8711 | 0.8782 | 0.8868 | 0.9377
CNN-Wei | 0.8095 | 0.8390 | 0.8511 | 0.8744 | 0.8962 | 0.8976 | 0.9159 | 0.9232 | 0.9322 | 0.9352 | 0.9361 | 0.9599
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Jing, Z.; Li, P.; Wu, B.; Yuan, S.; Chen, Y. An Adaptive Focal Loss Function Based on Transfer Learning for Few-Shot Radar Signal Intra-Pulse Modulation Classification. Remote Sens. 2022, 14, 1950. https://doi.org/10.3390/rs14081950