Robustness of Deep Learning-Based Specific Emitter Identification under Adversarial Attacks
Abstract
1. Introduction
- (1)
- DL-based SEI identifies emitters from subtle hardware differences between radiation sources. Is it robust, and can it be fooled by adversarial examples?
- (2)
- If the system is attacked, does it still work properly? What form do adversarial examples against SEI take, and what are the characteristics of the attack signal?
- (3)
- Is there a way to improve system performance against these attacks, and can the recognition performance of the system be fully recovered?
1.1. Contribution
- (1)
- The security and robustness of DL-based SEI under adversarial attacks are studied for the first time. The in-depth system analysis answers the above questions and offers guidance for the practical application of this technology.
- (2)
- The concepts of adversarial attack and defensive training are introduced into the SEI problem. Two new scenarios are designed on the basis of the original SEI, specific implementation methods are given, and rigorous experiments are carried out on real-world and simulated datasets. The scenarios are ① an attack scenario, in which malicious devices use adversarial attacks to fool DL-based SEI, and ② a defense scenario, in which adversarial training is performed so that the system correctly identifies illegal devices.
- (3)
- In the attack scenario, the adversarial perturbations and the resulting system loss are investigated using three adversarial example generation methods. The waveform characteristics of the adversarial examples, the degree of performance degradation, and the influence on different emitters are analyzed. DL-based SEI is found to be very vulnerable to adversarial attacks: even a perturbation with quite low energy can make DL-based SEI fail, and it is far more destructive than white noise of the same strength. Adversarial perturbations at a strength of −25 dB on the real-world data reduce the recognition rate from 99% to below 10%.
- (4)
- In the defense scenario, an adversarial training (AT) method inspired by the normalized training in [30] is proposed to counter adversarial attacks and enhance the robustness of DL-based SEI. Through AT, performance improves by more than 60% in the best case. Facing attacks at a strength of −32 dB, the performance of DL-based SEI recovers from 55.29% to 85.59%. AT also improves the robustness of the system to white noise.
- (5)
- In addition, different datasets are affected by attacks to different degrees, and the improvement from AT also differs, both across datasets and across individual emitters. Moreover, there is a threshold beyond which AT no longer improves performance.
1.2. Related Work
1.2.1. DL-Based SEI
1.2.2. Adversarial Attack
1.2.3. Adversarial Attack in Communication Signal Processing
1.2.4. Adversarial Training
1.3. Organization
2. Problem Formulation
2.1. Transmitter Distortion Model
2.1.1. Filter Distortion (FD)
2.1.2. I/Q Quadrature Modulation Errors (IQE)
2.1.3. Carrier Leakage and Spurious Tone (CST)
2.1.4. Power Amplifier Nonlinear Distortion (PAD)
3. Adversarial Attack and Adversarial Training
3.1. Adversarial Attack
3.1.1. Definition
3.1.2. Type of Attack
3.1.3. Adversarial Example Generation
FGSM
Algorithm 1: FGSM
Input: original signal example x; ground-truth label l; loss function L of the classifier; perturbation size ε.
Output: adversarial example x_adv.
1. Calculate the gradient ∇_x L(x, l).
2. Acquire the perturbation by the gradient-sign method: δ = ε · sign(∇_x L(x, l)).
3. Apply the perturbation to the original example: x_adv = x + δ.
4. Return x_adv.
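As a concrete illustration of Algorithm 1, the sketch below runs FGSM against a toy linear softmax classifier standing in for the DNN; the weights, dimensions, and signal here are all hypothetical stand-ins, not the paper's model or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the DL-based SEI classifier: a linear softmax model over
# a flattened signal vector (W and b are hypothetical, randomly drawn weights).
D, K = 64, 4                                  # signal length, number of emitters
W = rng.normal(size=(K, D))
b = np.zeros(K)

def loss_and_grad(x, label):
    """Cross-entropy loss L(x, l) and its gradient with respect to the input x."""
    z = W @ x + b
    z = z - z.max()
    p = np.exp(z); p /= p.sum()
    loss = -np.log(p[label] + 1e-12)
    grad_x = W.T @ (p - np.eye(K)[label])     # dL/dx for a linear model
    return loss, grad_x

def fgsm(x, label, eps):
    """Algorithm 1 (FGSM): x_adv = x + eps * sign(grad_x L(x, l))."""
    _, g = loss_and_grad(x, label)
    return x + eps * np.sign(g)

x = rng.normal(size=D)
label = int(np.argmax(W @ x + b))             # treat the current prediction as ground truth
x_adv = fgsm(x, label, eps=0.1)
loss_clean, _ = loss_and_grad(x, label)
loss_adv, _ = loss_and_grad(x_adv, label)
print(loss_adv > loss_clean)                  # True: the one-step attack raises the loss
```

For this convex toy model the loss increase is guaranteed, since the signed step is aligned with the gradient; for a real DNN the same step only increases the loss to first order.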
PGD
Algorithm 2: PGD
Input: original signal example x; ground-truth label l; loss function L of the classifier; perturbation size ε; step size α; iteration number N.
Output: adversarial example x_adv.
1. Initialize x_adv^(0) = x.
2. for n = 1 to N do
3. Calculate the gradient ∇_x L(x_adv^(n−1), l).
4. Update with a gradient-sign step and use ε to clip the perturbation: δ^(n) = clip_ε( x_adv^(n−1) + α · sign(∇_x L) − x ).
5. Apply the perturbation to the original example: x_adv^(n) = x + δ^(n).
6. end for
7. Return x_adv^(N).
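Algorithm 2 can be sketched in the same toy setting; the projection is an element-wise clip of the accumulated perturbation into the ε-ball around the original signal (the linear classifier and the step size α are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Same hypothetical linear softmax stand-in for the DNN classifier.
D, K = 64, 4
W = rng.normal(size=(K, D))

def loss_and_grad(x, label):
    """Cross-entropy loss and its input gradient for the toy linear model."""
    z = W @ x
    z = z - z.max()
    p = np.exp(z); p /= p.sum()
    return -np.log(p[label] + 1e-12), W.T @ (p - np.eye(K)[label])

def pgd(x, label, eps, alpha, n_iter):
    """Algorithm 2 (PGD): repeated gradient-sign steps, with the accumulated
    perturbation clipped into the L_inf ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(n_iter):
        _, g = loss_and_grad(x_adv, label)
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = x + np.clip(x_adv - x, -eps, eps)   # projection step
    return x_adv

x = rng.normal(size=D)
label = int(np.argmax(W @ x))
x_adv = pgd(x, label, eps=0.1, alpha=0.02, n_iter=10)
print(np.max(np.abs(x_adv - x)) <= 0.1)             # True: stays inside the eps-ball
print(loss_and_grad(x_adv, label)[0] > loss_and_grad(x, label)[0])   # True: loss increased
```

Because the clip only shortens each signed step, every iteration moves the loss upward here, while the perturbation energy stays bounded by ε.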
C&W
Algorithm 3: C&W
Input: original signal example x; ground-truth label l; C&W loss function L_CW of the classifier; perturbation size ε; step size α; iteration number N.
Output: adversarial example x_adv.
1. Initialize x_adv^(0) = x.
2. for n = 1 to N do
3. Calculate the gradient ∇_x L_CW(x_adv^(n−1), l).
4. Update with a gradient step and use ε to clip the perturbation: δ^(n) = clip_ε( x_adv^(n−1) − α · sign(∇_x L_CW) − x ).
5. Apply the perturbation to the original example: x_adv^(n) = x + δ^(n).
6. end for
7. Return x_adv^(N).
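A sketch of Algorithm 3 under the same toy assumptions. Following Section 3.1.3, the C&W objective is optimized inside a PGD-style loop; the margin form of the loss (true-class logit minus the largest other logit) and the step size are our illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear classifier standing in for the DNN.
D, K = 64, 4
W = rng.normal(size=(K, D))

def cw_margin_and_grad(x, label):
    """C&W-style margin: logit of the true class minus the largest other logit.
    The attacker descends this margin until another class overtakes it."""
    z = W @ x
    others = np.where(np.arange(K) == label, -np.inf, z)
    j = int(np.argmax(others))
    return z[label] - z[j], W[label] - W[j]    # margin, d(margin)/dx for a linear model

def cw_attack(x, label, eps, alpha, n_iter):
    """Algorithm 3 (C&W in a PGD frame): signed descent steps on the margin,
    with the perturbation clipped into the eps ball around x."""
    x_adv = x.copy()
    for _ in range(n_iter):
        _, g = cw_margin_and_grad(x_adv, label)
        x_adv = x_adv - alpha * np.sign(g)
        x_adv = x + np.clip(x_adv - x, -eps, eps)
    return x_adv

x = rng.normal(size=D)
label = int(np.argmax(W @ x))
m0, _ = cw_margin_and_grad(x, label)
x_adv = cw_attack(x, label, eps=0.1, alpha=0.02, n_iter=10)
m1, _ = cw_margin_and_grad(x_adv, label)
print(m1 < m0)                                 # True: the true-class margin shrinks
```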
3.2. Adversarial Training
3.2.1. Training Method
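As a minimal illustration of the idea behind adversarial training, the sketch below alternates between crafting FGSM examples on the current model and updating the model on a mix of clean and adversarial data. Everything here (the linear softmax stand-in, the synthetic "emitter" data, the sizes and rates) is a hypothetical toy, not the paper's DNN or datasets:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy data standing in for emitter signals: K class means plus noise.
D, K, N = 32, 3, 300
means = rng.normal(scale=2.0, size=(K, D))
y = rng.integers(0, K, size=N)
X = means[y] + rng.normal(size=(N, D))

W = np.zeros((K, D))

def grads(W, X, y):
    """Cross-entropy gradients of a linear softmax model w.r.t. W and inputs."""
    Z = X @ W.T
    Z = Z - Z.max(axis=1, keepdims=True)
    P = np.exp(Z); P /= P.sum(axis=1, keepdims=True)
    P[np.arange(len(y)), y] -= 1.0           # dL/dZ
    return P.T @ X / len(y), P @ W           # dL/dW, dL/dX

eps, lr = 0.3, 0.5
for _ in range(200):
    gW_clean, gX = grads(W, X, y)
    X_adv = X + eps * np.sign(gX)            # craft FGSM examples on the fly
    gW_adv, _ = grads(W, X_adv, y)
    W -= lr * 0.5 * (gW_clean + gW_adv)      # mixed clean/adversarial update

# Robust accuracy: predictions on freshly crafted FGSM examples.
_, gX = grads(W, X, y)
X_adv = X + eps * np.sign(gX)
acc_adv = float(np.mean((X_adv @ W.T).argmax(axis=1) == y))
print(acc_adv)
```

The 50/50 clean/adversarial mix is one common choice; the paper's AT method, inspired by [30], is defined in this section rather than by this sketch.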
4. Scenario Description and Implementation
4.1. Natural SEI
4.1.1. Model
4.1.2. Implementation
4.2. Attack: SEI under Adversarial Attacks
4.2.1. Model
4.2.2. Implementation
4.3. Defense: AT-SEI under Adversarial Attacks
Implementation
5. Experimental Results
5.1. Dataset and Configuration
5.1.1. Dataset 1
5.1.2. Dataset 2
5.1.3. Configuration
Recognition Rate
Parameter
5.2. Experiment on Dataset 1
5.2.1. Natural SEI
5.2.2. Adversarial Attacks
Overall Performance
Emitter
Waveform
- The overall signal waveform after adding the attack is similar to the original signal. The difference is so subtle that the human eye cannot detect it; however, these perturbations can significantly worsen the recognition performance of SEI.
- The waveform of the perturbation shows no obvious regularity. Its normalized amplitude does not exceed 0.02, which is very low compared with the original signal.
- The probability density function (PDF) of the waveform does not change significantly before and after the attack.
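For intuition about the attack strengths quoted in this paper, the perturbation-to-signal ratio (PSR) can be computed as a power ratio in dB. Assuming the usual definition (10·log10 of perturbation power over signal power), a ±0.02 perturbation on a unit-power stand-in signal sits near −34 dB; the signal below is an illustrative Gaussian surrogate, not measured data:

```python
import numpy as np

def psr_db(signal, perturbation):
    """Perturbation-to-signal ratio: 10*log10(P_perturbation / P_signal), in dB."""
    p_sig = np.mean(np.abs(signal) ** 2)
    p_per = np.mean(np.abs(perturbation) ** 2)
    return 10.0 * np.log10(p_per / p_sig)

rng = np.random.default_rng(0)
x = rng.normal(size=4096)                        # unit-power stand-in signal
delta = 0.02 * np.sign(rng.normal(size=4096))    # +-0.02, matching the amplitude bound above
print(psr_db(x, delta))                          # roughly -34 dB
```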
5.2.3. Performance after AT
Performance of Each Emitter
5.2.4. Intensity Analysis
- 1.
- The directly trained DNN achieves a high recognition rate without attack. However, in the presence of an adversarial attack, the performance degrades rapidly and drops below 10% at −25 dB, demonstrating the vulnerability of DNN-based SEI.
- 2.
- All three attack methods are effective against the DNN-based SEI: their result curves lie at the bottom of the figure, and minor perturbations cause significant performance degradation. Among them, the C&W and PGD attacks are more effective and perform similarly (the implementation of C&W is based on the PGD framework in Section 3.1.3), while FGSM has a slightly smaller effect. It is worth noting that the “AWGN” line lies above the lines of the adversarial attacks, which means that adversarial examples are more destructive to the DNN than white noise of the same strength.
- 3.
- After AT, the performance of direct recognition is slightly reduced compared with that without AT but can still reach 96.49%. However, the DNN’s performance under adversarial attacks is significantly improved compared with direct training. In particular, under the FGSM attack, a recognition rate of 69% at −22 dB is still achieved, as marked in the figure, an improvement of more than 60% over the case without AT. Moreover, the line of “AWGN (AT)” is clearly higher than that of “AWGN”, which means that AT also improves the neural network’s robustness to additive white Gaussian noise.
- 4.
- The system fails even after AT when a well-designed adversarial perturbation with PSR > −10 dB is added. This shows that the trained DNN for SEI relies on fine-grained variability at that level on Dataset 1, and that the RF fingerprints between emitters are very subtle. Thus, the system reaches a performance boundary when faced with an adversarial attack above that strength and loses its recognition ability.
5.3. Validation on Dataset 2
- 1.
- The performance of the DNN, which is high in the absence of attacks, decreases rapidly when encountering adversarial attacks. Compared with C&W and PGD, FGSM has a smaller impact. Unlike Dataset 1, Dataset 2 is more sensitive to adversarial perturbations and requires a lower attack intensity to compromise the system. It is more vulnerable because the differences between transmitters are set to be small, as shown in Table 2.
- 2.
- The adversarial perturbation is more destructive than AWGN of the same intensity, because the perturbations are carefully designed to affect the DL-based SEI with less energy.
- 3.
- AT can improve the robustness to adversarial examples; the enhancement reaches 25.72% (PSR = −70 dB) and 24.04% (PSR = −50 dB). AT also enhances the robustness of the system to AWGN. However, the enhancement here is lower than on Dataset 1.
- 4.
- AT fails when the strength of the perturbations reaches PSR = −35 dB. The differences between the emitters in Dataset 2 are more subtle, so the performance boundary lies at PSR = −35 dB.
5.4. Analysis
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Talbot, I.K.; Duley, R.P.; Hyatt, H.M. Specific Emitter Identification and Verification. Technol. Rev. J. 2003, 113, 113–133.
- Zhang, J.; Wang, F.; Dobre, O.A.; Zhong, Z. Specific Emitter Identification via Hilbert–Huang Transform in Single-Hop and Relaying Scenarios. IEEE Trans. Inf. Forensics Secur. 2016, 11, 1192–1205.
- Man, P.; Ding, C.; Ren, W.; Xu, G. A Specific Emitter Identification Algorithm under Zero Sample Condition Based on Metric Learning. Remote Sens. 2021, 13, 4919.
- Peng, L.; Zhang, J.; Liu, M.; Hu, A. Deep Learning Based RF Fingerprint Identification Using Differential Constellation Trace Figure. IEEE Trans. Veh. Technol. 2020, 69, 1091–1095.
- Xu, Q.; Zheng, R.; Saad, W.; Han, Z. Device Fingerprinting in Wireless Networks: Challenges and Opportunities. IEEE Commun. Surv. Tutorials 2016, 18, 94–104.
- Wang, W.; Sun, Z.; Piao, S.; Zhu, B.; Ren, K. Wireless Physical-Layer Identification: Modeling and Validation. IEEE Trans. Inf. Forensics Secur. 2016, 11, 2091–2106.
- Gok, G.; Alp, Y.K.; Arikan, O. A New Method for Specific Emitter Identification With Results on Real Radar Measurements. IEEE Trans. Inf. Forensics Secur. 2020, 15, 3335–3346.
- Sankhe, K.; Belgiovine, M.; Zhou, F.; Angioloni, L.; Restuccia, F.; D’Oro, S.; Melodia, T.; Ioannidis, S.; Chowdhury, K. No Radio Left Behind: Radio Fingerprinting Through Deep Learning of Physical-Layer Hardware Impairments. IEEE Trans. Cogn. Commun. Netw. 2020, 6, 165–178.
- Sun, L.; Wang, X.; Huang, Z.; Li, B. Radio Frequency Fingerprint Extraction based on Feature Inhomogeneity. IEEE Internet Things J. 2022, 9, 17292–17308.
- Nguyen, D.D.N.; Sood, K.; Nosouhi, M.R.; Xiang, Y.; Gao, L.; Chi, L. RF Fingerprinting based IoT Node Authentication using Mahalanobis Distance Correlation Theory. IEEE Netw. Lett. 2022, 4, 78–81.
- Gope, P.; Sikdar, B.; Millwood, O. A scalable protocol level approach to prevent machine learning attacks on PUF-based authentication mechanisms for Internet-of-Medical-Things. IEEE Trans. Ind. Informat. 2021, 18, 1971–1980.
- McGinthy, J.M.; Wong, L.J.; Michaels, A.J. Groundwork for Neural Network-Based Specific Emitter Identification Authentication for IoT. IEEE Internet Things J. 2019, 6, 6429–6440.
- Shen, G.; Zhang, J.; Marshall, A.; Peng, L.; Wang, X. Radio Frequency Fingerprint Identification for LoRa Using Deep Learning. IEEE J. Sel. Areas Commun. 2021, 39, 2604–2616.
- Sun, L.; Wang, X.; Yang, A.; Huang, Z. Radio Frequency Fingerprint Extraction in Specific Emitter Identification. J. Radars 2020, 9.
- Guo, S.; Akhtar, S.; Mella, A. A Method for Radar Model Identification Using Time-Domain Transient Signals. IEEE Trans. Aerosp. Electron. Syst. 2021, 57, 3132–3149.
- Ureten, O.; Serinken, N. Bayesian detection of radio transmitter turn-on transients. In Proceedings of the IEEE-EURASIP Workshop on Nonlinear Signal and Image Processing (NSIP’99), Antalya, Turkey, 20–23 June 1999.
- Zhao, C.; Huang, L.; Hu, L.; Yan, Y. Transient fingerprint feature extraction for WLAN cards based on polynomial fitting. In Proceedings of the 2011 6th International Conference on Computer Science & Education (ICCSE), Singapore, 3–5 August 2011.
- Ru, X.; Liu, Z.; Huang, Z.T.; Jiang, W.L. Evaluation of unintentional modulation for pulse compression signals based on spectrum asymmetry. IET Radar Sonar Navig. 2017, 11, 656–663.
- Sun, L.; Wang, X.; Yang, A.; Huang, Z. Radio Frequency Fingerprint Extraction Based on Multi-Dimension Approximate Entropy. IEEE Signal Process. Lett. 2020, 27, 471–475.
- Rajendran, S.; Sun, Z.; Lin, F.; Ren, K. Injecting Reliable Radio Frequency Fingerprints Using Metasurface for The Internet of Things. IEEE Trans. Inf. Forensics Secur. 2020, 16, 1896–1911.
- Youssef, K.; Bouchard, L.; Haigh, K.; Silovsky, J.; Thapa, B.; Valk, C.V. Machine Learning Approach to RF Transmitter Identification. IEEE J. Radio Freq. Identif. 2018, 2, 197–205.
- Du, M.; He, X.; Cai, X.; Bi, D. Balanced Neural Architecture Search and Its Application in Specific Emitter Identification. IEEE Trans. Signal Process. 2021, 69, 5051–5065.
- Huang, X.; Kroening, D.; Ruan, W.; Sharp, J.; Sun, Y.; Thamo, E.; Wu, M.; Yi, X. A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability. Comput. Sci. Rev. 2020, 37, 100270.
- Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. Comput. Sci. 2014.
- Ke, D.; Huang, Z.; Wang, X.; Sun, L. Application of Adversarial Examples in Communication Modulation Classification. In Proceedings of the 2019 International Conference on Data Mining Workshops (ICDMW), Beijing, China, 8–11 November 2019; pp. 877–882.
- Raymond, D.R.; Midkiff, S.F. Denial-of-service in wireless sensor networks: Attacks and defenses. IEEE Pervasive Comput. 2008, 7, 74–81.
- Ohigashi, T.; Morii, M. A practical message falsification attack on WPA. Proc. JWIS 2009, 54, 66.
- Kannhavong, B.; Nakayama, H.; Nemoto, Y.; Kato, N.; Jamalipour, A. A survey of routing attacks in mobile ad hoc networks. IEEE Wirel. Commun. 2007, 14, 85–91.
- Balakrishnan, S.; Gupta, S.; Bhuyan, A.; Wang, P.; Koutsonikolas, D.; Sun, Z. Physical layer identification based on spatial–temporal beam features for millimeter-wave wireless networks. IEEE Trans. Inf. Forensics Secur. 2019, 15, 1831–1845.
- Lyu, C.; Huang, K.; Liang, H.N. A Unified Gradient Regularization Family for Adversarial Examples. In Proceedings of the 2015 IEEE International Conference on Data Mining (ICDM), Atlantic City, NJ, USA, 14–17 November 2015.
- Baldini, G.; Gentile, C.; Giuliani, R.; Steri, G. Comparison of techniques for radiometric identification based on deep convolutional neural networks. Electron. Lett. 2019, 55, 90–92.
- Wong, L.J.; Headley, W.C.; Andrews, S.; Gerdes, R.M.; Michaels, A.J. Clustering learned CNN features from raw I/Q data for emitter identification. In Proceedings of the MILCOM 2018—2018 IEEE Military Communications Conference (MILCOM), Los Angeles, CA, USA, 29–31 October 2018; pp. 26–33.
- Wong, L.J.; Headley, W.C.; Michaels, A.J. Specific emitter identification using convolutional neural network-based IQ imbalance estimators. IEEE Access 2019, 7, 33544–33555.
- Riyaz, S.; Sankhe, K.; Ioannidis, S.; Chowdhury, K. Deep learning convolutional neural networks for radio identification. IEEE Commun. Mag. 2018, 56, 146–152.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Identity mappings in deep residual networks. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Cham, Switzerland, 2016; pp. 630–645.
- Pan, Y.; Yang, S.; Peng, H.; Li, T.; Wang, W. Specific emitter identification based on deep residual networks. IEEE Access 2019, 7, 54425–54434.
- Zhang, T.; Ren, P.; Ren, Z. Deep Radio Fingerprint ResNet for Reliable Lightweight Device Identification. In Proceedings of the 2021 IEEE 94th Vehicular Technology Conference (VTC2021-Fall), Norman, OK, USA, 27–30 September 2021; pp. 1–6.
- Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. Comput. Sci. 2013.
- Kurakin, A.; Goodfellow, I.; Bengio, S. Adversarial Examples in the Physical World. 2016. Available online: https://arxiv.org/abs/1607.02533 (accessed on 24 July 2022).
- Papernot, N.; Mcdaniel, P.; Wu, X.; Jha, S.; Swami, A. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks. In Proceedings of the 2016 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–26 May 2016.
- Carlini, N.; Wagner, D. Towards Evaluating the Robustness of Neural Networks. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–26 May 2017.
- Sadeghi, M.; Larsson, E.G. Adversarial Attacks on Deep-Learning Based Radio Signal Classification. IEEE Wirel. Commun. Lett. 2018, 8, 213–216.
- Kokalj-Filipovic, S.; Miller, R.; Chang, N.; Lau, C.L. Mitigation of Adversarial Examples in RF Deep Classifiers Utilizing AutoEncoder Pre-training. In Proceedings of the 2019 International Conference on Military Communications and Information Systems (ICMCIS), Budva, Montenegro, 14–15 May 2019.
- Lin, Y.; Zhao, H.; Ma, X.; Tu, Y.; Wang, M. Adversarial Attacks in Modulation Recognition With Convolutional Neural Networks. IEEE Trans. Reliab. 2021, 70, 389–401.
- Hameed, M.Z.; Gyorgy, A.; Gunduz, D. The Best Defense Is a Good Offense: Adversarial Attacks to Avoid Modulation Detection. IEEE Trans. Inf. Forensics Secur. 2021, 16, 1074–1087.
- Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards Deep Learning Models Resistant to Adversarial Attacks. arXiv 2017, arXiv:1706.06083.
- Ilyas, A.; Santurkar, S.; Tsipras, D.; Engstrom, L.; Tran, B.; Madry, A. Adversarial Examples Are Not Bugs, They Are Features. In Proceedings of the NeurIPS Conference, Vancouver, BC, Canada, 8–14 December 2019.
- Liu, A.; Liu, X.; Zhang, C.; Yu, H.; Liu, Q.; He, J. Training Robust Deep Neural Networks via Adversarial Noise Propagation. IEEE Trans. Image Process. 2019, 30, 5769–5781.
- Sun, L.; Wang, X.; Huang, Z. Unintentional modulation microstructure enlargement. J. Syst. Eng. Electron. 2022, 33, 522–533.
- Sun, L.; Wang, X.; Huang, Z. Unintentional modulation evaluation in time domain and frequency domain. Chin. J. Aeronaut. 2021, 35, 376–389.
- Sun, L.; Wang, X.; Zhao, Y.; Huang, Z.; Du, C. Intrinsic Low-Dimensional Nonlinear Manifold Structure of Radio Frequency Signals. IEEE Commun. Lett. 2022.
- Huang, Y.; Zheng, H. Theoretical performance analysis of radio frequency fingerprinting under receiver distortions. Wirel. Commun. Mob. Comput. 2015, 15, 823–833.
- Yiwei, P.; Sihan, Y.; Hua, P.; Tianyun, L.; Wenya, W. Specific emitter identification using signal trajectory image. J. Electron. Inf. Technol. 2020, 42, 941–949.
- He, B.; Wang, F. Cooperative Specific Emitter Identification via Multiple Distorted Receivers. IEEE Trans. Inf. Forensics Secur. 2020, 15, 3791–3806.
- Huang, Y.; Zheng, H. Radio frequency fingerprinting based on the constellation errors. In Proceedings of the 2012 18th Asia-Pacific Conference on Communications (APCC), Jeju, Korea, 15–17 October 2012.
- Akhtar, N.; Mian, A.; Kardan, N.; Shah, M. Advances in adversarial attacks and defenses in computer vision: A survey. IEEE Access 2021, 9, 155161–155196.
- Zhang, H.; Wang, J. Defense against adversarial attacks using feature scattering-based adversarial training. Adv. Neural Inf. Process. Syst. 2019, 32, 1831–1841.
- Manoj, B.R.; Sadeghi, M.; Larsson, E.G. Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network. In Proceedings of the ICC 2021—IEEE International Conference on Communications, Montreal, QC, Canada, 14–23 June 2021; pp. 1–6.
- Wang, Y.; Gui, G.; Gacanin, H.; Ohtsuki, T.; Dobre, O.A.; Poor, H.V. An Efficient Specific Emitter Identification Method Based on Complex-Valued Neural Networks and Network Compression. IEEE J. Sel. Areas Commun. 2021, 39, 2305–2317.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Van Der Maaten, L. Accelerating t-SNE using tree-based algorithms. J. Mach. Learn. Res. 2014, 15, 3221–3245.
Emitter ID | Sample | Emitter ID | Sample |
---|---|---|---|
R1 | 334 | R11 | 321 |
R2 | 398 | R12 | 302 |
R3 | 417 | R13 | 280 |
R4 | 398 | R14 | 308 |
R5 | 262 | R15 | 303 |
R6 | 300 | R16 | 292 |
R7 | 304 | R17 | 300 |
R8 | 241 | R18 | 301 |
R9 | 305 | R19 | 304 |
R10 | 289 | R20 | 306 |
Emitter ID | FD | IQE | CST | PAD | ||||||
---|---|---|---|---|---|---|---|---|---|---|
T1 | (1, 0.06, 4) | (1, 0.0309, 4) | 0.9996 | −0.016 | 0.0072 | 0.0197 | 1 | 0.03 | 0.1 | |
T2 | (1, 0.085, 4) | (1, 0.0308, 4) | 0.9997 | −0.014 | 0.0074 | 0.0193 | 1 | 0.05 | 0.09 | |
T3 | (1, 0.073, 4) | (1, 0.0307, 4) | 0.9998 | −0.012 | 0.0076 | 0.0189 | 1 | 0.07 | 0.08 | |
T4 | (1, 0.04, 4) | (1, 0.0306, 4) | 0.9999 | −0.010 | 0.0078 | 0.0185 | 1 | 0.09 | 0.07 | |
T5 | (1, 0.06, 4) | (1, 0.0305, 4) | 1.0000 | −0.008 | 0.0080 | 0.0181 | 1 | 0.11 | 0.06 | |
T6 | (1, 0.085, 4) | (1, 0.0304, 4) | 1.0001 | −0.006 | 0.0082 | 0.0177 | 1 | 0.13 | 0.05 | |
T7 | (1, 0.073, 4) | (1, 0.0303, 4) | 1.0002 | −0.004 | 0.0084 | 0.0173 | 1 | 0.15 | 0.04 | |
T8 | (1, 0.04, 4) | (1, 0.0302, 4) | 1.0003 | −0.002 | 0.0086 | 0.0169 | 1 | 0.17 | 0.03 | |
T9 | (1, 0.03, 4) | (1, 0.0301, 4) | 1.0004 | 0.000 | 0.0088 | 0.0165 | 1 | 0.19 | 0.02 | |
T10 | (1, 0.03, 4) | (1, 0.0300, 4) | 1.0005 | 0.002 | 0.0090 | 0.0161 | 1 | 0.21 | 0.01 | |
T11 | (1, 0.03, 4) | (1, 0.0299, 4) | 1.0006 | 0.004 | 0.0092 | 0.0157 | 1 | 0.25 | 0 | |
T12 | (1, 0.03, 4) | (1, 0.0298, 4) | 1.0007 | 0.006 | 0.0094 | 0.0153 | 1 | 0.3 | −0.01 | |
T13 | (1, 0.03, 4) | (1, 0.0297, 4) | 1.0008 | 0.008 | 0.0096 | 0.0149 | 1 | 0.35 | −0.02 | |
T14 | (1, 0.03, 4) | (1, 0.0296, 4) | 1.0009 | 0.010 | 0.0098 | 0.0145 | 1 | 0.4 | −0.03 | |
T15 | (1, 0.03, 4) | (1, 0.0295, 4) | 1.0010 | 0.012 | 0.0100 | 0.0141 | 1 | 0.45 | 0.1 | |
T16 | (1, 0.03, 4) | (1, 0.0294, 4) | 1.0011 | 0.014 | 0.0102 | 0.0137 | 1 | 0.5 | 0.2 | |
T17 | (1, 0.03, 4) | (1, 0.0293, 4) | 1.0012 | 0.016 | 0.0104 | 0.0133 | 1 | 0.55 | 0.3 | |
T18 | (1, 0.03, 4) | (1, 0.0292, 4) | 1.0013 | 0.018 | 0.0106 | 0.0129 | 1 | 0.6 | 0.4 | |
T19 | (1, 0.03, 4) | (1, 0.0291, 4) | 1.0014 | 0.020 | 0.0108 | 0.0125 | 1 | 0.65 | 0.5 | |
T20 | (1, 0.03, 4) | (1, 0.0290, 4) | 1.0015 | 0.022 | 0.0110 | 0.0121 | 1 | 0.7 | 0.6 |
Parameter | Value | Parameter | Value |
---|---|---|---|
Sampling Rate | 20 MHz | Modulation | QPSK |
Symbol Rate | 5 MHz | Roll-off factor | 0.35 |
Carrier Frequency | 1 MHz | Codes/sample | 50 |
Emitter ID | T1–T20 | Sample/emitter | 3000 |
Method | No AT | AT | Improvement |
---|---|---|---|
Natural | 99.52 | 96.49 | −3.03 |
FGSM | 62.84 | 89.79 | 26.95 |
C&W | 50.24 | 82.14 | 31.90 |
PGD | 52.79 | 84.85 | 32.06 |
Average (attacks) | 55.29 | 85.59 | 30.30 |
R1 | R2 | R3 | R4 | R5 | R6 | R7 | R8 | R9 | R10 | |
---|---|---|---|---|---|---|---|---|---|---|
Natural | 100.00 | 97.67 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
FGSM | 65.52 | 84.09 | 84.31 | 2.70 | 95.24 | 88.89 | 100.00 | 80.00 | 100.00 | 6.67 |
C&W | 48.28 | 72.73 | 74.51 | 2.70 | 90.48 | 80.56 | 100.00 | 48.00 | 86.67 | 0.00 |
PGD | 55.17 | 77.27 | 74.51 | 2.70 | 95.24 | 83.33 | 100.00 | 56.00 | 86.67 | 0.00 |
Average | 67.24 | 82.94 | 83.33 | 27.03 | 95.24 | 88.19 | 100.00 | 71.00 | 93.33 | 26.67 |
Natural (AT) | 100.00 | 100.00 | 98.18 | 87.88 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 88.00 |
FGSM (AT) | 100.00 | 100.00 | 98.18 | 81.82 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 76.00 |
C&W (AT) | 100.00 | 100.00 | 98.18 | 69.70 | 100.00 | 95.65 | 100.00 | 88.89 | 100.00 | 24.00 |
PGD (AT) | 100.00 | 100.00 | 98.18 | 69.70 | 100.00 | 100.00 | 100.00 | 94.44 | 100.00 | 36.00 |
Average (AT) | 100.00 | 100.00 | 98.18 | 77.27 | 100.00 | 98.91 | 100.00 | 95.83 | 100.00 | 56.00 |
R11 | R12 | R13 | R14 | R15 | R16 | R17 | R18 | R19 | R20 | |
Natural | 100.00 | 100.00 | 100.00 | 100.00 | 96.43 | 100.00 | 100.00 | 100.00 | 100.00 | 96.77 |
FGSM | 43.75 | 44.83 | 85.71 | 5.00 | 11.11 | 91.67 | 33.33 | 76.32 | 79.31 | 71.43 |
C&W | 15.63 | 37.93 | 75.00 | 0.00 | 3.70 | 91.67 | 21.43 | 39.47 | 75.86 | 39.29 |
PGD | 18.75 | 31.03 | 82.14 | 0.00 | 3.70 | 91.67 | 21.43 | 50.00 | 75.86 | 50.00 |
Average | 44.53 | 53.45 | 85.71 | 26.25 | 28.74 | 93.75 | 44.05 | 66.45 | 82.76 | 64.37 |
Natural (AT) | 87.88 | 100.00 | 100.00 | 96.00 | 88.24 | 100.00 | 85.71 | 100.00 | 100.00 | 97.56 |
FGSM (AT) | 84.85 | 81.48 | 100.00 | 88.00 | 58.82 | 100.00 | 39.29 | 100.00 | 100.00 | 82.93 |
C&W (AT) | 78.79 | 59.26 | 100.00 | 84.00 | 32.35 | 100.00 | 28.57 | 100.00 | 100.00 | 65.85 |
PGD (AT) | 81.82 | 66.67 | 100.00 | 88.00 | 50.00 | 100.00 | 32.14 | 100.00 | 100.00 | 68.29 |
Average (AT) | 83.33 | 76.85 | 100.00 | 89.00 | 57.35 | 100.00 | 46.43 | 100.00 | 100.00 | 78.66 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Sun, L.; Ke, D.; Wang, X.; Huang, Z.; Huang, K. Robustness of Deep Learning-Based Specific Emitter Identification under Adversarial Attacks. Remote Sens. 2022, 14, 4996. https://doi.org/10.3390/rs14194996