Defending the Defender: Adversarial Learning Based Defending Strategy for Learning Based Security Methods in Cyber-Physical Systems (CPS)
Abstract
1. Introduction
1.1. Research Contributions
- i. Review of important adversarial attacks on learning models and the preparation of a taxonomy thereof.
- ii. Summary of significant adversarial attack methods and their attack mechanisms.
- iii. Extension of a Generative Adversarial Network (GAN)-based adversarial attack mechanism to the cyber-security domain, including a discussion of generating tabular adversarial datasets for cyber security, which differ from image and video datasets.
- iv. Tabular adversarial data generation from the TON_IoT Network dataset using the GAN model (a minimal code sketch of this step follows the list).
- v. Evaluation of the performance of learning models, including Random Forest (RF), Artificial Neural Network (ANN), and Long Short-Term Memory (LSTM), against evasion- and poisoning-based adversarial attacks.
- vi. Proposal of an adversarial learning-based security mechanism for Cyber-Physical Systems (CPS) and evaluation of model performance under various scenarios.
- vii. Generalization of the scalability and effectiveness of the proposed methodology by evaluating it on three learning models, i.e., RF, ANN, and LSTM.
- viii. Analysis of the computational requirements of the proposed methodology to assess its feasibility in constrained CPS networks.
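Contribution iv above refers to generating tabular adversarial data from the TON_IoT Network dataset with a GAN. The sketch below illustrates how such a tabular GAN can be set up; it assumes the records have already been encoded and min-max scaled to a fixed number of numeric features, and the feature count, layer sizes, and training settings are illustrative placeholders rather than the configuration used in this paper.

```python
# Hypothetical sketch: training a simple GAN to generate tabular adversarial
# samples resembling pre-processed (numeric, scaled) TON_IoT network records.
# N_FEATURES, layer sizes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

N_FEATURES = 42   # assumed number of numeric features after encoding/scaling
LATENT_DIM = 32

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_FEATURES), nn.Sigmoid(),  # features scaled to [0, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1),  # logit: real vs. generated record
        )

    def forward(self, x):
        return self.net(x)

def train_gan(real_loader, epochs=50, device="cpu"):
    """real_loader yields (batch, N_FEATURES) tensors of scaled real records."""
    G, D = Generator().to(device), Discriminator().to(device)
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for real in real_loader:
            real = real.to(device)
            z = torch.randn(real.size(0), LATENT_DIM, device=device)
            fake = G(z)
            ones = torch.ones(real.size(0), 1, device=device)
            zeros = torch.zeros(real.size(0), 1, device=device)
            # Discriminator step: maximize log D(x) + log(1 - D(G(z)))
            loss_d = bce(D(real), ones) + bce(D(fake.detach()), zeros)
            opt_d.zero_grad()
            loss_d.backward()
            opt_d.step()
            # Generator step: push D to classify generated records as real
            loss_g = bce(D(fake), ones)
            opt_g.zero_grad()
            loss_g.backward()
            opt_g.step()
    return G
```

Records sampled from the trained generator can then be labeled as adversarial and mixed into the evasion, poisoning, and adversarial-learning experiments described in Section 4.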
1.2. Paper Organization
2. Adversarial Consideration
2.1. Adversarial Attacks
2.1.1. Adversarial Attack Methods
2.1.2. Adversarial Defenses
3. Proposed Methodology
3.1. Problem Formulation
3.1.1. GAN Attack
3.1.2. Adversarial Learning
3.2. Adversarial Dataset Generation
4. Results and Discussion
4.1. Case 1: Performance on Original Dataset
4.2. Case 2: Evasion Attack
4.3. Case 3: Data Poisoning Attack
4.4. Case 4: Adversarial Learning
4.5. Discussion
5. Conclusions and Future Direction
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Wazid, M.; Das, A.K.; Chamola, V.; Park, Y. Uniting cyber security and machine learning: Advantages, challenges and future research. ICT Express 2022, 8, 313–321.
- Ahmad, Z.; Singh, Y.; Kumar, P.; Zrar, K. Intelligent and secure framework for critical infrastructure (CPS): Current trends, challenges, and future scope. Comput. Commun. 2022, 193, 302–331.
- Li, J.; Liu, Y.; Chen, T.; Xiao, Z.; Li, Z.; Wang, J. Adversarial attacks and defenses on cyber-physical systems: A survey. IEEE Internet Things J. 2020, 7, 5103–5115.
- Wang, Y.; Mianjy, P.; Arora, R. Robust Learning for Data Poisoning Attacks. In Proceedings of the 38th International Conference on Machine Learning, Virtual, 18–24 July 2021; pp. 1–11.
- Shanthini, A.; Vinodhini, G.; Chandrasekaran, R.M.; Supraja, P. A taxonomy on impact of label noise and feature noise using machine learning techniques. Soft Comput. 2019, 23, 8597–8607.
- Li, Y.; Zhang, M.; Chen, C. A Deep-Learning intelligent system incorporating data augmentation for Short-Term voltage stability assessment of power systems. Appl. Energy 2022, 308, 118347.
- Stouffer, K.; Abrams, M. Guide to Industrial Control Systems (ICS) Security; NIST Special Publication 800-82; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2015.
- Freitas De Araujo-Filho, P.; Kaddoum, G.; Campelo, D.R.; Gondim Santos, A.; Macedo, D.; Zanchettin, C. Intrusion Detection for Cyber-Physical Systems Using Generative Adversarial Networks in Fog Environment. IEEE Internet Things J. 2021, 8, 6247–6256.
- Li, Y.; Wei, X.; Li, Y.; Dong, Z.; Shahidehpour, M. Detection of False Data Injection Attacks in Smart Grid: A Secure Federated Deep Learning Approach. IEEE Trans. Smart Grid 2022, 13, 4862–4872.
- Sarker, I.H.; Abushark, Y.B.; Alsolami, F.; Khan, A.I. IntruDTree: A machine learning based cyber security intrusion detection model. Symmetry 2020, 12, 754.
- Sheikh, Z.A.; Singh, Y.; Tanwar, S.; Sharma, R.; Turcanu, F. EISM-CPS: An Enhanced Intelligent Security Methodology for Cyber-Physical Systems through Hyper-Parameter Optimization. Mathematics 2023, 11, 189.
- Rosenberg, I.; Shabtai, A.; Elovici, Y.; Rokach, L. Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain. ACM Comput. Surv. 2021, 54, 1–36.
- Jadidi, Z.; Pal, S.; Nayak, N.; Selvakkumar, A.; Chang, C.-C.; Beheshti, M.; Jolfaei, A. Security of Machine Learning-Based Anomaly Detection in Cyber Physical Systems. In Proceedings of the International Conference on Computer Communications and Networks (ICCCN), Honolulu, HI, USA, 25–28 July 2022.
- Boesch, G. What Is Adversarial Machine Learning? Attack Methods in 2023. Available online: https://viso.ai/deep-learning/adversarial-machine-learning/ (accessed on 3 January 2023).
- Yuan, X.; He, P.; Zhu, Q.; Li, X. Adversarial Examples: Attacks and Defenses for Deep Learning. IEEE Trans. Neural Networks Learn. Syst. 2019, 30, 2805–2824.
- Fawzi, O.; Frossard, P. Universal adversarial perturbations. arXiv 2016, arXiv:1610.08401.
- Adate, A.; Saxena, R. Understanding How Adversarial Noise Affects Single Image Classification. In Proceedings of the International Conference on Intelligent Information Technologies, Chennai, India, 20–22 December 2017.
- Pengcheng, L.; Yi, J.; Zhang, L. Query-Efficient Black-Box Attack by Active Learning. In Proceedings of the IEEE International Conference on Data Mining (ICDM), Singapore, 17–20 November 2018.
- Clements, J.; Yang, Y.; Sharma, A.A.; Hu, H.; Lao, Y. Rallying Adversarial Techniques against Deep Learning for Network Security. In Proceedings of the 2021 IEEE Symposium Series on Computational Intelligence (SSCI), Orlando, FL, USA, 5–7 December 2021.
- Qiu, H.; Dong, T.; Zhang, T.; Lu, J.; Memmi, G.; Qiu, M. Adversarial Attacks against Network Intrusion Detection in IoT Systems. IEEE Internet Things J. 2021, 8, 10327–10335.
- Dong, Y.; Liao, F.; Pang, T.; Su, H.; Zhu, J.; Hu, X.; Li, J. Boosting Adversarial Attacks with Momentum. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 9185–9193.
- Wang, D.D.; Li, C.; Wen, S.; Xiang, Y. Defending against Adversarial Attack towards Deep Neural Networks via Collaborative Multi-Task Training. IEEE Trans. Dependable Secur. Comput. 2020, 19, 953–965.
- Cisse, M.; Adi, Y.; Neverova, N.; Keshet, J. Houdini: Fooling deep structured visual and speech recognition models with adversarial examples. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS'17), Long Beach, CA, USA, 4–9 December 2017; pp. 6978–6988.
- Papernot, N.; Mcdaniel, P.; Jha, S.; Fredrikson, M.; Celik, Z.B.; Swami, A. The Limitations of Deep Learning in Adversarial Settings. In Proceedings of the 1st IEEE European Symposium on Security and Privacy, Saarbruecken, Germany, 21–24 March 2016.
- Ilyas, A.; Engstrom, L.; Athalye, A.; Lin, J. Black-box Adversarial Attacks with Limited Queries and Information. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018.
- Baluja, S.; Fischer, I. Adversarial Transformation Networks: Learning to Generate Adversarial Examples. arXiv 2017, arXiv:1703.09387.
- Fawzi, A.; Frossard, P. DeepFool: A simple and accurate method to fool deep neural networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2574–2582.
- Chen, P. ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, Dallas, TX, USA, 3 November 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 15–26.
- Kurakin, A.; Goodfellow, I.J.; Bengio, S. Adversarial machine learning at scale. In Proceedings of the 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, 24–26 April 2017; pp. 1–17.
- Sarkar, S.; Mahbub, U. UPSET and ANGRI: Breaking High Performance Image Classifiers. arXiv 2017.
- Zhu, C.; Ronny Huang, W.; Shafahi, A.; Li, H.; Taylor, G.; Studer, C.; Goldstein, T. Transferable clean-label poisoning attacks on deep neural nets. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 10–15 June 2019; pp. 13141–13154.
- Biggio, B.; Nelson, B.; Laskov, P. Support vector machines under adversarial label noise. J. Mach. Learn. Res. 2011, 20, 97–112.
- Zhang, H.; Cheng, N.; Zhang, Y.; Li, Z. Label flipping attacks against Naive Bayes on spam filtering systems. Appl. Intell. 2021, 51, 4503–4514.
- Gangavarapu, T.; Jaidhar, C.D.; Chanduka, B. Applicability of Machine Learning in Spam and Phishing Email Filtering: Review and Approaches. Artif. Intell. Rev. 2020, 53, 5019–5081.
- Paudice, A.; Muñoz-González, L.; Lupu, E.C. Label Sanitization Against Label Flipping Poisoning Attacks. In ECML PKDD 2018 Workshops; Lecture Notes in Computer Science, LNAI; Springer: Cham, Switzerland, 2019; Volume 11329, pp. 5–15.
- Xiao, H.; Xiao, H.; Eckert, C. Adversarial label flips attack on support vector machines. Front. Artif. Intell. Appl. 2012, 242, 870–875.
- Xiao, H.; Biggio, B.; Nelson, B.; Xiao, H.; Eckert, C.; Roli, F. Support vector machines under adversarial label contamination. Neurocomputing 2015, 160, 53–62.
- Taheri, R.; Javidan, R.; Shojafar, M.; Pooranian, Z.; Miri, A.; Conti, M. On defending against label flipping attacks on malware detection systems. Neural Comput. Appl. 2020, 32, 14781–14800.
- Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. In Proceedings of the 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, 14–16 April 2014; pp. 1–10.
- Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and harnessing adversarial examples. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015; pp. 1–11.
- Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards deep learning models resistant to adversarial attacks. In Proceedings of the 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, 30 April–3 May 2018; pp. 1–28.
- Carlini, N.; Wagner, D. Towards Evaluating the Robustness of Neural Networks. In Proceedings of the 2017 IEEE Symposium on Security and Privacy, San Jose, CA, USA, 22–26 May 2017; pp. 39–57.
- Lin, Z.; Shi, Y.; Xue, Z. IDSGAN: Generative Adversarial Networks for Attack Generation Against Intrusion Detection. In Proceedings of the PAKDD 2022: Advances in Knowledge Discovery and Data Mining, Chengdu, China, 16–19 May 2022; pp. 79–91.
- Bodkhe, U.; Mehta, D.; Tanwar, S.; Bhattacharya, P.; Singh, P.K.; Hong, W.-C. A Survey on Decentralized Consensus Mechanisms for Cyber Physical Systems. IEEE Access 2020, 8, 54371–54401.
- Papernot, N.; Mcdaniel, P.; Goodfellow, I. Practical Black-Box Attacks against Machine Learning. In Proceedings of the ASIA CCS '17: 2017 ACM on Asia Conference on Computer and Communications Security, Abu Dhabi, United Arab Emirates, 2–6 April 2017; pp. 506–519.
- Dziugaite, G.K.; Roy, D.M. A study of the effect of JPG compression on adversarial images. arXiv 2016, arXiv:1608.00853.
- Hosseini, H.; Chen, Y.; Kannan, S.; Zhang, B.; Poovendran, R. Blocking Transferability of Adversarial Examples in Black-Box Learning Systems. arXiv 2017, arXiv:1703.04318.
- Xie, C.; Wang, J.; Zhang, Z.; Zhou, Y.; Xie, L.; Yuille, A. Adversarial Examples for Semantic Segmentation and Object Detection. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; Volume 1, pp. 1369–1378.
- Soll, M.; Hinz, T.; Magg, S.; Wermter, S. Evaluating Defensive Distillation for Defending Text Processing Neural Networks Against Adversarial Examples. In Proceedings of the 28th International Conference on Artificial Neural Networks, Munich, Germany, 17–19 September 2019.
- Lyu, C. A Unified Gradient Regularization Family for Adversarial Examples. In Proceedings of the 2015 IEEE International Conference on Data Mining, Atlantic City, NJ, USA, 14–17 November 2015.
- Xu, W.; Evans, D.; Qi, Y. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. arXiv 2018, arXiv:1704.01155.
- Kalavakonda, R.R.; Vikram, N.; Masna, R.; Bhuniaroy, A. A Smart Mask for Active Defense Against Coronaviruses and Other Airborne Pathogens. IEEE Consum. Electron. Mag. 2020, 10, 72–79.
- Wu, E.Q.; Zhou, G.; Zhu, L.; Wei, C.; Ren, H.; Sheng, R.S.F. Rotated Sphere Haar Wavelet and Deep Contractive Auto-Encoder Network with Fuzzy Gaussian SVM for Pilot's Pupil Center Detection. IEEE Trans. Cybern. 2019, 51, 332–345.
- Sayed, E.; Yang, Y. A Comprehensive Review of Flux Barriers in Interior Permanent Magnet Synchronous Machines. IEEE Access 2019, 7, 149168–149181.
- Esmaeilpour, M.; Cardinal, P.; Lameiras, A. Multi-Discriminator Sobolev Defense-GAN Against Adversarial Attacks for End-to-End Speech Systems. IEEE Trans. Inf. Forensics Secur. 2022, 17, 2044–2058.
- Liao, F.; Liang, M.; Dong, Y.; Pang, T.; Hu, X. Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1778–1787.
- Grosse, K.; Manoharan, P.; Papernot, N.; Backes, M.; McDaniel, P. On the (Statistical) Detection of Adversarial Examples. arXiv 2017, arXiv:1702.06280.
- Alsaedi, A.; Moustafa, N.; Tari, Z.; Mahmood, A.; Anwar, A. TON-IoT telemetry dataset: A new generation dataset of IoT and IIoT for data-driven intrusion detection systems. IEEE Access 2020, 8, 165130–165150.
- Moustafa, N. A new distributed architecture for evaluating AI-based security systems at the edge: Network TON_IoT datasets. Sustain. Cities Soc. 2021, 72, 102994.
- Zantedeschi, V.; Nicolae, M.I.; Rawat, A. Efficient defenses against adversarial attacks. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security (AISec'17), Dallas, TX, USA, 3 November 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 39–49.
- Unreal Person, This Person Does Not Exist. Available online: https://www.unrealperson.com/ (accessed on 4 May 2023).
Ref. | Year | ML Model | Dataset | Adversarial Method | AML Taxonomy | Tabular Dataset | CPS Domain | GAN | Poisonous Attacks | Evasion Attacks | Adversarial Learning | Computational Time |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Jadidi et al. [13] | 2022 | ANN | Bot-IoT and Modbus IoT | FGSM | ✗ | ✓ | ✓ | ✗ | ✗ | ✓ | ✓ | ✗ |
Clements et al. [19] | 2021 | KitNET (ensemble of AE and NN) | Real IoT dataset and Mirai | FGSM, JSMA, C&W, and ENM | ✗ | ✓ | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ |
Qiu et al. [20] | 2021 | Kitsune NIDS (AE-based) | Mirai and Video Streaming | Gradient-Based Saliency Map | ✗ | ✓ | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ |
Proposed | 2023 | RF, ANN, and LSTM | TON_IoT Network | GAN | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Adversarial Attack Method | Description | Equation/Methodology | Advantage | Disadvantage |
---|---|---|---|---|
Limited-memory BFGS (L-BFGS) [39] | A non-linear, gradient-based numerical optimization method used to minimize the number of perturbations; it relies on a box-constrained optimization formulation. | --- | Effectively generates adversarial samples. | Very computationally intensive, time-consuming, and often impractical. |
Fast Gradient Sign Method (FGSM) [40] | A fast and simple gradient-based method for generating adversarial samples. It bounds the perturbation required to cause misclassification by taking a small step in the direction of the sign of the gradient of the cost function. | $x_{adv} = x + \epsilon \cdot \mathrm{sign}(\nabla_x J(w, x, y))$, where $x_{adv}$ is the adversarial sample, $\epsilon$ is the perturbation (noise) magnitude, $\nabla_x$ is the gradient with respect to the input $x$, and $J(w, x, y)$ is the cost used to train the model with parameters $w$, input $x$, and output $y$. | Computationally efficient compared to L-BFGS. | Adds perturbation to every feature. |
Projected Gradient Descent (PGD) [41] | A multi-step variant of FGSM: instead of a single gradient step, it iteratively applies gradient steps and projects the result back onto the allowed perturbation set. | --- | Invokes a stronger attack and is more powerful than FGSM. | Computationally more intensive than FGSM. |
Jacobian-based Saliency Map Attack (JSMA) [24] | Uses feature selection to minimize the number of features to perturb. Saliency values are used in decreasing order to iteratively perturb features. | The Jacobian matrix of a sample $x$ is $J_F(x) = \frac{\partial F(x)}{\partial x} = \left[\frac{\partial F_j(x)}{\partial x_i}\right]$, from which the saliency map is computed. | Only a few features are perturbed. | Computationally more intensive than FGSM. |
Deepfool Attack | An untargeted adversarial sample generation method that minimizes the Euclidean distance between original and perturbed samples. It estimates the decision boundaries between classes and iteratively adds perturbations. | --- | Effective at generating adversarial samples with fewer perturbations and a higher misclassification rate. | Computationally more intensive than JSMA and FGSM; moreover, it may generate non-optimal adversarial samples. |
Carlini & Wagner Attack (C&W) [42] | Uses an L-BFGS-style optimization formulation for adversarial sample generation, but without its box constraints and with different objective functions. | $\min_{\delta} D(x, x + \delta)$ subject to $C(x + \delta) = t$, where $D$ is a distance metric and the attack finds a minimum perturbation $\delta$ which, when added to the input sample $x$, causes misclassification into a new target class $t$. | Very effective at generating adversarial samples; it has defeated many state-of-the-art adversarial defense methods such as adversarial learning and defensive distillation. | Computationally more intensive than FGSM, JSMA, and Deepfool. |
Generative Adversarial Networks (GAN) [43] | Based on a two-player minimax game between a Generator G and a Discriminator D. Generates adversarial attack data samples to bypass or deceive detection mechanisms. | $\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$, where $\mathbb{E}_x$ is the expected value over all data instances, $\mathbb{E}_z$ is the expected value over all random inputs to the generator, $p_{data}(x)$ is the probability distribution of the original data, $p_z(z)$ is the distribution of the noise, $D(x)$ is the discriminator's estimate of the probability that a real data instance is real, and $D(G(z))$ is the discriminator's estimate of the probability that a generated (adversarial) instance is real. | Generates adversarial/attack data similar to the original data, with the ability to evade defense mechanisms. | Complexity and computational cost of training the GAN model, and difficulty generating samples from little representative data. |
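To make the FGSM row above concrete, the following sketch applies the update x_adv = x + ε · sign(∇x J(w, x, y)) to a differentiable classifier. The model, input tensors, and the value of ε are placeholders, not the configuration evaluated in this paper.

```python
# Hypothetical sketch of the FGSM update from the table above:
# x_adv = x + epsilon * sign(grad_x J(w, x, y)).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.05):
    """Return adversarial copies of x for a differentiable classifier."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # J(w, x, y)
    loss.backward()                           # populates x_adv.grad
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)         # keep features in the scaled range
    return x_adv.detach()
```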
| Case | Case Name | Description | Training (ODS) | Training (ADS) | Testing (ODS) | Testing (ADS) | Model | Accuracy | Precision | Recall | F1-Score | Training Time | Testing Time |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Performance on Original Dataset | Performance evaluation on the original TON_IoT Network dataset. | ✓ | ✗ | ✓ | ✗ | RF | 99 | 100 | 100 | 100 | 32 s | 2 s |
| | | | | | | | ANN | 98 | 99 | 99 | 99 | 18 m 22 s | 11 s |
| | | | | | | | LSTM | 97 | 98 | 98 | 98 | * 45 m 10 s | * 8 s |
| 2 | Evasion Attack | Evaluation of adversarial impact by testing the model on the adversarial/generated dataset. | ✓ | ✗ | ✓ | ✓ | RF | 61 | 61 | 61 | 61 | 32 s | 2 s |
| | | | | | | | ANN | 43 | 47 | 44 | 45 | 18 m 22 s | 21 s |
| | | | | | | | LSTM | 57 | 57 | 57 | 57 | * 45 m 10 s | * 18 s |
| 3 | Poisoning Attack | Performing a data poisoning attack on the training data. | ✓ | ✓ | ✓ | ✗ | RF | 65 | 42 | 65 | 51 | 8 m 51 s | 2 s |
| | | | | | | | ANN | 65 | 42 | 65 | 51 | 32 m 23 s | 8 s |
| | | | | | | | LSTM | 67 | 72 | 67 | 69 | * 58 m 16 s | * 15 s |
| 4 | Adversarial Learning | Use of adversarial learning to enhance model performance by learning the adversarial patterns. | ✓ | ✓ | ✓ | ✓ | RF | 96 | 96 | 96 | 96 | 8 m 51 s | 7 s |
| | | | | | | | ANN | 80 | 85 | 81 | 83 | 32 m 23 s | 15 s |
| | | | | | | | LSTM | 98 | 98 | 98 | 98 | * 58 m 16 s | * 22 s |

ODS: original dataset; ADS: adversarial (GAN-generated) dataset.
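Case 4 in the table corresponds to the usual adversarial-learning recipe: augment the training set with adversarial records so the model also learns their distribution, then evaluate on a mix of original and adversarial test data. A minimal sketch for the RF model is shown below; the dataset splits, variable names, and hyperparameters are illustrative assumptions rather than the paper's exact setup.

```python
# Hypothetical sketch of the adversarial-learning setup in Case 4:
# train on original (ODS) plus adversarial (ADS) records, evaluate on both.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score

def adversarial_learning(X_train, y_train, X_adv_train, y_adv_train,
                         X_test, y_test, X_adv_test, y_adv_test):
    # Augment the training set with adversarial records (ODS + ADS).
    X_aug = np.vstack([X_train, X_adv_train])
    y_aug = np.concatenate([y_train, y_adv_train])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_aug, y_aug)

    # Evaluate on a mixed test set (ODS + ADS).
    X_eval = np.vstack([X_test, X_adv_test])
    y_eval = np.concatenate([y_test, y_adv_test])
    y_pred = clf.predict(X_eval)
    return {
        "accuracy": accuracy_score(y_eval, y_pred),
        "f1": f1_score(y_eval, y_pred, average="weighted"),
    }
```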