An Adversarial Approach for Intrusion Detection Systems Using Jacobian Saliency Map Attacks (JSMA) Algorithm
Abstract
1. Introduction
- It can reduce a classifier's confidence by introducing class ambiguity.
- It can force the classifier to output a class different from the original one (misclassification).
- It may force the classifier to generate results that resemble a specific target class (targeted misclassification).
- It may perturb a specific targeted input with noise so that it is misclassified as another chosen target class (source/target misclassification).
- For adversarial attack detection, a random neural network based intrusion detection system (RNN-ADV) is presented.
- Adversarial samples are crafted by computing the forward derivative with the Jacobian Saliency Map Attack (JSMA) algorithm.
- The performance of RNN-ADV is compared with a Multi-Layer Perceptron (MLP) based deep neural network in terms of accuracy, precision, recall, F1 score and total number of epochs.
2. Background
2.1. Adversarial Random Neural Network Model
When neuron $i$ fires, the emitted signal:

- can reach neuron $j$ with probability $p^{+}(i,j)$ as an excitation signal;
- can reach neuron $j$ with probability $p^{-}(i,j)$ as an inhibitory signal;
- can depart the neural network with probability $d(i)$.

Here $r(i)$ is the firing transmission rate by which information flows towards other neighbouring neurons, and $w^{+}(i,j) = r(i)\,p^{+}(i,j)$ and $w^{-}(i,j) = r(i)\,p^{-}(i,j)$ denote the excitatory and inhibitory weight updates between neurons $i$ and $j$ (a numerical sketch follows below).
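The following is a minimal numerical sketch of this signal flow, based on Gelenbe's product-form solution in which the steady-state excitation probability of neuron $i$ is $q_i = \lambda^{+}(i)/(r(i) + \lambda^{-}(i))$; the rates and weights below are illustrative examples, not the paper's trained parameters.

```python
import numpy as np

def rnn_steady_state(w_plus, w_minus, r, Lam, lam, iters=200):
    """Fixed-point iteration for the steady-state excitation
    probabilities q_i = lambda_plus(i) / (r(i) + lambda_minus(i))."""
    q = np.zeros(len(r))
    for _ in range(iters):
        lam_plus = Lam + q @ w_plus    # total incoming excitation rate
        lam_minus = lam + q @ w_minus  # total incoming inhibition rate
        q = np.clip(lam_plus / (r + lam_minus), 0.0, 1.0)
    return q

# Example 3-neuron network: w_plus[i, j] = r[i] * p_plus(i, j), so each
# row of (w_plus + w_minus) divided by r[i], plus the departure
# probability d(i), sums to 1. All values are made up for illustration.
r = np.array([1.0, 1.0, 1.0])
w_plus = np.array([[0.0, 0.4, 0.2],
                   [0.1, 0.0, 0.3],
                   [0.2, 0.2, 0.0]])
w_minus = np.array([[0.0, 0.1, 0.1],
                    [0.2, 0.0, 0.1],
                    [0.1, 0.1, 0.0]])
Lam = np.array([0.5, 0.0, 0.0])  # external excitatory arrival rates
lam = np.array([0.0, 0.1, 0.0])  # external inhibitory arrival rates

print(rnn_steady_state(w_plus, w_minus, r, Lam, lam))
```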
3. Methodology
3.1. Jacobian Saliency Map Attacks
Following Papernot et al., adversarial sample crafting is formulated as the optimization problem

$$\arg\min_{\delta_{X}} \|\delta_{X}\| \ \text{ such that } \ F(X + \delta_{X}) = Y^{*}$$

where:

- $\delta_{X}$ is the perturbation vector;
- $\|\cdot\|$ is the relevant norm for RNN input comparison;
- $Y^{*}$ is the required adversarial output data points/features;
- $X^{*} = X + \delta_{X}$ is the adversarial sample.
Algorithm 1: Adversarial sample generation.
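For illustration, the following is a minimal Python sketch of JSMA-style adversarial sample generation using the saliency map of Papernot et al.; `model_predict` and `model_jacobian` are hypothetical stand-ins for the classifier interface, not the paper's actual RNN-ADV implementation.

```python
import numpy as np

def jsma_craft(x, target, model_predict, model_jacobian,
               theta=0.5, max_features=None):
    """Perturb the most salient input features, one at a time, until
    the model outputs `target` or no useful feature remains."""
    x_adv = x.astype(float).copy()
    max_features = max_features or x_adv.size
    perturbed = []
    for _ in range(max_features):
        if model_predict(x_adv) == target:
            break                                 # attack succeeded
        J = model_jacobian(x_adv)                 # forward derivative, shape (n_classes, n_features)
        target_grad = J[target]                   # dF_target / dx
        other_grad = J.sum(axis=0) - target_grad  # summed over non-target classes
        # Saliency map: keep features that increase the target class
        # output while decreasing all other class outputs.
        saliency = np.where((target_grad > 0) & (other_grad < 0),
                            target_grad * np.abs(other_grad), 0.0)
        saliency[perturbed] = 0.0                 # use each feature only once
        i = int(np.argmax(saliency))
        if saliency[i] <= 0.0:
            break                                 # no feature satisfies the saliency condition
        x_adv[i] = np.clip(x_adv[i] + theta, 0.0, 1.0)
        perturbed.append(i)
    return x_adv
```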
3.2. Distance Metrics
An adversarial sample $s^{*}$ crafted from an original sample $s$ must satisfy three conditions:

- minimising $\|s^{*} - s\|$ ensures the resemblance between the original sample $s$ and the adversarial sample $s^{*}$;
- $f(s^{*}) \neq f(s)$ guarantees that the adversarial sample is incorrectly classified by the model;
- $f(s) = y$ confirms the correct categorization of the normal samples.

Three distance metrics are commonly used to constrain the perturbation:

- the $L_{0}$ metric is used to identify the number of features that have been perturbed between the original sample $s$ and the adversarial sample $s^{*}$;
- the $L_{2}$ metric is also referred to as the Euclidean norm, since it measures the Euclidean distance between the original sample $s$ and the adversarial sample $s^{*}$: $\|s - s^{*}\|_{2} = \sqrt{\sum_{i}(s_{i} - s^{*}_{i})^{2}}$;
- the $L_{\infty}$ metric is used to measure the maximal change in the features during adversarial attack crafting. Mathematically, it is written as $\|s - s^{*}\|_{\infty} = \max_{i}|s_{i} - s^{*}_{i}|$ (the three metrics are illustrated in the snippet below).
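The NumPy snippet below computes the three metrics for a pair of placeholder sample vectors; the values are illustrative, not taken from the experiments.

```python
import numpy as np

s = np.array([0.1, 0.5, 0.9, 0.0])      # original sample (placeholder)
s_adv = np.array([0.1, 1.0, 0.9, 0.5])  # adversarial sample (placeholder)
delta = s - s_adv

l0 = np.count_nonzero(delta)        # L0: number of perturbed features
l2 = np.linalg.norm(delta)          # L2: Euclidean distance
linf = np.max(np.abs(delta))        # L-infinity: maximal feature change
print(l0, l2, linf)                 # 2 0.7071... 0.5
```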
3.3. The Dataset and Pre-Processing
- A vast feature space that contains many redundant and obsolete records.
- Ambiguous definitions of the network attack classes.
- No cross-validation was employed to account for the possibility of packets dropped during the data-collection phase.
Before training, each feature is normalized into the range $[0, 1]$ using min-max scaling:

$$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$$

where:

- $x'$ is the required output;
- $x$ is the input.
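A minimal sketch of this pre-processing is given below, assuming the usual NSL-KDD layout (41 features, with symbolic columns for protocol type, service and flag, followed by the label); the file name and column indices are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np
import pandas as pd

# Load the raw NSL-KDD training split (file name is an assumption).
df = pd.read_csv("KDDTrain+.txt", header=None)
labels = df.iloc[:, 41]                       # attack label column

# One-hot encode the symbolic features: protocol_type, service, flag.
features = pd.get_dummies(df.iloc[:, :41], columns=[1, 2, 3])

# Min-max normalize every feature into [0, 1] (constant columns kept at 0).
x = features.to_numpy(dtype=float)
mins, maxs = x.min(axis=0), x.max(axis=0)
x_norm = (x - mins) / np.where(maxs > mins, maxs - mins, 1.0)
```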
4. Experimental Results and Analysis
4.1. Accuracy
4.2. Precision
4.3. Recall
4.4. F1-Score
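Throughout, TP, FP, TN and FN denote true positives, false positives, true negatives and false negatives, respectively; the four metrics follow their standard definitions:

```latex
\begin{aligned}
\text{Accuracy}  &= \frac{TP + TN}{TP + TN + FP + FN}\\
\text{Precision} &= \frac{TP}{TP + FP}\\
\text{Recall}    &= \frac{TP}{TP + FN}\\
\text{F1-Score}  &= 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
\end{aligned}
```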
4.5. Scenario I
4.6. Scenario II
4.7. Scenario III
5. Conclusions
- To perform transferability analysis and to understand the effects of adversarial attacks in deeper networks, where a larger number of hidden layers is used and trained with different learning rates.
- To craft adversarial samples using other techniques, such as the Fast Gradient Sign Method (FGSM), DeepFool and CW attack algorithms.
- To assess the reliability of RNN-ADV on different datasets, such as CICIDS2017 and UNSW-NB15.
- To optimize and fine-tune the network by choosing different training algorithms, such as Artificial Bee Colony (ABC) and Particle Swarm Optimization (PSO).
Author Contributions
Funding
Conflicts of Interest
References
- Ferdowsi, A.; Saad, W. Generative Adversarial Networks for Distributed Intrusion Detection in the Internet of Things. In Proceedings of the 2019 IEEE Global Communications Conference (GLOBECOM), Waikoloa, HI, USA, 9–13 December 2019.
- Usama, M.; Qadir, J.; Al-Fuqaha, A.; Hamdi, M. The Adversarial Machine Learning Conundrum: Can The Insecurity of ML Become The Achilles' Heel of Cognitive Networks? IEEE Netw. 2019, 34, 196–203.
- Qureshi, A.U.H.; Larijani, H.; Mtetwa, N.; Javed, A.; Ahmad, J. RNN-ABC: A New Swarm Optimization Based Technique for Anomaly Detection. Computers 2019, 8, 59.
- Qureshi, A.; Larijani, H.; Javed, A.; Mtetwa, N.; Ahmad, J. Intrusion Detection Using Swarm Intelligence. In Proceedings of the 2019 UK/China Emerging Technologies (UCET), Glasgow, UK, 21–22 August 2019; pp. 1–5.
- Wang, Z. Deep Learning-Based Intrusion Detection With Adversaries. IEEE Access 2018, 6, 38367–38384.
- Papernot, N.; McDaniel, P.; Jha, S.; Fredrikson, M.; Celik, Z.B.; Swami, A. The Limitations of Deep Learning in Adversarial Settings. In Proceedings of the 2016 IEEE European Symposium on Security and Privacy (EuroS&P), Saarbrücken, Germany, 21–24 March 2016; pp. 372–387.
- Martins, N.; Cruz, J.M.; Cruz, T.; Abreu, P.H. Analyzing the Footprint of Classifiers in Adversarial Denial of Service Contexts. In Progress in Artificial Intelligence; Moura Oliveira, P., Novais, P., Reis, L.P., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 256–267.
- Brendel, W.; Rauber, J.; Kurakin, A.; Papernot, N.; Veliqi, B.; Mohanty, S.P.; Laurent, F.; Salathé, M.; Bethge, M.; Yu, Y.; et al. Adversarial Vision Challenge. In The NeurIPS '18 Competition; Escalera, S., Herbrich, R., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 129–153.
- Rawat, S.; Srinivasan, A.; Vinayakumar, R. Intrusion detection systems using classical machine learning techniques versus integrated unsupervised feature learning and deep neural network. arXiv 2019, arXiv:1910.01114.
- Singh, K.; Mathai, K.J. Performance Comparison of Intrusion Detection System Between Deep Belief Network (DBN) Algorithm and State Preserving Extreme Learning Machine (SPELM) Algorithm. In Proceedings of the 2019 IEEE International Conference on Electrical, Computer and Communication Technologies (ICECCT), Coimbatore, India, 20–22 February 2019; pp. 1–7.
- Chaithanya, P.S.; Gauthama Raman, M.R.; Nivethitha, S.; Seshan, K.S.; Sriram, V.S. An Efficient Intrusion Detection Approach Using Enhanced Random Forest and Moth-Flame Optimization Technique. In Computational Intelligence in Pattern Recognition; Das, A.K., Nayak, J., Naik, B., Pati, S.K., Pelusi, D., Eds.; Springer: Singapore, 2020; pp. 877–884.
- Apruzzese, G.; Andreolini, M.; Colajanni, M.; Marchetti, M. Hardening Random Forest Cyber Detectors Against Adversarial Attacks. arXiv 2019, arXiv:1912.03790.
- Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. arXiv 2013, arXiv:1312.6199.
- Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. arXiv 2014, arXiv:1412.6572.
- Moosavi-Dezfooli, S.M.; Fawzi, A.; Frossard, P. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
- Chakraborty, A.; Alam, M.; Dey, V.; Chattopadhyay, A.; Mukhopadhyay, D. Adversarial Attacks and Defences: A Survey. arXiv 2018, arXiv:1810.00069.
- Qiu, S.; Liu, Q.; Zhou, S.; Wu, C. Review of Artificial Intelligence Adversarial Attack and Defense Technologies. Appl. Sci. 2019, 9, 909.
- Gelenbe, E. Random Neural Networks with Negative and Positive Signals and Product Form Solution. Neural Comput. 1989, 1, 502–510.
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556.
- Qureshi, A.; Larijani, H.; Ahmad, J.; Mtetwa, N. A Novel Random Neural Network Based Approach for Intrusion Detection Systems. In Proceedings of the 2018 10th Computer Science and Electronic Engineering (CEEC), Colchester, UK, 19–21 September 2018; pp. 50–55.
- Qureshi, A.U.H.; Larijani, H.; Ahmad, J.; Mtetwa, N. A Heuristic Intrusion Detection System for Internet-of-Things (IoT). In Proceedings of the 2019 Science and Information (SAI) Computing Conference; Springer: Cham, Switzerland, 2019.
- Datasets Available for Intrusion Detection. Available online: https://www.unb.ca/cic/datasets/index.html (accessed on 17 July 2020).
- Tavallaee, M.; Bagheri, E.; Lu, W.; Ghorbani, A.A. A Detailed Analysis of the KDD CUP 99 Data Set. In Proceedings of the 2009 IEEE Symposium on Computational Intelligence for Security and Defense Applications, Ottawa, ON, Canada, 8–10 July 2009.
- NSL-KDD Dataset, Canadian Institute for Cybersecurity. Available online: http://www.unb.ca/cic/datasets/nsl.html (accessed on 3 May 2018).
- Papernot, N.; Faghri, F.; Carlini, N.; Goodfellow, I.; Feinman, R.; Kurakin, A.; Xie, C.; Sharma, Y.; Brown, T.; Roy, A.; et al. Technical Report on the CleverHans v2.1.0 Adversarial Examples Library. arXiv 2018, arXiv:1610.00768.
Algorithm | Distance Metric | Perturbation Metric
---|---|---
Jacobian Saliency Map Attacks Algorithm (JSMA) | $L_{0}$ | 0.5
Attack Label | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%)
---|---|---|---|---
Normal | 82.14 | 99.42 | 91.82 | 95.46
Denial-of-Service (DoS) | 92.62 | 99.12 | 94.24 | 96.61
Probe | 91.51 | 97.21 | 75.25 | 85.55
User-to-Root (U2R) | 44.82 | 92.74 | 48.77 | 63.92
Remote-to-Local (R2L) | 61.35 | 95.24 | 39.24 | 55.58
Attack Label | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%)
---|---|---|---|---
Normal | 63.41 | 53.87 | 42.61 | 47.58
Denial-of-Service (DoS) | 71.38 | 47.23 | 31.22 | 37.59
Probe | 79.49 | 32.41 | 30.54 | 31.44
User-to-Root (U2R) | 39.25 | 3.94 | 5.67 | 4.64
Remote-to-Local (R2L) | 52.91 | 27.81 | 28.48 | 28.10
Attack Label | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%)
---|---|---|---|---
Normal | 60.58 | 40.70 | 29.21 | 34.01
Denial-of-Service (DoS) | 67.97 | 33.18 | 18.99 | 24.15
Probe | 71.41 | 21.01 | 16.43 | 18.43
User-to-Root (U2R) | 32.11 | 1.28 | 2.47 | 1.68
Remote-to-Local (R2L) | 48.17 | 11.62 | 17.84 | 14.07